problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff
---|---|---|---|---|---|---|---|---
stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.1k-10.2k | stringlengths 151-4.94k | stringlengths 582-21k | int64 271-2.05k | int64 47-1.02k
gh_patches_debug_12274 | rasdani/github-patches | git_diff | wagtail__wagtail-11223 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Report pages performance regression
### Issue Summary
Various report pages have a performance regression in Wagtail 5.2, which I've tracked down to:
https://github.com/wagtail/wagtail/commit/7ba1afb8a402a09be5838a026523be78f08ea877
https://github.com/wagtail/wagtail/pull/10822
On a few sites we've upgraded to Wagtail 5.2 - performance in the Site History report has been significantly reduced:
Before:
<img width="1717" alt="Screenshot 2023-11-11 at 21 12 02" src="https://github.com/wagtail/wagtail/assets/177332/79650e6b-9c96-4d21-bbdf-23b98c862bf4">
After:
<img width="1716" alt="Screenshot 2023-11-11 at 21 13 09" src="https://github.com/wagtail/wagtail/assets/177332/e719e250-5c9c-4dc8-823b-1e1c3b40a74c">
<img width="900" alt="Screenshot 2023-11-11 at 21 13 19" src="https://github.com/wagtail/wagtail/assets/177332/5623467b-a0ca-4472-aa46-540ff568ac82">
### Steps to Reproduce
Find an existing Wagtail project with lots of pages, and log entries.
Check http://127.0.0.1:9000/admin/reports/site-history/ with the project running Wagtail 5.2 - page will probably be slow to load.
(Note: I did try and create a quick script to test this with Wagtail's starter project - but the performance of SQLite and a lack of a debug toolbar slowing things down made it a bit tricky!).
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.11 / any
- Django version: 4.2 / any
- Wagtail version: 5.2 / main
- Browser version: n/a
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/views/reports/base.py`
Content:
```
1 from django.utils.translation import gettext_lazy as _
2
3 from wagtail.admin.views.generic.models import IndexView
4
5
6 class ReportView(IndexView):
7 template_name = "wagtailadmin/reports/base_report.html"
8 title = ""
9 paginate_by = 50
10
11 def get_filtered_queryset(self):
12 return self.filter_queryset(self.get_queryset())
13
14 def decorate_paginated_queryset(self, object_list):
15 # A hook point to allow rewriting the object list after pagination has been applied
16 return object_list
17
18 def get(self, request, *args, **kwargs):
19 self.filters, self.object_list = self.get_filtered_queryset()
20 self.object_list = self.decorate_paginated_queryset(self.object_list)
21 context = self.get_context_data()
22 return self.render_to_response(context)
23
24 def get_context_data(self, *args, **kwargs):
25 context = super().get_context_data(*args, **kwargs)
26 context["title"] = self.title
27 return context
28
29
30 class PageReportView(ReportView):
31 template_name = "wagtailadmin/reports/base_page_report.html"
32 export_headings = {
33 "latest_revision_created_at": _("Updated"),
34 "status_string": _("Status"),
35 "content_type.model_class._meta.verbose_name.title": _("Type"),
36 }
37 list_export = [
38 "title",
39 "latest_revision_created_at",
40 "status_string",
41 "content_type.model_class._meta.verbose_name.title",
42 ]
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/admin/views/reports/base.py b/wagtail/admin/views/reports/base.py
--- a/wagtail/admin/views/reports/base.py
+++ b/wagtail/admin/views/reports/base.py
@@ -17,8 +17,12 @@
def get(self, request, *args, **kwargs):
self.filters, self.object_list = self.get_filtered_queryset()
- self.object_list = self.decorate_paginated_queryset(self.object_list)
context = self.get_context_data()
+ # Decorate the queryset *after* Django's BaseListView has returned a paginated/reduced
+ # list of objects
+ context["object_list"] = self.decorate_paginated_queryset(
+ context["object_list"]
+ )
return self.render_to_response(context)
def get_context_data(self, *args, **kwargs):
| {"golden_diff": "diff --git a/wagtail/admin/views/reports/base.py b/wagtail/admin/views/reports/base.py\n--- a/wagtail/admin/views/reports/base.py\n+++ b/wagtail/admin/views/reports/base.py\n@@ -17,8 +17,12 @@\n \n def get(self, request, *args, **kwargs):\n self.filters, self.object_list = self.get_filtered_queryset()\n- self.object_list = self.decorate_paginated_queryset(self.object_list)\n context = self.get_context_data()\n+ # Decorate the queryset *after* Django's BaseListView has returned a paginated/reduced\n+ # list of objects\n+ context[\"object_list\"] = self.decorate_paginated_queryset(\n+ context[\"object_list\"]\n+ )\n return self.render_to_response(context)\n \n def get_context_data(self, *args, **kwargs):\n", "issue": "Report pages performance regression\n### Issue Summary\r\n\r\nVarious report pages have a performance regression in Wagtail 5.2, which I've tracked down to:\r\n\r\nhttps://github.com/wagtail/wagtail/commit/7ba1afb8a402a09be5838a026523be78f08ea877\r\nhttps://github.com/wagtail/wagtail/pull/10822\r\n\r\nOn a few sites we've upgraded to Wagtail 5.2 - performance in the Site History report has been significantly reduced:\r\n\r\nBefore:\r\n<img width=\"1717\" alt=\"Screenshot 2023-11-11 at 21 12 02\" src=\"https://github.com/wagtail/wagtail/assets/177332/79650e6b-9c96-4d21-bbdf-23b98c862bf4\">\r\n\r\nAfter:\r\n<img width=\"1716\" alt=\"Screenshot 2023-11-11 at 21 13 09\" src=\"https://github.com/wagtail/wagtail/assets/177332/e719e250-5c9c-4dc8-823b-1e1c3b40a74c\">\r\n<img width=\"900\" alt=\"Screenshot 2023-11-11 at 21 13 19\" src=\"https://github.com/wagtail/wagtail/assets/177332/5623467b-a0ca-4472-aa46-540ff568ac82\">\r\n\r\n### Steps to Reproduce\r\n\r\nFind an existing Wagtail project with lots of pages, and log entries.\r\n\r\nCheck http://127.0.0.1:9000/admin/reports/site-history/ with the project running Wagtail 5.2 - page will probably be slow to load.\r\n\r\n(Note: I did try and create a quick script to test this with Wagtail's starter project - but the performance of SQLite and a lack of a debug toolbar slowing things down made it a bit tricky!).\r\n\r\n- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes\r\n\r\n### Technical details\r\n\r\n- Python version: 3.11 / any\r\n- Django version: 4.2 / any\r\n- Wagtail version: 5.2 / main\r\n- Browser version: n/a\n", "before_files": [{"content": "from django.utils.translation import gettext_lazy as _\n\nfrom wagtail.admin.views.generic.models import IndexView\n\n\nclass ReportView(IndexView):\n template_name = \"wagtailadmin/reports/base_report.html\"\n title = \"\"\n paginate_by = 50\n\n def get_filtered_queryset(self):\n return self.filter_queryset(self.get_queryset())\n\n def decorate_paginated_queryset(self, object_list):\n # A hook point to allow rewriting the object list after pagination has been applied\n return object_list\n\n def get(self, request, *args, **kwargs):\n self.filters, self.object_list = self.get_filtered_queryset()\n self.object_list = self.decorate_paginated_queryset(self.object_list)\n context = self.get_context_data()\n return self.render_to_response(context)\n\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context[\"title\"] = self.title\n return context\n\n\nclass PageReportView(ReportView):\n template_name = \"wagtailadmin/reports/base_page_report.html\"\n export_headings = {\n \"latest_revision_created_at\": _(\"Updated\"),\n \"status_string\": _(\"Status\"),\n 
\"content_type.model_class._meta.verbose_name.title\": _(\"Type\"),\n }\n list_export = [\n \"title\",\n \"latest_revision_created_at\",\n \"status_string\",\n \"content_type.model_class._meta.verbose_name.title\",\n ]\n", "path": "wagtail/admin/views/reports/base.py"}], "after_files": [{"content": "from django.utils.translation import gettext_lazy as _\n\nfrom wagtail.admin.views.generic.models import IndexView\n\n\nclass ReportView(IndexView):\n template_name = \"wagtailadmin/reports/base_report.html\"\n title = \"\"\n paginate_by = 50\n\n def get_filtered_queryset(self):\n return self.filter_queryset(self.get_queryset())\n\n def decorate_paginated_queryset(self, object_list):\n # A hook point to allow rewriting the object list after pagination has been applied\n return object_list\n\n def get(self, request, *args, **kwargs):\n self.filters, self.object_list = self.get_filtered_queryset()\n context = self.get_context_data()\n # Decorate the queryset *after* Django's BaseListView has returned a paginated/reduced\n # list of objects\n context[\"object_list\"] = self.decorate_paginated_queryset(\n context[\"object_list\"]\n )\n return self.render_to_response(context)\n\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context[\"title\"] = self.title\n return context\n\n\nclass PageReportView(ReportView):\n template_name = \"wagtailadmin/reports/base_page_report.html\"\n export_headings = {\n \"latest_revision_created_at\": _(\"Updated\"),\n \"status_string\": _(\"Status\"),\n \"content_type.model_class._meta.verbose_name.title\": _(\"Type\"),\n }\n list_export = [\n \"title\",\n \"latest_revision_created_at\",\n \"status_string\",\n \"content_type.model_class._meta.verbose_name.title\",\n ]\n", "path": "wagtail/admin/views/reports/base.py"}]} | 1,214 | 186 |
gh_patches_debug_24294 | rasdani/github-patches | git_diff | cupy__cupy-6989 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception raised in `TCPStore.__del__` upon process termination
### Description
`__del__` should not perform syncronization when called during process termination.
```
Exception ignored in: <function TCPStore.__del__ at 0x7fb939be23a0>
Traceback (most recent call last):
File "/home/maehashi/Development/cupy/cupyx/distributed/_store.py", line 49, in __del__
File "/home/maehashi/Development/cupy/cupyx/distributed/_store.py", line 97, in stop
File "/home/maehashi/Development/cupy/cupyx/distributed/_store.py", line 31, in join
File "/home/maehashi/.pyenv/versions/3.8.1/lib/python3.8/multiprocessing/connection.py", line 251, in recv
ModuleNotFoundError: import of builtins halted; None in sys.modules
```
### To Reproduce
_No response_
### Installation
_No response_
### Environment
_No response_
### Additional Information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupyx/distributed/_store.py`
Content:
```
1 from ctypes import sizeof
2 import multiprocessing
3 import threading
4 import socket
5 import time
6
7 from cupyx.distributed import _klv_utils
8 from cupyx.distributed import _store_actions
9
10
11 _DEFAULT_HOST = '127.0.0.1'
12 _DEFAULT_PORT = 13333
13
14
15 class ExceptionAwareProcess(multiprocessing.Process):
16 def __init__(self, *args, **kwargs):
17 super().__init__(*args, **kwargs)
18 self._exception = None
19 self._parent_p, self._child_p = multiprocessing.Pipe()
20
21 def run(self):
22 try:
23 super().run()
24 self._child_p.send(None)
25 except Exception as e:
26 self._child_p.send(e)
27
28 def join(self):
29 super().join()
30 if self._parent_p.poll():
31 exception = self._parent_p.recv()
32 if exception is not None:
33 raise exception
34
35
36 class TCPStore:
37 # This is only used for initialization of nccl so we don't care
38 # too much about peformance
39 def __init__(self, world_size):
40 self.storage = {}
41 self._process = None
42 self._world_size = world_size
43 self._run = multiprocessing.Value('b', 1)
44 # For implementing a barrier
45 self._lock = threading.Lock()
46 self._current_barrier = None
47
48 def __del__(self):
49 self.stop()
50
51 def _set_process(self, process):
52 self._process = process
53
54 def _process_request(self, c_socket):
55 with c_socket:
56 # Receive in KLV format
57 action_bytes = c_socket.recv(sizeof(_klv_utils.action_t))
58 if len(action_bytes) > 0:
59 action_m = _klv_utils.action_t.from_buffer_copy(action_bytes)
60 if action_m.length > 256:
61 raise ValueError('Invalid length for message')
62 value = bytearray(action_m.value)[:action_m.length]
63 r = _store_actions.execute_action(action_m.action, value, self)
64 if r is not None:
65 c_socket.sendall(r.klv())
66
67 def _server_loop(self, host, port):
68 # This is for minimum info exchange during initialization
69 # a single connection allows to implement locking mechanics easily
70 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
71 s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
72 s.bind((host, port))
73 s.listen()
74 s.settimeout(0.5)
75 while self._run.value == 1:
76 try:
77 c_socket, addr = s.accept()
78 except socket.timeout:
79 continue
80
81 t = threading.Thread(
82 target=self._process_request,
83 args=(c_socket,), daemon=True)
84 t.start()
85
86 def run(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):
87 # Run the TCP store in a different process
88 p = ExceptionAwareProcess(
89 target=self._server_loop, args=(host, port))
90 p.start()
91 self._process = p
92
93 def stop(self):
94 if self._process is not None:
95 with self._run.get_lock():
96 self._run.value = 0
97 self._process.join()
98
99
100 class TCPStoreProxy:
101
102 MAX_NUM_RETRIES = 50
103 DELAY_FOR_RETRY = 0.5
104
105 def __init__(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):
106 self.host = host
107 self.port = port
108
109 def _send_recv(self, action):
110 # Retry several times in case the rank 0 has not established the
111 # main store yet
112 for i in range(TCPStoreProxy.MAX_NUM_RETRIES):
113 try:
114 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
115 # TODO retry connects
116 s.connect((self.host, self.port))
117 s.sendall(action.klv())
118 result_bytes = s.recv(sizeof(
119 _klv_utils.result_action_t))
120 if len(result_bytes) > 0:
121 result = _klv_utils.result_action_t.from_buffer_copy(
122 result_bytes)
123 value = bytearray(result.value)[:result.length]
124 if result.status == 0:
125 return action.decode_result(value)
126 else:
127 raise RuntimeError(value.decode('utf-8'))
128 except ConnectionRefusedError:
129 time.sleep(TCPStoreProxy.DELAY_FOR_RETRY)
130 raise RuntimeError('TCPStore is not available')
131
132 def __getitem__(self, key):
133 return self._send_recv(_store_actions.Get(key))
134
135 def __setitem__(self, key, value):
136 self._send_recv(_store_actions.Set(key, value))
137
138 def barrier(self):
139 # Barrier has special semantics
140 self._send_recv(_store_actions.Barrier())
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupyx/distributed/_store.py b/cupyx/distributed/_store.py
--- a/cupyx/distributed/_store.py
+++ b/cupyx/distributed/_store.py
@@ -1,3 +1,4 @@
+import atexit
from ctypes import sizeof
import multiprocessing
import threading
@@ -11,6 +12,14 @@
_DEFAULT_HOST = '127.0.0.1'
_DEFAULT_PORT = 13333
+_exit_mode = False
+
+
[email protected]
+def _exit():
+ global _exit_mode
+ _exit_mode = True
+
class ExceptionAwareProcess(multiprocessing.Process):
def __init__(self, *args, **kwargs):
@@ -46,7 +55,8 @@
self._current_barrier = None
def __del__(self):
- self.stop()
+ if not _exit_mode:
+ self.stop()
def _set_process(self, process):
self._process = process
@@ -91,6 +101,8 @@
self._process = p
def stop(self):
+ if _exit_mode:
+ return # Prevent shutdown errors
if self._process is not None:
with self._run.get_lock():
self._run.value = 0
| {"golden_diff": "diff --git a/cupyx/distributed/_store.py b/cupyx/distributed/_store.py\n--- a/cupyx/distributed/_store.py\n+++ b/cupyx/distributed/_store.py\n@@ -1,3 +1,4 @@\n+import atexit\n from ctypes import sizeof\n import multiprocessing\n import threading\n@@ -11,6 +12,14 @@\n _DEFAULT_HOST = '127.0.0.1'\n _DEFAULT_PORT = 13333\n \n+_exit_mode = False\n+\n+\[email protected]\n+def _exit():\n+ global _exit_mode\n+ _exit_mode = True\n+\n \n class ExceptionAwareProcess(multiprocessing.Process):\n def __init__(self, *args, **kwargs):\n@@ -46,7 +55,8 @@\n self._current_barrier = None\n \n def __del__(self):\n- self.stop()\n+ if not _exit_mode:\n+ self.stop()\n \n def _set_process(self, process):\n self._process = process\n@@ -91,6 +101,8 @@\n self._process = p\n \n def stop(self):\n+ if _exit_mode:\n+ return # Prevent shutdown errors\n if self._process is not None:\n with self._run.get_lock():\n self._run.value = 0\n", "issue": "Exception raised in `TCPStore.__del__` upon process termination\n### Description\n\n`__del__` should not perform syncronization when called during process termination.\r\n\r\n```\r\nException ignored in: <function TCPStore.__del__ at 0x7fb939be23a0>\r\nTraceback (most recent call last):\r\n File \"/home/maehashi/Development/cupy/cupyx/distributed/_store.py\", line 49, in __del__\r\n File \"/home/maehashi/Development/cupy/cupyx/distributed/_store.py\", line 97, in stop\r\n File \"/home/maehashi/Development/cupy/cupyx/distributed/_store.py\", line 31, in join\r\n File \"/home/maehashi/.pyenv/versions/3.8.1/lib/python3.8/multiprocessing/connection.py\", line 251, in recv\r\nModuleNotFoundError: import of builtins halted; None in sys.modules\r\n```\n\n### To Reproduce\n\n_No response_\n\n### Installation\n\n_No response_\n\n### Environment\n\n_No response_\n\n### Additional Information\n\n_No response_\n", "before_files": [{"content": "from ctypes import sizeof\nimport multiprocessing\nimport threading\nimport socket\nimport time\n\nfrom cupyx.distributed import _klv_utils\nfrom cupyx.distributed import _store_actions\n\n\n_DEFAULT_HOST = '127.0.0.1'\n_DEFAULT_PORT = 13333\n\n\nclass ExceptionAwareProcess(multiprocessing.Process):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._exception = None\n self._parent_p, self._child_p = multiprocessing.Pipe()\n\n def run(self):\n try:\n super().run()\n self._child_p.send(None)\n except Exception as e:\n self._child_p.send(e)\n\n def join(self):\n super().join()\n if self._parent_p.poll():\n exception = self._parent_p.recv()\n if exception is not None:\n raise exception\n\n\nclass TCPStore:\n # This is only used for initialization of nccl so we don't care\n # too much about peformance\n def __init__(self, world_size):\n self.storage = {}\n self._process = None\n self._world_size = world_size\n self._run = multiprocessing.Value('b', 1)\n # For implementing a barrier\n self._lock = threading.Lock()\n self._current_barrier = None\n\n def __del__(self):\n self.stop()\n\n def _set_process(self, process):\n self._process = process\n\n def _process_request(self, c_socket):\n with c_socket:\n # Receive in KLV format\n action_bytes = c_socket.recv(sizeof(_klv_utils.action_t))\n if len(action_bytes) > 0:\n action_m = _klv_utils.action_t.from_buffer_copy(action_bytes)\n if action_m.length > 256:\n raise ValueError('Invalid length for message')\n value = bytearray(action_m.value)[:action_m.length]\n r = _store_actions.execute_action(action_m.action, value, self)\n if r is not None:\n 
c_socket.sendall(r.klv())\n\n def _server_loop(self, host, port):\n # This is for minimum info exchange during initialization\n # a single connection allows to implement locking mechanics easily\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n s.bind((host, port))\n s.listen()\n s.settimeout(0.5)\n while self._run.value == 1:\n try:\n c_socket, addr = s.accept()\n except socket.timeout:\n continue\n\n t = threading.Thread(\n target=self._process_request,\n args=(c_socket,), daemon=True)\n t.start()\n\n def run(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):\n # Run the TCP store in a different process\n p = ExceptionAwareProcess(\n target=self._server_loop, args=(host, port))\n p.start()\n self._process = p\n\n def stop(self):\n if self._process is not None:\n with self._run.get_lock():\n self._run.value = 0\n self._process.join()\n\n\nclass TCPStoreProxy:\n\n MAX_NUM_RETRIES = 50\n DELAY_FOR_RETRY = 0.5\n\n def __init__(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):\n self.host = host\n self.port = port\n\n def _send_recv(self, action):\n # Retry several times in case the rank 0 has not established the\n # main store yet\n for i in range(TCPStoreProxy.MAX_NUM_RETRIES):\n try:\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n # TODO retry connects\n s.connect((self.host, self.port))\n s.sendall(action.klv())\n result_bytes = s.recv(sizeof(\n _klv_utils.result_action_t))\n if len(result_bytes) > 0:\n result = _klv_utils.result_action_t.from_buffer_copy(\n result_bytes)\n value = bytearray(result.value)[:result.length]\n if result.status == 0:\n return action.decode_result(value)\n else:\n raise RuntimeError(value.decode('utf-8'))\n except ConnectionRefusedError:\n time.sleep(TCPStoreProxy.DELAY_FOR_RETRY)\n raise RuntimeError('TCPStore is not available')\n\n def __getitem__(self, key):\n return self._send_recv(_store_actions.Get(key))\n\n def __setitem__(self, key, value):\n self._send_recv(_store_actions.Set(key, value))\n\n def barrier(self):\n # Barrier has special semantics\n self._send_recv(_store_actions.Barrier())\n", "path": "cupyx/distributed/_store.py"}], "after_files": [{"content": "import atexit\nfrom ctypes import sizeof\nimport multiprocessing\nimport threading\nimport socket\nimport time\n\nfrom cupyx.distributed import _klv_utils\nfrom cupyx.distributed import _store_actions\n\n\n_DEFAULT_HOST = '127.0.0.1'\n_DEFAULT_PORT = 13333\n\n_exit_mode = False\n\n\[email protected]\ndef _exit():\n global _exit_mode\n _exit_mode = True\n\n\nclass ExceptionAwareProcess(multiprocessing.Process):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._exception = None\n self._parent_p, self._child_p = multiprocessing.Pipe()\n\n def run(self):\n try:\n super().run()\n self._child_p.send(None)\n except Exception as e:\n self._child_p.send(e)\n\n def join(self):\n super().join()\n if self._parent_p.poll():\n exception = self._parent_p.recv()\n if exception is not None:\n raise exception\n\n\nclass TCPStore:\n # This is only used for initialization of nccl so we don't care\n # too much about peformance\n def __init__(self, world_size):\n self.storage = {}\n self._process = None\n self._world_size = world_size\n self._run = multiprocessing.Value('b', 1)\n # For implementing a barrier\n self._lock = threading.Lock()\n self._current_barrier = None\n\n def __del__(self):\n if not _exit_mode:\n self.stop()\n\n def _set_process(self, process):\n self._process = process\n\n def 
_process_request(self, c_socket):\n with c_socket:\n # Receive in KLV format\n action_bytes = c_socket.recv(sizeof(_klv_utils.action_t))\n if len(action_bytes) > 0:\n action_m = _klv_utils.action_t.from_buffer_copy(action_bytes)\n if action_m.length > 256:\n raise ValueError('Invalid length for message')\n value = bytearray(action_m.value)[:action_m.length]\n r = _store_actions.execute_action(action_m.action, value, self)\n if r is not None:\n c_socket.sendall(r.klv())\n\n def _server_loop(self, host, port):\n # This is for minimum info exchange during initialization\n # a single connection allows to implement locking mechanics easily\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n s.bind((host, port))\n s.listen()\n s.settimeout(0.5)\n while self._run.value == 1:\n try:\n c_socket, addr = s.accept()\n except socket.timeout:\n continue\n\n t = threading.Thread(\n target=self._process_request,\n args=(c_socket,), daemon=True)\n t.start()\n\n def run(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):\n # Run the TCP store in a different process\n p = ExceptionAwareProcess(\n target=self._server_loop, args=(host, port))\n p.start()\n self._process = p\n\n def stop(self):\n if _exit_mode:\n return # Prevent shutdown errors\n if self._process is not None:\n with self._run.get_lock():\n self._run.value = 0\n self._process.join()\n\n\nclass TCPStoreProxy:\n\n MAX_NUM_RETRIES = 50\n DELAY_FOR_RETRY = 0.5\n\n def __init__(self, host=_DEFAULT_HOST, port=_DEFAULT_PORT):\n self.host = host\n self.port = port\n\n def _send_recv(self, action):\n # Retry several times in case the rank 0 has not established the\n # main store yet\n for i in range(TCPStoreProxy.MAX_NUM_RETRIES):\n try:\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n # TODO retry connects\n s.connect((self.host, self.port))\n s.sendall(action.klv())\n result_bytes = s.recv(sizeof(\n _klv_utils.result_action_t))\n if len(result_bytes) > 0:\n result = _klv_utils.result_action_t.from_buffer_copy(\n result_bytes)\n value = bytearray(result.value)[:result.length]\n if result.status == 0:\n return action.decode_result(value)\n else:\n raise RuntimeError(value.decode('utf-8'))\n except ConnectionRefusedError:\n time.sleep(TCPStoreProxy.DELAY_FOR_RETRY)\n raise RuntimeError('TCPStore is not available')\n\n def __getitem__(self, key):\n return self._send_recv(_store_actions.Get(key))\n\n def __setitem__(self, key, value):\n self._send_recv(_store_actions.Set(key, value))\n\n def barrier(self):\n # Barrier has special semantics\n self._send_recv(_store_actions.Barrier())\n", "path": "cupyx/distributed/_store.py"}]} | 1,843 | 296 |
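The guard added in this row's patch, an `atexit` flag checked in both `stop()` and `__del__`, keeps `TCPStore` from joining its server process while the interpreter is tearing down, which is when imports such as `builtins` can already be unavailable. The same pattern reduced to a runnable sketch (class and names are illustrative, not CuPy code):

```python
import atexit

_exit_mode = False  # flipped once interpreter shutdown begins

@atexit.register
def _mark_exit():
    global _exit_mode
    _exit_mode = True

class Worker:
    def stop(self):
        if _exit_mode:
            return  # skip IPC and joins during shutdown; they can raise spuriously
        print("normal stop: signal the child process and join it")

    def __del__(self):
        if not _exit_mode:  # never synchronize from __del__ at exit
            self.stop()

w = Worker()
w.stop()
```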
gh_patches_debug_49770 | rasdani/github-patches | git_diff | getsentry__sentry-17425 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Event migration 9.1.2 -> 10
<!--
Do you want to ask a question? Are you looking for support? The Sentry message
board is the best place for getting support: https://forum.sentry.io
-->
## Important Details
How are you running Sentry?
* [X] On-Premise docker [Version 9.1.2]
* [ ] Saas (sentry.io)
* [ ] Other [briefly describe your environment]
## Description
I followed the migration guide, alongside all fixes and workaround and managed to get to the actual migration routine. Sentry tries to process all existing postgres events but fails to (for every event):
```
An error occured while trying to instert the following event: <sentry.eventstore.models.Event object at 0x7f2f08e552d0>
.----
insert() takes at least 8 arguments (8 given)
[...]
Event migration done. Migrated 0 of 197988 events.
```
## Steps to Reproduce
1. Have a 9.1.2 onpremise setup and have event data
2. Upgrade to 10 (dev-master), run `install.sh` etc.
### What you expected to happen
Migration scripts succeeds and I have all event data in the new version.
### Possible Solution
Error message suggests a syntax error?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/migrations/0024_auto_20191230_2052.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Generated by Django 1.9.13 on 2019-12-30 20:52
3 from __future__ import unicode_literals, print_function
4
5 import os
6 import types
7 from datetime import timedelta, datetime
8
9 from django.db import migrations
10 from django.utils import timezone
11
12 from sentry import options
13 from sentry.eventstore.models import Event as NewEvent
14
15
16 def backfill_eventstream(apps, schema_editor):
17 """
18 Inserts Postgres events into the eventstream if there are recent events in Postgres.
19
20 This is for open source users migrating from 9.x who want to keep their events.
21 If there are no recent events in Postgres, skip the backfill.
22 """
23 from sentry import eventstore, eventstream
24 from sentry.utils.query import RangeQuerySetWrapper
25
26 Event = apps.get_model("sentry", "Event")
27 Group = apps.get_model("sentry", "Group")
28 Project = apps.get_model("sentry", "Project")
29
30 # Kill switch to skip this migration
31 skip_backfill = os.environ.get("SENTRY_SKIP_EVENTS_BACKFILL_FOR_10", False)
32
33 # Use 90 day retention if the option has not been set or set to 0
34 DEFAULT_RETENTION = 90
35 retention_days = options.get("system.event-retention-days") or DEFAULT_RETENTION
36
37 def get_events(last_days):
38 to_date = timezone.now()
39 from_date = to_date - timedelta(days=last_days)
40 return Event.objects.filter(
41 datetime__gte=from_date, datetime__lte=to_date, group_id__isnull=False
42 )
43
44 def _attach_related(_events):
45 project_ids = set()
46 group_ids = set()
47 for event in _events:
48 project_ids.add(event.project_id)
49 group_ids.add(event.group_id)
50 projects = {p.id: p for p in Project.objects.filter(id__in=project_ids)}
51 groups = {g.id: g for g in Group.objects.filter(id__in=group_ids)}
52
53 for event in _events:
54 event.project = projects.get(event.project_id)
55 event.group = groups.get(event.group_id)
56 eventstore.bind_nodes(_events, "data")
57
58 if skip_backfill:
59 print("Skipping backfill.\n")
60 return
61
62 events = get_events(retention_days)
63 count = events.count()
64
65 if count == 0:
66 print("Nothing to do, skipping migration.\n")
67 return
68
69 print("Events to process: {}\n".format(count))
70
71 processed = 0
72 for e in RangeQuerySetWrapper(events, step=100, callbacks=(_attach_related,)):
73 event = NewEvent(
74 project_id=e.project_id, event_id=e.event_id, group_id=e.group_id, data=e.data.data
75 )
76 primary_hash = event.get_primary_hash()
77 if event.project is None or event.group is None:
78 print("Skipped {} as group or project information is invalid.\n".format(event))
79 continue
80
81 try:
82 eventstream.insert(
83 group=event.group,
84 event=event,
85 is_new=False,
86 is_regression=False,
87 is_new_group_environment=False,
88 primary_hash=primary_hash,
89 skip_consume=True,
90 )
91 processed += 1
92 except Exception as error:
93 print(
94 "An error occured while trying to instert the following event: {}\n.----\n{}".format(
95 event, error
96 )
97 )
98
99 print("Event migration done. Migrated {} of {} events.\n".format(processed, count))
100
101
102 class Migration(migrations.Migration):
103 # This flag is used to mark that a migration shouldn't be automatically run in
104 # production. We set this to True for operations that we think are risky and want
105 # someone from ops to run manually and monitor.
106 # General advice is that if in doubt, mark your migration as `is_dangerous`.
107 # Some things you should always mark as dangerous:
108 # - Adding indexes to large tables. These indexes should be created concurrently,
109 # unfortunately we can't run migrations outside of a transaction until Django
110 # 1.10. So until then these should be run manually.
111 # - Large data migrations. Typically we want these to be run manually by ops so that
112 # they can be monitored. Since data migrations will now hold a transaction open
113 # this is even more important.
114 # - Adding columns to highly active tables, even ones that are NULL.
115 is_dangerous = True
116
117 dependencies = [
118 ("sentry", "0023_hide_environment_none_20191126"),
119 ]
120
121 operations = [
122 migrations.RunPython(backfill_eventstream, reverse_code=migrations.RunPython.noop),
123 ]
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/migrations/0024_auto_20191230_2052.py b/src/sentry/migrations/0024_auto_20191230_2052.py
--- a/src/sentry/migrations/0024_auto_20191230_2052.py
+++ b/src/sentry/migrations/0024_auto_20191230_2052.py
@@ -86,6 +86,8 @@
is_regression=False,
is_new_group_environment=False,
primary_hash=primary_hash,
+ received_timestamp=event.data.get("received")
+ or float(event.datetime.strftime("%s")),
skip_consume=True,
)
processed += 1
| {"golden_diff": "diff --git a/src/sentry/migrations/0024_auto_20191230_2052.py b/src/sentry/migrations/0024_auto_20191230_2052.py\n--- a/src/sentry/migrations/0024_auto_20191230_2052.py\n+++ b/src/sentry/migrations/0024_auto_20191230_2052.py\n@@ -86,6 +86,8 @@\n is_regression=False,\n is_new_group_environment=False,\n primary_hash=primary_hash,\n+ received_timestamp=event.data.get(\"received\")\n+ or float(event.datetime.strftime(\"%s\")),\n skip_consume=True,\n )\n processed += 1\n", "issue": "Event migration 9.1.2 -> 10\n<!--\r\n\r\nDo you want to ask a question? Are you looking for support? The Sentry message\r\nboard is the best place for getting support: https://forum.sentry.io\r\n-->\r\n\r\n## Important Details\r\n\r\nHow are you running Sentry?\r\n\r\n* [X] On-Premise docker [Version 9.1.2]\r\n* [ ] Saas (sentry.io)\r\n* [ ] Other [briefly describe your environment]\r\n\r\n## Description\r\n\r\nI followed the migration guide, alongside all fixes and workaround and managed to get to the actual migration routine. Sentry tries to process all existing postgres events but fails to (for every event):\r\n\r\n```\r\nAn error occured while trying to instert the following event: <sentry.eventstore.models.Event object at 0x7f2f08e552d0>\r\n.----\r\ninsert() takes at least 8 arguments (8 given)\r\n[...]\r\nEvent migration done. Migrated 0 of 197988 events.\r\n```\r\n\r\n## Steps to Reproduce\r\n\r\n1. Have a 9.1.2 onpremise setup and have event data\r\n2. Upgrade to 10 (dev-master), run `install.sh` etc.\r\n\r\n### What you expected to happen\r\n\r\nMigration scripts succeeds and I have all event data in the new version.\r\n\r\n### Possible Solution\r\n\r\nError message suggests a syntax error?\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.13 on 2019-12-30 20:52\nfrom __future__ import unicode_literals, print_function\n\nimport os\nimport types\nfrom datetime import timedelta, datetime\n\nfrom django.db import migrations\nfrom django.utils import timezone\n\nfrom sentry import options\nfrom sentry.eventstore.models import Event as NewEvent\n\n\ndef backfill_eventstream(apps, schema_editor):\n \"\"\"\n Inserts Postgres events into the eventstream if there are recent events in Postgres.\n\n This is for open source users migrating from 9.x who want to keep their events.\n If there are no recent events in Postgres, skip the backfill.\n \"\"\"\n from sentry import eventstore, eventstream\n from sentry.utils.query import RangeQuerySetWrapper\n\n Event = apps.get_model(\"sentry\", \"Event\")\n Group = apps.get_model(\"sentry\", \"Group\")\n Project = apps.get_model(\"sentry\", \"Project\")\n\n # Kill switch to skip this migration\n skip_backfill = os.environ.get(\"SENTRY_SKIP_EVENTS_BACKFILL_FOR_10\", False)\n\n # Use 90 day retention if the option has not been set or set to 0\n DEFAULT_RETENTION = 90\n retention_days = options.get(\"system.event-retention-days\") or DEFAULT_RETENTION\n\n def get_events(last_days):\n to_date = timezone.now()\n from_date = to_date - timedelta(days=last_days)\n return Event.objects.filter(\n datetime__gte=from_date, datetime__lte=to_date, group_id__isnull=False\n )\n\n def _attach_related(_events):\n project_ids = set()\n group_ids = set()\n for event in _events:\n project_ids.add(event.project_id)\n group_ids.add(event.group_id)\n projects = {p.id: p for p in Project.objects.filter(id__in=project_ids)}\n groups = {g.id: g for g in Group.objects.filter(id__in=group_ids)}\n\n for event in _events:\n event.project = 
projects.get(event.project_id)\n event.group = groups.get(event.group_id)\n eventstore.bind_nodes(_events, \"data\")\n\n if skip_backfill:\n print(\"Skipping backfill.\\n\")\n return\n\n events = get_events(retention_days)\n count = events.count()\n\n if count == 0:\n print(\"Nothing to do, skipping migration.\\n\")\n return\n\n print(\"Events to process: {}\\n\".format(count))\n\n processed = 0\n for e in RangeQuerySetWrapper(events, step=100, callbacks=(_attach_related,)):\n event = NewEvent(\n project_id=e.project_id, event_id=e.event_id, group_id=e.group_id, data=e.data.data\n )\n primary_hash = event.get_primary_hash()\n if event.project is None or event.group is None:\n print(\"Skipped {} as group or project information is invalid.\\n\".format(event))\n continue\n\n try:\n eventstream.insert(\n group=event.group,\n event=event,\n is_new=False,\n is_regression=False,\n is_new_group_environment=False,\n primary_hash=primary_hash,\n skip_consume=True,\n )\n processed += 1\n except Exception as error:\n print(\n \"An error occured while trying to instert the following event: {}\\n.----\\n{}\".format(\n event, error\n )\n )\n\n print(\"Event migration done. Migrated {} of {} events.\\n\".format(processed, count))\n\n\nclass Migration(migrations.Migration):\n # This flag is used to mark that a migration shouldn't be automatically run in\n # production. We set this to True for operations that we think are risky and want\n # someone from ops to run manually and monitor.\n # General advice is that if in doubt, mark your migration as `is_dangerous`.\n # Some things you should always mark as dangerous:\n # - Adding indexes to large tables. These indexes should be created concurrently,\n # unfortunately we can't run migrations outside of a transaction until Django\n # 1.10. So until then these should be run manually.\n # - Large data migrations. Typically we want these to be run manually by ops so that\n # they can be monitored. 
Since data migrations will now hold a transaction open\n # this is even more important.\n # - Adding columns to highly active tables, even ones that are NULL.\n is_dangerous = True\n\n dependencies = [\n (\"sentry\", \"0023_hide_environment_none_20191126\"),\n ]\n\n operations = [\n migrations.RunPython(backfill_eventstream, reverse_code=migrations.RunPython.noop),\n ]\n", "path": "src/sentry/migrations/0024_auto_20191230_2052.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.13 on 2019-12-30 20:52\nfrom __future__ import unicode_literals, print_function\n\nimport os\nimport types\nfrom datetime import timedelta, datetime\n\nfrom django.db import migrations\nfrom django.utils import timezone\n\nfrom sentry import options\nfrom sentry.eventstore.models import Event as NewEvent\n\n\ndef backfill_eventstream(apps, schema_editor):\n \"\"\"\n Inserts Postgres events into the eventstream if there are recent events in Postgres.\n\n This is for open source users migrating from 9.x who want to keep their events.\n If there are no recent events in Postgres, skip the backfill.\n \"\"\"\n from sentry import eventstore, eventstream\n from sentry.utils.query import RangeQuerySetWrapper\n\n Event = apps.get_model(\"sentry\", \"Event\")\n Group = apps.get_model(\"sentry\", \"Group\")\n Project = apps.get_model(\"sentry\", \"Project\")\n\n # Kill switch to skip this migration\n skip_backfill = os.environ.get(\"SENTRY_SKIP_EVENTS_BACKFILL_FOR_10\", False)\n\n # Use 90 day retention if the option has not been set or set to 0\n DEFAULT_RETENTION = 90\n retention_days = options.get(\"system.event-retention-days\") or DEFAULT_RETENTION\n\n def get_events(last_days):\n to_date = timezone.now()\n from_date = to_date - timedelta(days=last_days)\n return Event.objects.filter(\n datetime__gte=from_date, datetime__lte=to_date, group_id__isnull=False\n )\n\n def _attach_related(_events):\n project_ids = set()\n group_ids = set()\n for event in _events:\n project_ids.add(event.project_id)\n group_ids.add(event.group_id)\n projects = {p.id: p for p in Project.objects.filter(id__in=project_ids)}\n groups = {g.id: g for g in Group.objects.filter(id__in=group_ids)}\n\n for event in _events:\n event.project = projects.get(event.project_id)\n event.group = groups.get(event.group_id)\n eventstore.bind_nodes(_events, \"data\")\n\n if skip_backfill:\n print(\"Skipping backfill.\\n\")\n return\n\n events = get_events(retention_days)\n count = events.count()\n\n if count == 0:\n print(\"Nothing to do, skipping migration.\\n\")\n return\n\n print(\"Events to process: {}\\n\".format(count))\n\n processed = 0\n for e in RangeQuerySetWrapper(events, step=100, callbacks=(_attach_related,)):\n event = NewEvent(\n project_id=e.project_id, event_id=e.event_id, group_id=e.group_id, data=e.data.data\n )\n primary_hash = event.get_primary_hash()\n if event.project is None or event.group is None:\n print(\"Skipped {} as group or project information is invalid.\\n\".format(event))\n continue\n\n try:\n eventstream.insert(\n group=event.group,\n event=event,\n is_new=False,\n is_regression=False,\n is_new_group_environment=False,\n primary_hash=primary_hash,\n received_timestamp=event.data.get(\"received\")\n or float(event.datetime.strftime(\"%s\")),\n skip_consume=True,\n )\n processed += 1\n except Exception as error:\n print(\n \"An error occured while trying to instert the following event: {}\\n.----\\n{}\".format(\n event, error\n )\n )\n\n print(\"Event migration done. 
Migrated {} of {} events.\\n\".format(processed, count))\n\n\nclass Migration(migrations.Migration):\n # This flag is used to mark that a migration shouldn't be automatically run in\n # production. We set this to True for operations that we think are risky and want\n # someone from ops to run manually and monitor.\n # General advice is that if in doubt, mark your migration as `is_dangerous`.\n # Some things you should always mark as dangerous:\n # - Adding indexes to large tables. These indexes should be created concurrently,\n # unfortunately we can't run migrations outside of a transaction until Django\n # 1.10. So until then these should be run manually.\n # - Large data migrations. Typically we want these to be run manually by ops so that\n # they can be monitored. Since data migrations will now hold a transaction open\n # this is even more important.\n # - Adding columns to highly active tables, even ones that are NULL.\n is_dangerous = True\n\n dependencies = [\n (\"sentry\", \"0023_hide_environment_none_20191126\"),\n ]\n\n operations = [\n migrations.RunPython(backfill_eventstream, reverse_code=migrations.RunPython.noop),\n ]\n", "path": "src/sentry/migrations/0024_auto_20191230_2052.py"}]} | 1,894 | 181 |
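The failure in this row, `insert() takes at least 8 arguments (8 given)`, means `eventstream.insert` requires a keyword the migration was not passing; the patch supplies `received_timestamp`, preferring the event payload's `received` field and falling back to the row's datetime. A standalone sketch of that fallback (not Sentry code; `.timestamp()` stands in for the patch's platform-dependent `strftime("%s")`):

```python
from datetime import datetime, timezone

def received_timestamp(event_data, event_datetime):
    # prefer the 'received' value stored with the event payload,
    # otherwise derive a Unix timestamp from the row's datetime
    return event_data.get("received") or event_datetime.timestamp()

dt = datetime(2019, 12, 30, 20, 52, tzinfo=timezone.utc)
print(received_timestamp({}, dt))                           # fallback path
print(received_timestamp({"received": 1577739120.0}, dt))  # stored value wins
```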
gh_patches_debug_38940 | rasdani/github-patches | git_diff | streamlink__streamlink-205 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
picarto updated streamlink no longer works
Hey guys picarto no longer works because they said they updated the player so html5 can be default soon.
when you run the program it says found matching plugin picarto for url https:// https://picarto.tv/picknamehere
then the it says error: no stream on this URL: https://picarto.tv/picknamehere.
thanks guys for the awesome program hopefully it gets solved soon!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/picarto.py`
Content:
```
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.plugin.api import http
5 from streamlink.stream import RTMPStream
6
7 API_CHANNEL_INFO = "https://picarto.tv/process/channel"
8 RTMP_URL = "rtmp://{}:1935/play/"
9 RTMP_PLAYPATH = "golive+{}?token={}"
10
11 _url_re = re.compile(r"""
12 https?://(\w+\.)?picarto\.tv/[^&?/]
13 """, re.VERBOSE)
14
15 _channel_casing_re = re.compile(r"""
16 <script>placeStreamChannel(Flash)?\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\);</script>
17 """, re.VERBOSE)
18
19
20 class Picarto(Plugin):
21 @classmethod
22 def can_handle_url(self, url):
23 return _url_re.match(url)
24
25 def _get_streams(self):
26 page_res = http.get(self.url)
27 match = _channel_casing_re.search(page_res.text)
28
29 if not match:
30 return {}
31
32 channel = match.group("channel")
33 visibility = match.group("visibility")
34
35 channel_server_res = http.post(API_CHANNEL_INFO, data={
36 "loadbalancinginfo": channel
37 })
38
39 streams = {}
40 streams["live"] = RTMPStream(self.session, {
41 "rtmp": RTMP_URL.format(channel_server_res.text),
42 "playpath": RTMP_PLAYPATH.format(channel, visibility),
43 "pageUrl": self.url,
44 "live": True
45 })
46 return streams
47
48 __plugin__ = Picarto
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/picarto.py b/src/streamlink/plugins/picarto.py
--- a/src/streamlink/plugins/picarto.py
+++ b/src/streamlink/plugins/picarto.py
@@ -2,47 +2,69 @@
from streamlink.plugin import Plugin
from streamlink.plugin.api import http
+from streamlink.stream import HLSStream
from streamlink.stream import RTMPStream
API_CHANNEL_INFO = "https://picarto.tv/process/channel"
RTMP_URL = "rtmp://{}:1935/play/"
RTMP_PLAYPATH = "golive+{}?token={}"
+HLS_URL = "https://{}/hls/{}/index.m3u8?token={}"
_url_re = re.compile(r"""
https?://(\w+\.)?picarto\.tv/[^&?/]
""", re.VERBOSE)
+# placeStream(channel, playerID, product, offlineImage, online, token, tech)
_channel_casing_re = re.compile(r"""
- <script>placeStreamChannel(Flash)?\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\);</script>
+ <script>\s*placeStream\s*\((.*?)\);?\s*</script>
""", re.VERBOSE)
class Picarto(Plugin):
@classmethod
- def can_handle_url(self, url):
- return _url_re.match(url)
+ def can_handle_url(cls, url):
+ return _url_re.match(url) is not None
+
+ @staticmethod
+ def _get_stream_arguments(page):
+ match = _channel_casing_re.search(page.text)
+ if not match:
+ raise ValueError
+
+ # transform the arguments
+ channel, player_id, product, offline_image, online, visibility, is_flash = \
+ map(lambda a: a.strip("' \""), match.group(1).split(","))
+ player_id, product, offline_image, online, is_flash = \
+ map(lambda a: bool(int(a)), [player_id, product, offline_image, online, is_flash])
+
+ return channel, player_id, product, offline_image, online, visibility, is_flash
def _get_streams(self):
- page_res = http.get(self.url)
- match = _channel_casing_re.search(page_res.text)
+ page = http.get(self.url)
- if not match:
- return {}
+ try:
+ channel, _, _, _, online, visibility, is_flash = self._get_stream_arguments(page)
+ except ValueError:
+ return
- channel = match.group("channel")
- visibility = match.group("visibility")
+ if not online:
+ self.logger.error("This stream is currently offline")
+ return
channel_server_res = http.post(API_CHANNEL_INFO, data={
"loadbalancinginfo": channel
})
- streams = {}
- streams["live"] = RTMPStream(self.session, {
- "rtmp": RTMP_URL.format(channel_server_res.text),
- "playpath": RTMP_PLAYPATH.format(channel, visibility),
- "pageUrl": self.url,
- "live": True
- })
- return streams
+ if is_flash:
+ return {"live": RTMPStream(self.session, {
+ "rtmp": RTMP_URL.format(channel_server_res.text),
+ "playpath": RTMP_PLAYPATH.format(channel, visibility),
+ "pageUrl": self.url,
+ "live": True
+ })}
+ else:
+ return HLSStream.parse_variant_playlist(self.session,
+ HLS_URL.format(channel_server_res.text, channel, visibility),
+ verify=False)
__plugin__ = Picarto
| {"golden_diff": "diff --git a/src/streamlink/plugins/picarto.py b/src/streamlink/plugins/picarto.py\n--- a/src/streamlink/plugins/picarto.py\n+++ b/src/streamlink/plugins/picarto.py\n@@ -2,47 +2,69 @@\n \n from streamlink.plugin import Plugin\n from streamlink.plugin.api import http\n+from streamlink.stream import HLSStream\n from streamlink.stream import RTMPStream\n \n API_CHANNEL_INFO = \"https://picarto.tv/process/channel\"\n RTMP_URL = \"rtmp://{}:1935/play/\"\n RTMP_PLAYPATH = \"golive+{}?token={}\"\n+HLS_URL = \"https://{}/hls/{}/index.m3u8?token={}\"\n \n _url_re = re.compile(r\"\"\"\n https?://(\\w+\\.)?picarto\\.tv/[^&?/]\n \"\"\", re.VERBOSE)\n \n+# placeStream(channel, playerID, product, offlineImage, online, token, tech)\n _channel_casing_re = re.compile(r\"\"\"\n- <script>placeStreamChannel(Flash)?\\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\\);</script>\n+ <script>\\s*placeStream\\s*\\((.*?)\\);?\\s*</script>\n \"\"\", re.VERBOSE)\n \n \n class Picarto(Plugin):\n @classmethod\n- def can_handle_url(self, url):\n- return _url_re.match(url)\n+ def can_handle_url(cls, url):\n+ return _url_re.match(url) is not None\n+\n+ @staticmethod\n+ def _get_stream_arguments(page):\n+ match = _channel_casing_re.search(page.text)\n+ if not match:\n+ raise ValueError\n+\n+ # transform the arguments\n+ channel, player_id, product, offline_image, online, visibility, is_flash = \\\n+ map(lambda a: a.strip(\"' \\\"\"), match.group(1).split(\",\"))\n+ player_id, product, offline_image, online, is_flash = \\\n+ map(lambda a: bool(int(a)), [player_id, product, offline_image, online, is_flash])\n+\n+ return channel, player_id, product, offline_image, online, visibility, is_flash\n \n def _get_streams(self):\n- page_res = http.get(self.url)\n- match = _channel_casing_re.search(page_res.text)\n+ page = http.get(self.url)\n \n- if not match:\n- return {}\n+ try:\n+ channel, _, _, _, online, visibility, is_flash = self._get_stream_arguments(page)\n+ except ValueError:\n+ return\n \n- channel = match.group(\"channel\")\n- visibility = match.group(\"visibility\")\n+ if not online:\n+ self.logger.error(\"This stream is currently offline\")\n+ return\n \n channel_server_res = http.post(API_CHANNEL_INFO, data={\n \"loadbalancinginfo\": channel\n })\n \n- streams = {}\n- streams[\"live\"] = RTMPStream(self.session, {\n- \"rtmp\": RTMP_URL.format(channel_server_res.text),\n- \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n- \"pageUrl\": self.url,\n- \"live\": True\n- })\n- return streams\n+ if is_flash:\n+ return {\"live\": RTMPStream(self.session, {\n+ \"rtmp\": RTMP_URL.format(channel_server_res.text),\n+ \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n+ \"pageUrl\": self.url,\n+ \"live\": True\n+ })}\n+ else:\n+ return HLSStream.parse_variant_playlist(self.session,\n+ HLS_URL.format(channel_server_res.text, channel, visibility),\n+ verify=False)\n \n __plugin__ = Picarto\n", "issue": "picarto updated streamlink no longer works\nHey guys picarto no longer works because they said they updated the player so html5 can be default soon.\r\nwhen you run the program it says found matching plugin picarto for url https:// https://picarto.tv/picknamehere\r\nthen the it says error: no stream on this URL: https://picarto.tv/picknamehere.\r\nthanks guys for the awesome program hopefully it gets solved soon!\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import 
RTMPStream\n\nAPI_CHANNEL_INFO = \"https://picarto.tv/process/channel\"\nRTMP_URL = \"rtmp://{}:1935/play/\"\nRTMP_PLAYPATH = \"golive+{}?token={}\"\n\n_url_re = re.compile(r\"\"\"\n https?://(\\w+\\.)?picarto\\.tv/[^&?/]\n\"\"\", re.VERBOSE)\n\n_channel_casing_re = re.compile(r\"\"\"\n <script>placeStreamChannel(Flash)?\\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\\);</script>\n\"\"\", re.VERBOSE)\n\n\nclass Picarto(Plugin):\n @classmethod\n def can_handle_url(self, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n page_res = http.get(self.url)\n match = _channel_casing_re.search(page_res.text)\n\n if not match:\n return {}\n\n channel = match.group(\"channel\")\n visibility = match.group(\"visibility\")\n\n channel_server_res = http.post(API_CHANNEL_INFO, data={\n \"loadbalancinginfo\": channel\n })\n\n streams = {}\n streams[\"live\"] = RTMPStream(self.session, {\n \"rtmp\": RTMP_URL.format(channel_server_res.text),\n \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n \"pageUrl\": self.url,\n \"live\": True\n })\n return streams\n\n__plugin__ = Picarto\n", "path": "src/streamlink/plugins/picarto.py"}], "after_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream\nfrom streamlink.stream import RTMPStream\n\nAPI_CHANNEL_INFO = \"https://picarto.tv/process/channel\"\nRTMP_URL = \"rtmp://{}:1935/play/\"\nRTMP_PLAYPATH = \"golive+{}?token={}\"\nHLS_URL = \"https://{}/hls/{}/index.m3u8?token={}\"\n\n_url_re = re.compile(r\"\"\"\n https?://(\\w+\\.)?picarto\\.tv/[^&?/]\n\"\"\", re.VERBOSE)\n\n# placeStream(channel, playerID, product, offlineImage, online, token, tech)\n_channel_casing_re = re.compile(r\"\"\"\n <script>\\s*placeStream\\s*\\((.*?)\\);?\\s*</script>\n\"\"\", re.VERBOSE)\n\n\nclass Picarto(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url) is not None\n\n @staticmethod\n def _get_stream_arguments(page):\n match = _channel_casing_re.search(page.text)\n if not match:\n raise ValueError\n\n # transform the arguments\n channel, player_id, product, offline_image, online, visibility, is_flash = \\\n map(lambda a: a.strip(\"' \\\"\"), match.group(1).split(\",\"))\n player_id, product, offline_image, online, is_flash = \\\n map(lambda a: bool(int(a)), [player_id, product, offline_image, online, is_flash])\n\n return channel, player_id, product, offline_image, online, visibility, is_flash\n\n def _get_streams(self):\n page = http.get(self.url)\n\n try:\n channel, _, _, _, online, visibility, is_flash = self._get_stream_arguments(page)\n except ValueError:\n return\n\n if not online:\n self.logger.error(\"This stream is currently offline\")\n return\n\n channel_server_res = http.post(API_CHANNEL_INFO, data={\n \"loadbalancinginfo\": channel\n })\n\n if is_flash:\n return {\"live\": RTMPStream(self.session, {\n \"rtmp\": RTMP_URL.format(channel_server_res.text),\n \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n \"pageUrl\": self.url,\n \"live\": True\n })}\n else:\n return HLSStream.parse_variant_playlist(self.session,\n HLS_URL.format(channel_server_res.text, channel, visibility),\n verify=False)\n\n__plugin__ = Picarto\n", "path": "src/streamlink/plugins/picarto.py"}]} | 796 | 826 |
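This row's patch replaces matching on the retired `placeStreamChannel(Flash)` markup with parsing the new `placeStream(...)` call, then branches to RTMP for Flash viewers and an HLS variant playlist otherwise. A standalone sketch of just the argument parsing (the HTML snippet is fabricated for illustration):

```python
import re

_place_stream_re = re.compile(r"<script>\s*placeStream\s*\((.*?)\);?\s*</script>")

# fabricated page snippet; a real page embeds this call with live values
html = "<script>placeStream('somechannel', 1, 0, 0, 1, 'public', 0);</script>"

args = [a.strip("' \"") for a in _place_stream_re.search(html).group(1).split(",")]
channel, player_id, product, offline_image, online, visibility, is_flash = args
online, is_flash = bool(int(online)), bool(int(is_flash))
print(channel, visibility, "rtmp" if is_flash else "hls", "online" if online else "offline")
```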
gh_patches_debug_4348 | rasdani/github-patches | git_diff | pwndbg__pwndbg-747 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bad unsigned casting
### Description
`pwndbg.memory.u` returns signed integers (with minus `-` sign).
### Steps to reproduce
```c
#include <stdio.h>
#include <stdint.h>
int main(int argc, char const *argv[])
{
uint64_t x = 0xb60ad86e8fb52ea8;
printf("%p\n", &x);
getc(stdin);
return 0;
}
```
```
clang bad_u.c -g -o bad_u
gdb ./bad_u
pwndbg> x/xg 0x7fffffffab18
0x7fffffffab18: 0xb60ad86e8fb52ea8
pwndbg> python-interactive
>>> pwndbg.memory.u(0x7fffffffab18)
-5329209239670542680
```
Idk why it doesn't break pwndbg visibly. Found it while running `vis_heap_chunks` on arbitrary addresses (the minus signs were printed in a few places).
### My setup
```
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
python: 3.6.9 (default, Nov 7 2019, 10:44:02)
pwndbg: dev branch
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/inthook.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 This hook is necessary for compatibility with Python2.7 versions of GDB
5 since they cannot directly cast to integer a gdb.Value object that is
6 not already an integer type.
7 """
8 from __future__ import absolute_import
9 from __future__ import division
10 from __future__ import print_function
11 from __future__ import unicode_literals
12
13 import enum
14 import os
15
16 import gdb
17 import six
18 from future.utils import with_metaclass
19
20 import pwndbg.typeinfo
21
22 if six.PY2:
23 import __builtin__ as builtins
24 else:
25 import builtins
26
27 _int = builtins.int
28
29
30 # We need this class to get isinstance(7, xint) to return True
31 class IsAnInt(type):
32 def __instancecheck__(self, other):
33 return isinstance(other, _int)
34
35
36 class xint(with_metaclass(IsAnInt, builtins.int)):
37 def __new__(cls, value, *a, **kw):
38 if isinstance(value, gdb.Value):
39 if pwndbg.typeinfo.is_pointer(value):
40 value = value.cast(pwndbg.typeinfo.size_t)
41 else:
42 value = value.cast(pwndbg.typeinfo.ssize_t)
43
44 elif isinstance(value, gdb.Symbol):
45 symbol = value
46 value = symbol.value()
47 if symbol.is_function:
48 value = value.cast(pwndbg.typeinfo.size_t)
49
50 elif not isinstance(value, (six.string_types, six.integer_types)) \
51 or isinstance(cls, enum.EnumMeta):
52 # without check for EnumMeta math operations with enums were failing e.g.:
53 # pwndbg> py import re; flags = 1 | re.MULTILINE
54 return _int.__new__(cls, value, *a, **kw)
55
56 return _int(_int(value, *a, **kw))
57
58 # Do not hook 'int' if we are just generating documentation
59 if os.environ.get('SPHINX', None) is None:
60 builtins.int = xint
61 globals()['int'] = xint
62 if six.PY3:
63 builtins.long = xint
64 globals()['long'] = xint
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/inthook.py b/pwndbg/inthook.py
--- a/pwndbg/inthook.py
+++ b/pwndbg/inthook.py
@@ -39,7 +39,7 @@
if pwndbg.typeinfo.is_pointer(value):
value = value.cast(pwndbg.typeinfo.size_t)
else:
- value = value.cast(pwndbg.typeinfo.ssize_t)
+ return _int.__new__(cls, value, *a, **kw)
elif isinstance(value, gdb.Symbol):
symbol = value
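For context, the wrong value in the report is ordinary two's-complement reinterpretation: casting through `ssize_t` treats the top bit of the 64-bit pattern as a sign bit. A minimal sketch of that relationship in plain Python, using the exact value from the issue (no gdb required):
```python
raw = 0xb60ad86e8fb52ea8  # 64-bit pattern stored by the C program

# Signed interpretation (what the old ssize_t cast produced):
signed = raw - (1 << 64) if raw >= (1 << 63) else raw
assert signed == -5329209239670542680  # the value the issue reports

# Recovering the unsigned view that memory.u should return:
assert signed & ((1 << 64) - 1) == raw
```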
| {"golden_diff": "diff --git a/pwndbg/inthook.py b/pwndbg/inthook.py\n--- a/pwndbg/inthook.py\n+++ b/pwndbg/inthook.py\n@@ -39,7 +39,7 @@\n if pwndbg.typeinfo.is_pointer(value):\n value = value.cast(pwndbg.typeinfo.size_t)\n else:\n- value = value.cast(pwndbg.typeinfo.ssize_t)\n+ return _int.__new__(cls, value, *a, **kw)\n \n elif isinstance(value, gdb.Symbol):\n symbol = value\n", "issue": "Bad unsigned casting\n### Description\r\n\r\n`pwndbg.memory.u` returns signed integers (with minus `-` sign).\r\n\r\n### Steps to reproduce\r\n\r\n\r\n```c\r\n#include <stdio.h>\r\n#include <stdint.h>\r\n\r\nint main(int argc, char const *argv[])\r\n{\r\n uint64_t x = 0xb60ad86e8fb52ea8;\r\n printf(\"%p\\n\", &x);\r\n getc(stdin);\r\n return 0;\r\n}\r\n```\r\n\r\n```\r\nclang bad_u.c -g -o bad_u\r\ngdb ./bad_u\r\n\r\npwndbg> x/xg 0x7fffffffab18\r\n0x7fffffffab18:\t0xb60ad86e8fb52ea8\r\npwndbg> python-interactive \r\n>>> pwndbg.memory.u(0x7fffffffab18)\r\n-5329209239670542680\r\n```\r\n\r\nIdk why it doesn't break the pwndbg visibly. Found it running `vis_heap_chunks` on arbitrary addresses (the minus were printed in few places).\r\n\r\n### My setup\r\n\r\n```\r\nGNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git\r\npython: 3.6.9 (default, Nov 7 2019, 10:44:02)\r\npwndbg: dev branch\r\n```\nBad unsigned casting\n### Description\r\n\r\n`pwndbg.memory.u` returns signed integers (with minus `-` sign).\r\n\r\n### Steps to reproduce\r\n\r\n\r\n```c\r\n#include <stdio.h>\r\n#include <stdint.h>\r\n\r\nint main(int argc, char const *argv[])\r\n{\r\n uint64_t x = 0xb60ad86e8fb52ea8;\r\n printf(\"%p\\n\", &x);\r\n getc(stdin);\r\n return 0;\r\n}\r\n```\r\n\r\n```\r\nclang bad_u.c -g -o bad_u\r\ngdb ./bad_u\r\n\r\npwndbg> x/xg 0x7fffffffab18\r\n0x7fffffffab18:\t0xb60ad86e8fb52ea8\r\npwndbg> python-interactive \r\n>>> pwndbg.memory.u(0x7fffffffab18)\r\n-5329209239670542680\r\n```\r\n\r\nIdk why it doesn't break the pwndbg visibly. 
Found it running `vis_heap_chunks` on arbitrary addresses (the minus were printed in few places).\r\n\r\n### My setup\r\n\r\n```\r\nGNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git\r\npython: 3.6.9 (default, Nov 7 2019, 10:44:02)\r\npwndbg: dev branch\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nThis hook is necessary for compatibility with Python2.7 versions of GDB\nsince they cannot directly cast to integer a gdb.Value object that is\nnot already an integer type.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport enum\nimport os\n\nimport gdb\nimport six\nfrom future.utils import with_metaclass\n\nimport pwndbg.typeinfo\n\nif six.PY2:\n import __builtin__ as builtins\nelse:\n import builtins\n\n_int = builtins.int\n\n\n# We need this class to get isinstance(7, xint) to return True\nclass IsAnInt(type):\n def __instancecheck__(self, other):\n return isinstance(other, _int)\n\n\nclass xint(with_metaclass(IsAnInt, builtins.int)):\n def __new__(cls, value, *a, **kw):\n if isinstance(value, gdb.Value):\n if pwndbg.typeinfo.is_pointer(value):\n value = value.cast(pwndbg.typeinfo.size_t)\n else:\n value = value.cast(pwndbg.typeinfo.ssize_t)\n\n elif isinstance(value, gdb.Symbol):\n symbol = value\n value = symbol.value()\n if symbol.is_function:\n value = value.cast(pwndbg.typeinfo.size_t)\n\n elif not isinstance(value, (six.string_types, six.integer_types)) \\\n or isinstance(cls, enum.EnumMeta):\n # without check for EnumMeta math operations with enums were failing e.g.:\n # pwndbg> py import re; flags = 1 | re.MULTILINE\n return _int.__new__(cls, value, *a, **kw)\n\n return _int(_int(value, *a, **kw))\n\n# Do not hook 'int' if we are just generating documentation\nif os.environ.get('SPHINX', None) is None:\n builtins.int = xint\n globals()['int'] = xint\n if six.PY3:\n builtins.long = xint\n globals()['long'] = xint\n", "path": "pwndbg/inthook.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nThis hook is necessary for compatibility with Python2.7 versions of GDB\nsince they cannot directly cast to integer a gdb.Value object that is\nnot already an integer type.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport enum\nimport os\n\nimport gdb\nimport six\nfrom future.utils import with_metaclass\n\nimport pwndbg.typeinfo\n\nif six.PY2:\n import __builtin__ as builtins\nelse:\n import builtins\n\n_int = builtins.int\n\n\n# We need this class to get isinstance(7, xint) to return True\nclass IsAnInt(type):\n def __instancecheck__(self, other):\n return isinstance(other, _int)\n\n\nclass xint(with_metaclass(IsAnInt, builtins.int)):\n def __new__(cls, value, *a, **kw):\n if isinstance(value, gdb.Value):\n if pwndbg.typeinfo.is_pointer(value):\n value = value.cast(pwndbg.typeinfo.size_t)\n else:\n return _int.__new__(cls, value, *a, **kw)\n\n elif isinstance(value, gdb.Symbol):\n symbol = value\n value = symbol.value()\n if symbol.is_function:\n value = value.cast(pwndbg.typeinfo.size_t)\n\n elif not isinstance(value, (six.string_types, six.integer_types)) \\\n or isinstance(cls, enum.EnumMeta):\n # without check for EnumMeta math operations with enums were failing e.g.:\n # pwndbg> py import re; flags = 1 | re.MULTILINE\n return _int.__new__(cls, value, *a, **kw)\n\n 
return _int(_int(value, *a, **kw))\n\n# Do not hook 'int' if we are just generating documentation\nif os.environ.get('SPHINX', None) is None:\n builtins.int = xint\n globals()['int'] = xint\n if six.PY3:\n builtins.long = xint\n globals()['long'] = xint\n", "path": "pwndbg/inthook.py"}]} | 1,488 | 126 |
gh_patches_debug_2300 | rasdani/github-patches | git_diff | pytorch__torchdynamo-1012 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dynamo WONT CONVERT for is_fx_tracing()
Probably the same as #1009. Repro:
```
import torchdynamo
from torch.fx._symbolic_trace import is_fx_tracing
def my_compiler(gm, inputs):
return gm.forward
@torchdynamo.optimize(my_compiler)
def fn(x, y):
if is_fx_tracing():
return x
else:
return y
fn(1, 2)
```
returns
```
torchdynamo.convert_frame: [ERROR] WON'T CONVERT fn /private/home/suo/scratch/test.py line 8
due to:
Traceback (most recent call last):
File "/raid/suo/torchdynamo/torchdynamo/variables/tensor.py", line 258, in create
assert (
AssertionError: torch.* op returned non-Tensor bool call_function <function is_fx_tracing at 0x7f08b681e700>
from user code:
File "/private/home/suo/scratch/test.py", line 10, in fn
if is_fx_tracing():
Set torchdynamo.config.verbose=True for more information
==========
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchdynamo/config.py`
Content:
```
1 import logging
2 import os
3 import sys
4 from os.path import abspath
5 from os.path import dirname
6 from types import ModuleType
7
8 import torch
9
10 try:
11 import torch._prims
12 import torch._refs
13
14 HAS_REFS_PRIMS = True
15 except ImportError:
16 HAS_REFS_PRIMS = False
17
18
19 class AccessLimitingConfig(ModuleType):
20 # log level (levels print what it says + all levels listed below it)
21 # DEBUG print full traces <-- lowest level + print tracing of every instruction
22 # INFO print compiled functions + graphs
23 # WARN print warnings (including graph breaks)
24 # ERROR print exceptions (and what user code was being processed when it occurred)
25 log_level = logging.WARNING
26 # Verbose will print full stack traces on warnings and errors
27 verbose = False
28
29 # verify the correctness of optimized backend
30 verify_correctness = False
31
32 # need this many ops to create an FX graph
33 minimum_call_count = 1
34
35 # turn on/off DCE pass
36 dead_code_elimination = True
37
38 # disable (for a function) when cache reaches this size
39 cache_size_limit = 64
40
41 # specializing int/float by default
42 specialize_int_float = True
43
44 # Assume these functions return constants
45 constant_functions = {
46 torch.jit.is_scripting: False,
47 torch.jit.is_tracing: False,
48 torch._C._get_tracing_state: None,
49 }
50
51 # root folder of the project
52 base_dir = dirname(dirname(abspath(__file__)))
53
54 # don't specialize on shapes and strides and put shape ops in graph
55 dynamic_shapes = os.environ.get("TORCHDYNAMO_DYNAMIC_SHAPES") == "1"
56
57 # Set this to False to assume nn.Modules() contents are immutable (similar assumption as freezing)
58 guard_nn_modules = False
59
60 # Run the FX graph as it is created to get better type information
61 dynamic_propagation = True
62
63 # Run the FX graph with FakeTensors
64 fake_tensor_propagation = True
65
66 # run FX normalization passes in optimizer
67 normalize_ir = True
68
69 # If a tensor subclass type is in this set, torchdynamo will inline the
70 # __torch_function__ logic of the subclass.
71 traceable_tensor_subclasses = set()
72
73 # Raise torchdynamo internal assertions
74 raise_on_assertion_error = False
75
76 # Propagate backend exceptions up to torchdynamo.optimize
77 raise_on_backend_error = True
78
79 # If a PyTorch module is in this allowlist, torchdynamo will be allowed
80 # to inline objects from it or its children.
81 skipfiles_inline_module_allowlist = {torch.nn, torch.distributions}
82 if HAS_REFS_PRIMS:
83 skipfiles_inline_module_allowlist |= {
84 torch._refs,
85 torch._prims,
86 torch._decomp,
87 }
88
89 # If a string representing a PyTorch module is in this ignorelist,
90 # the `allowed_functions.is_allowed` function will not consider it
91 # when creating a list of PyTorch functions that will appear in
92 # FX IR.
93 allowed_functions_module_string_ignorelist = {
94 "torch.distributions",
95 "torch.testing",
96 "torch._refs",
97 "torch._prims",
98 "torch._decomp",
99 }
100
101 # Compiler compilation debug info
102 # 0: Nothing printed out when compilation fails
103 # 1: Dump the graph out to repro.py if compilation fails
104 # 2: Dumps the graph out to minify_repro.py with a minifier if compilation fails
105 # 3: Always dumps the last graph ran out to minify_repro.py, useful for segfaults/irrecoverable errors
106 repro_level = int(os.environ.get("COMPILER_REPRO_LEVEL", 0))
107
108 # Not all backends support scalars. Some calls on torch.Tensor (like .item()) return a scalar type.
109 # When this flag is set to False, we introduce a graph break instead of capturing.
110 capture_scalar_outputs = False
111
112 def __setattr__(self, name, value):
113 if sys.version_info > (3, 8):
114 assert hasattr(
115 self, name
116 ), f"Trying to set {name} - this value does not exist in torchdynamo.config"
117 object.__setattr__(self, name, value)
118
119 def __delattr__(self, name):
120 if sys.version_info > (3, 8):
121 assert hasattr(
122 self, name
123 ), f"Trying to del {name} - this value does not exist in torchdynamo.config"
124 object.__delattr__(self, name)
125
126
127 sys.modules[__name__] = AccessLimitingConfig("config")
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchdynamo/config.py b/torchdynamo/config.py
--- a/torchdynamo/config.py
+++ b/torchdynamo/config.py
@@ -46,6 +46,8 @@
torch.jit.is_scripting: False,
torch.jit.is_tracing: False,
torch._C._get_tracing_state: None,
+ torch.fx._symbolic_trace.is_fx_tracing: False,
+ torch.onnx.is_in_onnx_export: False,
}
# root folder of the project
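For a quick sanity check of the patch, the issue's own repro should now compile without the "WON'T CONVERT" error: with `is_fx_tracing` registered in `constant_functions`, Dynamo folds the call to the constant `False` at trace time instead of trying to emit it as a graph op. A sketch, assuming the patched config:
```python
import torchdynamo
from torch.fx._symbolic_trace import is_fx_tracing

def my_compiler(gm, inputs):
    return gm.forward

@torchdynamo.optimize(my_compiler)
def fn(x, y):
    # Folded to the constant False during tracing, so the else branch is taken.
    return x if is_fx_tracing() else y

assert fn(1, 2) == 2
```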
| {"golden_diff": "diff --git a/torchdynamo/config.py b/torchdynamo/config.py\n--- a/torchdynamo/config.py\n+++ b/torchdynamo/config.py\n@@ -46,6 +46,8 @@\n torch.jit.is_scripting: False,\n torch.jit.is_tracing: False,\n torch._C._get_tracing_state: None,\n+ torch.fx._symbolic_trace.is_fx_tracing: False,\n+ torch.onnx.is_in_onnx_export: False,\n }\n \n # root folder of the project\n", "issue": "Dynamo WONT CONVERT for is_fx_tracing()\nProbably the same as #1009. Repro:\r\n```\r\nimport torchdynamo\r\nfrom torch.fx._symbolic_trace import is_fx_tracing\r\n\r\ndef my_compiler(gm, inputs):\r\n return gm.forward\r\n\r\[email protected](my_compiler)\r\ndef fn(x, y):\r\n if is_fx_tracing():\r\n return x\r\n else:\r\n return y\r\n\r\nfn(1, 2)\r\n```\r\nreturns\r\n```\r\ntorchdynamo.convert_frame: [ERROR] WON'T CONVERT fn /private/home/suo/scratch/test.py line 8\r\ndue to:\r\nTraceback (most recent call last):\r\n File \"/raid/suo/torchdynamo/torchdynamo/variables/tensor.py\", line 258, in create\r\n assert (\r\nAssertionError: torch.* op returned non-Tensor bool call_function <function is_fx_tracing at 0x7f08b681e700>\r\n\r\nfrom user code:\r\n File \"/private/home/suo/scratch/test.py\", line 10, in fn\r\n if is_fx_tracing():\r\n\r\nSet torchdynamo.config.verbose=True for more information\r\n==========\r\n```\n", "before_files": [{"content": "import logging\nimport os\nimport sys\nfrom os.path import abspath\nfrom os.path import dirname\nfrom types import ModuleType\n\nimport torch\n\ntry:\n import torch._prims\n import torch._refs\n\n HAS_REFS_PRIMS = True\nexcept ImportError:\n HAS_REFS_PRIMS = False\n\n\nclass AccessLimitingConfig(ModuleType):\n # log level (levels print what it says + all levels listed below it)\n # DEBUG print full traces <-- lowest level + print tracing of every instruction\n # INFO print compiled functions + graphs\n # WARN print warnings (including graph breaks)\n # ERROR print exceptions (and what user code was being processed when it occurred)\n log_level = logging.WARNING\n # Verbose will print full stack traces on warnings and errors\n verbose = False\n\n # verify the correctness of optimized backend\n verify_correctness = False\n\n # need this many ops to create an FX graph\n minimum_call_count = 1\n\n # turn on/off DCE pass\n dead_code_elimination = True\n\n # disable (for a function) when cache reaches this size\n cache_size_limit = 64\n\n # specializing int/float by default\n specialize_int_float = True\n\n # Assume these functions return constants\n constant_functions = {\n torch.jit.is_scripting: False,\n torch.jit.is_tracing: False,\n torch._C._get_tracing_state: None,\n }\n\n # root folder of the project\n base_dir = dirname(dirname(abspath(__file__)))\n\n # don't specialize on shapes and strides and put shape ops in graph\n dynamic_shapes = os.environ.get(\"TORCHDYNAMO_DYNAMIC_SHAPES\") == \"1\"\n\n # Set this to False to assume nn.Modules() contents are immutable (similar assumption as freezing)\n guard_nn_modules = False\n\n # Run the FX graph as it is created to get better type information\n dynamic_propagation = True\n\n # Run the FX graph with FakeTensors\n fake_tensor_propagation = True\n\n # run FX normalization passes in optimizer\n normalize_ir = True\n\n # If a tensor subclass type is in this set, torchdynamo will inline the\n # __torch_function__ logic of the subclass.\n traceable_tensor_subclasses = set()\n\n # Raise torchdynamo internal assertions\n raise_on_assertion_error = False\n\n # Propagate backend exceptions up to torchdynamo.optimize\n 
raise_on_backend_error = True\n\n # If a PyTorch module is in this allowlist, torchdynamo will be allowed\n # to inline objects from it or its children.\n skipfiles_inline_module_allowlist = {torch.nn, torch.distributions}\n if HAS_REFS_PRIMS:\n skipfiles_inline_module_allowlist |= {\n torch._refs,\n torch._prims,\n torch._decomp,\n }\n\n # If a string representing a PyTorch module is in this ignorelist,\n # the `allowed_functions.is_allowed` function will not consider it\n # when creating a list of PyTorch functions that will appear in\n # FX IR.\n allowed_functions_module_string_ignorelist = {\n \"torch.distributions\",\n \"torch.testing\",\n \"torch._refs\",\n \"torch._prims\",\n \"torch._decomp\",\n }\n\n # Compiler compilation debug info\n # 0: Nothing printed out when compilation fails\n # 1: Dump the graph out to repro.py if compilation fails\n # 2: Dumps the graph out to minify_repro.py with a minifier if compilation fails\n # 3: Always dumps the last graph ran out to minify_repro.py, useful for segfaults/irrecoverable errors\n repro_level = int(os.environ.get(\"COMPILER_REPRO_LEVEL\", 0))\n\n # Not all backends support scalars. Some calls on torch.Tensor (like .item()) return a scalar type.\n # When this flag is set to False, we introduce a graph break instead of capturing.\n capture_scalar_outputs = False\n\n def __setattr__(self, name, value):\n if sys.version_info > (3, 8):\n assert hasattr(\n self, name\n ), f\"Trying to set {name} - this value does not exist in torchdynamo.config\"\n object.__setattr__(self, name, value)\n\n def __delattr__(self, name):\n if sys.version_info > (3, 8):\n assert hasattr(\n self, name\n ), f\"Trying to del {name} - this value does not exist in torchdynamo.config\"\n object.__delattr__(self, name)\n\n\nsys.modules[__name__] = AccessLimitingConfig(\"config\")\n", "path": "torchdynamo/config.py"}], "after_files": [{"content": "import logging\nimport os\nimport sys\nfrom os.path import abspath\nfrom os.path import dirname\nfrom types import ModuleType\n\nimport torch\n\ntry:\n import torch._prims\n import torch._refs\n\n HAS_REFS_PRIMS = True\nexcept ImportError:\n HAS_REFS_PRIMS = False\n\n\nclass AccessLimitingConfig(ModuleType):\n # log level (levels print what it says + all levels listed below it)\n # DEBUG print full traces <-- lowest level + print tracing of every instruction\n # INFO print compiled functions + graphs\n # WARN print warnings (including graph breaks)\n # ERROR print exceptions (and what user code was being processed when it occurred)\n log_level = logging.WARNING\n # Verbose will print full stack traces on warnings and errors\n verbose = False\n\n # verify the correctness of optimized backend\n verify_correctness = False\n\n # need this many ops to create an FX graph\n minimum_call_count = 1\n\n # turn on/off DCE pass\n dead_code_elimination = True\n\n # disable (for a function) when cache reaches this size\n cache_size_limit = 64\n\n # specializing int/float by default\n specialize_int_float = True\n\n # Assume these functions return constants\n constant_functions = {\n torch.jit.is_scripting: False,\n torch.jit.is_tracing: False,\n torch._C._get_tracing_state: None,\n torch.fx._symbolic_trace.is_fx_tracing: False,\n torch.onnx.is_in_onnx_export: False,\n }\n\n # root folder of the project\n base_dir = dirname(dirname(abspath(__file__)))\n\n # don't specialize on shapes and strides and put shape ops in graph\n dynamic_shapes = os.environ.get(\"TORCHDYNAMO_DYNAMIC_SHAPES\") == \"1\"\n\n # Set this to False to assume 
nn.Modules() contents are immutable (similar assumption as freezing)\n guard_nn_modules = False\n\n # Run the FX graph as it is created to get better type information\n dynamic_propagation = True\n\n # Run the FX graph with FakeTensors\n fake_tensor_propagation = True\n\n # run FX normalization passes in optimizer\n normalize_ir = True\n\n # If a tensor subclass type is in this set, torchdynamo will inline the\n # __torch_function__ logic of the subclass.\n traceable_tensor_subclasses = set()\n\n # Raise torchdynamo internal assertions\n raise_on_assertion_error = False\n\n # Propagate backend exceptions up to torchdynamo.optimize\n raise_on_backend_error = True\n\n # If a PyTorch module is in this allowlist, torchdynamo will be allowed\n # to inline objects from it or its children.\n skipfiles_inline_module_allowlist = {torch.nn, torch.distributions}\n if HAS_REFS_PRIMS:\n skipfiles_inline_module_allowlist |= {\n torch._refs,\n torch._prims,\n torch._decomp,\n }\n\n # If a string representing a PyTorch module is in this ignorelist,\n # the `allowed_functions.is_allowed` function will not consider it\n # when creating a list of PyTorch functions that will appear in\n # FX IR.\n allowed_functions_module_string_ignorelist = {\n \"torch.distributions\",\n \"torch.testing\",\n \"torch._refs\",\n \"torch._prims\",\n \"torch._decomp\",\n }\n\n # Compiler compilation debug info\n # 0: Nothing printed out when compilation fails\n # 1: Dump the graph out to repro.py if compilation fails\n # 2: Dumps the graph out to minify_repro.py with a minifier if compilation fails\n # 3: Always dumps the last graph ran out to minify_repro.py, useful for segfaults/irrecoverable errors\n repro_level = int(os.environ.get(\"COMPILER_REPRO_LEVEL\", 0))\n\n # Not all backends support scalars. Some calls on torch.Tensor (like .item()) return a scalar type.\n # When this flag is set to False, we introduce a graph break instead of capturing.\n capture_scalar_outputs = False\n\n def __setattr__(self, name, value):\n if sys.version_info > (3, 8):\n assert hasattr(\n self, name\n ), f\"Trying to set {name} - this value does not exist in torchdynamo.config\"\n object.__setattr__(self, name, value)\n\n def __delattr__(self, name):\n if sys.version_info > (3, 8):\n assert hasattr(\n self, name\n ), f\"Trying to del {name} - this value does not exist in torchdynamo.config\"\n object.__delattr__(self, name)\n\n\nsys.modules[__name__] = AccessLimitingConfig(\"config\")\n", "path": "torchdynamo/config.py"}]} | 1,825 | 119 |
gh_patches_debug_11149 | rasdani/github-patches | git_diff | open-mmlab__mmocr-285 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LineStrParser: separator behaviour
I've a question regarding this snippet of code:
https://github.com/open-mmlab/mmocr/blob/01d8d63be945882fb2d9eaca5e1c1b39cb45f274/mmocr/datasets/utils/parser.py#L33-L36
Is there a particular reason to use these 4 lines of code instead of simply `line_str = line_str.split(self.separator)`?
I'm asking this because for my own use case I have:
- a TSV file with `filename` and `text` as keys for text recognition task
- some blank spaces in `filename` e.g. `my cropped image.png`
Hence, LineStrParser is configured as follows:
```python
parser=dict(
type='LineStrParser',
keys=['filename', 'text'],
keys_idx=[0, 1],
separator='\t'))
```
but with the 4-line code snippet, the line parsing fails. Instead, with simply `line_str = line_str.split(self.separator)`, everything works well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmocr/datasets/utils/parser.py`
Content:
```
1 import json
2
3 from mmocr.datasets.builder import PARSERS
4
5
6 @PARSERS.register_module()
7 class LineStrParser:
8 """Parse string of one line in annotation file to dict format.
9
10 Args:
11 keys (list[str]): Keys in result dict.
12 keys_idx (list[int]): Value index in sub-string list
13 for each key above.
14 separator (str): Separator to separate string to list of sub-string.
15 """
16
17 def __init__(self,
18 keys=['filename', 'text'],
19 keys_idx=[0, 1],
20 separator=' '):
21 assert isinstance(keys, list)
22 assert isinstance(keys_idx, list)
23 assert isinstance(separator, str)
24 assert len(keys) > 0
25 assert len(keys) == len(keys_idx)
26 self.keys = keys
27 self.keys_idx = keys_idx
28 self.separator = separator
29
30 def get_item(self, data_ret, index):
31 map_index = index % len(data_ret)
32 line_str = data_ret[map_index]
33 for split_key in self.separator:
34 if split_key != ' ':
35 line_str = line_str.replace(split_key, ' ')
36 line_str = line_str.split()
37 if len(line_str) <= max(self.keys_idx):
38 raise Exception(
39 f'key index: {max(self.keys_idx)} out of range: {line_str}')
40
41 line_info = {}
42 for i, key in enumerate(self.keys):
43 line_info[key] = line_str[self.keys_idx[i]]
44 return line_info
45
46
47 @PARSERS.register_module()
48 class LineJsonParser:
49 """Parse json-string of one line in annotation file to dict format.
50
51 Args:
52 keys (list[str]): Keys in both json-string and result dict.
53 """
54
55 def __init__(self, keys=[], **kwargs):
56 assert isinstance(keys, list)
57 assert len(keys) > 0
58 self.keys = keys
59
60 def get_item(self, data_ret, index):
61 map_index = index % len(data_ret)
62 line_json_obj = json.loads(data_ret[map_index])
63 line_info = {}
64 for key in self.keys:
65 if key not in line_json_obj:
66 raise Exception(f'key {key} not in line json {line_json_obj}')
67 line_info[key] = line_json_obj[key]
68
69 return line_info
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmocr/datasets/utils/parser.py b/mmocr/datasets/utils/parser.py
--- a/mmocr/datasets/utils/parser.py
+++ b/mmocr/datasets/utils/parser.py
@@ -30,10 +30,7 @@
def get_item(self, data_ret, index):
map_index = index % len(data_ret)
line_str = data_ret[map_index]
- for split_key in self.separator:
- if split_key != ' ':
- line_str = line_str.replace(split_key, ' ')
- line_str = line_str.split()
+ line_str = line_str.split(self.separator)
if len(line_str) <= max(self.keys_idx):
raise Exception(
f'key index: {max(self.keys_idx)} out of range: {line_str}')
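The behavioral difference the reporter describes is easy to reproduce in isolation: replacing the tab with a space and then splitting on any whitespace breaks apart filenames that contain spaces, while splitting on the configured separator keeps them intact. A minimal sketch with an illustrative TSV line:
```python
line = "my cropped image.png\tsome text"

# Old logic: replace the non-space separator with a space, then split on whitespace
old = line.replace("\t", " ").split()
assert old == ["my", "cropped", "image.png", "some", "text"]  # filename destroyed

# Patched logic: split on the separator itself
new = line.split("\t")
assert new == ["my cropped image.png", "some text"]
```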
| {"golden_diff": "diff --git a/mmocr/datasets/utils/parser.py b/mmocr/datasets/utils/parser.py\n--- a/mmocr/datasets/utils/parser.py\n+++ b/mmocr/datasets/utils/parser.py\n@@ -30,10 +30,7 @@\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_str = data_ret[map_index]\n- for split_key in self.separator:\n- if split_key != ' ':\n- line_str = line_str.replace(split_key, ' ')\n- line_str = line_str.split()\n+ line_str = line_str.split(self.separator)\n if len(line_str) <= max(self.keys_idx):\n raise Exception(\n f'key index: {max(self.keys_idx)} out of range: {line_str}')\n", "issue": "LineStrParser: separator behaviour\nI've a question regarding this snippet of code:\r\nhttps://github.com/open-mmlab/mmocr/blob/01d8d63be945882fb2d9eaca5e1c1b39cb45f274/mmocr/datasets/utils/parser.py#L33-L36\r\n\r\nIs there a particular reason to use these 4 lines of code instead of simply `line_str = line_str.split(self.separator)`?\r\n\r\nI'm asking this because for my own use case I have:\r\n- a TSV file with `filename` and `text` as keys for text recognition task\r\n- some blank spaces in `filename` e.g. `my cropped image.png`\r\n \r\nHence, LineStrParser is configured as follows:\r\n```python\r\nparser=dict(\r\n type='LineStrParser',\r\n keys=['filename', 'text'],\r\n keys_idx=[0, 1],\r\n separator='\\t'))\r\n```\r\nbut with the 4-lines code snippet, the line parsing fails. Instead, with simply `line_str = line_str.split(self.separator)` everything works well.\n", "before_files": [{"content": "import json\n\nfrom mmocr.datasets.builder import PARSERS\n\n\[email protected]_module()\nclass LineStrParser:\n \"\"\"Parse string of one line in annotation file to dict format.\n\n Args:\n keys (list[str]): Keys in result dict.\n keys_idx (list[int]): Value index in sub-string list\n for each key above.\n separator (str): Separator to separate string to list of sub-string.\n \"\"\"\n\n def __init__(self,\n keys=['filename', 'text'],\n keys_idx=[0, 1],\n separator=' '):\n assert isinstance(keys, list)\n assert isinstance(keys_idx, list)\n assert isinstance(separator, str)\n assert len(keys) > 0\n assert len(keys) == len(keys_idx)\n self.keys = keys\n self.keys_idx = keys_idx\n self.separator = separator\n\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_str = data_ret[map_index]\n for split_key in self.separator:\n if split_key != ' ':\n line_str = line_str.replace(split_key, ' ')\n line_str = line_str.split()\n if len(line_str) <= max(self.keys_idx):\n raise Exception(\n f'key index: {max(self.keys_idx)} out of range: {line_str}')\n\n line_info = {}\n for i, key in enumerate(self.keys):\n line_info[key] = line_str[self.keys_idx[i]]\n return line_info\n\n\[email protected]_module()\nclass LineJsonParser:\n \"\"\"Parse json-string of one line in annotation file to dict format.\n\n Args:\n keys (list[str]): Keys in both json-string and result dict.\n \"\"\"\n\n def __init__(self, keys=[], **kwargs):\n assert isinstance(keys, list)\n assert len(keys) > 0\n self.keys = keys\n\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_json_obj = json.loads(data_ret[map_index])\n line_info = {}\n for key in self.keys:\n if key not in line_json_obj:\n raise Exception(f'key {key} not in line json {line_json_obj}')\n line_info[key] = line_json_obj[key]\n\n return line_info\n", "path": "mmocr/datasets/utils/parser.py"}], "after_files": [{"content": "import json\n\nfrom mmocr.datasets.builder import PARSERS\n\n\[email 
protected]_module()\nclass LineStrParser:\n \"\"\"Parse string of one line in annotation file to dict format.\n\n Args:\n keys (list[str]): Keys in result dict.\n keys_idx (list[int]): Value index in sub-string list\n for each key above.\n separator (str): Separator to separate string to list of sub-string.\n \"\"\"\n\n def __init__(self,\n keys=['filename', 'text'],\n keys_idx=[0, 1],\n separator=' '):\n assert isinstance(keys, list)\n assert isinstance(keys_idx, list)\n assert isinstance(separator, str)\n assert len(keys) > 0\n assert len(keys) == len(keys_idx)\n self.keys = keys\n self.keys_idx = keys_idx\n self.separator = separator\n\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_str = data_ret[map_index]\n line_str = line_str.split(self.separator)\n if len(line_str) <= max(self.keys_idx):\n raise Exception(\n f'key index: {max(self.keys_idx)} out of range: {line_str}')\n\n line_info = {}\n for i, key in enumerate(self.keys):\n line_info[key] = line_str[self.keys_idx[i]]\n return line_info\n\n\[email protected]_module()\nclass LineJsonParser:\n \"\"\"Parse json-string of one line in annotation file to dict format.\n\n Args:\n keys (list[str]): Keys in both json-string and result dict.\n \"\"\"\n\n def __init__(self, keys=[], **kwargs):\n assert isinstance(keys, list)\n assert len(keys) > 0\n self.keys = keys\n\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_json_obj = json.loads(data_ret[map_index])\n line_info = {}\n for key in self.keys:\n if key not in line_json_obj:\n raise Exception(f'key {key} not in line json {line_json_obj}')\n line_info[key] = line_json_obj[key]\n\n return line_info\n", "path": "mmocr/datasets/utils/parser.py"}]} | 1,129 | 171 |
gh_patches_debug_52858 | rasdani/github-patches | git_diff | getsentry__sentry-540 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
celery 3.0 causes import error (cannot import abbrtools from celery.utils)
The release of celery 3.0 causes an import error at runtime upon any request.
This is the stack trace:
```
ImportError: cannot import name abbrtask
Error handling request
Traceback (most recent call last):
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 107, in handle_request
for item in respiter:
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/raven/middleware.py", line 28, in __call__
for event in self.application(environ, start_response):
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 241, in __call__
response = self.get_response(request)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/base.py", line 179, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/base.py", line 224, in handle_uncaught_exception
if resolver.urlconf_module is None:
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/urlresolvers.py", line 323, in urlconf_module
self._urlconf_module = import_module(self.urlconf_name)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/sentry/conf/urls.py", line 19, in <module>
admin.autodiscover()
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/contrib/admin/__init__.py", line 29, in autodiscover
import_module('%s.admin' % app)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Users/guzru/dev/django14/lib/python2.7/site-packages/djcelery/admin.py", line 19, in <module>
from celery.utils import abbrtask
ImportError: cannot import name abbrtask
```
Requirements line for celery should become:
celery>=2.5.3,<3.0.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 """
3 Sentry
4 ======
5
6 Sentry is a realtime event logging and aggregation platform. It specializes
7 in monitoring errors and extracting all the information needed to do a proper
8 post-mortem without any of the hassle of the standard user feedback loop.
9
10 Sentry is a Server
11 ------------------
12
13 The Sentry package, at its core, is just a simple server and web UI. It will
14 handle authentication clients (such as `Raven <https://github.com/dcramer/raven>`_)
15 and all of the logic behind storage and aggregation.
16
17 That said, Sentry is not limited to Python. The primary implementation is in
18 Python, but it contains a full API for sending events from any language, in
19 any application.
20
21 :copyright: (c) 2011-2012 by the Sentry Team, see AUTHORS for more details.
22 :license: BSD, see LICENSE for more details.
23 """
24
25 from setuptools import setup, find_packages
26
27 # Hack to prevent stupid "TypeError: 'NoneType' object is not callable" error
28 # in multiprocessing/util.py _exit_function when running `python
29 # setup.py test` (see
30 # http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
31 try:
32 import multiprocessing
33 except ImportError:
34 pass
35
36 tests_require = [
37 'django-nose==1.1',
38 'eventlet==0.9.16',
39 'nose==1.1.2',
40 'nydus==0.8.2',
41 'mock==0.8.0',
42 'pyflakes',
43 'pep8',
44 'redis',
45 'unittest2',
46 ]
47
48
49 install_requires = [
50 'cssutils>=0.9.9',
51 'BeautifulSoup>=3.2.1',
52 'django-celery>=2.5.5,<3.0',
53 'django-crispy-forms>=1.1.4',
54 'Django>=1.2,<1.5',
55 'django-indexer>=0.3.0',
56 'django-paging>=0.2.4',
57 'django-picklefield>=0.2.0',
58 'django-templatetag-sugar>=0.1.0',
59 'gunicorn>=0.13.4',
60 'logan>=0.3.1',
61 'pynliner>=0.4.0',
62 'python-dateutil>=1.5.0,<2.0.0',
63 'pytz>=2011n',
64 'raven>=2.0.0',
65 'simplejson>=2.3.0,<2.5.0',
66 'South>=0.7',
67 'httpagentparser>=1.0.5'
68 ]
69
70 dependency_links = [
71 'https://github.com/dcramer/pyflakes/tarball/master#egg=pyflakes',
72 ]
73
74 setup(
75 name='sentry',
76 version='4.8.1',
77 author='David Cramer',
78 author_email='[email protected]',
79 url='http://github.com/dcramer/sentry',
80 description='A realtime logging and aggregation server.',
81 long_description=__doc__,
82 packages=find_packages(exclude=['tests']),
83 zip_safe=False,
84 install_requires=install_requires,
85 tests_require=tests_require,
86 extras_require={'test': tests_require},
87 dependency_links=dependency_links,
88 test_suite='runtests.runtests',
89 license='BSD',
90 include_package_data=True,
91 entry_points={
92 'console_scripts': [
93 'sentry = sentry.utils.runner:main',
94 ],
95 },
96 classifiers=[
97 'Framework :: Django',
98 'Intended Audience :: Developers',
99 'Intended Audience :: System Administrators',
100 'Operating System :: OS Independent',
101 'Topic :: Software Development'
102 ],
103 )
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -50,6 +50,7 @@
'cssutils>=0.9.9',
'BeautifulSoup>=3.2.1',
'django-celery>=2.5.5,<3.0',
+ 'celery>=2.5.3,<3.0',
'django-crispy-forms>=1.1.4',
'Django>=1.2,<1.5',
'django-indexer>=0.3.0',
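The added pin works because `celery.utils.abbrtask`, which `djcelery`'s admin module imports (see the traceback), exists only in the celery 2.x series. An illustrative check of the incompatibility, not part of the patch:
```python
# Succeeds on celery < 3.0; raises ImportError on celery >= 3.0,
# which is exactly the failure djcelery/admin.py hits in the traceback.
try:
    from celery.utils import abbrtask  # removed in celery 3.0
except ImportError:
    abbrtask = None  # an incompatible celery install
```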
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -50,6 +50,7 @@\n 'cssutils>=0.9.9',\n 'BeautifulSoup>=3.2.1',\n 'django-celery>=2.5.5,<3.0',\n+ 'celery>=2.5.3,<3.0',\n 'django-crispy-forms>=1.1.4',\n 'Django>=1.2,<1.5',\n 'django-indexer>=0.3.0',\n", "issue": "celery 3.0 causes import error (cannot import abbrtools from celery.utils)\nRelease of celery 3.0 causes an import error at runtime upon any request.\n\nThis is the stack trace:\n\n```\nImportError: cannot import name abbrtask\nError handling request\nTraceback (most recent call last):\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/gunicorn/workers/sync.py\", line 107, in handle_request\n for item in respiter:\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/raven/middleware.py\", line 28, in __call__\n for event in self.application(environ, start_response):\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/wsgi.py\", line 241, in __call__\n response = self.get_response(request)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/base.py\", line 179, in get_response\n response = self.handle_uncaught_exception(request, resolver, sys.exc_info())\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/handlers/base.py\", line 224, in handle_uncaught_exception\n if resolver.urlconf_module is None:\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/core/urlresolvers.py\", line 323, in urlconf_module\n self._urlconf_module = import_module(self.urlconf_name)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/utils/importlib.py\", line 35, in import_module\n __import__(name)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/sentry/conf/urls.py\", line 19, in <module>\n admin.autodiscover()\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/contrib/admin/__init__.py\", line 29, in autodiscover\n import_module('%s.admin' % app)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/django/utils/importlib.py\", line 35, in import_module\n __import__(name)\n File \"/Users/guzru/dev/django14/lib/python2.7/site-packages/djcelery/admin.py\", line 19, in <module>\n from celery.utils import abbrtask\nImportError: cannot import name abbrtask\n```\n\nRequirements line for celery should become:\n\ncelery>=2.5.3,<3.0.0\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nSentry\n======\n\nSentry is a realtime event logging and aggregation platform. It specializes\nin monitoring errors and extracting all the information needed to do a proper\npost-mortem without any of the hassle of the standard user feedback loop.\n\nSentry is a Server\n------------------\n\nThe Sentry package, at its core, is just a simple server and web UI. It will\nhandle authentication clients (such as `Raven <https://github.com/dcramer/raven>`_)\nand all of the logic behind storage and aggregation.\n\nThat said, Sentry is not limited to Python. 
The primary implementation is in\nPython, but it contains a full API for sending events from any language, in\nany application.\n\n:copyright: (c) 2011-2012 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\n\nfrom setuptools import setup, find_packages\n\n# Hack to prevent stupid \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when running `python\n# setup.py test` (see\n# http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing\nexcept ImportError:\n pass\n\ntests_require = [\n 'django-nose==1.1',\n 'eventlet==0.9.16',\n 'nose==1.1.2',\n 'nydus==0.8.2',\n 'mock==0.8.0',\n 'pyflakes',\n 'pep8',\n 'redis',\n 'unittest2',\n]\n\n\ninstall_requires = [\n 'cssutils>=0.9.9',\n 'BeautifulSoup>=3.2.1',\n 'django-celery>=2.5.5,<3.0',\n 'django-crispy-forms>=1.1.4',\n 'Django>=1.2,<1.5',\n 'django-indexer>=0.3.0',\n 'django-paging>=0.2.4',\n 'django-picklefield>=0.2.0',\n 'django-templatetag-sugar>=0.1.0',\n 'gunicorn>=0.13.4',\n 'logan>=0.3.1',\n 'pynliner>=0.4.0',\n 'python-dateutil>=1.5.0,<2.0.0',\n 'pytz>=2011n',\n 'raven>=2.0.0',\n 'simplejson>=2.3.0,<2.5.0',\n 'South>=0.7',\n 'httpagentparser>=1.0.5'\n]\n\ndependency_links = [\n 'https://github.com/dcramer/pyflakes/tarball/master#egg=pyflakes',\n]\n\nsetup(\n name='sentry',\n version='4.8.1',\n author='David Cramer',\n author_email='[email protected]',\n url='http://github.com/dcramer/sentry',\n description='A realtime logging and aggregation server.',\n long_description=__doc__,\n packages=find_packages(exclude=['tests']),\n zip_safe=False,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require={'test': tests_require},\n dependency_links=dependency_links,\n test_suite='runtests.runtests',\n license='BSD',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'sentry = sentry.utils.runner:main',\n ],\n },\n classifiers=[\n 'Framework :: Django',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Operating System :: OS Independent',\n 'Topic :: Software Development'\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nSentry\n======\n\nSentry is a realtime event logging and aggregation platform. It specializes\nin monitoring errors and extracting all the information needed to do a proper\npost-mortem without any of the hassle of the standard user feedback loop.\n\nSentry is a Server\n------------------\n\nThe Sentry package, at its core, is just a simple server and web UI. It will\nhandle authentication clients (such as `Raven <https://github.com/dcramer/raven>`_)\nand all of the logic behind storage and aggregation.\n\nThat said, Sentry is not limited to Python. 
The primary implementation is in\nPython, but it contains a full API for sending events from any language, in\nany application.\n\n:copyright: (c) 2011-2012 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\n\nfrom setuptools import setup, find_packages\n\n# Hack to prevent stupid \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when running `python\n# setup.py test` (see\n# http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing\nexcept ImportError:\n pass\n\ntests_require = [\n 'django-nose==1.1',\n 'eventlet==0.9.16',\n 'nose==1.1.2',\n 'nydus==0.8.2',\n 'mock==0.8.0',\n 'pyflakes',\n 'pep8',\n 'redis',\n 'unittest2',\n]\n\n\ninstall_requires = [\n 'cssutils>=0.9.9',\n 'BeautifulSoup>=3.2.1',\n 'django-celery>=2.5.5,<3.0',\n 'celery>=2.5.3,<3.0',\n 'django-crispy-forms>=1.1.4',\n 'Django>=1.2,<1.5',\n 'django-indexer>=0.3.0',\n 'django-paging>=0.2.4',\n 'django-picklefield>=0.2.0',\n 'django-templatetag-sugar>=0.1.0',\n 'gunicorn>=0.13.4',\n 'logan>=0.3.1',\n 'pynliner>=0.4.0',\n 'python-dateutil>=1.5.0,<2.0.0',\n 'pytz>=2011n',\n 'raven>=2.0.0',\n 'simplejson>=2.3.0,<2.5.0',\n 'South>=0.7',\n 'httpagentparser>=1.0.5'\n]\n\ndependency_links = [\n 'https://github.com/dcramer/pyflakes/tarball/master#egg=pyflakes',\n]\n\nsetup(\n name='sentry',\n version='4.8.1',\n author='David Cramer',\n author_email='[email protected]',\n url='http://github.com/dcramer/sentry',\n description='A realtime logging and aggregation server.',\n long_description=__doc__,\n packages=find_packages(exclude=['tests']),\n zip_safe=False,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require={'test': tests_require},\n dependency_links=dependency_links,\n test_suite='runtests.runtests',\n license='BSD',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'sentry = sentry.utils.runner:main',\n ],\n },\n classifiers=[\n 'Framework :: Django',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Operating System :: OS Independent',\n 'Topic :: Software Development'\n ],\n)\n", "path": "setup.py"}]} | 1,897 | 127 |
gh_patches_debug_22167 | rasdani/github-patches | git_diff | cupy__cupy-3159 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
compatibility issue of `erfinv` and `erfcinv`
For `erfinv`, the valid domain is [-1, 1], and at the boundaries -1 and +1 the values are -Inf and +Inf, respectively. But outside the domain, the values are NaN in SciPy: see [here](https://github.com/scipy/scipy/blob/59347ae8b86bcc92c339efe213128f64ab6df98c/scipy/special/cephes/ndtri.c#L146-L149) (the `ndtri` function is the underlying workhorse).
Reproducer:
```python
>>> from cupyx.scipy.special import erfinv
>>> import cupy as cp
>>>
>>> a = (cp.arange(6) + 1).reshape(2,3)
>>> a
array([[1, 2, 3],
[4, 5, 6]])
>>> erfinv(a)
array([[inf, inf, inf],
[inf, inf, inf]])
>>>
>>> import scipy.special as scp
>>> scp.erfinv(cp.asnumpy(a))
array([[inf, nan, nan],
[nan, nan, nan]])
```
Reproducer 2:
```bash
$ pytest -v tests/cupyx_tests/scipy_tests/special_tests/test_erf.py
========================================================================= test session starts =========================================================================
platform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.12.0 -- /home/leofang/miniconda3/envs/cupy_dev/bin/python
cachedir: .pytest_cache
rootdir: /home/leofang/cupy, inifile: setup.cfg
collected 10 items
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erf PASSED [ 10%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfc PASSED [ 20%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfcinv FAILED [ 30%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfcx PASSED [ 40%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfinv FAILED [ 50%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erf PASSED [ 60%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfc PASSED [ 70%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfcinv FAILED [ 80%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfcx PASSED [ 90%]
tests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfinv FAILED [100%]
=============================================================== 4 failed, 6 passed, 1 warning in 0.74s ================================================================
```
I am a bit surprised to learn this, as the CI doesn't seem to complain at all, so it is likely the behavior was changed in a recent SciPy release. (I'm using v1.4.1, btw.)
The fix should be simple: just add another `else if` branch handling the out-of-range behavior to the ufunc here: https://github.com/cupy/cupy/blob/84343ce8a87d34928abef65d8930ba590189f43f/cupyx/scipy/special/erf.py#L37-L43
I have not dug into `erfcinv` but presumably the source of error is similar.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupyx/scipy/special/erf.py`
Content:
```
1 from cupy import core
2
3
4 erf = core.create_ufunc(
5 'cupyx_scipy_erf', ('f->f', 'd->d'),
6 'out0 = erf(in0)',
7 doc='''Error function.
8
9 .. seealso:: :meth:`scipy.special.erf`
10
11 ''')
12
13
14 erfc = core.create_ufunc(
15 'cupyx_scipy_erfc', ('f->f', 'd->d'),
16 'out0 = erfc(in0)',
17 doc='''Complementary error function.
18
19 .. seealso:: :meth:`scipy.special.erfc`
20
21 ''')
22
23
24 erfcx = core.create_ufunc(
25 'cupyx_scipy_erfcx', ('f->f', 'd->d'),
26 'out0 = erfcx(in0)',
27 doc='''Scaled complementary error function.
28
29 .. seealso:: :meth:`scipy.special.erfcx`
30
31 ''')
32
33
34 erfinv = core.create_ufunc(
35 'cupyx_scipy_erfinv', ('f->f', 'd->d'),
36 '''
37 if (in0 < -1) {
38 out0 = -1.0 / 0.0;
39 } else if (in0 > 1) {
40 out0 = 1.0 / 0.0;
41 } else {
42 out0 = erfinv(in0);
43 }
44 ''',
45 doc='''Inverse function of error function.
46
47 .. seealso:: :meth:`scipy.special.erfinv`
48
49 ''')
50
51
52 erfcinv = core.create_ufunc(
53 'cupyx_scipy_erfcinv', ('f->f', 'd->d'),
54 '''
55 if (in0 < 0) {
56 out0 = 1.0 / 0.0;
57 } else if (in0 > 2) {
58 out0 = -1.0 / 0.0;
59 } else {
60 out0 = erfcinv(in0);
61 }
62 ''',
63 doc='''Inverse function of complementary error function.
64
65 .. seealso:: :meth:`scipy.special.erfcinv`
66
67 ''')
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupyx/scipy/special/erf.py b/cupyx/scipy/special/erf.py
--- a/cupyx/scipy/special/erf.py
+++ b/cupyx/scipy/special/erf.py
@@ -33,35 +33,27 @@
erfinv = core.create_ufunc(
'cupyx_scipy_erfinv', ('f->f', 'd->d'),
- '''
- if (in0 < -1) {
- out0 = -1.0 / 0.0;
- } else if (in0 > 1) {
- out0 = 1.0 / 0.0;
- } else {
- out0 = erfinv(in0);
- }
- ''',
+ 'out0 = erfinv(in0);',
doc='''Inverse function of error function.
.. seealso:: :meth:`scipy.special.erfinv`
+ .. note::
+ The behavior close to (and outside) the domain follows that of
+ SciPy v1.4.0+.
+
''')
erfcinv = core.create_ufunc(
'cupyx_scipy_erfcinv', ('f->f', 'd->d'),
- '''
- if (in0 < 0) {
- out0 = 1.0 / 0.0;
- } else if (in0 > 2) {
- out0 = -1.0 / 0.0;
- } else {
- out0 = erfcinv(in0);
- }
- ''',
+ 'out0 = erfcinv(in0);',
doc='''Inverse function of complementary error function.
.. seealso:: :meth:`scipy.special.erfcinv`
+ .. note::
+ The behavior close to (and outside) the domain follows that of
+ SciPy v1.4.0+.
+
''')
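After the patch, both ufuncs simply defer to the CUDA device functions, which already follow the SciPy 1.4+ convention: infinities exactly at the domain edges and NaN outside them. A sketch of the expected values, assuming a CUDA device and the patched build:
```python
import cupy as cp
from cupyx.scipy.special import erfinv, erfcinv

x = cp.asarray([-1.5, -1.0, 0.0, 1.0, 1.5])
print(erfinv(x))   # expected: [ nan -inf   0.  inf  nan]

y = cp.asarray([-0.5, 0.0, 1.0, 2.0, 2.5])
print(erfcinv(y))  # expected: [ nan  inf   0. -inf  nan]
```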
| {"golden_diff": "diff --git a/cupyx/scipy/special/erf.py b/cupyx/scipy/special/erf.py\n--- a/cupyx/scipy/special/erf.py\n+++ b/cupyx/scipy/special/erf.py\n@@ -33,35 +33,27 @@\n \n erfinv = core.create_ufunc(\n 'cupyx_scipy_erfinv', ('f->f', 'd->d'),\n- '''\n- if (in0 < -1) {\n- out0 = -1.0 / 0.0;\n- } else if (in0 > 1) {\n- out0 = 1.0 / 0.0;\n- } else {\n- out0 = erfinv(in0);\n- }\n- ''',\n+ 'out0 = erfinv(in0);',\n doc='''Inverse function of error function.\n \n .. seealso:: :meth:`scipy.special.erfinv`\n \n+ .. note::\n+ The behavior close to (and outside) the domain follows that of\n+ SciPy v1.4.0+.\n+\n ''')\n \n \n erfcinv = core.create_ufunc(\n 'cupyx_scipy_erfcinv', ('f->f', 'd->d'),\n- '''\n- if (in0 < 0) {\n- out0 = 1.0 / 0.0;\n- } else if (in0 > 2) {\n- out0 = -1.0 / 0.0;\n- } else {\n- out0 = erfcinv(in0);\n- }\n- ''',\n+ 'out0 = erfcinv(in0);',\n doc='''Inverse function of complementary error function.\n \n .. seealso:: :meth:`scipy.special.erfcinv`\n \n+ .. note::\n+ The behavior close to (and outside) the domain follows that of\n+ SciPy v1.4.0+.\n+\n ''')\n", "issue": "compatibility issue of `erfinv` and `erfcinv` \nFor `erfinv`, the valid domain is [-1, 1], and at the boundary -1 and +1 the values are -Inf and +Inf, respectively. But outside the boundary, the values are NaN in SciPy: see [here](https://github.com/scipy/scipy/blob/59347ae8b86bcc92c339efe213128f64ab6df98c/scipy/special/cephes/ndtri.c#L146-L149) (the `ndtri` function is the underlying workhorse).\r\n\r\nReproducer:\r\n```python\r\n>>> from cupyx.scipy.special import erfinv\r\n>>> import cupy as cp\r\n>>> \r\n>>> a = (cp.arange(6) + 1).reshape(2,3)\r\n>>> a\r\narray([[1, 2, 3],\r\n [4, 5, 6]])\r\n>>> erfinv(a)\r\narray([[inf, inf, inf],\r\n [inf, inf, inf]])\r\n>>>\r\n>>> import scipy.special as scp\r\n>>> scp.erfinv(cp.asnumpy(a))\r\narray([[inf, nan, nan],\r\n [nan, nan, nan]])\r\n```\r\n\r\nReproducer 2:\r\n```bash\r\n$ pytest -v tests/cupyx_tests/scipy_tests/special_tests/test_erf.py\r\n========================================================================= test session starts =========================================================================\r\nplatform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.12.0 -- /home/leofang/miniconda3/envs/cupy_dev/bin/python\r\ncachedir: .pytest_cache\r\nrootdir: /home/leofang/cupy, inifile: setup.cfg\r\ncollected 10 items \r\n\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erf PASSED [ 10%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfc PASSED [ 20%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfcinv FAILED [ 30%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfcx PASSED [ 40%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestSpecial::test_erfinv FAILED [ 50%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erf PASSED [ 60%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfc PASSED [ 70%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfcinv FAILED [ 80%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfcx PASSED [ 90%]\r\ntests/cupyx_tests/scipy_tests/special_tests/test_erf.py::TestFusionSpecial::test_erfinv FAILED [100%]\r\n\r\n=============================================================== 4 failed, 6 passed, 1 warning in 0.74s 
================================================================\r\n```\r\n\r\nI am a bit surprised to learn this, as the CI doesn't seem to complain at all, so it is likely the behavior is changed in recent SciPy? (I'm using v1.4.1 btw.) \r\n\r\nThe fix should be simple: just add another `else if` branch handling the out of boundary behavior to the ufunc here : https://github.com/cupy/cupy/blob/84343ce8a87d34928abef65d8930ba590189f43f/cupyx/scipy/special/erf.py#L37-L43\r\n\r\nI have not dug into `erfcinv` but presumably the source of error is similar. \n", "before_files": [{"content": "from cupy import core\n\n\nerf = core.create_ufunc(\n 'cupyx_scipy_erf', ('f->f', 'd->d'),\n 'out0 = erf(in0)',\n doc='''Error function.\n\n .. seealso:: :meth:`scipy.special.erf`\n\n ''')\n\n\nerfc = core.create_ufunc(\n 'cupyx_scipy_erfc', ('f->f', 'd->d'),\n 'out0 = erfc(in0)',\n doc='''Complementary error function.\n\n .. seealso:: :meth:`scipy.special.erfc`\n\n ''')\n\n\nerfcx = core.create_ufunc(\n 'cupyx_scipy_erfcx', ('f->f', 'd->d'),\n 'out0 = erfcx(in0)',\n doc='''Scaled complementary error function.\n\n .. seealso:: :meth:`scipy.special.erfcx`\n\n ''')\n\n\nerfinv = core.create_ufunc(\n 'cupyx_scipy_erfinv', ('f->f', 'd->d'),\n '''\n if (in0 < -1) {\n out0 = -1.0 / 0.0;\n } else if (in0 > 1) {\n out0 = 1.0 / 0.0;\n } else {\n out0 = erfinv(in0);\n }\n ''',\n doc='''Inverse function of error function.\n\n .. seealso:: :meth:`scipy.special.erfinv`\n\n ''')\n\n\nerfcinv = core.create_ufunc(\n 'cupyx_scipy_erfcinv', ('f->f', 'd->d'),\n '''\n if (in0 < 0) {\n out0 = 1.0 / 0.0;\n } else if (in0 > 2) {\n out0 = -1.0 / 0.0;\n } else {\n out0 = erfcinv(in0);\n }\n ''',\n doc='''Inverse function of complementary error function.\n\n .. seealso:: :meth:`scipy.special.erfcinv`\n\n ''')\n", "path": "cupyx/scipy/special/erf.py"}], "after_files": [{"content": "from cupy import core\n\n\nerf = core.create_ufunc(\n 'cupyx_scipy_erf', ('f->f', 'd->d'),\n 'out0 = erf(in0)',\n doc='''Error function.\n\n .. seealso:: :meth:`scipy.special.erf`\n\n ''')\n\n\nerfc = core.create_ufunc(\n 'cupyx_scipy_erfc', ('f->f', 'd->d'),\n 'out0 = erfc(in0)',\n doc='''Complementary error function.\n\n .. seealso:: :meth:`scipy.special.erfc`\n\n ''')\n\n\nerfcx = core.create_ufunc(\n 'cupyx_scipy_erfcx', ('f->f', 'd->d'),\n 'out0 = erfcx(in0)',\n doc='''Scaled complementary error function.\n\n .. seealso:: :meth:`scipy.special.erfcx`\n\n ''')\n\n\nerfinv = core.create_ufunc(\n 'cupyx_scipy_erfinv', ('f->f', 'd->d'),\n 'out0 = erfinv(in0);',\n doc='''Inverse function of error function.\n\n .. seealso:: :meth:`scipy.special.erfinv`\n\n .. note::\n The behavior close to (and outside) the domain follows that of\n SciPy v1.4.0+.\n\n ''')\n\n\nerfcinv = core.create_ufunc(\n 'cupyx_scipy_erfcinv', ('f->f', 'd->d'),\n 'out0 = erfcinv(in0);',\n doc='''Inverse function of complementary error function.\n\n .. seealso:: :meth:`scipy.special.erfcinv`\n\n .. note::\n The behavior close to (and outside) the domain follows that of\n SciPy v1.4.0+.\n\n ''')\n", "path": "cupyx/scipy/special/erf.py"}]} | 1,777 | 447 |
gh_patches_debug_17996 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-768 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong hook executed when a tilde-suffixed file of the same name exists
- Cookiecutter version: 1.4.0
- Template project url: https://github.com/thorgate/django-project-template
- Python version: 3.4
- Operating System: Ubuntu 15.10 wily
### Description:
When using gedit or some other text editor that pollutes the directory with backup files ending with a tilde, cookiecutter mistakes the backup file for the "real" hook it should run. This resulted in cookiecutter running a ridiculously outdated version of my pre-gen hook.
The obvious solution is to just remove `pre_gen_project.py~`, which works, but I believe ideally cookiecutter shouldn't be running it in the first place.
### What I've run:
```
gedit django-template/hooks/pre_gen_project.py
cookiecutter django-template
```
--- END ISSUE ---
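For context on why the backup matched at all: `os.path.splitext` strips only the final extension, so the `~` ends up in the extension rather than the basename. A minimal illustration (hypothetical filenames):

```python
import os.path

for f in ("pre_gen_project.py", "pre_gen_project.py~"):
    basename = os.path.splitext(os.path.basename(f))[0]
    print(f, "->", basename)

# pre_gen_project.py  -> pre_gen_project
# pre_gen_project.py~ -> pre_gen_project   (also matches _HOOKS, so the backup can shadow the real hook)
```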
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/hooks.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """
4 cookiecutter.hooks
5 ------------------
6
7 Functions for discovering and executing various cookiecutter hooks.
8 """
9
10 import io
11 import logging
12 import os
13 import subprocess
14 import sys
15 import tempfile
16
17 from jinja2 import Template
18
19 from cookiecutter import utils
20 from .exceptions import FailedHookException
21
22
23 _HOOKS = [
24 'pre_gen_project',
25 'post_gen_project',
26 # TODO: other hooks should be listed here
27 ]
28 EXIT_SUCCESS = 0
29
30
31 def find_hooks():
32 """
33 Must be called with the project template as the current working directory.
34 Returns a dict of all hook scripts provided.
35 Dict's key will be the hook/script's name, without extension, while
36 values will be the absolute path to the script.
37 Missing scripts will not be included in the returned dict.
38 """
39 hooks_dir = 'hooks'
40 r = {}
41 logging.debug('hooks_dir is {0}'.format(hooks_dir))
42 if not os.path.isdir(hooks_dir):
43 logging.debug('No hooks/ dir in template_dir')
44 return r
45 for f in os.listdir(hooks_dir):
46 basename = os.path.splitext(os.path.basename(f))[0]
47 if basename in _HOOKS:
48 r[basename] = os.path.abspath(os.path.join(hooks_dir, f))
49 return r
50
51
52 def run_script(script_path, cwd='.'):
53 """
54 Executes a script from a working directory.
55
56 :param script_path: Absolute path to the script to run.
57 :param cwd: The directory to run the script from.
58 """
59 run_thru_shell = sys.platform.startswith('win')
60 if script_path.endswith('.py'):
61 script_command = [sys.executable, script_path]
62 else:
63 script_command = [script_path]
64
65 utils.make_executable(script_path)
66
67 proc = subprocess.Popen(
68 script_command,
69 shell=run_thru_shell,
70 cwd=cwd
71 )
72 exit_status = proc.wait()
73 if exit_status != EXIT_SUCCESS:
74 raise FailedHookException(
75 "Hook script failed (exit status: %d)" % exit_status)
76
77
78 def run_script_with_context(script_path, cwd, context):
79 """
80 Executes a script after rendering with it Jinja.
81
82 :param script_path: Absolute path to the script to run.
83 :param cwd: The directory to run the script from.
84 :param context: Cookiecutter project template context.
85 """
86 _, extension = os.path.splitext(script_path)
87
88 contents = io.open(script_path, 'r', encoding='utf-8').read()
89
90 with tempfile.NamedTemporaryFile(
91 delete=False,
92 mode='wb',
93 suffix=extension
94 ) as temp:
95 output = Template(contents).render(**context)
96 temp.write(output.encode('utf-8'))
97
98 run_script(temp.name, cwd)
99
100
101 def run_hook(hook_name, project_dir, context):
102 """
103 Try to find and execute a hook from the specified project directory.
104
105 :param hook_name: The hook to execute.
106 :param project_dir: The directory to execute the script from.
107 :param context: Cookiecutter project context.
108 """
109 script = find_hooks().get(hook_name)
110 if script is None:
111 logging.debug('No hooks found')
112 return
113 run_script_with_context(script, project_dir, context)
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py
--- a/cookiecutter/hooks.py
+++ b/cookiecutter/hooks.py
@@ -37,16 +37,20 @@
Missing scripts will not be included in the returned dict.
"""
hooks_dir = 'hooks'
- r = {}
+ hooks = {}
logging.debug('hooks_dir is {0}'.format(hooks_dir))
+
if not os.path.isdir(hooks_dir):
logging.debug('No hooks/ dir in template_dir')
- return r
+ return hooks
+
for f in os.listdir(hooks_dir):
- basename = os.path.splitext(os.path.basename(f))[0]
- if basename in _HOOKS:
- r[basename] = os.path.abspath(os.path.join(hooks_dir, f))
- return r
+ filename = os.path.basename(f)
+ basename = os.path.splitext(filename)[0]
+
+ if basename in _HOOKS and not filename.endswith('~'):
+ hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))
+ return hooks
def run_script(script_path, cwd='.'):
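A quick self-contained check of the patched `find_hooks` (temporary directory and file contents are made up for the test):

```python
import os
import pathlib
import tempfile

from cookiecutter.hooks import find_hooks

with tempfile.TemporaryDirectory() as tmp:
    hooks_dir = pathlib.Path(tmp, "hooks")
    hooks_dir.mkdir()
    (hooks_dir / "pre_gen_project.py").write_text("print('real hook')")
    (hooks_dir / "pre_gen_project.py~").write_text("print('stale backup')")

    os.chdir(tmp)  # find_hooks() expects the template dir as cwd
    assert find_hooks() == {
        "pre_gen_project": os.path.abspath(os.path.join("hooks", "pre_gen_project.py"))
    }
```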
| {"golden_diff": "diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py\n--- a/cookiecutter/hooks.py\n+++ b/cookiecutter/hooks.py\n@@ -37,16 +37,20 @@\n Missing scripts will not be included in the returned dict.\n \"\"\"\n hooks_dir = 'hooks'\n- r = {}\n+ hooks = {}\n logging.debug('hooks_dir is {0}'.format(hooks_dir))\n+\n if not os.path.isdir(hooks_dir):\n logging.debug('No hooks/ dir in template_dir')\n- return r\n+ return hooks\n+\n for f in os.listdir(hooks_dir):\n- basename = os.path.splitext(os.path.basename(f))[0]\n- if basename in _HOOKS:\n- r[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n- return r\n+ filename = os.path.basename(f)\n+ basename = os.path.splitext(filename)[0]\n+\n+ if basename in _HOOKS and not filename.endswith('~'):\n+ hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n+ return hooks\n \n \n def run_script(script_path, cwd='.'):\n", "issue": "Wrong hook executed when a tilde-suffixed file of the same name exists\n- Cookiecutter version: 1.4.0\n- Template project url: https://github.com/thorgate/django-project-template\n- Python version: 3.4\n- Operating System: Ubuntu 15.10 wily\n### Description:\n\nWhen using gedit or some other text editor that pollutes the directory with backup files ending with a tilde, cookiecutter mistakes that for the \"real\" hook it should run. This resulted in cookiecutter running a ridiculously outdated version of my pre-gen hook.\n\nThe obvious solution is to just remove `pre_gen_project.py~`, which works, but I believe ideally cookiecutter shouldn't be running it in the first place.\n### What I've run:\n\n```\ngedit django-template/hooks/pre_gen_project.py\ncookiecutter django-template\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.hooks\n------------------\n\nFunctions for discovering and executing various cookiecutter hooks.\n\"\"\"\n\nimport io\nimport logging\nimport os\nimport subprocess\nimport sys\nimport tempfile\n\nfrom jinja2 import Template\n\nfrom cookiecutter import utils\nfrom .exceptions import FailedHookException\n\n\n_HOOKS = [\n 'pre_gen_project',\n 'post_gen_project',\n # TODO: other hooks should be listed here\n]\nEXIT_SUCCESS = 0\n\n\ndef find_hooks():\n \"\"\"\n Must be called with the project template as the current working directory.\n Returns a dict of all hook scripts provided.\n Dict's key will be the hook/script's name, without extension, while\n values will be the absolute path to the script.\n Missing scripts will not be included in the returned dict.\n \"\"\"\n hooks_dir = 'hooks'\n r = {}\n logging.debug('hooks_dir is {0}'.format(hooks_dir))\n if not os.path.isdir(hooks_dir):\n logging.debug('No hooks/ dir in template_dir')\n return r\n for f in os.listdir(hooks_dir):\n basename = os.path.splitext(os.path.basename(f))[0]\n if basename in _HOOKS:\n r[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n return r\n\n\ndef run_script(script_path, cwd='.'):\n \"\"\"\n Executes a script from a working directory.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n \"\"\"\n run_thru_shell = sys.platform.startswith('win')\n if script_path.endswith('.py'):\n script_command = [sys.executable, script_path]\n else:\n script_command = [script_path]\n\n utils.make_executable(script_path)\n\n proc = subprocess.Popen(\n script_command,\n shell=run_thru_shell,\n cwd=cwd\n )\n exit_status = proc.wait()\n if exit_status != EXIT_SUCCESS:\n raise FailedHookException(\n \"Hook script 
failed (exit status: %d)\" % exit_status)\n\n\ndef run_script_with_context(script_path, cwd, context):\n \"\"\"\n Executes a script after rendering with it Jinja.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n :param context: Cookiecutter project template context.\n \"\"\"\n _, extension = os.path.splitext(script_path)\n\n contents = io.open(script_path, 'r', encoding='utf-8').read()\n\n with tempfile.NamedTemporaryFile(\n delete=False,\n mode='wb',\n suffix=extension\n ) as temp:\n output = Template(contents).render(**context)\n temp.write(output.encode('utf-8'))\n\n run_script(temp.name, cwd)\n\n\ndef run_hook(hook_name, project_dir, context):\n \"\"\"\n Try to find and execute a hook from the specified project directory.\n\n :param hook_name: The hook to execute.\n :param project_dir: The directory to execute the script from.\n :param context: Cookiecutter project context.\n \"\"\"\n script = find_hooks().get(hook_name)\n if script is None:\n logging.debug('No hooks found')\n return\n run_script_with_context(script, project_dir, context)\n", "path": "cookiecutter/hooks.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.hooks\n------------------\n\nFunctions for discovering and executing various cookiecutter hooks.\n\"\"\"\n\nimport io\nimport logging\nimport os\nimport subprocess\nimport sys\nimport tempfile\n\nfrom jinja2 import Template\n\nfrom cookiecutter import utils\nfrom .exceptions import FailedHookException\n\n\n_HOOKS = [\n 'pre_gen_project',\n 'post_gen_project',\n # TODO: other hooks should be listed here\n]\nEXIT_SUCCESS = 0\n\n\ndef find_hooks():\n \"\"\"\n Must be called with the project template as the current working directory.\n Returns a dict of all hook scripts provided.\n Dict's key will be the hook/script's name, without extension, while\n values will be the absolute path to the script.\n Missing scripts will not be included in the returned dict.\n \"\"\"\n hooks_dir = 'hooks'\n hooks = {}\n logging.debug('hooks_dir is {0}'.format(hooks_dir))\n\n if not os.path.isdir(hooks_dir):\n logging.debug('No hooks/ dir in template_dir')\n return hooks\n\n for f in os.listdir(hooks_dir):\n filename = os.path.basename(f)\n basename = os.path.splitext(filename)[0]\n\n if basename in _HOOKS and not filename.endswith('~'):\n hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n return hooks\n\n\ndef run_script(script_path, cwd='.'):\n \"\"\"\n Executes a script from a working directory.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n \"\"\"\n run_thru_shell = sys.platform.startswith('win')\n if script_path.endswith('.py'):\n script_command = [sys.executable, script_path]\n else:\n script_command = [script_path]\n\n utils.make_executable(script_path)\n\n proc = subprocess.Popen(\n script_command,\n shell=run_thru_shell,\n cwd=cwd\n )\n exit_status = proc.wait()\n if exit_status != EXIT_SUCCESS:\n raise FailedHookException(\n \"Hook script failed (exit status: %d)\" % exit_status)\n\n\ndef run_script_with_context(script_path, cwd, context):\n \"\"\"\n Executes a script after rendering with it Jinja.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n :param context: Cookiecutter project template context.\n \"\"\"\n _, extension = os.path.splitext(script_path)\n\n contents = io.open(script_path, 'r', encoding='utf-8').read()\n\n with 
tempfile.NamedTemporaryFile(\n delete=False,\n mode='wb',\n suffix=extension\n ) as temp:\n output = Template(contents).render(**context)\n temp.write(output.encode('utf-8'))\n\n run_script(temp.name, cwd)\n\n\ndef run_hook(hook_name, project_dir, context):\n \"\"\"\n Try to find and execute a hook from the specified project directory.\n\n :param hook_name: The hook to execute.\n :param project_dir: The directory to execute the script from.\n :param context: Cookiecutter project context.\n \"\"\"\n script = find_hooks().get(hook_name)\n if script is None:\n logging.debug('No hooks found')\n return\n run_script_with_context(script, project_dir, context)\n", "path": "cookiecutter/hooks.py"}]} | 1,402 | 256 |
gh_patches_debug_11047 | rasdani/github-patches | git_diff | astronomer__astro-sdk-62 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to load_file using parquet
Version: Astro 0.2.0
Python: 3.8, 3.9
Astro is unable to run the task `load_file` with a parquet file.
It raises the following exception:
```
Traceback (most recent call last):
  File "pyarrow/io.pxi", line 1511, in pyarrow.lib.get_native_file
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pyarrow/util.py", line 99, in _stringify_path
    raise TypeError("not a path-like object")
TypeError: not a path-like object

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pyarrow/io.pxi", line 1517, in pyarrow.lib.get_native_file
  File "pyarrow/io.pxi", line 729, in pyarrow.lib.PythonFile.__cinit__
TypeError: binary file expected, got text file

warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
```
--- END ISSUE ---
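The chained traceback boils down to pyarrow refusing a text-mode file object. A stripped-down reproduction (the file name is made up; the real code path reaches this through `smart_open`):

```python
import pandas as pd

# opening in text mode, as the operator effectively did
with open("data.parquet", "r") as stream:
    pd.read_parquet(stream)  # -> TypeError: binary file expected, got text file
```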
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/astro/sql/operators/agnostic_load_file.py`
Content:
```
1 """
2 Copyright Astronomer, Inc.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 """
16
17 import os
18 from typing import Union
19 from urllib.parse import urlparse
20
21 import pandas as pd
22 from airflow.hooks.base import BaseHook
23 from airflow.models import BaseOperator
24 from smart_open import open
25
26 from astro.sql.table import Table, TempTable, create_table_name
27 from astro.utils.cloud_storage_creds import gcs_client, s3fs_creds
28 from astro.utils.load_dataframe import move_dataframe_to_sql
29 from astro.utils.schema_util import get_schema
30 from astro.utils.task_id_helper import get_task_id
31
32
33 class AgnosticLoadFile(BaseOperator):
34 """Load S3/local table to postgres/snowflake database.
35
36 :param path: File path.
37 :type path: str
38 :param output_table_name: Name of table to create.
39 :type output_table_name: str
40 :param file_conn_id: Airflow connection id of input file (optional)
41 :type file_conn_id: str
42 :param output_conn_id: Database connection id.
43 :type output_conn_id: str
44 """
45
46 def __init__(
47 self,
48 path,
49 output_table: Union[TempTable, Table],
50 file_conn_id="",
51 chunksize=None,
52 **kwargs,
53 ) -> None:
54 super().__init__(**kwargs)
55 self.output_table: Union[TempTable, Table] = output_table
56 self.path = path
57 self.chunksize = chunksize
58 self.file_conn_id = file_conn_id
59 self.kwargs = kwargs
60 self.output_table = output_table
61
62 def execute(self, context):
63 """Loads csv/parquet table from local/S3/GCS with Pandas.
64
65 Infers SQL database type based on connection then loads table to db.
66 """
67
68 # Read file with Pandas load method based on `file_type` (S3 or local).
69 df = self._load_dataframe(self.path)
70
71 # Retrieve conn type
72 conn = BaseHook.get_connection(self.output_table.conn_id)
73 if type(self.output_table) == TempTable:
74 self.output_table = self.output_table.to_table(
75 create_table_name(context=context), get_schema()
76 )
77 else:
78 self.output_table.schema = self.output_table.schema or get_schema()
79 move_dataframe_to_sql(
80 output_table_name=self.output_table.table_name,
81 conn_id=self.output_table.conn_id,
82 database=self.output_table.database,
83 warehouse=self.output_table.warehouse,
84 schema=self.output_table.schema,
85 df=df,
86 conn_type=conn.conn_type,
87 user=conn.login,
88 )
89 self.log.info(f"returning table {self.output_table}")
90 return self.output_table
91
92 @staticmethod
93 def validate_path(path):
94 """Validate a URL or local file path"""
95 try:
96 result = urlparse(path)
97 return all([result.scheme, result.netloc]) or os.path.isfile(path)
98 except:
99 return False
100
101 def _load_dataframe(self, path):
102 """Read file with Pandas.
103
104 Select method based on `file_type` (S3 or local).
105 """
106
107 if not AgnosticLoadFile.validate_path(path):
108 raise ValueError("Invalid path: {}".format(path))
109
110 file_type = path.split(".")[-1]
111 transport_params = {
112 "s3": s3fs_creds,
113 "gs": gcs_client,
114 "": lambda: None,
115 }[urlparse(path).scheme]()
116 deserialiser = {
117 "parquet": pd.read_parquet,
118 "csv": pd.read_csv,
119 "json": pd.read_json,
120 "ndjson": pd.read_json,
121 }
122 deserialiser_params = {"ndjson": {"lines": True}}
123 with open(path, transport_params=transport_params) as stream:
124 return deserialiser[file_type](
125 stream, **deserialiser_params.get(file_type, {})
126 )
127
128
129 def load_file(
130 path,
131 output_table=None,
132 file_conn_id=None,
133 task_id=None,
134 **kwargs,
135 ):
136 """Convert AgnosticLoadFile into a function.
137
138 Returns an XComArg object.
139
140 :param path: File path.
141 :type path: str
142 :param output_table: Table to create
143 :type output_table: Table
144 :param file_conn_id: Airflow connection id of input file (optional)
145 :type file_conn_id: str
146 :param task_id: task id, optional.
147 :type task_id: str
148 """
149
150 task_id = task_id if task_id is not None else get_task_id("load_file", path)
151
152 return AgnosticLoadFile(
153 task_id=task_id,
154 path=path,
155 output_table=output_table,
156 file_conn_id=file_conn_id,
157 **kwargs,
158 ).output
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/astro/sql/operators/agnostic_load_file.py b/src/astro/sql/operators/agnostic_load_file.py
--- a/src/astro/sql/operators/agnostic_load_file.py
+++ b/src/astro/sql/operators/agnostic_load_file.py
@@ -119,8 +119,11 @@
"json": pd.read_json,
"ndjson": pd.read_json,
}
+ mode = {"parquet": "rb"}
deserialiser_params = {"ndjson": {"lines": True}}
- with open(path, transport_params=transport_params) as stream:
+ with open(
+ path, mode=mode.get(file_type, "r"), transport_params=transport_params
+ ) as stream:
return deserialiser[file_type](
stream, **deserialiser_params.get(file_type, {})
)
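Usage of the patched path, sketched directly against `smart_open` (bucket name is hypothetical; credentials/transport params omitted):

```python
import pandas as pd
from smart_open import open

# parquet must be read in binary mode; csv/json/ndjson stay on the default "r"
with open("s3://example-bucket/data.parquet", mode="rb") as stream:
    df = pd.read_parquet(stream)
```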
| {"golden_diff": "diff --git a/src/astro/sql/operators/agnostic_load_file.py b/src/astro/sql/operators/agnostic_load_file.py\n--- a/src/astro/sql/operators/agnostic_load_file.py\n+++ b/src/astro/sql/operators/agnostic_load_file.py\n@@ -119,8 +119,11 @@\n \"json\": pd.read_json,\n \"ndjson\": pd.read_json,\n }\n+ mode = {\"parquet\": \"rb\"}\n deserialiser_params = {\"ndjson\": {\"lines\": True}}\n- with open(path, transport_params=transport_params) as stream:\n+ with open(\n+ path, mode=mode.get(file_type, \"r\"), transport_params=transport_params\n+ ) as stream:\n return deserialiser[file_type](\n stream, **deserialiser_params.get(file_type, {})\n )\n", "issue": "Unable to load_file using parquet\nVersion: Astro 0.2.0\r\nPython: 3.8, 3.9\r\n\r\nAstro is unable to run the task `load_file` with a parquet file.\r\n\r\nIt raises the following exception:\r\n```\r\n Traceback (most recent call last):\r\n File \"pyarrow/io.pxi\", line 1511, in pyarrow.lib.get_native_file\r\n File \"/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pyarrow/util.py\", line 99, in _stringify_path\r\n raise TypeError(\"not a path-like object\")\r\n TypeError: not a path-like object\r\n \r\n During handling of the above exception, another exception occurred:\r\n \r\n Traceback (most recent call last):\r\n File \"pyarrow/io.pxi\", line 1517, in pyarrow.lib.get_native_file\r\n File \"pyarrow/io.pxi\", line 729, in pyarrow.lib.PythonFile.__cinit__\r\n TypeError: binary file expected, got text file\r\n \r\n warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Astronomer, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\"\"\"\n\nimport os\nfrom typing import Union\nfrom urllib.parse import urlparse\n\nimport pandas as pd\nfrom airflow.hooks.base import BaseHook\nfrom airflow.models import BaseOperator\nfrom smart_open import open\n\nfrom astro.sql.table import Table, TempTable, create_table_name\nfrom astro.utils.cloud_storage_creds import gcs_client, s3fs_creds\nfrom astro.utils.load_dataframe import move_dataframe_to_sql\nfrom astro.utils.schema_util import get_schema\nfrom astro.utils.task_id_helper import get_task_id\n\n\nclass AgnosticLoadFile(BaseOperator):\n \"\"\"Load S3/local table to postgres/snowflake database.\n\n :param path: File path.\n :type path: str\n :param output_table_name: Name of table to create.\n :type output_table_name: str\n :param file_conn_id: Airflow connection id of input file (optional)\n :type file_conn_id: str\n :param output_conn_id: Database connection id.\n :type output_conn_id: str\n \"\"\"\n\n def __init__(\n self,\n path,\n output_table: Union[TempTable, Table],\n file_conn_id=\"\",\n chunksize=None,\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n self.output_table: Union[TempTable, Table] = output_table\n self.path = path\n self.chunksize = chunksize\n self.file_conn_id = file_conn_id\n self.kwargs = kwargs\n self.output_table = output_table\n\n def execute(self, context):\n \"\"\"Loads csv/parquet table from 
local/S3/GCS with Pandas.\n\n Infers SQL database type based on connection then loads table to db.\n \"\"\"\n\n # Read file with Pandas load method based on `file_type` (S3 or local).\n df = self._load_dataframe(self.path)\n\n # Retrieve conn type\n conn = BaseHook.get_connection(self.output_table.conn_id)\n if type(self.output_table) == TempTable:\n self.output_table = self.output_table.to_table(\n create_table_name(context=context), get_schema()\n )\n else:\n self.output_table.schema = self.output_table.schema or get_schema()\n move_dataframe_to_sql(\n output_table_name=self.output_table.table_name,\n conn_id=self.output_table.conn_id,\n database=self.output_table.database,\n warehouse=self.output_table.warehouse,\n schema=self.output_table.schema,\n df=df,\n conn_type=conn.conn_type,\n user=conn.login,\n )\n self.log.info(f\"returning table {self.output_table}\")\n return self.output_table\n\n @staticmethod\n def validate_path(path):\n \"\"\"Validate a URL or local file path\"\"\"\n try:\n result = urlparse(path)\n return all([result.scheme, result.netloc]) or os.path.isfile(path)\n except:\n return False\n\n def _load_dataframe(self, path):\n \"\"\"Read file with Pandas.\n\n Select method based on `file_type` (S3 or local).\n \"\"\"\n\n if not AgnosticLoadFile.validate_path(path):\n raise ValueError(\"Invalid path: {}\".format(path))\n\n file_type = path.split(\".\")[-1]\n transport_params = {\n \"s3\": s3fs_creds,\n \"gs\": gcs_client,\n \"\": lambda: None,\n }[urlparse(path).scheme]()\n deserialiser = {\n \"parquet\": pd.read_parquet,\n \"csv\": pd.read_csv,\n \"json\": pd.read_json,\n \"ndjson\": pd.read_json,\n }\n deserialiser_params = {\"ndjson\": {\"lines\": True}}\n with open(path, transport_params=transport_params) as stream:\n return deserialiser[file_type](\n stream, **deserialiser_params.get(file_type, {})\n )\n\n\ndef load_file(\n path,\n output_table=None,\n file_conn_id=None,\n task_id=None,\n **kwargs,\n):\n \"\"\"Convert AgnosticLoadFile into a function.\n\n Returns an XComArg object.\n\n :param path: File path.\n :type path: str\n :param output_table: Table to create\n :type output_table: Table\n :param file_conn_id: Airflow connection id of input file (optional)\n :type file_conn_id: str\n :param task_id: task id, optional.\n :type task_id: str\n \"\"\"\n\n task_id = task_id if task_id is not None else get_task_id(\"load_file\", path)\n\n return AgnosticLoadFile(\n task_id=task_id,\n path=path,\n output_table=output_table,\n file_conn_id=file_conn_id,\n **kwargs,\n ).output\n", "path": "src/astro/sql/operators/agnostic_load_file.py"}], "after_files": [{"content": "\"\"\"\nCopyright Astronomer, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\"\"\"\n\nimport os\nfrom typing import Union\nfrom urllib.parse import urlparse\n\nimport pandas as pd\nfrom airflow.hooks.base import BaseHook\nfrom airflow.models import BaseOperator\nfrom smart_open import open\n\nfrom astro.sql.table import Table, TempTable, create_table_name\nfrom astro.utils.cloud_storage_creds import gcs_client, 
s3fs_creds\nfrom astro.utils.load_dataframe import move_dataframe_to_sql\nfrom astro.utils.schema_util import get_schema\nfrom astro.utils.task_id_helper import get_task_id\n\n\nclass AgnosticLoadFile(BaseOperator):\n \"\"\"Load S3/local table to postgres/snowflake database.\n\n :param path: File path.\n :type path: str\n :param output_table_name: Name of table to create.\n :type output_table_name: str\n :param file_conn_id: Airflow connection id of input file (optional)\n :type file_conn_id: str\n :param output_conn_id: Database connection id.\n :type output_conn_id: str\n \"\"\"\n\n def __init__(\n self,\n path,\n output_table: Union[TempTable, Table],\n file_conn_id=\"\",\n chunksize=None,\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n self.output_table: Union[TempTable, Table] = output_table\n self.path = path\n self.chunksize = chunksize\n self.file_conn_id = file_conn_id\n self.kwargs = kwargs\n self.output_table = output_table\n\n def execute(self, context):\n \"\"\"Loads csv/parquet table from local/S3/GCS with Pandas.\n\n Infers SQL database type based on connection then loads table to db.\n \"\"\"\n\n # Read file with Pandas load method based on `file_type` (S3 or local).\n df = self._load_dataframe(self.path)\n\n # Retrieve conn type\n conn = BaseHook.get_connection(self.output_table.conn_id)\n if type(self.output_table) == TempTable:\n self.output_table = self.output_table.to_table(\n create_table_name(context=context), get_schema()\n )\n else:\n self.output_table.schema = self.output_table.schema or get_schema()\n move_dataframe_to_sql(\n output_table_name=self.output_table.table_name,\n conn_id=self.output_table.conn_id,\n database=self.output_table.database,\n warehouse=self.output_table.warehouse,\n schema=self.output_table.schema,\n df=df,\n conn_type=conn.conn_type,\n user=conn.login,\n )\n self.log.info(f\"returning table {self.output_table}\")\n return self.output_table\n\n @staticmethod\n def validate_path(path):\n \"\"\"Validate a URL or local file path\"\"\"\n try:\n result = urlparse(path)\n return all([result.scheme, result.netloc]) or os.path.isfile(path)\n except:\n return False\n\n def _load_dataframe(self, path):\n \"\"\"Read file with Pandas.\n\n Select method based on `file_type` (S3 or local).\n \"\"\"\n\n if not AgnosticLoadFile.validate_path(path):\n raise ValueError(\"Invalid path: {}\".format(path))\n\n file_type = path.split(\".\")[-1]\n transport_params = {\n \"s3\": s3fs_creds,\n \"gs\": gcs_client,\n \"\": lambda: None,\n }[urlparse(path).scheme]()\n deserialiser = {\n \"parquet\": pd.read_parquet,\n \"csv\": pd.read_csv,\n \"json\": pd.read_json,\n \"ndjson\": pd.read_json,\n }\n mode = {\"parquet\": \"rb\"}\n deserialiser_params = {\"ndjson\": {\"lines\": True}}\n with open(\n path, mode=mode.get(file_type, \"r\"), transport_params=transport_params\n ) as stream:\n return deserialiser[file_type](\n stream, **deserialiser_params.get(file_type, {})\n )\n\n\ndef load_file(\n path,\n output_table=None,\n file_conn_id=None,\n task_id=None,\n **kwargs,\n):\n \"\"\"Convert AgnosticLoadFile into a function.\n\n Returns an XComArg object.\n\n :param path: File path.\n :type path: str\n :param output_table: Table to create\n :type output_table: Table\n :param file_conn_id: Airflow connection id of input file (optional)\n :type file_conn_id: str\n :param task_id: task id, optional.\n :type task_id: str\n \"\"\"\n\n task_id = task_id if task_id is not None else get_task_id(\"load_file\", path)\n\n return AgnosticLoadFile(\n task_id=task_id,\n 
path=path,\n output_table=output_table,\n file_conn_id=file_conn_id,\n **kwargs,\n ).output\n", "path": "src/astro/sql/operators/agnostic_load_file.py"}]} | 2,012 | 181 |
gh_patches_debug_56255 | rasdani/github-patches | git_diff | litestar-org__litestar-1377 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
--- END ISSUE ---
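What the reporter is asking for, roughly: stop requiring `DirectoryPath` (which checks existence on the local filesystem) and accept any path-like value. A hedged sketch of such a relaxation — illustrative only, not the project's actual change:

```python
from pathlib import Path
from typing import List, Union

from pydantic import BaseModel, DirectoryPath

class StaticFilesConfig(BaseModel):
    # plain Path/str entries skip the exists-on-disk validation,
    # so zipped packages / importlib.resources trees can be served
    directories: List[Union[DirectoryPath, Path, str]]
```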
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/events/emitter.py`
Content:
```
1 from __future__ import annotations
2
3 from abc import ABC, abstractmethod
4 from asyncio import CancelledError, Queue, Task, create_task
5 from collections import defaultdict
6 from contextlib import suppress
7 from typing import TYPE_CHECKING, Any, DefaultDict, Sequence
8
9 import sniffio
10
11 from starlite.exceptions import ImproperlyConfiguredException
12
13 __all__ = ("BaseEventEmitterBackend", "SimpleEventEmitter")
14
15
16 if TYPE_CHECKING:
17 from starlite.events.listener import EventListener
18
19
20 class BaseEventEmitterBackend(ABC):
21 """Abstract class used to define event emitter backends."""
22
23 __slots__ = ("listeners",)
24
25 listeners: DefaultDict[str, set[EventListener]]
26
27 def __init__(self, listeners: Sequence[EventListener]):
28 """Create an event emitter instance.
29
30 Args:
31 listeners: A list of listeners.
32 """
33 self.listeners = defaultdict(set)
34 for listener in listeners:
35 for event_id in listener.event_ids:
36 self.listeners[event_id].add(listener)
37
38 @abstractmethod
39 def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None: # pragma: no cover
40 """Emit an event to all attached listeners.
41
42 Args:
43 event_id: The ID of the event to emit, e.g 'my_event'.
44 *args: args to pass to the listener(s).
45 **kwargs: kwargs to pass to the listener(s)
46
47 Returns:
48 None
49 """
50 raise NotImplementedError("not implemented")
51
52 @abstractmethod
53 async def on_startup(self) -> None: # pragma: no cover
54 """Hook called on application startup, used to establish connection or perform other async operations.
55
56 Returns:
57 None
58 """
59 raise NotImplementedError("not implemented")
60
61 @abstractmethod
62 async def on_shutdown(self) -> None: # pragma: no cover
63 """Hook called on application shutdown, used to perform cleanup.
64
65 Returns:
66 None
67 """
68 raise NotImplementedError("not implemented")
69
70
71 class SimpleEventEmitter(BaseEventEmitterBackend):
72 """Event emitter the works only in the current process"""
73
74 __slots__ = ("_queue", "_worker_task")
75
76 _worker_task: Task | None
77
78 def __init__(self, listeners: Sequence[EventListener]):
79 """Create an event emitter instance.
80
81 Args:
82 listeners: A list of listeners.
83 """
84 super().__init__(listeners=listeners)
85 self._queue: Queue | None = None
86 self._worker_task = None
87
88 async def _worker(self) -> None:
89 """Worker that runs in a separate task and continuously pulls events from asyncio queue.
90
91 Returns:
92 None
93 """
94 while self._queue:
95 fn, args, kwargs = await self._queue.get()
96 await fn(*args, *kwargs)
97 self._queue.task_done()
98
99 async def on_startup(self) -> None:
100 """Hook called on application startup, used to establish connection or perform other async operations.
101
102 Returns:
103 None
104 """
105 if sniffio.current_async_library() != "asyncio":
106 return
107
108 self._queue = Queue()
109 self._worker_task = create_task(self._worker())
110
111 async def on_shutdown(self) -> None:
112 """Hook called on application shutdown, used to perform cleanup.
113
114 Returns:
115 None
116 """
117
118 if self._queue:
119 await self._queue.join()
120
121 if self._worker_task:
122 self._worker_task.cancel()
123 with suppress(CancelledError):
124 await self._worker_task
125
126 self._worker_task = None
127 self._queue = None
128
129 def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:
130 """Emit an event to all attached listeners.
131
132 Args:
133 event_id: The ID of the event to emit, e.g 'my_event'.
134 *args: args to pass to the listener(s).
135 **kwargs: kwargs to pass to the listener(s)
136
137 Returns:
138 None
139 """
140 if not (self._worker_task and self._queue):
141 if sniffio.current_async_library() != "asyncio":
142 raise ImproperlyConfiguredException("{type(self).__name__} only supports 'asyncio' based event loops")
143
144 raise ImproperlyConfiguredException("Worker not running")
145
146 if listeners := self.listeners.get(event_id):
147 for listener in listeners:
148 self._queue.put_nowait((listener.fn, args, kwargs))
149 return
150 raise ImproperlyConfiguredException(f"no event listeners are registered for event ID: {event_id}")
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/events/emitter.py b/starlite/events/emitter.py
--- a/starlite/events/emitter.py
+++ b/starlite/events/emitter.py
@@ -93,7 +93,7 @@
"""
while self._queue:
fn, args, kwargs = await self._queue.get()
- await fn(*args, *kwargs)
+ await fn(*args, **kwargs)
self._queue.task_done()
async def on_startup(self) -> None:
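The one-character diff above fixes a real unpacking bug: `*kwargs` iterates the dict and passes its *keys* positionally. A tiny illustration:

```python
def fn(a, b=None):
    print(a, b)

args, kwargs = (1,), {"b": 2}

fn(*args, *kwargs)   # prints: 1 b   (the dict key is passed positionally)
fn(*args, **kwargs)  # prints: 1 2   (intended)
```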
| {"golden_diff": "diff --git a/starlite/events/emitter.py b/starlite/events/emitter.py\n--- a/starlite/events/emitter.py\n+++ b/starlite/events/emitter.py\n@@ -93,7 +93,7 @@\n \"\"\"\n while self._queue:\n fn, args, kwargs = await self._queue.get()\n- await fn(*args, *kwargs)\n+ await fn(*args, **kwargs)\n self._queue.task_done()\n \n async def on_startup(self) -> None:\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom asyncio import CancelledError, Queue, Task, create_task\nfrom collections import defaultdict\nfrom contextlib import suppress\nfrom typing import TYPE_CHECKING, Any, DefaultDict, Sequence\n\nimport sniffio\n\nfrom starlite.exceptions import ImproperlyConfiguredException\n\n__all__ = (\"BaseEventEmitterBackend\", \"SimpleEventEmitter\")\n\n\nif TYPE_CHECKING:\n from starlite.events.listener import EventListener\n\n\nclass BaseEventEmitterBackend(ABC):\n \"\"\"Abstract class used to define event emitter backends.\"\"\"\n\n __slots__ = (\"listeners\",)\n\n listeners: DefaultDict[str, set[EventListener]]\n\n def __init__(self, listeners: Sequence[EventListener]):\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n self.listeners = defaultdict(set)\n for listener in listeners:\n for event_id in listener.event_ids:\n self.listeners[event_id].add(listener)\n\n @abstractmethod\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None: # pragma: no cover\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n @abstractmethod\n async def on_startup(self) -> None: # pragma: no cover\n \"\"\"Hook called on application startup, used to establish connection or perform other async operations.\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n @abstractmethod\n async def on_shutdown(self) -> None: # pragma: no cover\n \"\"\"Hook called on application shutdown, used to perform cleanup.\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n\nclass SimpleEventEmitter(BaseEventEmitterBackend):\n \"\"\"Event emitter the works only in the current process\"\"\"\n\n __slots__ = (\"_queue\", \"_worker_task\")\n\n _worker_task: Task | None\n\n def __init__(self, listeners: Sequence[EventListener]):\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n super().__init__(listeners=listeners)\n self._queue: Queue | None = None\n self._worker_task = None\n\n async def _worker(self) -> None:\n \"\"\"Worker that runs in a separate task and continuously pulls events from asyncio queue.\n\n Returns:\n 
None\n \"\"\"\n while self._queue:\n fn, args, kwargs = await self._queue.get()\n await fn(*args, *kwargs)\n self._queue.task_done()\n\n async def on_startup(self) -> None:\n \"\"\"Hook called on application startup, used to establish connection or perform other async operations.\n\n Returns:\n None\n \"\"\"\n if sniffio.current_async_library() != \"asyncio\":\n return\n\n self._queue = Queue()\n self._worker_task = create_task(self._worker())\n\n async def on_shutdown(self) -> None:\n \"\"\"Hook called on application shutdown, used to perform cleanup.\n\n Returns:\n None\n \"\"\"\n\n if self._queue:\n await self._queue.join()\n\n if self._worker_task:\n self._worker_task.cancel()\n with suppress(CancelledError):\n await self._worker_task\n\n self._worker_task = None\n self._queue = None\n\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n if not (self._worker_task and self._queue):\n if sniffio.current_async_library() != \"asyncio\":\n raise ImproperlyConfiguredException(\"{type(self).__name__} only supports 'asyncio' based event loops\")\n\n raise ImproperlyConfiguredException(\"Worker not running\")\n\n if listeners := self.listeners.get(event_id):\n for listener in listeners:\n self._queue.put_nowait((listener.fn, args, kwargs))\n return\n raise ImproperlyConfiguredException(f\"no event listeners are registered for event ID: {event_id}\")\n", "path": "starlite/events/emitter.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom asyncio import CancelledError, Queue, Task, create_task\nfrom collections import defaultdict\nfrom contextlib import suppress\nfrom typing import TYPE_CHECKING, Any, DefaultDict, Sequence\n\nimport sniffio\n\nfrom starlite.exceptions import ImproperlyConfiguredException\n\n__all__ = (\"BaseEventEmitterBackend\", \"SimpleEventEmitter\")\n\n\nif TYPE_CHECKING:\n from starlite.events.listener import EventListener\n\n\nclass BaseEventEmitterBackend(ABC):\n \"\"\"Abstract class used to define event emitter backends.\"\"\"\n\n __slots__ = (\"listeners\",)\n\n listeners: DefaultDict[str, set[EventListener]]\n\n def __init__(self, listeners: Sequence[EventListener]):\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n self.listeners = defaultdict(set)\n for listener in listeners:\n for event_id in listener.event_ids:\n self.listeners[event_id].add(listener)\n\n @abstractmethod\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None: # pragma: no cover\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n @abstractmethod\n async def on_startup(self) -> None: # pragma: no cover\n \"\"\"Hook called on application startup, used to establish connection or perform other async operations.\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n @abstractmethod\n async def on_shutdown(self) -> None: # pragma: no cover\n \"\"\"Hook called on application shutdown, used to perform cleanup.\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not 
implemented\")\n\n\nclass SimpleEventEmitter(BaseEventEmitterBackend):\n \"\"\"Event emitter the works only in the current process\"\"\"\n\n __slots__ = (\"_queue\", \"_worker_task\")\n\n _worker_task: Task | None\n\n def __init__(self, listeners: Sequence[EventListener]):\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n super().__init__(listeners=listeners)\n self._queue: Queue | None = None\n self._worker_task = None\n\n async def _worker(self) -> None:\n \"\"\"Worker that runs in a separate task and continuously pulls events from asyncio queue.\n\n Returns:\n None\n \"\"\"\n while self._queue:\n fn, args, kwargs = await self._queue.get()\n await fn(*args, **kwargs)\n self._queue.task_done()\n\n async def on_startup(self) -> None:\n \"\"\"Hook called on application startup, used to establish connection or perform other async operations.\n\n Returns:\n None\n \"\"\"\n if sniffio.current_async_library() != \"asyncio\":\n return\n\n self._queue = Queue()\n self._worker_task = create_task(self._worker())\n\n async def on_shutdown(self) -> None:\n \"\"\"Hook called on application shutdown, used to perform cleanup.\n\n Returns:\n None\n \"\"\"\n\n if self._queue:\n await self._queue.join()\n\n if self._worker_task:\n self._worker_task.cancel()\n with suppress(CancelledError):\n await self._worker_task\n\n self._worker_task = None\n self._queue = None\n\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n if not (self._worker_task and self._queue):\n if sniffio.current_async_library() != \"asyncio\":\n raise ImproperlyConfiguredException(\"{type(self).__name__} only supports 'asyncio' based event loops\")\n\n raise ImproperlyConfiguredException(\"Worker not running\")\n\n if listeners := self.listeners.get(event_id):\n for listener in listeners:\n self._queue.put_nowait((listener.fn, args, kwargs))\n return\n raise ImproperlyConfiguredException(f\"no event listeners are registered for event ID: {event_id}\")\n", "path": "starlite/events/emitter.py"}]} | 1,767 | 107 |
gh_patches_debug_26955 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3411 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BMO Harris Bank
https://branchlocator.bmoharris.com/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/bmo_harris.py`
Content:
```
1 import html
2 import json
3 import scrapy
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8
9 class BMOHarrisSpider(scrapy.Spider):
10 name = "bmo-harris"
11 item_attributes = { 'brand': "BMO Harris Bank" }
12 allowed_domains = ["branches.bmoharris.com"]
13 download_delay = 0.5
14 start_urls = (
15 'https://branches.bmoharris.com/',
16 )
17
18 def parse_store(self, response):
19 properties = {
20 'addr_full': response.xpath('//meta[@property="business:contact_data:street_address"]/@content').extract_first(),
21 'phone': response.xpath('//meta[@property="business:contact_data:phone_number"]/@content').extract_first(),
22 'city': response.xpath('//meta[@property="business:contact_data:locality"]/@content').extract_first(),
23 'state': response.xpath('//meta[@property="business:contact_data:region"]/@content').extract_first(),
24 'postcode': response.xpath('//meta[@property="business:contact_data:postal_code"]/@content').extract_first(),
25 'country': response.xpath('//meta[@property="business:contact_data:country_name"]/@content').extract_first(),
26 'ref': response.url,
27 'website': response.url,
28 'lat': response.xpath('//meta[@property="place:location:latitude"]/@content').extract_first(),
29 'lon': response.xpath('//meta[@property="place:location:longitude"]/@content').extract_first(),
30 }
31
32 yield GeojsonPointItem(**properties)
33
34 def parse(self, response):
35 # Step into hierarchy of place
36 for url in response.xpath("//div[@class='itemlist']/p/a/@href").extract():
37 yield scrapy.Request(response.urljoin(url))
38
39 # Look for links to stores
40 for url in response.xpath("//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href").extract():
41 yield scrapy.Request(response.urljoin(url), callback=self.parse_store)
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/bmo_harris.py b/locations/spiders/bmo_harris.py
--- a/locations/spiders/bmo_harris.py
+++ b/locations/spiders/bmo_harris.py
@@ -7,13 +7,14 @@
class BMOHarrisSpider(scrapy.Spider):
- name = "bmo-harris"
- item_attributes = { 'brand': "BMO Harris Bank" }
+ name = "bmo_harris"
+ item_attributes = {'brand': "BMO Harris Bank", 'brand_wikidata': "Q4835981"}
allowed_domains = ["branches.bmoharris.com"]
download_delay = 0.5
start_urls = (
'https://branches.bmoharris.com/',
)
+ user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
def parse_store(self, response):
properties = {
@@ -33,9 +34,9 @@
def parse(self, response):
# Step into hierarchy of place
- for url in response.xpath("//div[@class='itemlist']/p/a/@href").extract():
+ for url in response.xpath("//ul[@class='itemlist']/li/a/@href").extract():
yield scrapy.Request(response.urljoin(url))
# Look for links to stores
- for url in response.xpath("//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href").extract():
+ for url in response.xpath("//ul[@class='itemlist']/li/div/span[@itemprop='streetAddress']/a/@href").extract():
yield scrapy.Request(response.urljoin(url), callback=self.parse_store)
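To sanity-check the updated selectors against the live page (standard `scrapy shell` workflow; results depend on the current markup):

```
$ scrapy shell https://branches.bmoharris.com/
>>> response.xpath("//ul[@class='itemlist']/li/a/@href").extract()
>>> response.xpath("//ul[@class='itemlist']/li/div/span[@itemprop='streetAddress']/a/@href").extract()
```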
| {"golden_diff": "diff --git a/locations/spiders/bmo_harris.py b/locations/spiders/bmo_harris.py\n--- a/locations/spiders/bmo_harris.py\n+++ b/locations/spiders/bmo_harris.py\n@@ -7,13 +7,14 @@\n \n \n class BMOHarrisSpider(scrapy.Spider):\n- name = \"bmo-harris\"\n- item_attributes = { 'brand': \"BMO Harris Bank\" }\n+ name = \"bmo_harris\"\n+ item_attributes = {'brand': \"BMO Harris Bank\", 'brand_wikidata': \"Q4835981\"}\n allowed_domains = [\"branches.bmoharris.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://branches.bmoharris.com/',\n )\n+ user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'\n \n def parse_store(self, response):\n properties = {\n@@ -33,9 +34,9 @@\n \n def parse(self, response):\n # Step into hierarchy of place\n- for url in response.xpath(\"//div[@class='itemlist']/p/a/@href\").extract():\n+ for url in response.xpath(\"//ul[@class='itemlist']/li/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url))\n \n # Look for links to stores\n- for url in response.xpath(\"//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href\").extract():\n+ for url in response.xpath(\"//ul[@class='itemlist']/li/div/span[@itemprop='streetAddress']/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n", "issue": "BMO Harris Bank\nhttps://branchlocator.bmoharris.com/\n", "before_files": [{"content": "import html\nimport json\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass BMOHarrisSpider(scrapy.Spider):\n name = \"bmo-harris\"\n item_attributes = { 'brand': \"BMO Harris Bank\" }\n allowed_domains = [\"branches.bmoharris.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://branches.bmoharris.com/',\n )\n\n def parse_store(self, response):\n properties = {\n 'addr_full': response.xpath('//meta[@property=\"business:contact_data:street_address\"]/@content').extract_first(),\n 'phone': response.xpath('//meta[@property=\"business:contact_data:phone_number\"]/@content').extract_first(),\n 'city': response.xpath('//meta[@property=\"business:contact_data:locality\"]/@content').extract_first(),\n 'state': response.xpath('//meta[@property=\"business:contact_data:region\"]/@content').extract_first(),\n 'postcode': response.xpath('//meta[@property=\"business:contact_data:postal_code\"]/@content').extract_first(),\n 'country': response.xpath('//meta[@property=\"business:contact_data:country_name\"]/@content').extract_first(),\n 'ref': response.url,\n 'website': response.url,\n 'lat': response.xpath('//meta[@property=\"place:location:latitude\"]/@content').extract_first(),\n 'lon': response.xpath('//meta[@property=\"place:location:longitude\"]/@content').extract_first(),\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n # Step into hierarchy of place\n for url in response.xpath(\"//div[@class='itemlist']/p/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url))\n\n # Look for links to stores\n for url in response.xpath(\"//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n", "path": "locations/spiders/bmo_harris.py"}], "after_files": [{"content": "import html\nimport json\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass BMOHarrisSpider(scrapy.Spider):\n name = 
\"bmo_harris\"\n item_attributes = {'brand': \"BMO Harris Bank\", 'brand_wikidata': \"Q4835981\"}\n allowed_domains = [\"branches.bmoharris.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://branches.bmoharris.com/',\n )\n user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'\n\n def parse_store(self, response):\n properties = {\n 'addr_full': response.xpath('//meta[@property=\"business:contact_data:street_address\"]/@content').extract_first(),\n 'phone': response.xpath('//meta[@property=\"business:contact_data:phone_number\"]/@content').extract_first(),\n 'city': response.xpath('//meta[@property=\"business:contact_data:locality\"]/@content').extract_first(),\n 'state': response.xpath('//meta[@property=\"business:contact_data:region\"]/@content').extract_first(),\n 'postcode': response.xpath('//meta[@property=\"business:contact_data:postal_code\"]/@content').extract_first(),\n 'country': response.xpath('//meta[@property=\"business:contact_data:country_name\"]/@content').extract_first(),\n 'ref': response.url,\n 'website': response.url,\n 'lat': response.xpath('//meta[@property=\"place:location:latitude\"]/@content').extract_first(),\n 'lon': response.xpath('//meta[@property=\"place:location:longitude\"]/@content').extract_first(),\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n # Step into hierarchy of place\n for url in response.xpath(\"//ul[@class='itemlist']/li/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url))\n\n # Look for links to stores\n for url in response.xpath(\"//ul[@class='itemlist']/li/div/span[@itemprop='streetAddress']/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n", "path": "locations/spiders/bmo_harris.py"}]} | 787 | 421 |
gh_patches_debug_5298 | rasdani/github-patches | git_diff | pyca__cryptography-2845 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
_ModuleWithDeprecations doesn't handle patching properly.
`_ModuleWithDeprecations` catches `__getattr__` and `__setattr__` to patch through to the underlying module, but does not intercept `__delattr__`. That means that if you're using something like `mock.patch`, the mock successfully lands in place, but cannot be removed: the mock was applied to the underlying module, but the delete comes from the proxy.
Should be easily fixed.
--- END ISSUE ---
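To see the failure mode concretely, here is a minimal sketch — a toy module and proxy, not the real `_ModuleWithDeprecations` — showing that without `__delattr__` a delete lands on the proxy instead of the wrapped module:

```python
import types


class Proxy(object):
    """Forwards reads and writes to a wrapped module, but not deletes."""

    def __init__(self, module):
        self.__dict__["_module"] = module

    def __getattr__(self, attr):
        return getattr(self._module, attr)

    def __setattr__(self, attr, value):
        setattr(self._module, attr, value)


mod = types.ModuleType("toy")
proxy = Proxy(mod)
proxy.value = 1            # __setattr__ stores this on the underlying module
try:
    del proxy.value        # default __delattr__ looks on the proxy itself
except AttributeError as exc:
    print("delete failed:", exc)   # 'value' lives on mod, not on the proxy
```

Forwarding `__delattr__` to the wrapped module closes the gap, which is exactly what `mock.patch` relies on when tearing a patch back down.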
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/utils.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import abc
8 import binascii
9 import inspect
10 import struct
11 import sys
12 import warnings
13
14
15 # the functions deprecated in 1.0 are on an arbitrarily extended deprecation
16 # cycle and should not be removed until we agree on when that cycle ends.
17 DeprecatedIn10 = DeprecationWarning
18 DeprecatedIn12 = DeprecationWarning
19
20
21 def read_only_property(name):
22 return property(lambda self: getattr(self, name))
23
24
25 def register_interface(iface):
26 def register_decorator(klass):
27 verify_interface(iface, klass)
28 iface.register(klass)
29 return klass
30 return register_decorator
31
32
33 if hasattr(int, "from_bytes"):
34 int_from_bytes = int.from_bytes
35 else:
36 def int_from_bytes(data, byteorder, signed=False):
37 assert byteorder == 'big'
38 assert not signed
39
40 if len(data) % 4 != 0:
41 data = (b'\x00' * (4 - (len(data) % 4))) + data
42
43 result = 0
44
45 while len(data) > 0:
46 digit, = struct.unpack('>I', data[:4])
47 result = (result << 32) + digit
48 # TODO: this is quadratic in the length of data
49 data = data[4:]
50
51 return result
52
53
54 def int_to_bytes(integer, length=None):
55 hex_string = '%x' % integer
56 if length is None:
57 n = len(hex_string)
58 else:
59 n = length * 2
60 return binascii.unhexlify(hex_string.zfill(n + (n & 1)))
61
62
63 class InterfaceNotImplemented(Exception):
64 pass
65
66
67 if hasattr(inspect, "signature"):
68 signature = inspect.signature
69 else:
70 signature = inspect.getargspec
71
72
73 def verify_interface(iface, klass):
74 for method in iface.__abstractmethods__:
75 if not hasattr(klass, method):
76 raise InterfaceNotImplemented(
77 "{0} is missing a {1!r} method".format(klass, method)
78 )
79 if isinstance(getattr(iface, method), abc.abstractproperty):
80 # Can't properly verify these yet.
81 continue
82 sig = signature(getattr(iface, method))
83 actual = signature(getattr(klass, method))
84 if sig != actual:
85 raise InterfaceNotImplemented(
86 "{0}.{1}'s signature differs from the expected. Expected: "
87 "{2!r}. Received: {3!r}".format(
88 klass, method, sig, actual
89 )
90 )
91
92
93 if sys.version_info >= (2, 7):
94 def bit_length(x):
95 return x.bit_length()
96 else:
97 def bit_length(x):
98 return len(bin(x)) - (2 + (x <= 0))
99
100
101 class _DeprecatedValue(object):
102 def __init__(self, value, message, warning_class):
103 self.value = value
104 self.message = message
105 self.warning_class = warning_class
106
107
108 class _ModuleWithDeprecations(object):
109 def __init__(self, module):
110 self.__dict__["_module"] = module
111
112 def __getattr__(self, attr):
113 obj = getattr(self._module, attr)
114 if isinstance(obj, _DeprecatedValue):
115 warnings.warn(obj.message, obj.warning_class, stacklevel=2)
116 obj = obj.value
117 return obj
118
119 def __setattr__(self, attr, value):
120 setattr(self._module, attr, value)
121
122 def __dir__(self):
123 return ["_module"] + dir(self._module)
124
125
126 def deprecated(value, module_name, message, warning_class):
127 module = sys.modules[module_name]
128 if not isinstance(module, _ModuleWithDeprecations):
129 sys.modules[module_name] = module = _ModuleWithDeprecations(module)
130 return _DeprecatedValue(value, message, warning_class)
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py
--- a/src/cryptography/utils.py
+++ b/src/cryptography/utils.py
@@ -119,6 +119,13 @@
def __setattr__(self, attr, value):
setattr(self._module, attr, value)
+ def __delattr__(self, attr):
+ obj = getattr(self._module, attr)
+ if isinstance(obj, _DeprecatedValue):
+ warnings.warn(obj.message, obj.warning_class, stacklevel=2)
+
+ delattr(self._module, attr)
+
def __dir__(self):
return ["_module"] + dir(self._module)
| {"golden_diff": "diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py\n--- a/src/cryptography/utils.py\n+++ b/src/cryptography/utils.py\n@@ -119,6 +119,13 @@\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n \n+ def __delattr__(self, attr):\n+ obj = getattr(self._module, attr)\n+ if isinstance(obj, _DeprecatedValue):\n+ warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n+\n+ delattr(self._module, attr)\n+\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n", "issue": "_ModuleWithDeprecations doesn't handle patching properly.\n`_ModuleWithDeprecations` catches `__getattr__` and `__setattr__` to patch through to the underlying module, but does not intercept `__delattr__`. That means that if you're using something like `mock.patch`, the mock successfully lands in place, but cannot be removed: the mock was applied to the underlying module, but the delete comes from the proxy.\n\nShould be easily fixed.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\nimport binascii\nimport inspect\nimport struct\nimport sys\nimport warnings\n\n\n# the functions deprecated in 1.0 are on an arbitrarily extended deprecation\n# cycle and should not be removed until we agree on when that cycle ends.\nDeprecatedIn10 = DeprecationWarning\nDeprecatedIn12 = DeprecationWarning\n\n\ndef read_only_property(name):\n return property(lambda self: getattr(self, name))\n\n\ndef register_interface(iface):\n def register_decorator(klass):\n verify_interface(iface, klass)\n iface.register(klass)\n return klass\n return register_decorator\n\n\nif hasattr(int, \"from_bytes\"):\n int_from_bytes = int.from_bytes\nelse:\n def int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n if len(data) % 4 != 0:\n data = (b'\\x00' * (4 - (len(data) % 4))) + data\n\n result = 0\n\n while len(data) > 0:\n digit, = struct.unpack('>I', data[:4])\n result = (result << 32) + digit\n # TODO: this is quadratic in the length of data\n data = data[4:]\n\n return result\n\n\ndef int_to_bytes(integer, length=None):\n hex_string = '%x' % integer\n if length is None:\n n = len(hex_string)\n else:\n n = length * 2\n return binascii.unhexlify(hex_string.zfill(n + (n & 1)))\n\n\nclass InterfaceNotImplemented(Exception):\n pass\n\n\nif hasattr(inspect, \"signature\"):\n signature = inspect.signature\nelse:\n signature = inspect.getargspec\n\n\ndef verify_interface(iface, klass):\n for method in iface.__abstractmethods__:\n if not hasattr(klass, method):\n raise InterfaceNotImplemented(\n \"{0} is missing a {1!r} method\".format(klass, method)\n )\n if isinstance(getattr(iface, method), abc.abstractproperty):\n # Can't properly verify these yet.\n continue\n sig = signature(getattr(iface, method))\n actual = signature(getattr(klass, method))\n if sig != actual:\n raise InterfaceNotImplemented(\n \"{0}.{1}'s signature differs from the expected. Expected: \"\n \"{2!r}. 
Received: {3!r}\".format(\n klass, method, sig, actual\n )\n )\n\n\nif sys.version_info >= (2, 7):\n def bit_length(x):\n return x.bit_length()\nelse:\n def bit_length(x):\n return len(bin(x)) - (2 + (x <= 0))\n\n\nclass _DeprecatedValue(object):\n def __init__(self, value, message, warning_class):\n self.value = value\n self.message = message\n self.warning_class = warning_class\n\n\nclass _ModuleWithDeprecations(object):\n def __init__(self, module):\n self.__dict__[\"_module\"] = module\n\n def __getattr__(self, attr):\n obj = getattr(self._module, attr)\n if isinstance(obj, _DeprecatedValue):\n warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n obj = obj.value\n return obj\n\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n\n\ndef deprecated(value, module_name, message, warning_class):\n module = sys.modules[module_name]\n if not isinstance(module, _ModuleWithDeprecations):\n sys.modules[module_name] = module = _ModuleWithDeprecations(module)\n return _DeprecatedValue(value, message, warning_class)\n", "path": "src/cryptography/utils.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\nimport binascii\nimport inspect\nimport struct\nimport sys\nimport warnings\n\n\n# the functions deprecated in 1.0 are on an arbitrarily extended deprecation\n# cycle and should not be removed until we agree on when that cycle ends.\nDeprecatedIn10 = DeprecationWarning\nDeprecatedIn12 = DeprecationWarning\n\n\ndef read_only_property(name):\n return property(lambda self: getattr(self, name))\n\n\ndef register_interface(iface):\n def register_decorator(klass):\n verify_interface(iface, klass)\n iface.register(klass)\n return klass\n return register_decorator\n\n\nif hasattr(int, \"from_bytes\"):\n int_from_bytes = int.from_bytes\nelse:\n def int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n if len(data) % 4 != 0:\n data = (b'\\x00' * (4 - (len(data) % 4))) + data\n\n result = 0\n\n while len(data) > 0:\n digit, = struct.unpack('>I', data[:4])\n result = (result << 32) + digit\n # TODO: this is quadratic in the length of data\n data = data[4:]\n\n return result\n\n\ndef int_to_bytes(integer, length=None):\n hex_string = '%x' % integer\n if length is None:\n n = len(hex_string)\n else:\n n = length * 2\n return binascii.unhexlify(hex_string.zfill(n + (n & 1)))\n\n\nclass InterfaceNotImplemented(Exception):\n pass\n\n\nif hasattr(inspect, \"signature\"):\n signature = inspect.signature\nelse:\n signature = inspect.getargspec\n\n\ndef verify_interface(iface, klass):\n for method in iface.__abstractmethods__:\n if not hasattr(klass, method):\n raise InterfaceNotImplemented(\n \"{0} is missing a {1!r} method\".format(klass, method)\n )\n if isinstance(getattr(iface, method), abc.abstractproperty):\n # Can't properly verify these yet.\n continue\n sig = signature(getattr(iface, method))\n actual = signature(getattr(klass, method))\n if sig != actual:\n raise InterfaceNotImplemented(\n \"{0}.{1}'s signature differs from the expected. Expected: \"\n \"{2!r}. 
Received: {3!r}\".format(\n klass, method, sig, actual\n )\n )\n\n\nif sys.version_info >= (2, 7):\n def bit_length(x):\n return x.bit_length()\nelse:\n def bit_length(x):\n return len(bin(x)) - (2 + (x <= 0))\n\n\nclass _DeprecatedValue(object):\n def __init__(self, value, message, warning_class):\n self.value = value\n self.message = message\n self.warning_class = warning_class\n\n\nclass _ModuleWithDeprecations(object):\n def __init__(self, module):\n self.__dict__[\"_module\"] = module\n\n def __getattr__(self, attr):\n obj = getattr(self._module, attr)\n if isinstance(obj, _DeprecatedValue):\n warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n obj = obj.value\n return obj\n\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n\n def __delattr__(self, attr):\n obj = getattr(self._module, attr)\n if isinstance(obj, _DeprecatedValue):\n warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n\n delattr(self._module, attr)\n\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n\n\ndef deprecated(value, module_name, message, warning_class):\n module = sys.modules[module_name]\n if not isinstance(module, _ModuleWithDeprecations):\n sys.modules[module_name] = module = _ModuleWithDeprecations(module)\n return _DeprecatedValue(value, message, warning_class)\n", "path": "src/cryptography/utils.py"}]} | 1,524 | 148 |
gh_patches_debug_6466 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1417 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Return HTTP errors in proper format
Proposer: Eric Brehault
Seconder:
# Motivation
When a page does not exist, or has an error, or is not allowed for the user, Plone returns the appropriate HTTP error (404, 500, ...), and the response is a human-readable page, properly skinned, which is nice for the user.
And if the requested resource is not a page (an image, a JS file, an AJAX call, etc.), Plone also returns this human-readable page.
It is useless because the page will not be displayed, and it produces many problems:
- the response is very heavy,
- it involves a lot of processing (Plone will render an entire page for nothing),
- for AJAX calls, the response cannot be easily interpreted,
- it might produce a cascade of errors (for instance: the regular response is not supposed to be rendered via Diazo, as it is not an HTML page, but the error is rendered by Diazo, and it might produce another error).
# Proposed solution
We could display the human-readable error page only if the current request's `HTTP_ACCEPT` parameter contains `text/html`; in other cases, we would just return a simple JSON error response.
# Proposed implementation
Test the `HTTP_ACCEPT` value in `Products/CMFPlone/skins/plone_templates/standard_error_message.py`, and call the existing template or make a JSON response accordingly.
# Risks
No identified risks.
--- END ISSUE ---
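As a rough sketch of the proposed content negotiation — a hypothetical helper, not the actual skin script — the dispatch could look like this:

```python
import json


def render_error(request, error_type, render_html_page):
    """Serve the skinned error page only to clients that accept HTML."""
    accept = request.getHeader("Accept", "") or ""
    if "text/html" not in accept:
        # AJAX/static-resource requests get a tiny machine-readable body.
        request.RESPONSE.setHeader("Content-Type", "application/json")
        # json.dumps avoids the brace-escaping pitfalls of str.format here.
        return json.dumps({"error_type": error_type})
    return render_html_page()  # browsers still get the full, themed page
```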
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/skins/plone_templates/standard_error_message.py`
Content:
```
1 ## Script (Python) "standard_error_message"
2 ##bind container=container
3 ##bind context=context
4 ##bind namespace=
5 ##bind script=script
6 ##bind subpath=traverse_subpath
7 ##parameters=**kwargs
8 ##title=Dispatches to relevant error view
9
10 ## by default we handle everything in 1 PageTemplate.
11 # you could easily check for the error_type and
12 # dispatch to an appropriate PageTemplate.
13
14 # Check if the object is traversable, if not it might be a view, get its parent
15 # because we need to render the error on an actual content object
16 from AccessControl import Unauthorized
17 try:
18 while not hasattr(context.aq_explicit, 'restrictedTraverse'):
19 context = context.aq_parent
20 except (Unauthorized, AttributeError):
21 context = context.portal_url.getPortalObject()
22
23 error_type = kwargs.get('error_type', None)
24 error_message = kwargs.get('error_message', None)
25 error_log_url = kwargs.get('error_log_url', None)
26 error_tb = kwargs.get('error_tb', None)
27 error_traceback = kwargs.get('error_traceback', None)
28 error_value = kwargs.get('error_value', None)
29
30 if error_log_url:
31 error_log_id = error_log_url.split('?id=')[1]
32 else:
33 error_log_id = None
34
35
36 no_actions = {'folder': [], 'user': [], 'global': [], 'workflow': []}
37 error_page = context.default_error_message(
38 error_type=error_type,
39 error_message=error_message,
40 error_tb=error_tb,
41 error_value=error_value,
42 error_log_url=error_log_url,
43 error_log_id=error_log_id,
44 no_portlets=True,
45 actions=no_actions)
46
47 return error_page
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Products/CMFPlone/skins/plone_templates/standard_error_message.py b/Products/CMFPlone/skins/plone_templates/standard_error_message.py
--- a/Products/CMFPlone/skins/plone_templates/standard_error_message.py
+++ b/Products/CMFPlone/skins/plone_templates/standard_error_message.py
@@ -27,6 +27,10 @@
error_traceback = kwargs.get('error_traceback', None)
error_value = kwargs.get('error_value', None)
+if "text/html" not in context.REQUEST.getHeader('Accept', ''):
+ context.REQUEST.RESPONSE.setHeader("Content-Type", "application/json")
+ return '{"error_type": "{0:s}"}'.format(error_type)
+
if error_log_url:
error_log_id = error_log_url.split('?id=')[1]
else:
| {"golden_diff": "diff --git a/Products/CMFPlone/skins/plone_templates/standard_error_message.py b/Products/CMFPlone/skins/plone_templates/standard_error_message.py\n--- a/Products/CMFPlone/skins/plone_templates/standard_error_message.py\n+++ b/Products/CMFPlone/skins/plone_templates/standard_error_message.py\n@@ -27,6 +27,10 @@\n error_traceback = kwargs.get('error_traceback', None)\n error_value = kwargs.get('error_value', None)\n \n+if \"text/html\" not in context.REQUEST.getHeader('Accept', ''):\n+ context.REQUEST.RESPONSE.setHeader(\"Content-Type\", \"application/json\")\n+ return '{\"error_type\": \"{0:s}\"}'.format(error_type)\n+\n if error_log_url:\n error_log_id = error_log_url.split('?id=')[1]\n else:\n", "issue": "Return HTTP errors in proper format\nProposer: Eric Brehault\nSeconder:\n# Motivation\n\nWhen a page does not exist, or has an error, or is not allowed for the user, Plone returns the appropriate HTTP error (404, 500, ...), and the response is a human readable page, properly skinned, which nice for the user.\nAnd if the requested resource is not a page (an image, a JS file, an AJAX call, etc.), Plone also returns this human readable page.\nIt is useless because the page will not be displayed, and it produces many problems:\n- the response is very heavy,\n- it involves a lot of processing (Plone will render an entire page for nothing),\n- for AJAX call, the response cannot be easily interperted,\n- it might produce a cascade of errors (for instance: the regular response is not supposed to be rendered via Diazo, as it is not an HTML page, but the error is rendered by Diazo, and it might produce another error).\n# Proposed solution\n\nWe could display the human readable error page only if the current request `HTTP_ACCEPT` parameter contains `text/html`, in other cases, we would just return a simple JSON error reponse.\n# Proposed implementation\n\nTest the `HTTP_ACCEPT` value in `Products/CMFPlone/skins/plone_templates/standard_error_message.py`, and call the existing template or make a JSON response accordingly.\n# Risks\n\nNo identified risks.\n\n", "before_files": [{"content": "## Script (Python) \"standard_error_message\"\n##bind container=container\n##bind context=context\n##bind namespace=\n##bind script=script\n##bind subpath=traverse_subpath\n##parameters=**kwargs\n##title=Dispatches to relevant error view\n\n## by default we handle everything in 1 PageTemplate.\n# you could easily check for the error_type and\n# dispatch to an appropriate PageTemplate.\n\n# Check if the object is traversable, if not it might be a view, get its parent\n# because we need to render the error on an actual content object\nfrom AccessControl import Unauthorized\ntry:\n while not hasattr(context.aq_explicit, 'restrictedTraverse'):\n context = context.aq_parent\nexcept (Unauthorized, AttributeError):\n context = context.portal_url.getPortalObject()\n\nerror_type = kwargs.get('error_type', None)\nerror_message = kwargs.get('error_message', None)\nerror_log_url = kwargs.get('error_log_url', None)\nerror_tb = kwargs.get('error_tb', None)\nerror_traceback = kwargs.get('error_traceback', None)\nerror_value = kwargs.get('error_value', None)\n\nif error_log_url:\n error_log_id = error_log_url.split('?id=')[1]\nelse:\n error_log_id = None\n\n\nno_actions = {'folder': [], 'user': [], 'global': [], 'workflow': []}\nerror_page = context.default_error_message(\n error_type=error_type,\n error_message=error_message,\n error_tb=error_tb,\n error_value=error_value,\n error_log_url=error_log_url,\n 
error_log_id=error_log_id,\n no_portlets=True,\n actions=no_actions)\n\nreturn error_page\n", "path": "Products/CMFPlone/skins/plone_templates/standard_error_message.py"}], "after_files": [{"content": "## Script (Python) \"standard_error_message\"\n##bind container=container\n##bind context=context\n##bind namespace=\n##bind script=script\n##bind subpath=traverse_subpath\n##parameters=**kwargs\n##title=Dispatches to relevant error view\n\n## by default we handle everything in 1 PageTemplate.\n# you could easily check for the error_type and\n# dispatch to an appropriate PageTemplate.\n\n# Check if the object is traversable, if not it might be a view, get its parent\n# because we need to render the error on an actual content object\nfrom AccessControl import Unauthorized\ntry:\n while not hasattr(context.aq_explicit, 'restrictedTraverse'):\n context = context.aq_parent\nexcept (Unauthorized, AttributeError):\n context = context.portal_url.getPortalObject()\n\nerror_type = kwargs.get('error_type', None)\nerror_message = kwargs.get('error_message', None)\nerror_log_url = kwargs.get('error_log_url', None)\nerror_tb = kwargs.get('error_tb', None)\nerror_traceback = kwargs.get('error_traceback', None)\nerror_value = kwargs.get('error_value', None)\n\nif \"text/html\" not in context.REQUEST.getHeader('Accept', ''):\n context.REQUEST.RESPONSE.setHeader(\"Content-Type\", \"application/json\")\n return '{\"error_type\": \"{0:s}\"}'.format(error_type)\n\nif error_log_url:\n error_log_id = error_log_url.split('?id=')[1]\nelse:\n error_log_id = None\n\n\nno_actions = {'folder': [], 'user': [], 'global': [], 'workflow': []}\nerror_page = context.default_error_message(\n error_type=error_type,\n error_message=error_message,\n error_tb=error_tb,\n error_value=error_value,\n error_log_url=error_log_url,\n error_log_id=error_log_id,\n no_portlets=True,\n actions=no_actions)\n\nreturn error_page\n", "path": "Products/CMFPlone/skins/plone_templates/standard_error_message.py"}]} | 1,034 | 190 |
gh_patches_debug_3520 | rasdani/github-patches | git_diff | encode__uvicorn-1328 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No `python_requires` defined
### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
It seems that no `python_requires` is defined for the `uvicorn` package, which in turn results in the latest version being installed in a Python 3.6 (CI) environment (that subsequently fails).
If `python_requires` were defined to restrict the package to supported versions of the interpreter, I would have got an older version (that supported `py36`) instead.
### Steps to reproduce the bug
In a `py36` environment
```
pip install uvicorn
# Run uvicorn
# ...
```
### Expected behavior
An older version is installed that works.
### Actual behavior
`uvicorn` errors out, saying `py36` is unsupported.
### Debugging material
_No response_
### Environment
CPython 3.6
### Additional context
_No response_
--- END ISSUE ---
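For reference, the standard remedy is a `python_requires` bound passed to `setup()`; pip then skips releases whose bound excludes the running interpreter. A minimal sketch with placeholder metadata:

```python
from setuptools import setup

setup(
    name="example-server",       # placeholder, not the real package metadata
    version="0.1.0",
    python_requires=">=3.7",     # pip on 3.6 now falls back to older releases
)
```

With that metadata published, `pip install` on Python 3.6 resolves to the newest release whose bound still admits 3.6 instead of installing one that errors out at runtime.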
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6
7 from setuptools import setup
8
9
10 def get_version(package):
11 """
12 Return package version as listed in `__version__` in `init.py`.
13 """
14 path = os.path.join(package, "__init__.py")
15 init_py = open(path, "r", encoding="utf8").read()
16 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
17
18
19 def get_long_description():
20 """
21 Return the README.
22 """
23 return open("README.md", "r", encoding="utf8").read()
24
25
26 def get_packages(package):
27 """
28 Return root package and all sub-packages.
29 """
30 return [
31 dirpath
32 for dirpath, dirnames, filenames in os.walk(package)
33 if os.path.exists(os.path.join(dirpath, "__init__.py"))
34 ]
35
36
37 env_marker_cpython = (
38 "sys_platform != 'win32'"
39 " and (sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'PyPy')"
41 )
42
43 env_marker_win = "sys_platform == 'win32'"
44 env_marker_below_38 = "python_version < '3.8'"
45
46 minimal_requirements = [
47 "asgiref>=3.4.0",
48 "click>=7.0",
49 "h11>=0.8",
50 "typing-extensions;" + env_marker_below_38,
51 ]
52
53
54 extra_requirements = [
55 "websockets>=10.0",
56 "httptools>=0.2.0,<0.4.0",
57 "uvloop>=0.14.0,!=0.15.0,!=0.15.1; " + env_marker_cpython,
58 "colorama>=0.4;" + env_marker_win,
59 "watchgod>=0.6",
60 "python-dotenv>=0.13",
61 "PyYAML>=5.1",
62 ]
63
64
65 setup(
66 name="uvicorn",
67 version=get_version("uvicorn"),
68 url="https://www.uvicorn.org/",
69 license="BSD",
70 description="The lightning-fast ASGI server.",
71 long_description=get_long_description(),
72 long_description_content_type="text/markdown",
73 author="Tom Christie",
74 author_email="[email protected]",
75 packages=get_packages("uvicorn"),
76 install_requires=minimal_requirements,
77 extras_require={"standard": extra_requirements},
78 include_package_data=True,
79 classifiers=[
80 "Development Status :: 4 - Beta",
81 "Environment :: Web Environment",
82 "Intended Audience :: Developers",
83 "License :: OSI Approved :: BSD License",
84 "Operating System :: OS Independent",
85 "Topic :: Internet :: WWW/HTTP",
86 "Programming Language :: Python :: 3",
87 "Programming Language :: Python :: 3.7",
88 "Programming Language :: Python :: 3.8",
89 "Programming Language :: Python :: 3.9",
90 "Programming Language :: Python :: 3.10",
91 "Programming Language :: Python :: Implementation :: CPython",
92 "Programming Language :: Python :: Implementation :: PyPy",
93 ],
94 entry_points="""
95 [console_scripts]
96 uvicorn=uvicorn.main:main
97 """,
98 project_urls={
99 "Funding": "https://github.com/sponsors/encode",
100 "Source": "https://github.com/encode/uvicorn",
101 "Changelog": "https://github.com/encode/uvicorn/blob/master/CHANGELOG.md",
102 },
103 )
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -73,6 +73,7 @@
author="Tom Christie",
author_email="[email protected]",
packages=get_packages("uvicorn"),
+ python_requires=">=3.7",
install_requires=minimal_requirements,
extras_require={"standard": extra_requirements},
include_package_data=True,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -73,6 +73,7 @@\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n+ python_requires=\">=3.7\",\n install_requires=minimal_requirements,\n extras_require={\"standard\": extra_requirements},\n include_package_data=True,\n", "issue": "No `python_requires` defined\n### Checklist\r\n\r\n- [X] The bug is reproducible against the latest release or `master`.\r\n- [X] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\nIt seems that no `python_requires` is defined for the `uvicorn` package, which in turn results in the latest version being installed in a Python 3.6 (CI) environment (that subsequently fails).\r\n\r\nIf `python_requires` were defined to restrict the package to supported versions of the interpreter, I would have got an older version (that supported `py36`) instead.\r\n\r\n### Steps to reproduce the bug\r\n\r\nIn a `py36` environment\r\n```\r\npip install uvicorn\r\n# Run uvicorn\r\n# ...\r\n```\r\n\r\n### Expected behavior\r\n\r\nAn older version is installed that works.\r\n\r\n### Actual behavior\r\n\r\n`uvicorn` errors out, says `py36` is unsupported.\r\n\r\n### Debugging material\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\nCPython 3.6\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, \"__init__.py\")\n init_py = open(path, \"r\", encoding=\"utf8\").read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open(\"README.md\", \"r\", encoding=\"utf8\").read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [\n dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, \"__init__.py\"))\n ]\n\n\nenv_marker_cpython = (\n \"sys_platform != 'win32'\"\n \" and (sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy')\"\n)\n\nenv_marker_win = \"sys_platform == 'win32'\"\nenv_marker_below_38 = \"python_version < '3.8'\"\n\nminimal_requirements = [\n \"asgiref>=3.4.0\",\n \"click>=7.0\",\n \"h11>=0.8\",\n \"typing-extensions;\" + env_marker_below_38,\n]\n\n\nextra_requirements = [\n \"websockets>=10.0\",\n \"httptools>=0.2.0,<0.4.0\",\n \"uvloop>=0.14.0,!=0.15.0,!=0.15.1; \" + env_marker_cpython,\n \"colorama>=0.4;\" + env_marker_win,\n \"watchgod>=0.6\",\n \"python-dotenv>=0.13\",\n \"PyYAML>=5.1\",\n]\n\n\nsetup(\n name=\"uvicorn\",\n version=get_version(\"uvicorn\"),\n url=\"https://www.uvicorn.org/\",\n license=\"BSD\",\n description=\"The lightning-fast ASGI server.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n install_requires=minimal_requirements,\n extras_require={\"standard\": extra_requirements},\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS 
Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\",\n project_urls={\n \"Funding\": \"https://github.com/sponsors/encode\",\n \"Source\": \"https://github.com/encode/uvicorn\",\n \"Changelog\": \"https://github.com/encode/uvicorn/blob/master/CHANGELOG.md\",\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, \"__init__.py\")\n init_py = open(path, \"r\", encoding=\"utf8\").read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open(\"README.md\", \"r\", encoding=\"utf8\").read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [\n dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, \"__init__.py\"))\n ]\n\n\nenv_marker_cpython = (\n \"sys_platform != 'win32'\"\n \" and (sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy')\"\n)\n\nenv_marker_win = \"sys_platform == 'win32'\"\nenv_marker_below_38 = \"python_version < '3.8'\"\n\nminimal_requirements = [\n \"asgiref>=3.4.0\",\n \"click>=7.0\",\n \"h11>=0.8\",\n \"typing-extensions;\" + env_marker_below_38,\n]\n\n\nextra_requirements = [\n \"websockets>=10.0\",\n \"httptools>=0.2.0,<0.4.0\",\n \"uvloop>=0.14.0,!=0.15.0,!=0.15.1; \" + env_marker_cpython,\n \"colorama>=0.4;\" + env_marker_win,\n \"watchgod>=0.6\",\n \"python-dotenv>=0.13\",\n \"PyYAML>=5.1\",\n]\n\n\nsetup(\n name=\"uvicorn\",\n version=get_version(\"uvicorn\"),\n url=\"https://www.uvicorn.org/\",\n license=\"BSD\",\n description=\"The lightning-fast ASGI server.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n python_requires=\">=3.7\",\n install_requires=minimal_requirements,\n extras_require={\"standard\": extra_requirements},\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\",\n project_urls={\n \"Funding\": \"https://github.com/sponsors/encode\",\n \"Source\": \"https://github.com/encode/uvicorn\",\n \"Changelog\": 
\"https://github.com/encode/uvicorn/blob/master/CHANGELOG.md\",\n },\n)\n", "path": "setup.py"}]} | 1,450 | 89 |
gh_patches_debug_7776 | rasdani/github-patches | git_diff | secdev__scapy-4349 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect RTCP SR + RR parsing
### Brief description
The RTCP parser fails to handle a packet that contains both a Sender Report and a Receiver Report, which is the most common data for a two-way session.
It seems that the "sender_info" field contains a payload; this should instead be parsed as ReceptionReport info.
Incorrect behavior demonstrated in UTS here: https://github.com/secdev/scapy/commit/0bb9db2932d91d2f6e057caea60db78a2ad54f96
### Scapy version
main
### Python version
3.10
### Operating system
Linux 5.15.146
### Additional environment information
_No response_
### How to reproduce
Run tests on provided branch:
`test/run_tests -P "load_contrib('rtcp')" -t test/contrib/rtcp.uts -F`
### Actual result
The demo test should fail.
The ReceptionReport after the SenderInfo should be parsed. SenderInfo should never have a payload; it is a fixed-size struct.
### Expected result
The commented asserts should pass instead
### Related resources
https://datatracker.ietf.org/doc/html/rfc3550
--- END ISSUE ---
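The mechanics turn on scapy's `extract_padding` hook: a fixed-size sub-packet must hand trailing bytes back as padding so the enclosing field keeps parsing them, rather than absorbing them as its own payload. A minimal, hypothetical illustration:

```python
from scapy.fields import IntField
from scapy.packet import Packet


class FixedBlock(Packet):
    """Sketch of a fixed-size report block that must not eat trailing bytes."""

    fields_desc = [IntField("a", 0), IntField("b", 0)]

    def extract_padding(self, s):
        # Keep exactly our own 8 bytes; everything after comes back as
        # padding, which PacketField/PacketListField treat as the remaining
        # stream for the next block.
        return b"", s
```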
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scapy/contrib/rtcp.py`
Content:
```
1 # SPDX-License-Identifier: GPL-2.0-only
2 # This file is part of Scapy
3 # See https://scapy.net/ for more information
4 # Copyright (C) Pavel Oborin <[email protected]>
5
6 # RFC 3550
7 # scapy.contrib.description = Real-Time Transport Control Protocol
8 # scapy.contrib.status = loads
9
10 """
11 RTCP (rfc 3550)
12
13 Use bind_layers(UDP, RTCP, dport=...) to start using it
14 """
15
16 import struct
17
18 from scapy.packet import Packet
19 from scapy.fields import (
20 BitField,
21 BitFieldLenField,
22 ByteEnumField,
23 ByteField,
24 ConditionalField,
25 FieldLenField,
26 IntField,
27 LenField,
28 LongField,
29 PacketField,
30 PacketListField,
31 StrLenField,
32 X3BytesField,
33 )
34
35
36 _rtcp_packet_types = {
37 200: 'Sender report',
38 201: 'Receiver report',
39 202: 'Source description',
40 203: 'BYE',
41 204: 'APP'
42 }
43
44
45 class SenderInfo(Packet):
46 name = "Sender info"
47 fields_desc = [
48 LongField('ntp_timestamp', None),
49 IntField('rtp_timestamp', None),
50 IntField('sender_packet_count', None),
51 IntField('sender_octet_count', None)
52 ]
53
54
55 class ReceptionReport(Packet):
56 name = "Reception report"
57 fields_desc = [
58 IntField('sourcesync', None),
59 ByteField('fraction_lost', None),
60 X3BytesField('cumulative_lost', None),
61 IntField('highest_seqnum_recv', None),
62 IntField('interarrival_jitter', None),
63 IntField('last_SR_timestamp', None),
64 IntField('delay_since_last_SR', None)
65 ]
66
67
68 _sdes_chunk_types = {
69 0: "END",
70 1: "CNAME",
71 2: "NAME",
72 3: "EMAIL",
73 4: "PHONE",
74 5: "LOC",
75 6: "TOOL",
76 7: "NOTE",
77 8: "PRIV"
78 }
79
80
81 class SDESItem(Packet):
82 name = "SDES item"
83 fields_desc = [
84 ByteEnumField('chunk_type', None, _sdes_chunk_types),
85 FieldLenField('length', None, fmt='!b', length_of='value'),
86 StrLenField('value', None, length_from=lambda pkt: pkt.length)
87 ]
88
89 def extract_padding(self, p):
90 return "", p
91
92
93 class SDESChunk(Packet):
94 name = "SDES chunk"
95 fields_desc = [
96 IntField('sourcesync', None),
97 PacketListField(
98 'items', None,
99 next_cls_cb=(
100 lambda x, y, p, z: None if (p and p.chunk_type == 0) else SDESItem
101 )
102 )
103 ]
104
105
106 class RTCP(Packet):
107 name = "RTCP"
108
109 fields_desc = [
110 # HEADER
111 BitField('version', 2, 2),
112 BitField('padding', 0, 1),
113 BitFieldLenField('count', 0, 5, count_of='report_blocks'),
114 ByteEnumField('packet_type', 0, _rtcp_packet_types),
115 LenField('length', None, fmt='!h'),
116 # SR/RR
117 ConditionalField(
118 IntField('sourcesync', 0),
119 lambda pkt: pkt.packet_type in (200, 201)
120 ),
121 ConditionalField(
122 PacketField('sender_info', SenderInfo(), SenderInfo),
123 lambda pkt: pkt.packet_type == 200
124 ),
125 ConditionalField(
126 PacketListField('report_blocks', None, pkt_cls=ReceptionReport,
127 count_from=lambda pkt: pkt.count),
128 lambda pkt: pkt.packet_type in (200, 201)
129 ),
130 # SDES
131 ConditionalField(
132 PacketListField('sdes_chunks', None, pkt_cls=SDESChunk,
133 count_from=lambda pkt: pkt.count),
134 lambda pkt: pkt.packet_type == 202
135 ),
136 ]
137
138 def post_build(self, pkt, pay):
139 pkt += pay
140 if self.length is None:
141 pkt = pkt[:2] + struct.pack("!h", len(pkt) // 4 - 1) + pkt[4:]
142 return pkt
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scapy/contrib/rtcp.py b/scapy/contrib/rtcp.py
--- a/scapy/contrib/rtcp.py
+++ b/scapy/contrib/rtcp.py
@@ -51,6 +51,9 @@
IntField('sender_octet_count', None)
]
+ def extract_padding(self, p):
+ return "", p
+
class ReceptionReport(Packet):
name = "Reception report"
@@ -64,6 +67,9 @@
IntField('delay_since_last_SR', None)
]
+ def extract_padding(self, p):
+ return "", p
+
_sdes_chunk_types = {
0: "END",
| {"golden_diff": "diff --git a/scapy/contrib/rtcp.py b/scapy/contrib/rtcp.py\n--- a/scapy/contrib/rtcp.py\n+++ b/scapy/contrib/rtcp.py\n@@ -51,6 +51,9 @@\n IntField('sender_octet_count', None)\n ]\n \n+ def extract_padding(self, p):\n+ return \"\", p\n+\n \n class ReceptionReport(Packet):\n name = \"Reception report\"\n@@ -64,6 +67,9 @@\n IntField('delay_since_last_SR', None)\n ]\n \n+ def extract_padding(self, p):\n+ return \"\", p\n+\n \n _sdes_chunk_types = {\n 0: \"END\",\n", "issue": "Incorrect RTCP SR + RR parsing\n### Brief description\n\nThe RTCP parser fails to handle a packet that contains both Sender Report and Received Report, which is is the most common data for a two-way session.\r\n\r\nIt seems that the \"sender_info\" info contain a payload, this should be parsed as a ReceptionReport info\r\n\r\nIncorrect behavior demonstrated in UTS here: https://github.com/secdev/scapy/commit/0bb9db2932d91d2f6e057caea60db78a2ad54f96\n\n### Scapy version\n\nmain\n\n### Python version\n\n3.10\n\n### Operating system\n\nLinux 5.15.146\n\n### Additional environment information\n\n_No response_\n\n### How to reproduce\n\nRun tests on provided branch:\r\n\r\n`test/run_tests -P \"load_contrib('rtcp')\" -t test/contrib/rtcp.uts -F`\r\n\r\n\n\n### Actual result\n\nDemo test should fail.\r\n\r\nReceptionReport after SenderInfo should be parsed. SenderInfo should never have a payload, it's a fixed-sized struct\n\n### Expected result\n\nThe commented asserts should pass instead\n\n### Related resources\n\nhttps://datatracker.ietf.org/doc/html/rfc3550\n", "before_files": [{"content": "# SPDX-License-Identifier: GPL-2.0-only\n# This file is part of Scapy\n# See https://scapy.net/ for more information\n# Copyright (C) Pavel Oborin <[email protected]>\n\n# RFC 3550\n# scapy.contrib.description = Real-Time Transport Control Protocol\n# scapy.contrib.status = loads\n\n\"\"\"\nRTCP (rfc 3550)\n\nUse bind_layers(UDP, RTCP, dport=...) 
to start using it\n\"\"\"\n\nimport struct\n\nfrom scapy.packet import Packet\nfrom scapy.fields import (\n BitField,\n BitFieldLenField,\n ByteEnumField,\n ByteField,\n ConditionalField,\n FieldLenField,\n IntField,\n LenField,\n LongField,\n PacketField,\n PacketListField,\n StrLenField,\n X3BytesField,\n)\n\n\n_rtcp_packet_types = {\n 200: 'Sender report',\n 201: 'Receiver report',\n 202: 'Source description',\n 203: 'BYE',\n 204: 'APP'\n}\n\n\nclass SenderInfo(Packet):\n name = \"Sender info\"\n fields_desc = [\n LongField('ntp_timestamp', None),\n IntField('rtp_timestamp', None),\n IntField('sender_packet_count', None),\n IntField('sender_octet_count', None)\n ]\n\n\nclass ReceptionReport(Packet):\n name = \"Reception report\"\n fields_desc = [\n IntField('sourcesync', None),\n ByteField('fraction_lost', None),\n X3BytesField('cumulative_lost', None),\n IntField('highest_seqnum_recv', None),\n IntField('interarrival_jitter', None),\n IntField('last_SR_timestamp', None),\n IntField('delay_since_last_SR', None)\n ]\n\n\n_sdes_chunk_types = {\n 0: \"END\",\n 1: \"CNAME\",\n 2: \"NAME\",\n 3: \"EMAIL\",\n 4: \"PHONE\",\n 5: \"LOC\",\n 6: \"TOOL\",\n 7: \"NOTE\",\n 8: \"PRIV\"\n}\n\n\nclass SDESItem(Packet):\n name = \"SDES item\"\n fields_desc = [\n ByteEnumField('chunk_type', None, _sdes_chunk_types),\n FieldLenField('length', None, fmt='!b', length_of='value'),\n StrLenField('value', None, length_from=lambda pkt: pkt.length)\n ]\n\n def extract_padding(self, p):\n return \"\", p\n\n\nclass SDESChunk(Packet):\n name = \"SDES chunk\"\n fields_desc = [\n IntField('sourcesync', None),\n PacketListField(\n 'items', None,\n next_cls_cb=(\n lambda x, y, p, z: None if (p and p.chunk_type == 0) else SDESItem\n )\n )\n ]\n\n\nclass RTCP(Packet):\n name = \"RTCP\"\n\n fields_desc = [\n # HEADER\n BitField('version', 2, 2),\n BitField('padding', 0, 1),\n BitFieldLenField('count', 0, 5, count_of='report_blocks'),\n ByteEnumField('packet_type', 0, _rtcp_packet_types),\n LenField('length', None, fmt='!h'),\n # SR/RR\n ConditionalField(\n IntField('sourcesync', 0),\n lambda pkt: pkt.packet_type in (200, 201)\n ),\n ConditionalField(\n PacketField('sender_info', SenderInfo(), SenderInfo),\n lambda pkt: pkt.packet_type == 200\n ),\n ConditionalField(\n PacketListField('report_blocks', None, pkt_cls=ReceptionReport,\n count_from=lambda pkt: pkt.count),\n lambda pkt: pkt.packet_type in (200, 201)\n ),\n # SDES\n ConditionalField(\n PacketListField('sdes_chunks', None, pkt_cls=SDESChunk,\n count_from=lambda pkt: pkt.count),\n lambda pkt: pkt.packet_type == 202\n ),\n ]\n\n def post_build(self, pkt, pay):\n pkt += pay\n if self.length is None:\n pkt = pkt[:2] + struct.pack(\"!h\", len(pkt) // 4 - 1) + pkt[4:]\n return pkt\n", "path": "scapy/contrib/rtcp.py"}], "after_files": [{"content": "# SPDX-License-Identifier: GPL-2.0-only\n# This file is part of Scapy\n# See https://scapy.net/ for more information\n# Copyright (C) Pavel Oborin <[email protected]>\n\n# RFC 3550\n# scapy.contrib.description = Real-Time Transport Control Protocol\n# scapy.contrib.status = loads\n\n\"\"\"\nRTCP (rfc 3550)\n\nUse bind_layers(UDP, RTCP, dport=...) 
to start using it\n\"\"\"\n\nimport struct\n\nfrom scapy.packet import Packet\nfrom scapy.fields import (\n BitField,\n BitFieldLenField,\n ByteEnumField,\n ByteField,\n ConditionalField,\n FieldLenField,\n IntField,\n LenField,\n LongField,\n PacketField,\n PacketListField,\n StrLenField,\n X3BytesField,\n)\n\n\n_rtcp_packet_types = {\n 200: 'Sender report',\n 201: 'Receiver report',\n 202: 'Source description',\n 203: 'BYE',\n 204: 'APP'\n}\n\n\nclass SenderInfo(Packet):\n name = \"Sender info\"\n fields_desc = [\n LongField('ntp_timestamp', None),\n IntField('rtp_timestamp', None),\n IntField('sender_packet_count', None),\n IntField('sender_octet_count', None)\n ]\n\n def extract_padding(self, p):\n return \"\", p\n\n\nclass ReceptionReport(Packet):\n name = \"Reception report\"\n fields_desc = [\n IntField('sourcesync', None),\n ByteField('fraction_lost', None),\n X3BytesField('cumulative_lost', None),\n IntField('highest_seqnum_recv', None),\n IntField('interarrival_jitter', None),\n IntField('last_SR_timestamp', None),\n IntField('delay_since_last_SR', None)\n ]\n\n def extract_padding(self, p):\n return \"\", p\n\n\n_sdes_chunk_types = {\n 0: \"END\",\n 1: \"CNAME\",\n 2: \"NAME\",\n 3: \"EMAIL\",\n 4: \"PHONE\",\n 5: \"LOC\",\n 6: \"TOOL\",\n 7: \"NOTE\",\n 8: \"PRIV\"\n}\n\n\nclass SDESItem(Packet):\n name = \"SDES item\"\n fields_desc = [\n ByteEnumField('chunk_type', None, _sdes_chunk_types),\n FieldLenField('length', None, fmt='!b', length_of='value'),\n StrLenField('value', None, length_from=lambda pkt: pkt.length)\n ]\n\n def extract_padding(self, p):\n return \"\", p\n\n\nclass SDESChunk(Packet):\n name = \"SDES chunk\"\n fields_desc = [\n IntField('sourcesync', None),\n PacketListField(\n 'items', None,\n next_cls_cb=(\n lambda x, y, p, z: None if (p and p.chunk_type == 0) else SDESItem\n )\n )\n ]\n\n\nclass RTCP(Packet):\n name = \"RTCP\"\n\n fields_desc = [\n # HEADER\n BitField('version', 2, 2),\n BitField('padding', 0, 1),\n BitFieldLenField('count', 0, 5, count_of='report_blocks'),\n ByteEnumField('packet_type', 0, _rtcp_packet_types),\n LenField('length', None, fmt='!h'),\n # SR/RR\n ConditionalField(\n IntField('sourcesync', 0),\n lambda pkt: pkt.packet_type in (200, 201)\n ),\n ConditionalField(\n PacketField('sender_info', SenderInfo(), SenderInfo),\n lambda pkt: pkt.packet_type == 200\n ),\n ConditionalField(\n PacketListField('report_blocks', None, pkt_cls=ReceptionReport,\n count_from=lambda pkt: pkt.count),\n lambda pkt: pkt.packet_type in (200, 201)\n ),\n # SDES\n ConditionalField(\n PacketListField('sdes_chunks', None, pkt_cls=SDESChunk,\n count_from=lambda pkt: pkt.count),\n lambda pkt: pkt.packet_type == 202\n ),\n ]\n\n def post_build(self, pkt, pay):\n pkt += pay\n if self.length is None:\n pkt = pkt[:2] + struct.pack(\"!h\", len(pkt) // 4 - 1) + pkt[4:]\n return pkt\n", "path": "scapy/contrib/rtcp.py"}]} | 1,824 | 157 |
gh_patches_debug_1016 | rasdani/github-patches | git_diff | scikit-hep__pyhf-2068 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
docs build failing on Pygments lexer warning
Hm. Something related to https://github.com/spatialaudio/nbsphinx/issues/24 is breaking the docs build. We're getting
```pytb
WARNING: Pygments lexer name 'ipython3' is not known
```
for all the notebooks during the docs build and we fail on warnings.
_Originally posted by @matthewfeickert in https://github.com/scikit-hep/pyhf/issues/2066#issuecomment-1329937208_
--- END ISSUE ---
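A quick, standalone way to check whether the environment can resolve the lexer Sphinx is warning about — not part of the pyhf codebase, just a diagnostic sketch:

```python
from pygments.lexers import get_lexer_by_name
from pygments.util import ClassNotFound

for name in ("python3", "ipython3"):
    try:
        print(name, "->", get_lexer_by_name(name).__class__.__name__)
    except ClassNotFound:
        # 'ipython3' is registered by IPython's plugin entry points, not by
        # Pygments itself, so a broken IPython release makes it disappear.
        print(name, "-> not registered")
```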
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow>=2.7.0', # c.f. PR #1962
7 'tensorflow-probability>=0.11.0', # c.f. PR #1657
8 ],
9 'torch': ['torch>=1.10.0'], # c.f. PR #1657
10 'jax': ['jax>=0.2.10', 'jaxlib>=0.1.61,!=0.1.68'], # c.f. PR #1962, Issue #1501
11 'xmlio': ['uproot>=4.1.1'], # c.f. PR #1567
12 'minuit': ['iminuit>=2.7.0'], # c.f. PR #1895
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted({'matplotlib', 'requests'})
23 extras_require['test'] = sorted(
24 set(
25 extras_require['backends']
26 + extras_require['xmlio']
27 + extras_require['contrib']
28 + extras_require['shellcomplete']
29 + [
30 'scikit-hep-testdata>=0.4.11',
31 'pytest>=6.0',
32 'coverage[toml]>=6.0.0',
33 'pytest-mock',
34 'requests-mock>=1.9.0',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'papermill~=2.3.4',
40 'scrapbook~=0.5.0',
41 'jupyter',
42 'graphviz',
43 'pytest-socket>=0.2.0', # c.f. PR #1917
44 ]
45 )
46 )
47 extras_require['docs'] = sorted(
48 set(
49 extras_require['xmlio']
50 + extras_require['contrib']
51 + [
52 'sphinx>=5.1.1', # c.f. https://github.com/scikit-hep/pyhf/pull/1926
53 'sphinxcontrib-bibtex~=2.1',
54 'sphinx-click',
55 'sphinx_rtd_theme',
56 'nbsphinx!=0.8.8', # c.f. https://github.com/spatialaudio/nbsphinx/issues/620
57 'ipywidgets',
58 'sphinx-issues',
59 'sphinx-copybutton>=0.3.2',
60 'sphinx-togglebutton>=0.3.0',
61 ]
62 )
63 )
64 extras_require['develop'] = sorted(
65 set(
66 extras_require['docs']
67 + extras_require['test']
68 + [
69 'nbdime',
70 'tbump>=6.7.0',
71 'ipython',
72 'pre-commit',
73 'nox',
74 'check-manifest',
75 'codemetapy>=2.3.0',
76 'twine',
77 ]
78 )
79 )
80 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
81
82
83 setup(
84 extras_require=extras_require,
85 use_scm_version=lambda: {'local_scheme': lambda version: ''},
86 )
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -58,6 +58,7 @@
'sphinx-issues',
'sphinx-copybutton>=0.3.2',
'sphinx-togglebutton>=0.3.0',
+ 'ipython!=8.7.0', # c.f. https://github.com/scikit-hep/pyhf/pull/2068
]
)
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -58,6 +58,7 @@\n 'sphinx-issues',\n 'sphinx-copybutton>=0.3.2',\n 'sphinx-togglebutton>=0.3.0',\n+ 'ipython!=8.7.0', # c.f. https://github.com/scikit-hep/pyhf/pull/2068\n ]\n )\n )\n", "issue": "docs build failing on Pygments lexter warning\nHm. Something related to https://github.com/spatialaudio/nbsphinx/issues/24 is breaking the docs build. We're getting\r\n\r\n```pytb\r\nWARNING: Pygments lexer name 'ipython3' is not known\r\n```\r\n\r\nfor all the notebooks during the docs build and we fail on warnings.\r\n\r\n_Originally posted by @matthewfeickert in https://github.com/scikit-hep/pyhf/issues/2066#issuecomment-1329937208_\r\n \n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow>=2.7.0', # c.f. PR #1962\n 'tensorflow-probability>=0.11.0', # c.f. PR #1657\n ],\n 'torch': ['torch>=1.10.0'], # c.f. PR #1657\n 'jax': ['jax>=0.2.10', 'jaxlib>=0.1.61,!=0.1.68'], # c.f. PR #1962, Issue #1501\n 'xmlio': ['uproot>=4.1.1'], # c.f. PR #1567\n 'minuit': ['iminuit>=2.7.0'], # c.f. PR #1895\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'scikit-hep-testdata>=0.4.11',\n 'pytest>=6.0',\n 'coverage[toml]>=6.0.0',\n 'pytest-mock',\n 'requests-mock>=1.9.0',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'papermill~=2.3.4',\n 'scrapbook~=0.5.0',\n 'jupyter',\n 'graphviz',\n 'pytest-socket>=0.2.0', # c.f. PR #1917\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'sphinx>=5.1.1', # c.f. https://github.com/scikit-hep/pyhf/pull/1926\n 'sphinxcontrib-bibtex~=2.1',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx!=0.8.8', # c.f. https://github.com/spatialaudio/nbsphinx/issues/620\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>=0.3.2',\n 'sphinx-togglebutton>=0.3.0',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['test']\n + [\n 'nbdime',\n 'tbump>=6.7.0',\n 'ipython',\n 'pre-commit',\n 'nox',\n 'check-manifest',\n 'codemetapy>=2.3.0',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow>=2.7.0', # c.f. PR #1962\n 'tensorflow-probability>=0.11.0', # c.f. PR #1657\n ],\n 'torch': ['torch>=1.10.0'], # c.f. PR #1657\n 'jax': ['jax>=0.2.10', 'jaxlib>=0.1.61,!=0.1.68'], # c.f. PR #1962, Issue #1501\n 'xmlio': ['uproot>=4.1.1'], # c.f. PR #1567\n 'minuit': ['iminuit>=2.7.0'], # c.f. 
PR #1895\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'scikit-hep-testdata>=0.4.11',\n 'pytest>=6.0',\n 'coverage[toml]>=6.0.0',\n 'pytest-mock',\n 'requests-mock>=1.9.0',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'papermill~=2.3.4',\n 'scrapbook~=0.5.0',\n 'jupyter',\n 'graphviz',\n 'pytest-socket>=0.2.0', # c.f. PR #1917\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'sphinx>=5.1.1', # c.f. https://github.com/scikit-hep/pyhf/pull/1926\n 'sphinxcontrib-bibtex~=2.1',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx!=0.8.8', # c.f. https://github.com/spatialaudio/nbsphinx/issues/620\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>=0.3.2',\n 'sphinx-togglebutton>=0.3.0',\n 'ipython!=8.7.0', # c.f. https://github.com/scikit-hep/pyhf/pull/2068\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['test']\n + [\n 'nbdime',\n 'tbump>=6.7.0',\n 'ipython',\n 'pre-commit',\n 'nox',\n 'check-manifest',\n 'codemetapy>=2.3.0',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 1,285 | 105 |
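As a footnote to the pyhf entry above: the accepted fix is a pure version-exclusion pin. A sketch of how such a specifier evaluates, using the `packaging` library that pip itself relies on:

```python
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("!=8.7.0")
print("8.7.0" in spec)  # False: the one broken IPython release is excluded
print("8.7.1" in spec)  # True: every other version still satisfies the pin
```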
gh_patches_debug_22693 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-1440 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pubsub.list_topics fails when there are no topics
[Offending line](https://github.com/GoogleCloudPlatform/gcloud-python/blob/0910f9979a45af8cc2826dd4c6ff38d9efa5ccec/gcloud/pubsub/client.py#L80). Reproduce via:
``` python
client = pubsub.Client()
>>> client.list_topics()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "gcloud/pubsub/client.py", line 80, in list_topics
for resource in resp['topics']]
KeyError: 'topics'
```
@tseaver ISTM we should locate all instances where we assume a key is present and just protect against this. The time between releases behooves us to be "protective" of users. (I realize that we've usually done it this way based on documented outputs.)
--- END ISSUE ---
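Before diving into the files, a minimal sketch of the defensive pattern the issue asks for (a plain dict stands in for the parsed API response):

```python
# Sketch: guard against a missing key instead of indexing it directly.
resp = {}  # what the topics API effectively returns for an empty project

# topics = [t for t in resp['topics']]        # raises KeyError: 'topics'
topics = [t for t in resp.get('topics', ())]  # falls back to an empty tuple
print(topics)  # []
```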
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gcloud/pubsub/client.py`
Content:
```
1 # Copyright 2015 Google Inc. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Client for interacting with the Google Cloud Pub/Sub API."""
16
17
18 from gcloud.client import JSONClient
19 from gcloud.pubsub.connection import Connection
20 from gcloud.pubsub.subscription import Subscription
21 from gcloud.pubsub.topic import Topic
22
23
24 class Client(JSONClient):
25 """Client to bundle configuration needed for API requests.
26
27 :type project: string
28 :param project: the project which the client acts on behalf of. Will be
29 passed when creating a topic. If not passed,
30 falls back to the default inferred from the environment.
31
32 :type credentials: :class:`oauth2client.client.OAuth2Credentials` or
33 :class:`NoneType`
34 :param credentials: The OAuth2 Credentials to use for the connection
35 owned by this client. If not passed (and if no ``http``
36 object is passed), falls back to the default inferred
37 from the environment.
38
39 :type http: :class:`httplib2.Http` or class that defines ``request()``.
40 :param http: An optional HTTP object to make requests. If not passed, an
41 ``http`` object is created that is bound to the
42 ``credentials`` for the current object.
43 """
44
45 _connection_class = Connection
46
47 def list_topics(self, page_size=None, page_token=None):
48 """List topics for the project associated with this client.
49
50 See:
51 https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list
52
53 :type page_size: int
54 :param page_size: maximum number of topics to return, If not passed,
55 defaults to a value set by the API.
56
57 :type page_token: string
58 :param page_token: opaque marker for the next "page" of topics. If not
59 passed, the API will return the first page of
60 topics.
61
62 :rtype: tuple, (list, str)
63 :returns: list of :class:`gcloud.pubsub.topic.Topic`, plus a
64 "next page token" string: if not None, indicates that
65 more topics can be retrieved with another call (pass that
66 value as ``page_token``).
67 """
68 params = {}
69
70 if page_size is not None:
71 params['pageSize'] = page_size
72
73 if page_token is not None:
74 params['pageToken'] = page_token
75
76 path = '/projects/%s/topics' % (self.project,)
77 resp = self.connection.api_request(method='GET', path=path,
78 query_params=params)
79 topics = [Topic.from_api_repr(resource, self)
80 for resource in resp['topics']]
81 return topics, resp.get('nextPageToken')
82
83 def list_subscriptions(self, page_size=None, page_token=None,
84 topic_name=None):
85 """List subscriptions for the project associated with this client.
86
87 See:
88 https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list
89
90 and (where ``topic_name`` is passed):
91 https://cloud.google.com/pubsub/reference/rest/v1/projects.topics.subscriptions/list
92
93 :type page_size: int
94 :param page_size: maximum number of topics to return, If not passed,
95 defaults to a value set by the API.
96
97 :type page_token: string
98 :param page_token: opaque marker for the next "page" of topics. If not
99 passed, the API will return the first page of
100 topics.
101
102 :type topic_name: string
103 :param topic_name: limit results to subscriptions bound to the given
104 topic.
105
106 :rtype: tuple, (list, str)
107 :returns: list of :class:`gcloud.pubsub.subscription.Subscription`,
108 plus a "next page token" string: if not None, indicates that
109 more topics can be retrieved with another call (pass that
110 value as ``page_token``).
111 """
112 params = {}
113
114 if page_size is not None:
115 params['pageSize'] = page_size
116
117 if page_token is not None:
118 params['pageToken'] = page_token
119
120 if topic_name is None:
121 path = '/projects/%s/subscriptions' % (self.project,)
122 else:
123 path = '/projects/%s/topics/%s/subscriptions' % (self.project,
124 topic_name)
125
126 resp = self.connection.api_request(method='GET', path=path,
127 query_params=params)
128 topics = {}
129 subscriptions = [Subscription.from_api_repr(resource, self,
130 topics=topics)
131 for resource in resp['subscriptions']]
132 return subscriptions, resp.get('nextPageToken')
133
134 def topic(self, name, timestamp_messages=False):
135 """Creates a topic bound to the current client.
136
137 :type name: string
138 :param name: the name of the topic to be constructed.
139
140 :type timestamp_messages: boolean
141 :param timestamp_messages: To be passed to ``Topic`` constructor.
142
143 :rtype: :class:`gcloud.pubsub.topic.Topic`
144 :returns: Topic created with the current client.
145 """
146 return Topic(name, client=self, timestamp_messages=timestamp_messages)
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gcloud/pubsub/client.py b/gcloud/pubsub/client.py
--- a/gcloud/pubsub/client.py
+++ b/gcloud/pubsub/client.py
@@ -77,7 +77,7 @@
resp = self.connection.api_request(method='GET', path=path,
query_params=params)
topics = [Topic.from_api_repr(resource, self)
- for resource in resp['topics']]
+ for resource in resp.get('topics', ())]
return topics, resp.get('nextPageToken')
def list_subscriptions(self, page_size=None, page_token=None,
@@ -128,7 +128,7 @@
topics = {}
subscriptions = [Subscription.from_api_repr(resource, self,
topics=topics)
- for resource in resp['subscriptions']]
+ for resource in resp.get('subscriptions', ())]
return subscriptions, resp.get('nextPageToken')
def topic(self, name, timestamp_messages=False):
| {"golden_diff": "diff --git a/gcloud/pubsub/client.py b/gcloud/pubsub/client.py\n--- a/gcloud/pubsub/client.py\n+++ b/gcloud/pubsub/client.py\n@@ -77,7 +77,7 @@\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n topics = [Topic.from_api_repr(resource, self)\n- for resource in resp['topics']]\n+ for resource in resp.get('topics', ())]\n return topics, resp.get('nextPageToken')\n \n def list_subscriptions(self, page_size=None, page_token=None,\n@@ -128,7 +128,7 @@\n topics = {}\n subscriptions = [Subscription.from_api_repr(resource, self,\n topics=topics)\n- for resource in resp['subscriptions']]\n+ for resource in resp.get('subscriptions', ())]\n return subscriptions, resp.get('nextPageToken')\n \n def topic(self, name, timestamp_messages=False):\n", "issue": "Pubsub.list_topics fails when there are no topics\n[Offending line](https://github.com/GoogleCloudPlatform/gcloud-python/blob/0910f9979a45af8cc2826dd4c6ff38d9efa5ccec/gcloud/pubsub/client.py#L80). Reproduce via:\n\n``` python\nclient = pubsub.Client()\n>>> client.list_topics()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"gcloud/pubsub/client.py\", line 80, in list_topics\n for resource in resp['topics']]\nKeyError: 'topics'\n```\n\n@tseaver ISTM we should locate all instances where we assume a key is present and just protect against this. The time between releases behooves us to be \"protective\" of users. (I realize that we've usually done it this way based on documented outputs.)\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Client for interacting with the Google Cloud Pub/Sub API.\"\"\"\n\n\nfrom gcloud.client import JSONClient\nfrom gcloud.pubsub.connection import Connection\nfrom gcloud.pubsub.subscription import Subscription\nfrom gcloud.pubsub.topic import Topic\n\n\nclass Client(JSONClient):\n \"\"\"Client to bundle configuration needed for API requests.\n\n :type project: string\n :param project: the project which the client acts on behalf of. Will be\n passed when creating a topic. If not passed,\n falls back to the default inferred from the environment.\n\n :type credentials: :class:`oauth2client.client.OAuth2Credentials` or\n :class:`NoneType`\n :param credentials: The OAuth2 Credentials to use for the connection\n owned by this client. If not passed (and if no ``http``\n object is passed), falls back to the default inferred\n from the environment.\n\n :type http: :class:`httplib2.Http` or class that defines ``request()``.\n :param http: An optional HTTP object to make requests. 
If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n \"\"\"\n\n _connection_class = Connection\n\n def list_topics(self, page_size=None, page_token=None):\n \"\"\"List topics for the project associated with this client.\n\n See:\n https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list\n\n :type page_size: int\n :param page_size: maximum number of topics to return, If not passed,\n defaults to a value set by the API.\n\n :type page_token: string\n :param page_token: opaque marker for the next \"page\" of topics. If not\n passed, the API will return the first page of\n topics.\n\n :rtype: tuple, (list, str)\n :returns: list of :class:`gcloud.pubsub.topic.Topic`, plus a\n \"next page token\" string: if not None, indicates that\n more topics can be retrieved with another call (pass that\n value as ``page_token``).\n \"\"\"\n params = {}\n\n if page_size is not None:\n params['pageSize'] = page_size\n\n if page_token is not None:\n params['pageToken'] = page_token\n\n path = '/projects/%s/topics' % (self.project,)\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n topics = [Topic.from_api_repr(resource, self)\n for resource in resp['topics']]\n return topics, resp.get('nextPageToken')\n\n def list_subscriptions(self, page_size=None, page_token=None,\n topic_name=None):\n \"\"\"List subscriptions for the project associated with this client.\n\n See:\n https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list\n\n and (where ``topic_name`` is passed):\n https://cloud.google.com/pubsub/reference/rest/v1/projects.topics.subscriptions/list\n\n :type page_size: int\n :param page_size: maximum number of topics to return, If not passed,\n defaults to a value set by the API.\n\n :type page_token: string\n :param page_token: opaque marker for the next \"page\" of topics. If not\n passed, the API will return the first page of\n topics.\n\n :type topic_name: string\n :param topic_name: limit results to subscriptions bound to the given\n topic.\n\n :rtype: tuple, (list, str)\n :returns: list of :class:`gcloud.pubsub.subscription.Subscription`,\n plus a \"next page token\" string: if not None, indicates that\n more topics can be retrieved with another call (pass that\n value as ``page_token``).\n \"\"\"\n params = {}\n\n if page_size is not None:\n params['pageSize'] = page_size\n\n if page_token is not None:\n params['pageToken'] = page_token\n\n if topic_name is None:\n path = '/projects/%s/subscriptions' % (self.project,)\n else:\n path = '/projects/%s/topics/%s/subscriptions' % (self.project,\n topic_name)\n\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n topics = {}\n subscriptions = [Subscription.from_api_repr(resource, self,\n topics=topics)\n for resource in resp['subscriptions']]\n return subscriptions, resp.get('nextPageToken')\n\n def topic(self, name, timestamp_messages=False):\n \"\"\"Creates a topic bound to the current client.\n\n :type name: string\n :param name: the name of the topic to be constructed.\n\n :type timestamp_messages: boolean\n :param timestamp_messages: To be passed to ``Topic`` constructor.\n\n :rtype: :class:`gcloud.pubsub.topic.Topic`\n :returns: Topic created with the current client.\n \"\"\"\n return Topic(name, client=self, timestamp_messages=timestamp_messages)\n", "path": "gcloud/pubsub/client.py"}], "after_files": [{"content": "# Copyright 2015 Google Inc. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Client for interacting with the Google Cloud Pub/Sub API.\"\"\"\n\n\nfrom gcloud.client import JSONClient\nfrom gcloud.pubsub.connection import Connection\nfrom gcloud.pubsub.subscription import Subscription\nfrom gcloud.pubsub.topic import Topic\n\n\nclass Client(JSONClient):\n \"\"\"Client to bundle configuration needed for API requests.\n\n :type project: string\n :param project: the project which the client acts on behalf of. Will be\n passed when creating a topic. If not passed,\n falls back to the default inferred from the environment.\n\n :type credentials: :class:`oauth2client.client.OAuth2Credentials` or\n :class:`NoneType`\n :param credentials: The OAuth2 Credentials to use for the connection\n owned by this client. If not passed (and if no ``http``\n object is passed), falls back to the default inferred\n from the environment.\n\n :type http: :class:`httplib2.Http` or class that defines ``request()``.\n :param http: An optional HTTP object to make requests. If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n \"\"\"\n\n _connection_class = Connection\n\n def list_topics(self, page_size=None, page_token=None):\n \"\"\"List topics for the project associated with this client.\n\n See:\n https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list\n\n :type page_size: int\n :param page_size: maximum number of topics to return, If not passed,\n defaults to a value set by the API.\n\n :type page_token: string\n :param page_token: opaque marker for the next \"page\" of topics. If not\n passed, the API will return the first page of\n topics.\n\n :rtype: tuple, (list, str)\n :returns: list of :class:`gcloud.pubsub.topic.Topic`, plus a\n \"next page token\" string: if not None, indicates that\n more topics can be retrieved with another call (pass that\n value as ``page_token``).\n \"\"\"\n params = {}\n\n if page_size is not None:\n params['pageSize'] = page_size\n\n if page_token is not None:\n params['pageToken'] = page_token\n\n path = '/projects/%s/topics' % (self.project,)\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n topics = [Topic.from_api_repr(resource, self)\n for resource in resp.get('topics', ())]\n return topics, resp.get('nextPageToken')\n\n def list_subscriptions(self, page_size=None, page_token=None,\n topic_name=None):\n \"\"\"List subscriptions for the project associated with this client.\n\n See:\n https://cloud.google.com/pubsub/reference/rest/v1/projects.topics/list\n\n and (where ``topic_name`` is passed):\n https://cloud.google.com/pubsub/reference/rest/v1/projects.topics.subscriptions/list\n\n :type page_size: int\n :param page_size: maximum number of topics to return, If not passed,\n defaults to a value set by the API.\n\n :type page_token: string\n :param page_token: opaque marker for the next \"page\" of topics. 
If not\n passed, the API will return the first page of\n topics.\n\n :type topic_name: string\n :param topic_name: limit results to subscriptions bound to the given\n topic.\n\n :rtype: tuple, (list, str)\n :returns: list of :class:`gcloud.pubsub.subscription.Subscription`,\n plus a \"next page token\" string: if not None, indicates that\n more topics can be retrieved with another call (pass that\n value as ``page_token``).\n \"\"\"\n params = {}\n\n if page_size is not None:\n params['pageSize'] = page_size\n\n if page_token is not None:\n params['pageToken'] = page_token\n\n if topic_name is None:\n path = '/projects/%s/subscriptions' % (self.project,)\n else:\n path = '/projects/%s/topics/%s/subscriptions' % (self.project,\n topic_name)\n\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n topics = {}\n subscriptions = [Subscription.from_api_repr(resource, self,\n topics=topics)\n for resource in resp.get('subscriptions', ())]\n return subscriptions, resp.get('nextPageToken')\n\n def topic(self, name, timestamp_messages=False):\n \"\"\"Creates a topic bound to the current client.\n\n :type name: string\n :param name: the name of the topic to be constructed.\n\n :type timestamp_messages: boolean\n :param timestamp_messages: To be passed to ``Topic`` constructor.\n\n :rtype: :class:`gcloud.pubsub.topic.Topic`\n :returns: Topic created with the current client.\n \"\"\"\n return Topic(name, client=self, timestamp_messages=timestamp_messages)\n", "path": "gcloud/pubsub/client.py"}]} | 2,022 | 206 |
gh_patches_debug_52175 | rasdani/github-patches | git_diff | microsoft__ptvsd-167 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error reading integer
From VS (might not be a ptvsd bug, not sure at this point):
1. Create new python application
2. Add new item, python unit test
3. Set the unit test as startup file
4. F5
Result:
```
---------------------------
Microsoft Visual Studio
---------------------------
Error reading integer. Unexpected token: Boolean. Path 'exitCode'.
---------------------------
OK
---------------------------
```
--- END ISSUE ---
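A minimal sketch of the failure mode: `SystemExit.code` carries whatever was passed to `sys.exit()`, so a boolean leaks through into the JSON-serialized exit code unless it is coerced. The `sys.exit(True)` below is a hypothetical stand-in for whatever the unit-test runner does:

```python
import sys

try:
    sys.exit(True)  # stand-in: some runners exit with a bool, not an int
except SystemExit as ex:
    print(type(ex.code))  # <class 'bool'>, serialized as JSON `true`
    print(int(ex.code))   # 1, coercion restores a proper integer exitCode
```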
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/debugger.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7
8 __author__ = "Microsoft Corporation <[email protected]>"
9 __version__ = "4.0.0a1"
10
11 DONT_DEBUG = []
12
13
14 def debug(filename, port_num, debug_id, debug_options, run_as):
15 # TODO: docstring
16
17 # import the wrapper first, so that it gets a chance
18 # to detour pydevd socket functionality.
19 import ptvsd.wrapper
20 import pydevd
21
22 args = [
23 '--port', str(port_num),
24 '--client', '127.0.0.1',
25 ]
26 if run_as == 'module':
27 args.append('--module')
28 args.extend(('--file', filename + ":"))
29 else:
30 args.extend(('--file', filename))
31 sys.argv[1:0] = args
32 try:
33 pydevd.main()
34 except SystemExit as ex:
35 ptvsd.wrapper.ptvsd_sys_exit_code = ex.code
36 raise
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py
--- a/ptvsd/debugger.py
+++ b/ptvsd/debugger.py
@@ -32,5 +32,5 @@
try:
pydevd.main()
except SystemExit as ex:
- ptvsd.wrapper.ptvsd_sys_exit_code = ex.code
+ ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)
raise
| {"golden_diff": "diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py\n--- a/ptvsd/debugger.py\n+++ b/ptvsd/debugger.py\n@@ -32,5 +32,5 @@\n try:\n pydevd.main()\n except SystemExit as ex:\n- ptvsd.wrapper.ptvsd_sys_exit_code = ex.code\n+ ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)\n raise\n", "issue": "Error reading integer\nFrom VS (might not be a ptvsd bug, not sure at this point):\r\nCreate new python application\r\nAdd new item, python unit test\r\nSet the unit test as startup file\r\nF5\r\n\r\nResult:\r\n```\r\n---------------------------\r\nMicrosoft Visual Studio\r\n---------------------------\r\nError reading integer. Unexpected token: Boolean. Path 'exitCode'.\r\n---------------------------\r\nOK \r\n---------------------------\r\n```\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a1\"\n\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as):\n # TODO: docstring\n\n # import the wrapper first, so that it gets a chance\n # to detour pydevd socket functionality.\n import ptvsd.wrapper\n import pydevd\n\n args = [\n '--port', str(port_num),\n '--client', '127.0.0.1',\n ]\n if run_as == 'module':\n args.append('--module')\n args.extend(('--file', filename + \":\"))\n else:\n args.extend(('--file', filename))\n sys.argv[1:0] = args\n try:\n pydevd.main()\n except SystemExit as ex:\n ptvsd.wrapper.ptvsd_sys_exit_code = ex.code\n raise\n", "path": "ptvsd/debugger.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a1\"\n\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as):\n # TODO: docstring\n\n # import the wrapper first, so that it gets a chance\n # to detour pydevd socket functionality.\n import ptvsd.wrapper\n import pydevd\n\n args = [\n '--port', str(port_num),\n '--client', '127.0.0.1',\n ]\n if run_as == 'module':\n args.append('--module')\n args.extend(('--file', filename + \":\"))\n else:\n args.extend(('--file', filename))\n sys.argv[1:0] = args\n try:\n pydevd.main()\n except SystemExit as ex:\n ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)\n raise\n", "path": "ptvsd/debugger.py"}]} | 654 | 104 |
gh_patches_debug_17990 | rasdani/github-patches | git_diff | rotki__rotki-3143 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upgrading DB from v26->v27 can fail if user balancer LP events stored in their DB
## Problem Definition
A user who upgraded from v1.16.2 to v1.18.1 notified us that they saw a DB upgrade failure from v26->v27, i.e. the DB schema change introduced between app versions v1.17.2 and v1.18.0.

It turns out that for user DBs in which Balancer LP events had been detected, so that both the balancer events and the balancer pools tables were populated, the DB upgrade would fail: the upgrade deletes the balancer pools table first and can therefore hit a foreign-key constraint.
## Workaround
The workaround is rather easy: download any of v1.17.0-v1.17.2 (those versions can still open a v26 DB), purge all Uniswap and Balancer data, and then open the DB with v1.18.XX.
## Task
Fix the upgrade so that this does not occur even for this special case of users.
--- END ISSUE ---
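A self-contained sketch of the constraint problem, with a simplified two-table schema standing in for rotki's balancer tables (not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE pools (address TEXT PRIMARY KEY)")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, "
    "pool TEXT REFERENCES pools(address))"
)
conn.execute("INSERT INTO pools VALUES ('0xpool')")
conn.execute("INSERT INTO events VALUES (1, '0xpool')")

try:
    conn.execute("DROP TABLE pools")  # referenced table dropped first
except sqlite3.IntegrityError as err:
    print(err)  # FOREIGN KEY constraint failed

conn.execute("DROP TABLE events")  # safe order: referencing table first
conn.execute("DROP TABLE pools")   # now the parent table drops cleanly
```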
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/db/upgrades/v26_v27.py`
Content:
```
1 from typing import TYPE_CHECKING
2
3 if TYPE_CHECKING:
4 from rotkehlchen.db.dbhandler import DBHandler
5
6
7 def upgrade_v26_to_v27(db: 'DBHandler') -> None:
8 """Upgrades the DB from v26 to v27
9
10 - Deletes and recreates the tables that were changed after removing UnknownEthereumToken
11 """
12 cursor = db.conn.cursor()
13 cursor.execute('DROP TABLE IF EXISTS balancer_pools;')
14
15 cursor.execute('DROP TABLE IF EXISTS balancer_events;')
16 cursor.execute("""
17 CREATE TABLE IF NOT EXISTS balancer_events (
18 tx_hash VARCHAR[42] NOT NULL,
19 log_index INTEGER NOT NULL,
20 address VARCHAR[42] NOT NULL,
21 timestamp INTEGER NOT NULL,
22 type TEXT NOT NULL,
23 pool_address_token TEXT NOT NULL,
24 lp_amount TEXT NOT NULL,
25 usd_value TEXT NOT NULL,
26 amount0 TEXT NOT NULL,
27 amount1 TEXT NOT NULL,
28 amount2 TEXT,
29 amount3 TEXT,
30 amount4 TEXT,
31 amount5 TEXT,
32 amount6 TEXT,
33 amount7 TEXT,
34 FOREIGN KEY (pool_address_token) REFERENCES assets(identifier) ON UPDATE CASCADE,
35 PRIMARY KEY (tx_hash, log_index)
36 );
37 """)
38 cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "balancer_events%";')
39
40 cursor.execute('DROP TABLE IF EXISTS amm_swaps;')
41 cursor.execute("""
42 CREATE TABLE IF NOT EXISTS amm_swaps (
43 tx_hash VARCHAR[42] NOT NULL,
44 log_index INTEGER NOT NULL,
45 address VARCHAR[42] NOT NULL,
46 from_address VARCHAR[42] NOT NULL,
47 to_address VARCHAR[42] NOT NULL,
48 timestamp INTEGER NOT NULL,
49 location CHAR(1) NOT NULL DEFAULT('A') REFERENCES location(location),
50 token0_identifier TEXT NOT NULL,
51 token1_identifier TEXT NOT NULL,
52 amount0_in TEXT,
53 amount1_in TEXT,
54 amount0_out TEXT,
55 amount1_out TEXT,
56 FOREIGN KEY(token0_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,
57 FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,
58 PRIMARY KEY (tx_hash, log_index)
59 );""")
60 cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "balancer_trades%";')
61 cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "uniswap_trades%";')
62
63 cursor.execute('DROP TABLE IF EXISTS uniswap_events;')
64 cursor.execute("""
65 CREATE TABLE IF NOT EXISTS uniswap_events (
66 tx_hash VARCHAR[42] NOT NULL,
67 log_index INTEGER NOT NULL,
68 address VARCHAR[42] NOT NULL,
69 timestamp INTEGER NOT NULL,
70 type TEXT NOT NULL,
71 pool_address VARCHAR[42] NOT NULL,
72 token0_identifier TEXT NOT NULL,
73 token1_identifier TEXT NOT NULL,
74 amount0 TEXT,
75 amount1 TEXT,
76 usd_price TEXT,
77 lp_amount TEXT,
78 FOREIGN KEY(token0_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,
79 FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,
80 PRIMARY KEY (tx_hash, log_index)
81 );""")
82 cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "uniswap_events%";')
83
84 db.conn.commit()
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rotkehlchen/db/upgrades/v26_v27.py b/rotkehlchen/db/upgrades/v26_v27.py
--- a/rotkehlchen/db/upgrades/v26_v27.py
+++ b/rotkehlchen/db/upgrades/v26_v27.py
@@ -10,8 +10,6 @@
- Deletes and recreates the tables that were changed after removing UnknownEthereumToken
"""
cursor = db.conn.cursor()
- cursor.execute('DROP TABLE IF EXISTS balancer_pools;')
-
cursor.execute('DROP TABLE IF EXISTS balancer_events;')
cursor.execute("""
CREATE TABLE IF NOT EXISTS balancer_events (
@@ -35,6 +33,7 @@
PRIMARY KEY (tx_hash, log_index)
);
""")
+ cursor.execute('DROP TABLE IF EXISTS balancer_pools;')
cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "balancer_events%";')
cursor.execute('DROP TABLE IF EXISTS amm_swaps;')
| {"golden_diff": "diff --git a/rotkehlchen/db/upgrades/v26_v27.py b/rotkehlchen/db/upgrades/v26_v27.py\n--- a/rotkehlchen/db/upgrades/v26_v27.py\n+++ b/rotkehlchen/db/upgrades/v26_v27.py\n@@ -10,8 +10,6 @@\n - Deletes and recreates the tables that were changed after removing UnknownEthereumToken\n \"\"\"\n cursor = db.conn.cursor()\n- cursor.execute('DROP TABLE IF EXISTS balancer_pools;')\n-\n cursor.execute('DROP TABLE IF EXISTS balancer_events;')\n cursor.execute(\"\"\"\n CREATE TABLE IF NOT EXISTS balancer_events (\n@@ -35,6 +33,7 @@\n PRIMARY KEY (tx_hash, log_index)\n );\n \"\"\")\n+ cursor.execute('DROP TABLE IF EXISTS balancer_pools;')\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"balancer_events%\";')\n \n cursor.execute('DROP TABLE IF EXISTS amm_swaps;')\n", "issue": "Upgrading DB from v26->v27 can fail if user balancer LP events stored in their DB\n## Problem Definition\r\n\r\nA user who upgraded from v1.16.2 to v1.18.1 notified us that they saw a DB upgrade failure from v26->v27. Which means the app versions v1.17.2 to v1.18.0.\r\n\r\nTurns out that for specific user DBs who have had some Balancer LP events detected and had both the balancer events and the balancer pools DB table populated the DB upgrade would fail, since the upgrade deletes the balancer pools table first, hence possibly hitting a constraint.\r\n\r\n## Workaround\r\n\r\nWorkaround is rather easy. Download v1.17.0-v1.17.2, since that can open v26 DB, purge all uniswap and balancer data, and then open with v1.18.XX.\r\n\r\n## Task\r\n\r\nFix the upgrade so that this does not occur even for this special case of users.\n", "before_files": [{"content": "from typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from rotkehlchen.db.dbhandler import DBHandler\n\n\ndef upgrade_v26_to_v27(db: 'DBHandler') -> None:\n \"\"\"Upgrades the DB from v26 to v27\n\n - Deletes and recreates the tables that were changed after removing UnknownEthereumToken\n \"\"\"\n cursor = db.conn.cursor()\n cursor.execute('DROP TABLE IF EXISTS balancer_pools;')\n\n cursor.execute('DROP TABLE IF EXISTS balancer_events;')\n cursor.execute(\"\"\"\nCREATE TABLE IF NOT EXISTS balancer_events (\n tx_hash VARCHAR[42] NOT NULL,\n log_index INTEGER NOT NULL,\n address VARCHAR[42] NOT NULL,\n timestamp INTEGER NOT NULL,\n type TEXT NOT NULL,\n pool_address_token TEXT NOT NULL,\n lp_amount TEXT NOT NULL,\n usd_value TEXT NOT NULL,\n amount0 TEXT NOT NULL,\n amount1 TEXT NOT NULL,\n amount2 TEXT,\n amount3 TEXT,\n amount4 TEXT,\n amount5 TEXT,\n amount6 TEXT,\n amount7 TEXT,\n FOREIGN KEY (pool_address_token) REFERENCES assets(identifier) ON UPDATE CASCADE,\n PRIMARY KEY (tx_hash, log_index)\n);\n\"\"\")\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"balancer_events%\";')\n\n cursor.execute('DROP TABLE IF EXISTS amm_swaps;')\n cursor.execute(\"\"\"\nCREATE TABLE IF NOT EXISTS amm_swaps (\n tx_hash VARCHAR[42] NOT NULL,\n log_index INTEGER NOT NULL,\n address VARCHAR[42] NOT NULL,\n from_address VARCHAR[42] NOT NULL,\n to_address VARCHAR[42] NOT NULL,\n timestamp INTEGER NOT NULL,\n location CHAR(1) NOT NULL DEFAULT('A') REFERENCES location(location),\n token0_identifier TEXT NOT NULL,\n token1_identifier TEXT NOT NULL,\n amount0_in TEXT,\n amount1_in TEXT,\n amount0_out TEXT,\n amount1_out TEXT,\n FOREIGN KEY(token0_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n PRIMARY KEY (tx_hash, log_index)\n);\"\"\")\n 
cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"balancer_trades%\";')\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"uniswap_trades%\";')\n\n cursor.execute('DROP TABLE IF EXISTS uniswap_events;')\n cursor.execute(\"\"\"\nCREATE TABLE IF NOT EXISTS uniswap_events (\n tx_hash VARCHAR[42] NOT NULL,\n log_index INTEGER NOT NULL,\n address VARCHAR[42] NOT NULL,\n timestamp INTEGER NOT NULL,\n type TEXT NOT NULL,\n pool_address VARCHAR[42] NOT NULL,\n token0_identifier TEXT NOT NULL,\n token1_identifier TEXT NOT NULL,\n amount0 TEXT,\n amount1 TEXT,\n usd_price TEXT,\n lp_amount TEXT,\n FOREIGN KEY(token0_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n PRIMARY KEY (tx_hash, log_index)\n);\"\"\")\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"uniswap_events%\";')\n\n db.conn.commit()\n", "path": "rotkehlchen/db/upgrades/v26_v27.py"}], "after_files": [{"content": "from typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from rotkehlchen.db.dbhandler import DBHandler\n\n\ndef upgrade_v26_to_v27(db: 'DBHandler') -> None:\n \"\"\"Upgrades the DB from v26 to v27\n\n - Deletes and recreates the tables that were changed after removing UnknownEthereumToken\n \"\"\"\n cursor = db.conn.cursor()\n cursor.execute('DROP TABLE IF EXISTS balancer_events;')\n cursor.execute(\"\"\"\nCREATE TABLE IF NOT EXISTS balancer_events (\n tx_hash VARCHAR[42] NOT NULL,\n log_index INTEGER NOT NULL,\n address VARCHAR[42] NOT NULL,\n timestamp INTEGER NOT NULL,\n type TEXT NOT NULL,\n pool_address_token TEXT NOT NULL,\n lp_amount TEXT NOT NULL,\n usd_value TEXT NOT NULL,\n amount0 TEXT NOT NULL,\n amount1 TEXT NOT NULL,\n amount2 TEXT,\n amount3 TEXT,\n amount4 TEXT,\n amount5 TEXT,\n amount6 TEXT,\n amount7 TEXT,\n FOREIGN KEY (pool_address_token) REFERENCES assets(identifier) ON UPDATE CASCADE,\n PRIMARY KEY (tx_hash, log_index)\n);\n\"\"\")\n cursor.execute('DROP TABLE IF EXISTS balancer_pools;')\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"balancer_events%\";')\n\n cursor.execute('DROP TABLE IF EXISTS amm_swaps;')\n cursor.execute(\"\"\"\nCREATE TABLE IF NOT EXISTS amm_swaps (\n tx_hash VARCHAR[42] NOT NULL,\n log_index INTEGER NOT NULL,\n address VARCHAR[42] NOT NULL,\n from_address VARCHAR[42] NOT NULL,\n to_address VARCHAR[42] NOT NULL,\n timestamp INTEGER NOT NULL,\n location CHAR(1) NOT NULL DEFAULT('A') REFERENCES location(location),\n token0_identifier TEXT NOT NULL,\n token1_identifier TEXT NOT NULL,\n amount0_in TEXT,\n amount1_in TEXT,\n amount0_out TEXT,\n amount1_out TEXT,\n FOREIGN KEY(token0_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n PRIMARY KEY (tx_hash, log_index)\n);\"\"\")\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"balancer_trades%\";')\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"uniswap_trades%\";')\n\n cursor.execute('DROP TABLE IF EXISTS uniswap_events;')\n cursor.execute(\"\"\"\nCREATE TABLE IF NOT EXISTS uniswap_events (\n tx_hash VARCHAR[42] NOT NULL,\n log_index INTEGER NOT NULL,\n address VARCHAR[42] NOT NULL,\n timestamp INTEGER NOT NULL,\n type TEXT NOT NULL,\n pool_address VARCHAR[42] NOT NULL,\n token0_identifier TEXT NOT NULL,\n token1_identifier TEXT NOT NULL,\n amount0 TEXT,\n amount1 TEXT,\n usd_price TEXT,\n lp_amount TEXT,\n FOREIGN KEY(token0_identifier) REFERENCES 
assets(identifier) ON UPDATE CASCADE,\n FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n PRIMARY KEY (tx_hash, log_index)\n);\"\"\")\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"uniswap_events%\";')\n\n db.conn.commit()\n", "path": "rotkehlchen/db/upgrades/v26_v27.py"}]} | 1,344 | 228 |
gh_patches_debug_11800 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-2017 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Backwards incompatible change to MSE for pixelwise regression
## 🐛 Bug
#1937 introduces an unintended consequence: pixelwise regression is no longer supported.
### To Reproduce
Run the following script:
```python
import torch
import torchmetrics
B = 4
H = W = 3
x = torch.rand(B, H, W)
y = torch.rand(B, H, W)
torchmetrics.functional.mean_squared_error(x, y)
```
This results in the following error msg:
```
Traceback (most recent call last):
File "test.py", line 10, in <module>
torchmetrics.functional.mean_squared_error(x, y, num_outputs=H * W)
File "lib/python3.10/site-packages/torchmetrics/functional/regression/mse.py", line 84, in mean_squared_error
sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=num_outputs)
File "lib/python3.10/site-packages/torchmetrics/functional/regression/mse.py", line 35, in _mean_squared_error_update
_check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)
File "lib/python3.10/site-packages/torchmetrics/functional/regression/utils.py", line 31, in _check_data_shape_to_num_outputs
raise ValueError(
ValueError: Expected both predictions and target to be either 1- or 2-dimensional tensors, but got 3 and 3.
```
### Expected behavior
I would expect the MSE metrics to support pixelwise regression (predicting a single regression value for each pixel in an image). The above script works fine with torchmetrics 1.0.3.
### Environment
- TorchMetrics version: 1.1.0, spack
- Python & PyTorch Version: 3.10.10, 2.1.0
- Any other relevant information such as OS: macOS
### Additional context
@SkafteNicki @Borda @justusschock
--- END ISSUE ---
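For orientation, a sketch of why the strict shape check is unnecessary when `num_outputs == 1`: flattening both tensors reduces pixelwise inputs of any rank to the plain 1-d case:

```python
import torch

B, H, W = 4, 3, 3
preds, target = torch.rand(B, H, W), torch.rand(B, H, W)

# With num_outputs == 1, flattening makes the tensor rank irrelevant:
diff = preds.view(-1) - target.view(-1)
mse = torch.sum(diff * diff) / diff.numel()
print(mse)  # scalar MSE over all B*H*W elements
```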
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/functional/regression/mse.py`
Content:
```
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Tuple, Union
15
16 import torch
17 from torch import Tensor
18
19 from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs
20 from torchmetrics.utilities.checks import _check_same_shape
21
22
23 def _mean_squared_error_update(preds: Tensor, target: Tensor, num_outputs: int) -> Tuple[Tensor, int]:
24 """Update and returns variables required to compute Mean Squared Error.
25
26 Check for same shape of input tensors.
27
28 Args:
29 preds: Predicted tensor
30 target: Ground truth tensor
31 num_outputs: Number of outputs in multioutput setting
32
33 """
34 _check_same_shape(preds, target)
35 _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)
36 if num_outputs == 1:
37 preds = preds.view(-1)
38 target = target.view(-1)
39 diff = preds - target
40 sum_squared_error = torch.sum(diff * diff, dim=0)
41 n_obs = target.shape[0]
42 return sum_squared_error, n_obs
43
44
45 def _mean_squared_error_compute(sum_squared_error: Tensor, n_obs: Union[int, Tensor], squared: bool = True) -> Tensor:
46 """Compute Mean Squared Error.
47
48 Args:
49 sum_squared_error: Sum of square of errors over all observations
50 n_obs: Number of predictions or observations
51 squared: Returns RMSE value if set to False.
52
53 Example:
54 >>> preds = torch.tensor([0., 1, 2, 3])
55 >>> target = torch.tensor([0., 1, 2, 2])
56 >>> sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=1)
57 >>> _mean_squared_error_compute(sum_squared_error, n_obs)
58 tensor(0.2500)
59
60 """
61 return sum_squared_error / n_obs if squared else torch.sqrt(sum_squared_error / n_obs)
62
63
64 def mean_squared_error(preds: Tensor, target: Tensor, squared: bool = True, num_outputs: int = 1) -> Tensor:
65 """Compute mean squared error.
66
67 Args:
68 preds: estimated labels
69 target: ground truth labels
70 squared: returns RMSE value if set to False
71 num_outputs: Number of outputs in multioutput setting
72
73 Return:
74 Tensor with MSE
75
76 Example:
77 >>> from torchmetrics.functional.regression import mean_squared_error
78 >>> x = torch.tensor([0., 1, 2, 3])
79 >>> y = torch.tensor([0., 1, 2, 2])
80 >>> mean_squared_error(x, y)
81 tensor(0.2500)
82
83 """
84 sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=num_outputs)
85 return _mean_squared_error_compute(sum_squared_error, n_obs, squared=squared)
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/torchmetrics/functional/regression/mse.py b/src/torchmetrics/functional/regression/mse.py
--- a/src/torchmetrics/functional/regression/mse.py
+++ b/src/torchmetrics/functional/regression/mse.py
@@ -16,7 +16,6 @@
import torch
from torch import Tensor
-from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs
from torchmetrics.utilities.checks import _check_same_shape
@@ -32,7 +31,6 @@
"""
_check_same_shape(preds, target)
- _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)
if num_outputs == 1:
preds = preds.view(-1)
target = target.view(-1)
| {"golden_diff": "diff --git a/src/torchmetrics/functional/regression/mse.py b/src/torchmetrics/functional/regression/mse.py\n--- a/src/torchmetrics/functional/regression/mse.py\n+++ b/src/torchmetrics/functional/regression/mse.py\n@@ -16,7 +16,6 @@\n import torch\n from torch import Tensor\n \n-from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs\n from torchmetrics.utilities.checks import _check_same_shape\n \n \n@@ -32,7 +31,6 @@\n \n \"\"\"\n _check_same_shape(preds, target)\n- _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)\n if num_outputs == 1:\n preds = preds.view(-1)\n target = target.view(-1)\n", "issue": "Backwards incompatible change to MSE for pixelwise regression\n## \ud83d\udc1b Bug\r\n\r\n#1937 introduces an unintended consequence: pixelwise regression is no longer supported.\r\n\r\n### To Reproduce\r\n\r\nRun the following script:\r\n```python\r\nimport torch\r\nimport torchmetrics\r\n\r\nB = 4\r\nH = W = 3\r\n\r\nx = torch.rand(B, H, W)\r\ny = torch.rand(B, H, W)\r\n\r\ntorchmetrics.functional.mean_squared_error(x, y)\r\n```\r\nThis results in the following error msg:\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 10, in <module>\r\n torchmetrics.functional.mean_squared_error(x, y, num_outputs=H * W)\r\n File \"lib/python3.10/site-packages/torchmetrics/functional/regression/mse.py\", line 84, in mean_squared_error\r\n sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=num_outputs)\r\n File \"lib/python3.10/site-packages/torchmetrics/functional/regression/mse.py\", line 35, in _mean_squared_error_update\r\n _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)\r\n File \"lib/python3.10/site-packages/torchmetrics/functional/regression/utils.py\", line 31, in _check_data_shape_to_num_outputs\r\n raise ValueError(\r\nValueError: Expected both predictions and target to be either 1- or 2-dimensional tensors, but got 3 and 3.\r\n```\r\n\r\n### Expected behavior\r\n\r\nI would expect the MSE metrics to support pixelwise regression (predicting a single regression value for each pixel in an image). 
The above script works fine with torchmetrics 1.0.3.\r\n\r\n### Environment\r\n\r\n- TorchMetrics version: 1.1.0, spack\r\n- Python & PyTorch Version: 3.10.10, 2.1.0\r\n- Any other relevant information such as OS: macOS\r\n\r\n### Additional context\r\n\r\n@SkafteNicki @Borda @justusschock \n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Tuple, Union\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs\nfrom torchmetrics.utilities.checks import _check_same_shape\n\n\ndef _mean_squared_error_update(preds: Tensor, target: Tensor, num_outputs: int) -> Tuple[Tensor, int]:\n \"\"\"Update and returns variables required to compute Mean Squared Error.\n\n Check for same shape of input tensors.\n\n Args:\n preds: Predicted tensor\n target: Ground truth tensor\n num_outputs: Number of outputs in multioutput setting\n\n \"\"\"\n _check_same_shape(preds, target)\n _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)\n if num_outputs == 1:\n preds = preds.view(-1)\n target = target.view(-1)\n diff = preds - target\n sum_squared_error = torch.sum(diff * diff, dim=0)\n n_obs = target.shape[0]\n return sum_squared_error, n_obs\n\n\ndef _mean_squared_error_compute(sum_squared_error: Tensor, n_obs: Union[int, Tensor], squared: bool = True) -> Tensor:\n \"\"\"Compute Mean Squared Error.\n\n Args:\n sum_squared_error: Sum of square of errors over all observations\n n_obs: Number of predictions or observations\n squared: Returns RMSE value if set to False.\n\n Example:\n >>> preds = torch.tensor([0., 1, 2, 3])\n >>> target = torch.tensor([0., 1, 2, 2])\n >>> sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=1)\n >>> _mean_squared_error_compute(sum_squared_error, n_obs)\n tensor(0.2500)\n\n \"\"\"\n return sum_squared_error / n_obs if squared else torch.sqrt(sum_squared_error / n_obs)\n\n\ndef mean_squared_error(preds: Tensor, target: Tensor, squared: bool = True, num_outputs: int = 1) -> Tensor:\n \"\"\"Compute mean squared error.\n\n Args:\n preds: estimated labels\n target: ground truth labels\n squared: returns RMSE value if set to False\n num_outputs: Number of outputs in multioutput setting\n\n Return:\n Tensor with MSE\n\n Example:\n >>> from torchmetrics.functional.regression import mean_squared_error\n >>> x = torch.tensor([0., 1, 2, 3])\n >>> y = torch.tensor([0., 1, 2, 2])\n >>> mean_squared_error(x, y)\n tensor(0.2500)\n\n \"\"\"\n sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=num_outputs)\n return _mean_squared_error_compute(sum_squared_error, n_obs, squared=squared)\n", "path": "src/torchmetrics/functional/regression/mse.py"}], "after_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may 
obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Tuple, Union\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.checks import _check_same_shape\n\n\ndef _mean_squared_error_update(preds: Tensor, target: Tensor, num_outputs: int) -> Tuple[Tensor, int]:\n \"\"\"Update and returns variables required to compute Mean Squared Error.\n\n Check for same shape of input tensors.\n\n Args:\n preds: Predicted tensor\n target: Ground truth tensor\n num_outputs: Number of outputs in multioutput setting\n\n \"\"\"\n _check_same_shape(preds, target)\n if num_outputs == 1:\n preds = preds.view(-1)\n target = target.view(-1)\n diff = preds - target\n sum_squared_error = torch.sum(diff * diff, dim=0)\n n_obs = target.shape[0]\n return sum_squared_error, n_obs\n\n\ndef _mean_squared_error_compute(sum_squared_error: Tensor, n_obs: Union[int, Tensor], squared: bool = True) -> Tensor:\n \"\"\"Compute Mean Squared Error.\n\n Args:\n sum_squared_error: Sum of square of errors over all observations\n n_obs: Number of predictions or observations\n squared: Returns RMSE value if set to False.\n\n Example:\n >>> preds = torch.tensor([0., 1, 2, 3])\n >>> target = torch.tensor([0., 1, 2, 2])\n >>> sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=1)\n >>> _mean_squared_error_compute(sum_squared_error, n_obs)\n tensor(0.2500)\n\n \"\"\"\n return sum_squared_error / n_obs if squared else torch.sqrt(sum_squared_error / n_obs)\n\n\ndef mean_squared_error(preds: Tensor, target: Tensor, squared: bool = True, num_outputs: int = 1) -> Tensor:\n \"\"\"Compute mean squared error.\n\n Args:\n preds: estimated labels\n target: ground truth labels\n squared: returns RMSE value if set to False\n num_outputs: Number of outputs in multioutput setting\n\n Return:\n Tensor with MSE\n\n Example:\n >>> from torchmetrics.functional.regression import mean_squared_error\n >>> x = torch.tensor([0., 1, 2, 3])\n >>> y = torch.tensor([0., 1, 2, 2])\n >>> mean_squared_error(x, y)\n tensor(0.2500)\n\n \"\"\"\n sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=num_outputs)\n return _mean_squared_error_compute(sum_squared_error, n_obs, squared=squared)\n", "path": "src/torchmetrics/functional/regression/mse.py"}]} | 1,626 | 179 |
gh_patches_debug_2978 | rasdani/github-patches | git_diff | frappe__frappe-20434 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enable Scheduler from desk
Feature request: make it possible to enable the scheduler from the desk UI.
--- END ISSUE ---
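The server side already exposes a whitelisted entry point for this, `activate_scheduler`, near the bottom of the file below. A sketch of how a desk client could reach it over REST; the site URL and token are placeholders, not real values:

```python
import requests

# Frappe exposes whitelisted methods under /api/method/<dotted.path>.
requests.post(
    "https://example.site/api/method/frappe.utils.scheduler.activate_scheduler",
    headers={"Authorization": "token api_key:api_secret"},
)
```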
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/utils/scheduler.py`
Content:
```
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # License: MIT. See LICENSE
3 """
4 Events:
5 always
6 daily
7 monthly
8 weekly
9 """
10
11 # imports - standard imports
12 import os
13 import time
14 from typing import NoReturn
15
16 # imports - module imports
17 import frappe
18 from frappe.installer import update_site_config
19 from frappe.utils import cint, get_datetime, get_sites, now_datetime
20 from frappe.utils.background_jobs import get_jobs
21
22 DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
23
24
25 def cprint(*args, **kwargs):
26 """Prints only if called from STDOUT"""
27 try:
28 os.get_terminal_size()
29 print(*args, **kwargs)
30 except Exception:
31 pass
32
33
34 def start_scheduler() -> NoReturn:
35 """Run enqueue_events_for_all_sites based on scheduler tick.
36 Specify scheduler_interval in seconds in common_site_config.json"""
37
38 tick = cint(frappe.get_conf().scheduler_tick_interval) or 60
39
40 while True:
41 time.sleep(tick)
42 enqueue_events_for_all_sites()
43
44
45 def enqueue_events_for_all_sites() -> None:
46 """Loop through sites and enqueue events that are not already queued"""
47
48 if os.path.exists(os.path.join(".", ".restarting")):
49 # Don't add task to queue if webserver is in restart mode
50 return
51
52 with frappe.init_site():
53 sites = get_sites()
54
55 for site in sites:
56 try:
57 enqueue_events_for_site(site=site)
58 except Exception:
59 frappe.logger("scheduler").debug(f"Failed to enqueue events for site: {site}", exc_info=True)
60
61
62 def enqueue_events_for_site(site: str) -> None:
63 def log_exc():
64 frappe.logger("scheduler").error(f"Exception in Enqueue Events for Site {site}", exc_info=True)
65
66 try:
67 frappe.init(site=site)
68 frappe.connect()
69 if is_scheduler_inactive():
70 return
71
72 enqueue_events(site=site)
73
74 frappe.logger("scheduler").debug(f"Queued events for site {site}")
75 except Exception as e:
76 if frappe.db.is_access_denied(e):
77 frappe.logger("scheduler").debug(f"Access denied for site {site}")
78 log_exc()
79
80 finally:
81 frappe.destroy()
82
83
84 def enqueue_events(site: str) -> list[str] | None:
85 if schedule_jobs_based_on_activity():
86 enqueued_jobs = []
87 for job_type in frappe.get_all("Scheduled Job Type", ("name", "method"), {"stopped": 0}):
88 job_type = frappe.get_cached_doc("Scheduled Job Type", job_type.name)
89 if _enqueued := job_type.enqueue():
90 enqueued_jobs.append(job_type.method)
91
92 return enqueued_jobs
93
94
95 def is_scheduler_inactive(verbose=True) -> bool:
96 if frappe.local.conf.maintenance_mode:
97 if verbose:
98 cprint(f"{frappe.local.site}: Maintenance mode is ON")
99 return True
100
101 if frappe.local.conf.pause_scheduler:
102 if verbose:
103 cprint(f"{frappe.local.site}: frappe.conf.pause_scheduler is SET")
104 return True
105
106 if is_scheduler_disabled(verbose=verbose):
107 return True
108
109 return False
110
111
112 def is_scheduler_disabled(verbose=True) -> bool:
113 if frappe.conf.disable_scheduler:
114 if verbose:
115 cprint(f"{frappe.local.site}: frappe.conf.disable_scheduler is SET")
116 return True
117
118 scheduler_disabled = not frappe.utils.cint(
119 frappe.db.get_single_value("System Settings", "enable_scheduler")
120 )
121 if scheduler_disabled:
122 if verbose:
123 cprint(f"{frappe.local.site}: SystemSettings.enable_scheduler is UNSET")
124 return scheduler_disabled
125
126
127 def toggle_scheduler(enable):
128 frappe.db.set_single_value("System Settings", "enable_scheduler", int(enable))
129
130
131 def enable_scheduler():
132 toggle_scheduler(True)
133
134
135 def disable_scheduler():
136 toggle_scheduler(False)
137
138
139 def schedule_jobs_based_on_activity(check_time=None):
140 """Returns True for active sites defined by Activity Log
141 Returns True for inactive sites once in 24 hours"""
142 if is_dormant(check_time=check_time):
143 # ensure last job is one day old
144 last_job_timestamp = _get_last_modified_timestamp("Scheduled Job Log")
145 if not last_job_timestamp:
146 return True
147 else:
148 if ((check_time or now_datetime()) - last_job_timestamp).total_seconds() >= 86400:
149 # one day is passed since jobs are run, so lets do this
150 return True
151 else:
152 # schedulers run in the last 24 hours, do nothing
153 return False
154 else:
155 # site active, lets run the jobs
156 return True
157
158
159 def is_dormant(check_time=None):
160 last_activity_log_timestamp = _get_last_modified_timestamp("Activity Log")
161 since = (frappe.get_system_settings("dormant_days") or 4) * 86400
162 if not last_activity_log_timestamp:
163 return True
164 if ((check_time or now_datetime()) - last_activity_log_timestamp).total_seconds() >= since:
165 return True
166 return False
167
168
169 def _get_last_modified_timestamp(doctype):
170 timestamp = frappe.db.get_value(
171 doctype, filters={}, fieldname="modified", order_by="modified desc"
172 )
173 if timestamp:
174 return get_datetime(timestamp)
175
176
177 @frappe.whitelist()
178 def activate_scheduler():
179 if is_scheduler_disabled():
180 enable_scheduler()
181 if frappe.conf.pause_scheduler:
182 update_site_config("pause_scheduler", 0)
183
184
185 @frappe.whitelist()
186 def get_scheduler_status():
187 if is_scheduler_inactive():
188 return {"status": "inactive"}
189 return {"status": "active"}
190
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/frappe/utils/scheduler.py b/frappe/utils/scheduler.py
--- a/frappe/utils/scheduler.py
+++ b/frappe/utils/scheduler.py
@@ -176,6 +176,11 @@
@frappe.whitelist()
def activate_scheduler():
+ frappe.only_for("Administrator")
+
+ if frappe.local.conf.maintenance_mode:
+ frappe.throw(frappe._("Scheduler can not be re-enabled when maintenance mode is active."))
+
if is_scheduler_disabled():
enable_scheduler()
if frappe.conf.pause_scheduler:
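For readability, here is the whitelisted endpoint as it reads once the hunk above is applied. This is reconstructed from the patch plus the surrounding context in `frappe/utils/scheduler.py`, not a new API:
```python
# activate_scheduler after the fix: restricted to Administrator, and it
# refuses to run while maintenance mode is active (is_scheduler_inactive()
# already treats maintenance_mode as "scheduler inactive", so re-enabling
# it in that state would be misleading).
@frappe.whitelist()
def activate_scheduler():
    frappe.only_for("Administrator")

    if frappe.local.conf.maintenance_mode:
        frappe.throw(frappe._("Scheduler can not be re-enabled when maintenance mode is active."))

    if is_scheduler_disabled():
        enable_scheduler()
    if frappe.conf.pause_scheduler:
        update_site_config("pause_scheduler", 0)
```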
| {"golden_diff": "diff --git a/frappe/utils/scheduler.py b/frappe/utils/scheduler.py\n--- a/frappe/utils/scheduler.py\n+++ b/frappe/utils/scheduler.py\n@@ -176,6 +176,11 @@\n \n @frappe.whitelist()\n def activate_scheduler():\n+\tfrappe.only_for(\"Administrator\")\n+\n+\tif frappe.local.conf.maintenance_mode:\n+\t\tfrappe.throw(frappe._(\"Scheduler can not be re-enabled when maintenance mode is active.\"))\n+\n \tif is_scheduler_disabled():\n \t\tenable_scheduler()\n \tif frappe.conf.pause_scheduler:\n", "issue": "Enable Scheduler from desk\nFeature to enable scheduler from desk.\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\"\"\"\nEvents:\n\talways\n\tdaily\n\tmonthly\n\tweekly\n\"\"\"\n\n# imports - standard imports\nimport os\nimport time\nfrom typing import NoReturn\n\n# imports - module imports\nimport frappe\nfrom frappe.installer import update_site_config\nfrom frappe.utils import cint, get_datetime, get_sites, now_datetime\nfrom frappe.utils.background_jobs import get_jobs\n\nDATETIME_FORMAT = \"%Y-%m-%d %H:%M:%S\"\n\n\ndef cprint(*args, **kwargs):\n\t\"\"\"Prints only if called from STDOUT\"\"\"\n\ttry:\n\t\tos.get_terminal_size()\n\t\tprint(*args, **kwargs)\n\texcept Exception:\n\t\tpass\n\n\ndef start_scheduler() -> NoReturn:\n\t\"\"\"Run enqueue_events_for_all_sites based on scheduler tick.\n\tSpecify scheduler_interval in seconds in common_site_config.json\"\"\"\n\n\ttick = cint(frappe.get_conf().scheduler_tick_interval) or 60\n\n\twhile True:\n\t\ttime.sleep(tick)\n\t\tenqueue_events_for_all_sites()\n\n\ndef enqueue_events_for_all_sites() -> None:\n\t\"\"\"Loop through sites and enqueue events that are not already queued\"\"\"\n\n\tif os.path.exists(os.path.join(\".\", \".restarting\")):\n\t\t# Don't add task to queue if webserver is in restart mode\n\t\treturn\n\n\twith frappe.init_site():\n\t\tsites = get_sites()\n\n\tfor site in sites:\n\t\ttry:\n\t\t\tenqueue_events_for_site(site=site)\n\t\texcept Exception:\n\t\t\tfrappe.logger(\"scheduler\").debug(f\"Failed to enqueue events for site: {site}\", exc_info=True)\n\n\ndef enqueue_events_for_site(site: str) -> None:\n\tdef log_exc():\n\t\tfrappe.logger(\"scheduler\").error(f\"Exception in Enqueue Events for Site {site}\", exc_info=True)\n\n\ttry:\n\t\tfrappe.init(site=site)\n\t\tfrappe.connect()\n\t\tif is_scheduler_inactive():\n\t\t\treturn\n\n\t\tenqueue_events(site=site)\n\n\t\tfrappe.logger(\"scheduler\").debug(f\"Queued events for site {site}\")\n\texcept Exception as e:\n\t\tif frappe.db.is_access_denied(e):\n\t\t\tfrappe.logger(\"scheduler\").debug(f\"Access denied for site {site}\")\n\t\tlog_exc()\n\n\tfinally:\n\t\tfrappe.destroy()\n\n\ndef enqueue_events(site: str) -> list[str] | None:\n\tif schedule_jobs_based_on_activity():\n\t\tenqueued_jobs = []\n\t\tfor job_type in frappe.get_all(\"Scheduled Job Type\", (\"name\", \"method\"), {\"stopped\": 0}):\n\t\t\tjob_type = frappe.get_cached_doc(\"Scheduled Job Type\", job_type.name)\n\t\t\tif _enqueued := job_type.enqueue():\n\t\t\t\tenqueued_jobs.append(job_type.method)\n\n\t\treturn enqueued_jobs\n\n\ndef is_scheduler_inactive(verbose=True) -> bool:\n\tif frappe.local.conf.maintenance_mode:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: Maintenance mode is ON\")\n\t\treturn True\n\n\tif frappe.local.conf.pause_scheduler:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: frappe.conf.pause_scheduler is SET\")\n\t\treturn True\n\n\tif 
is_scheduler_disabled(verbose=verbose):\n\t\treturn True\n\n\treturn False\n\n\ndef is_scheduler_disabled(verbose=True) -> bool:\n\tif frappe.conf.disable_scheduler:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: frappe.conf.disable_scheduler is SET\")\n\t\treturn True\n\n\tscheduler_disabled = not frappe.utils.cint(\n\t\tfrappe.db.get_single_value(\"System Settings\", \"enable_scheduler\")\n\t)\n\tif scheduler_disabled:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: SystemSettings.enable_scheduler is UNSET\")\n\treturn scheduler_disabled\n\n\ndef toggle_scheduler(enable):\n\tfrappe.db.set_single_value(\"System Settings\", \"enable_scheduler\", int(enable))\n\n\ndef enable_scheduler():\n\ttoggle_scheduler(True)\n\n\ndef disable_scheduler():\n\ttoggle_scheduler(False)\n\n\ndef schedule_jobs_based_on_activity(check_time=None):\n\t\"\"\"Returns True for active sites defined by Activity Log\n\tReturns True for inactive sites once in 24 hours\"\"\"\n\tif is_dormant(check_time=check_time):\n\t\t# ensure last job is one day old\n\t\tlast_job_timestamp = _get_last_modified_timestamp(\"Scheduled Job Log\")\n\t\tif not last_job_timestamp:\n\t\t\treturn True\n\t\telse:\n\t\t\tif ((check_time or now_datetime()) - last_job_timestamp).total_seconds() >= 86400:\n\t\t\t\t# one day is passed since jobs are run, so lets do this\n\t\t\t\treturn True\n\t\t\telse:\n\t\t\t\t# schedulers run in the last 24 hours, do nothing\n\t\t\t\treturn False\n\telse:\n\t\t# site active, lets run the jobs\n\t\treturn True\n\n\ndef is_dormant(check_time=None):\n\tlast_activity_log_timestamp = _get_last_modified_timestamp(\"Activity Log\")\n\tsince = (frappe.get_system_settings(\"dormant_days\") or 4) * 86400\n\tif not last_activity_log_timestamp:\n\t\treturn True\n\tif ((check_time or now_datetime()) - last_activity_log_timestamp).total_seconds() >= since:\n\t\treturn True\n\treturn False\n\n\ndef _get_last_modified_timestamp(doctype):\n\ttimestamp = frappe.db.get_value(\n\t\tdoctype, filters={}, fieldname=\"modified\", order_by=\"modified desc\"\n\t)\n\tif timestamp:\n\t\treturn get_datetime(timestamp)\n\n\[email protected]()\ndef activate_scheduler():\n\tif is_scheduler_disabled():\n\t\tenable_scheduler()\n\tif frappe.conf.pause_scheduler:\n\t\tupdate_site_config(\"pause_scheduler\", 0)\n\n\[email protected]()\ndef get_scheduler_status():\n\tif is_scheduler_inactive():\n\t\treturn {\"status\": \"inactive\"}\n\treturn {\"status\": \"active\"}\n", "path": "frappe/utils/scheduler.py"}], "after_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. 
See LICENSE\n\"\"\"\nEvents:\n\talways\n\tdaily\n\tmonthly\n\tweekly\n\"\"\"\n\n# imports - standard imports\nimport os\nimport time\nfrom typing import NoReturn\n\n# imports - module imports\nimport frappe\nfrom frappe.installer import update_site_config\nfrom frappe.utils import cint, get_datetime, get_sites, now_datetime\nfrom frappe.utils.background_jobs import get_jobs\n\nDATETIME_FORMAT = \"%Y-%m-%d %H:%M:%S\"\n\n\ndef cprint(*args, **kwargs):\n\t\"\"\"Prints only if called from STDOUT\"\"\"\n\ttry:\n\t\tos.get_terminal_size()\n\t\tprint(*args, **kwargs)\n\texcept Exception:\n\t\tpass\n\n\ndef start_scheduler() -> NoReturn:\n\t\"\"\"Run enqueue_events_for_all_sites based on scheduler tick.\n\tSpecify scheduler_interval in seconds in common_site_config.json\"\"\"\n\n\ttick = cint(frappe.get_conf().scheduler_tick_interval) or 60\n\n\twhile True:\n\t\ttime.sleep(tick)\n\t\tenqueue_events_for_all_sites()\n\n\ndef enqueue_events_for_all_sites() -> None:\n\t\"\"\"Loop through sites and enqueue events that are not already queued\"\"\"\n\n\tif os.path.exists(os.path.join(\".\", \".restarting\")):\n\t\t# Don't add task to queue if webserver is in restart mode\n\t\treturn\n\n\twith frappe.init_site():\n\t\tsites = get_sites()\n\n\tfor site in sites:\n\t\ttry:\n\t\t\tenqueue_events_for_site(site=site)\n\t\texcept Exception:\n\t\t\tfrappe.logger(\"scheduler\").debug(f\"Failed to enqueue events for site: {site}\", exc_info=True)\n\n\ndef enqueue_events_for_site(site: str) -> None:\n\tdef log_exc():\n\t\tfrappe.logger(\"scheduler\").error(f\"Exception in Enqueue Events for Site {site}\", exc_info=True)\n\n\ttry:\n\t\tfrappe.init(site=site)\n\t\tfrappe.connect()\n\t\tif is_scheduler_inactive():\n\t\t\treturn\n\n\t\tenqueue_events(site=site)\n\n\t\tfrappe.logger(\"scheduler\").debug(f\"Queued events for site {site}\")\n\texcept Exception as e:\n\t\tif frappe.db.is_access_denied(e):\n\t\t\tfrappe.logger(\"scheduler\").debug(f\"Access denied for site {site}\")\n\t\tlog_exc()\n\n\tfinally:\n\t\tfrappe.destroy()\n\n\ndef enqueue_events(site: str) -> list[str] | None:\n\tif schedule_jobs_based_on_activity():\n\t\tenqueued_jobs = []\n\t\tfor job_type in frappe.get_all(\"Scheduled Job Type\", (\"name\", \"method\"), {\"stopped\": 0}):\n\t\t\tjob_type = frappe.get_cached_doc(\"Scheduled Job Type\", job_type.name)\n\t\t\tif _enqueued := job_type.enqueue():\n\t\t\t\tenqueued_jobs.append(job_type.method)\n\n\t\treturn enqueued_jobs\n\n\ndef is_scheduler_inactive(verbose=True) -> bool:\n\tif frappe.local.conf.maintenance_mode:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: Maintenance mode is ON\")\n\t\treturn True\n\n\tif frappe.local.conf.pause_scheduler:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: frappe.conf.pause_scheduler is SET\")\n\t\treturn True\n\n\tif is_scheduler_disabled(verbose=verbose):\n\t\treturn True\n\n\treturn False\n\n\ndef is_scheduler_disabled(verbose=True) -> bool:\n\tif frappe.conf.disable_scheduler:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: frappe.conf.disable_scheduler is SET\")\n\t\treturn True\n\n\tscheduler_disabled = not frappe.utils.cint(\n\t\tfrappe.db.get_single_value(\"System Settings\", \"enable_scheduler\")\n\t)\n\tif scheduler_disabled:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: SystemSettings.enable_scheduler is UNSET\")\n\treturn scheduler_disabled\n\n\ndef toggle_scheduler(enable):\n\tfrappe.db.set_single_value(\"System Settings\", \"enable_scheduler\", int(enable))\n\n\ndef 
enable_scheduler():\n\ttoggle_scheduler(True)\n\n\ndef disable_scheduler():\n\ttoggle_scheduler(False)\n\n\ndef schedule_jobs_based_on_activity(check_time=None):\n\t\"\"\"Returns True for active sites defined by Activity Log\n\tReturns True for inactive sites once in 24 hours\"\"\"\n\tif is_dormant(check_time=check_time):\n\t\t# ensure last job is one day old\n\t\tlast_job_timestamp = _get_last_modified_timestamp(\"Scheduled Job Log\")\n\t\tif not last_job_timestamp:\n\t\t\treturn True\n\t\telse:\n\t\t\tif ((check_time or now_datetime()) - last_job_timestamp).total_seconds() >= 86400:\n\t\t\t\t# one day is passed since jobs are run, so lets do this\n\t\t\t\treturn True\n\t\t\telse:\n\t\t\t\t# schedulers run in the last 24 hours, do nothing\n\t\t\t\treturn False\n\telse:\n\t\t# site active, lets run the jobs\n\t\treturn True\n\n\ndef is_dormant(check_time=None):\n\tlast_activity_log_timestamp = _get_last_modified_timestamp(\"Activity Log\")\n\tsince = (frappe.get_system_settings(\"dormant_days\") or 4) * 86400\n\tif not last_activity_log_timestamp:\n\t\treturn True\n\tif ((check_time or now_datetime()) - last_activity_log_timestamp).total_seconds() >= since:\n\t\treturn True\n\treturn False\n\n\ndef _get_last_modified_timestamp(doctype):\n\ttimestamp = frappe.db.get_value(\n\t\tdoctype, filters={}, fieldname=\"modified\", order_by=\"modified desc\"\n\t)\n\tif timestamp:\n\t\treturn get_datetime(timestamp)\n\n\[email protected]()\ndef activate_scheduler():\n\tfrappe.only_for(\"Administrator\")\n\n\tif frappe.local.conf.maintenance_mode:\n\t\tfrappe.throw(frappe._(\"Scheduler can not be re-enabled when maintenance mode is active.\"))\n\n\tif is_scheduler_disabled():\n\t\tenable_scheduler()\n\tif frappe.conf.pause_scheduler:\n\t\tupdate_site_config(\"pause_scheduler\", 0)\n\n\[email protected]()\ndef get_scheduler_status():\n\tif is_scheduler_inactive():\n\t\treturn {\"status\": \"inactive\"}\n\treturn {\"status\": \"active\"}\n", "path": "frappe/utils/scheduler.py"}]} | 2,031 | 126 |
gh_patches_debug_31889 | rasdani/github-patches | git_diff | pre-commit__pre-commit-575 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
git unadd changes lost if hook fails on windows
```
D:\CubeadProjects\devops [test +0 ~2 -0 | +0 ~1 -0 !]> git cm "asd"
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to C:\Users\56929\.pre-commit\patch1501482991.
run pylint...............................................................Failed
hookid: python-pylint
************* Module install
C: 10, 0: Exactly one space required around assignment
a=1
^ (bad-whitespace)
C: 46, 0: Line too long (108/100) (line-too-long)
W: 39, 4: Unused variable 'stylelint_root' (unused-variable)
W: 37, 4: Unused variable 'node_root' (unused-variable)
W: 24, 8: Unused variable 'checks' (unused-variable)
[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...
An unexpected error has occurred: CalledProcessError: Command: ('C:\\Program Files\\Git\\mingw64\\libexec\\git-core\\git.exe', 'apply', 'C:\\Users\\56929\\.pre-commit\\patch1501483011')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: patch failed: svnchecker_stylelint_support/checks/Stylelint.py:20
error: svnchecker_stylelint_support/checks/Stylelint.py: patch does not apply
Check the log at ~/.pre-commit/pre-commit.log
```
### ~/.pre-commit/pre-commit.log
```
An unexpected error has occurred: CalledProcessError: Command: ('C:\\Program Files\\Git\\mingw64\\libexec\\git-core\\git.exe', 'apply', 'C:\\Users\\56929\\.pre-commit\\patch1501483011')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: patch failed: svnchecker_stylelint_support/checks/Stylelint.py:20
error: svnchecker_stylelint_support/checks/Stylelint.py: patch does not apply
Traceback (most recent call last):
File "c:\python27\lib\site-packages\pre_commit\error_handler.py", line 48, in error_handler
yield
File "c:\python27\lib\site-packages\pre_commit\main.py", line 231, in main
return run(runner, args)
File "c:\python27\lib\site-packages\pre_commit\commands\run.py", line 273, in run
return _run_hooks(repo_hooks, args, environ)
File "c:\python27\lib\contextlib.py", line 24, in __exit__
self.gen.next()
File "c:\python27\lib\site-packages\pre_commit\staged_files_only.py", line 58, in staged_files_only
cmd_runner.run(('git', 'apply', patch_filename), encoding=None)
File "c:\python27\lib\site-packages\pre_commit\prefixed_command_runner.py", line 38, in run
return cmd_output(*replaced_cmd, __popen=self.__popen, **kwargs)
File "c:\python27\lib\site-packages\pre_commit\util.py", line 189, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
CalledProcessError: Command: ('C:\\Program Files\\Git\\mingw64\\libexec\\git-core\\git.exe', 'apply', 'C:\\Users\\56929\\.pre-commit\\patch1501483011')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: patch failed: svnchecker_stylelint_support/checks/Stylelint.py:20
error: svnchecker_stylelint_support/checks/Stylelint.py: patch does not apply
```
Then I opened the patch file (C:\\Users\\56929\\.pre-commit\\patch1501483011), and it looks like this:
```diff
diff --git a/svnchecker_stylelint_support/checks/Stylelint.py b/svnchecker_stylelint_support/checks/Stylelint.py
index 4422b4d..f85ecb1 100644
--- a/svnchecker_stylelint_support/checks/Stylelint.py
+++ b/svnchecker_stylelint_support/checks/Stylelint.py
@@ -20,3 +20,5 @@ def run(transaction, config):
return ('{}\n{}'.format(stdoutdata, stderrdata), 1)^M
^M
return ("", 0)^M
^M
^M
^M
```
--- END ISSUE ---
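The `^M` markers in the patch excerpt above are carriage returns: the stashed file uses CRLF (Windows) line endings, while `git apply`, depending on the `core.autocrlf` setting, may rewrite line endings and then conclude the patch "does not apply". Forcing the setting off for the single apply command, e.g. `git -c core.autocrlf=false apply --whitespace=nowarn <patch>`, is the workaround the fix below adopts; the exact interaction with `core.autocrlf` is an inference from these symptoms, not something the report states explicitly.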
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/staged_files_only.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import io
5 import logging
6 import time
7
8 from pre_commit.util import CalledProcessError
9
10
11 logger = logging.getLogger('pre_commit')
12
13
14 @contextlib.contextmanager
15 def staged_files_only(cmd_runner):
16 """Clear any unstaged changes from the git working directory inside this
17 context.
18
19 Args:
20 cmd_runner - PrefixedCommandRunner
21 """
22 # Determine if there are unstaged files
23 tree = cmd_runner.run(('git', 'write-tree'))[1].strip()
24 retcode, diff_stdout_binary, _ = cmd_runner.run(
25 (
26 'git', 'diff-index', '--ignore-submodules', '--binary',
27 '--exit-code', '--no-color', '--no-ext-diff', tree, '--',
28 ),
29 retcode=None,
30 encoding=None,
31 )
32 if retcode and diff_stdout_binary.strip():
33 patch_filename = cmd_runner.path('patch{}'.format(int(time.time())))
34 logger.warning('Unstaged files detected.')
35 logger.info(
36 'Stashing unstaged files to {}.'.format(patch_filename),
37 )
38 # Save the current unstaged changes as a patch
39 with io.open(patch_filename, 'wb') as patch_file:
40 patch_file.write(diff_stdout_binary)
41
42 # Clear the working directory of unstaged changes
43 cmd_runner.run(('git', 'checkout', '--', '.'))
44 try:
45 yield
46 finally:
47 # Try to apply the patch we saved
48 try:
49 cmd_runner.run(
50 ('git', 'apply', '--whitespace=nowarn', patch_filename),
51 encoding=None,
52 )
53 except CalledProcessError:
54 logger.warning(
55 'Stashed changes conflicted with hook auto-fixes... '
56 'Rolling back fixes...',
57 )
58 # We failed to apply the patch, presumably due to fixes made
59 # by hooks.
60 # Roll back the changes made by hooks.
61 cmd_runner.run(('git', 'checkout', '--', '.'))
62 cmd_runner.run(
63 ('git', 'apply', patch_filename, '--whitespace=nowarn'),
64 encoding=None,
65 )
66 logger.info('Restored changes from {}.'.format(patch_filename))
67 else:
68 # There weren't any staged files so we don't need to do anything
69 # special
70 yield
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -11,6 +11,16 @@
logger = logging.getLogger('pre_commit')
+def _git_apply(cmd_runner, patch):
+ args = ('apply', '--whitespace=nowarn', patch)
+ try:
+ cmd_runner.run(('git',) + args, encoding=None)
+ except CalledProcessError:
+ # Retry with autocrlf=false -- see #570
+ cmd = ('git', '-c', 'core.autocrlf=false') + args
+ cmd_runner.run(cmd, encoding=None)
+
+
@contextlib.contextmanager
def staged_files_only(cmd_runner):
"""Clear any unstaged changes from the git working directory inside this
@@ -46,10 +56,7 @@
finally:
# Try to apply the patch we saved
try:
- cmd_runner.run(
- ('git', 'apply', '--whitespace=nowarn', patch_filename),
- encoding=None,
- )
+ _git_apply(cmd_runner, patch_filename)
except CalledProcessError:
logger.warning(
'Stashed changes conflicted with hook auto-fixes... '
@@ -59,10 +66,7 @@
# by hooks.
# Roll back the changes made by hooks.
cmd_runner.run(('git', 'checkout', '--', '.'))
- cmd_runner.run(
- ('git', 'apply', patch_filename, '--whitespace=nowarn'),
- encoding=None,
- )
+ _git_apply(cmd_runner, patch_filename)
logger.info('Restored changes from {}.'.format(patch_filename))
else:
# There weren't any staged files so we don't need to do anything
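The heart of the fix is the new retry helper. A self-contained sketch of the same idea follows, with `subprocess` standing in for pre-commit's `cmd_runner` (an assumption made purely so the example runs on its own):
```python
import subprocess


def git_apply(patch_path: str) -> None:
    """Apply a patch; on failure, retry with CRLF rewriting disabled."""
    args = ("apply", "--whitespace=nowarn", patch_path)
    try:
        # First attempt honours the user's configured line-ending handling.
        subprocess.run(("git",) + args, check=True)
    except subprocess.CalledProcessError:
        # Retry with core.autocrlf=false so Git applies the stashed bytes
        # verbatim -- this is what rescues CRLF patches on Windows.
        subprocess.run(("git", "-c", "core.autocrlf=false") + args, check=True)
```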
| {"golden_diff": "diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py\n--- a/pre_commit/staged_files_only.py\n+++ b/pre_commit/staged_files_only.py\n@@ -11,6 +11,16 @@\n logger = logging.getLogger('pre_commit')\n \n \n+def _git_apply(cmd_runner, patch):\n+ args = ('apply', '--whitespace=nowarn', patch)\n+ try:\n+ cmd_runner.run(('git',) + args, encoding=None)\n+ except CalledProcessError:\n+ # Retry with autocrlf=false -- see #570\n+ cmd = ('git', '-c', 'core.autocrlf=false') + args\n+ cmd_runner.run(cmd, encoding=None)\n+\n+\n @contextlib.contextmanager\n def staged_files_only(cmd_runner):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n@@ -46,10 +56,7 @@\n finally:\n # Try to apply the patch we saved\n try:\n- cmd_runner.run(\n- ('git', 'apply', '--whitespace=nowarn', patch_filename),\n- encoding=None,\n- )\n+ _git_apply(cmd_runner, patch_filename)\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... '\n@@ -59,10 +66,7 @@\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(('git', 'checkout', '--', '.'))\n- cmd_runner.run(\n- ('git', 'apply', patch_filename, '--whitespace=nowarn'),\n- encoding=None,\n- )\n+ _git_apply(cmd_runner, patch_filename)\n logger.info('Restored changes from {}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n", "issue": "git unadd changes lost if hook fails on windows\n```\r\nD:\\CubeadProjects\\devops [test +0 ~2 -0 | +0 ~1 -0 !]> git cm \"asd\"\r\n[WARNING] Unstaged files detected.\r\n[INFO] Stashing unstaged files to C:\\Users\\56929\\.pre-commit\\patch1501482991.\r\nrun pylint...............................................................Failed\r\nhookid: python-pylint\r\n\r\n************* Module install\r\nC: 10, 0: Exactly one space required around assignment\r\na=1\r\n ^ (bad-whitespace)\r\nC: 46, 0: Line too long (108/100) (line-too-long)\r\nW: 39, 4: Unused variable 'stylelint_root' (unused-variable)\r\nW: 37, 4: Unused variable 'node_root' (unused-variable)\r\nW: 24, 8: Unused variable 'checks' (unused-variable)\r\n\r\n[WARNING] Stashed changes conflicted with hook auto-fixes... 
Rolling back fixes...\r\nAn unexpected error has occurred: CalledProcessError: Command: ('C:\\\\Program Files\\\\Git\\\\mingw64\\\\libexec\\\\git-core\\\\git.exe', 'apply', 'C:\\\\Users\\\\56929\\\\.pre-commit\\\\patch1501483011')\r\nReturn code: 1\r\nExpected return code: 0\r\nOutput: (none)\r\nErrors:\r\n error: patch failed: svnchecker_stylelint_support/checks/Stylelint.py:20\r\n error: svnchecker_stylelint_support/checks/Stylelint.py: patch does not apply\r\n\r\n\r\nCheck the log at ~/.pre-commit/pre-commit.log\r\n```\r\n\r\n### ~/.pre-commit/pre-commit.log\r\n```\r\nAn unexpected error has occurred: CalledProcessError: Command: ('C:\\\\Program Files\\\\Git\\\\mingw64\\\\libexec\\\\git-core\\\\git.exe', 'apply', 'C:\\\\Users\\\\56929\\\\.pre-commit\\\\patch1501483011')\r\nReturn code: 1\r\nExpected return code: 0\r\nOutput: (none)\r\nErrors: \r\n error: patch failed: svnchecker_stylelint_support/checks/Stylelint.py:20\r\n error: svnchecker_stylelint_support/checks/Stylelint.py: patch does not apply\r\n \r\n\r\nTraceback (most recent call last):\r\n File \"c:\\python27\\lib\\site-packages\\pre_commit\\error_handler.py\", line 48, in error_handler\r\n yield\r\n File \"c:\\python27\\lib\\site-packages\\pre_commit\\main.py\", line 231, in main\r\n return run(runner, args)\r\n File \"c:\\python27\\lib\\site-packages\\pre_commit\\commands\\run.py\", line 273, in run\r\n return _run_hooks(repo_hooks, args, environ)\r\n File \"c:\\python27\\lib\\contextlib.py\", line 24, in __exit__\r\n self.gen.next()\r\n File \"c:\\python27\\lib\\site-packages\\pre_commit\\staged_files_only.py\", line 58, in staged_files_only\r\n cmd_runner.run(('git', 'apply', patch_filename), encoding=None)\r\n File \"c:\\python27\\lib\\site-packages\\pre_commit\\prefixed_command_runner.py\", line 38, in run\r\n return cmd_output(*replaced_cmd, __popen=self.__popen, **kwargs)\r\n File \"c:\\python27\\lib\\site-packages\\pre_commit\\util.py\", line 189, in cmd_output\r\n returncode, cmd, retcode, output=(stdout, stderr),\r\nCalledProcessError: Command: ('C:\\\\Program Files\\\\Git\\\\mingw64\\\\libexec\\\\git-core\\\\git.exe', 'apply', 'C:\\\\Users\\\\56929\\\\.pre-commit\\\\patch1501483011')\r\nReturn code: 1\r\nExpected return code: 0\r\nOutput: (none)\r\nErrors: \r\n error: patch failed: svnchecker_stylelint_support/checks/Stylelint.py:20\r\n error: svnchecker_stylelint_support/checks/Stylelint.py: patch does not apply\r\n```\r\nThen, I open the patch file. 
(C:\\\\Users\\\\56929\\\\.pre-commit\\\\patch1501483011),it looks like \r\n\r\n```diff\r\ndiff --git a/svnchecker_stylelint_support/checks/Stylelint.py b/svnchecker_stylelint_support/checks/Stylelint.py\r\nindex 4422b4d..f85ecb1 100644\r\n--- a/svnchecker_stylelint_support/checks/Stylelint.py\r\n+++ b/svnchecker_stylelint_support/checks/Stylelint.py\r\n@@ -20,3 +20,5 @@ def run(transaction, config):\r\n return ('{}\\n{}'.format(stdoutdata, stderrdata), 1)^M\r\n^M\r\n return (\"\", 0)^M\r\n^M\r\n^M\r\n^M\r\n```\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport time\n\nfrom pre_commit.util import CalledProcessError\n\n\nlogger = logging.getLogger('pre_commit')\n\n\[email protected]\ndef staged_files_only(cmd_runner):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n context.\n\n Args:\n cmd_runner - PrefixedCommandRunner\n \"\"\"\n # Determine if there are unstaged files\n tree = cmd_runner.run(('git', 'write-tree'))[1].strip()\n retcode, diff_stdout_binary, _ = cmd_runner.run(\n (\n 'git', 'diff-index', '--ignore-submodules', '--binary',\n '--exit-code', '--no-color', '--no-ext-diff', tree, '--',\n ),\n retcode=None,\n encoding=None,\n )\n if retcode and diff_stdout_binary.strip():\n patch_filename = cmd_runner.path('patch{}'.format(int(time.time())))\n logger.warning('Unstaged files detected.')\n logger.info(\n 'Stashing unstaged files to {}.'.format(patch_filename),\n )\n # Save the current unstaged changes as a patch\n with io.open(patch_filename, 'wb') as patch_file:\n patch_file.write(diff_stdout_binary)\n\n # Clear the working directory of unstaged changes\n cmd_runner.run(('git', 'checkout', '--', '.'))\n try:\n yield\n finally:\n # Try to apply the patch we saved\n try:\n cmd_runner.run(\n ('git', 'apply', '--whitespace=nowarn', patch_filename),\n encoding=None,\n )\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... 
'\n 'Rolling back fixes...',\n )\n # We failed to apply the patch, presumably due to fixes made\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(('git', 'checkout', '--', '.'))\n cmd_runner.run(\n ('git', 'apply', patch_filename, '--whitespace=nowarn'),\n encoding=None,\n )\n logger.info('Restored changes from {}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n", "path": "pre_commit/staged_files_only.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport time\n\nfrom pre_commit.util import CalledProcessError\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _git_apply(cmd_runner, patch):\n args = ('apply', '--whitespace=nowarn', patch)\n try:\n cmd_runner.run(('git',) + args, encoding=None)\n except CalledProcessError:\n # Retry with autocrlf=false -- see #570\n cmd = ('git', '-c', 'core.autocrlf=false') + args\n cmd_runner.run(cmd, encoding=None)\n\n\[email protected]\ndef staged_files_only(cmd_runner):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n context.\n\n Args:\n cmd_runner - PrefixedCommandRunner\n \"\"\"\n # Determine if there are unstaged files\n tree = cmd_runner.run(('git', 'write-tree'))[1].strip()\n retcode, diff_stdout_binary, _ = cmd_runner.run(\n (\n 'git', 'diff-index', '--ignore-submodules', '--binary',\n '--exit-code', '--no-color', '--no-ext-diff', tree, '--',\n ),\n retcode=None,\n encoding=None,\n )\n if retcode and diff_stdout_binary.strip():\n patch_filename = cmd_runner.path('patch{}'.format(int(time.time())))\n logger.warning('Unstaged files detected.')\n logger.info(\n 'Stashing unstaged files to {}.'.format(patch_filename),\n )\n # Save the current unstaged changes as a patch\n with io.open(patch_filename, 'wb') as patch_file:\n patch_file.write(diff_stdout_binary)\n\n # Clear the working directory of unstaged changes\n cmd_runner.run(('git', 'checkout', '--', '.'))\n try:\n yield\n finally:\n # Try to apply the patch we saved\n try:\n _git_apply(cmd_runner, patch_filename)\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... '\n 'Rolling back fixes...',\n )\n # We failed to apply the patch, presumably due to fixes made\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(('git', 'checkout', '--', '.'))\n _git_apply(cmd_runner, patch_filename)\n logger.info('Restored changes from {}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n", "path": "pre_commit/staged_files_only.py"}]} | 1,994 | 406 |
gh_patches_debug_7053 | rasdani/github-patches | git_diff | zulip__zulip-21237 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
docs: make links equally browsable on both GitHub and ReadTheDocs
Once upstream bug https://github.com/readthedocs/recommonmark/issues/179 is fixed, we can replace the `.html` part in links of the form `file_name.html#anchor` with `.md`.
This is a follow-up to https://github.com/zulip/zulip/pull/13232.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `version.py`
Content:
```
1 import os
2
3 ZULIP_VERSION = "5.0-dev+git"
4
5 # Add information on number of commits and commit hash to version, if available
6 zulip_git_version_file = os.path.join(
7 os.path.dirname(os.path.abspath(__file__)), "zulip-git-version"
8 )
9 lines = [ZULIP_VERSION, ""]
10 if os.path.exists(zulip_git_version_file):
11 with open(zulip_git_version_file) as f:
12 lines = f.readlines() + ["", ""]
13 ZULIP_VERSION = lines.pop(0).strip()
14 ZULIP_MERGE_BASE = lines.pop(0).strip()
15
16 LATEST_MAJOR_VERSION = "4.0"
17 LATEST_RELEASE_VERSION = "4.10"
18 LATEST_RELEASE_ANNOUNCEMENT = "https://blog.zulip.com/2021/05/13/zulip-4-0-released/"
19
20 # Versions of the desktop app below DESKTOP_MINIMUM_VERSION will be
21 # prevented from connecting to the Zulip server. Versions above
22 # DESKTOP_MINIMUM_VERSION but below DESKTOP_WARNING_VERSION will have
23 # a banner at the top of the page asking the user to upgrade.
24 DESKTOP_MINIMUM_VERSION = "5.2.0"
25 DESKTOP_WARNING_VERSION = "5.4.3"
26
27 # Bump the API_FEATURE_LEVEL whenever an API change is made
28 # that clients might want to condition on. If we forget at
29 # the time we make the change, then bump it later as soon
30 # as we notice; clients using API_FEATURE_LEVEL will just not
31 # use the new feature/API until the bump.
32 #
33 # Changes should be accompanied by documentation explaining what the
34 # new level means in templates/zerver/api/changelog.md, as well as
35 # "**Changes**" entries in the endpoint's documentation in `zulip.yaml`.
36 API_FEATURE_LEVEL = 117
37
38 # Bump the minor PROVISION_VERSION to indicate that folks should provision
39 # only when going from an old version of the code to a newer version. Bump
40 # the major version to indicate that folks should provision in both
41 # directions.
42
43 # Typically,
44 # * adding a dependency only requires a minor version bump;
45 # * removing a dependency requires a major version bump;
46 # * upgrading a dependency requires a major version bump, unless the
47 # upgraded dependency is backwards compatible with all of our
48 # historical commits sharing the same major version, in which case a
49 # minor version bump suffices.
50
51 PROVISION_VERSION = "179.0"
52
```
Path: `docs/conf.py`
Content:
```
1 # For documentation on Sphinx configuration options, see:
2 # https://www.sphinx-doc.org/en/master/usage/configuration.html
3 # https://myst-parser.readthedocs.io/en/latest/sphinx/reference.html
4 # https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html
5
6 import os
7 import sys
8 from typing import Any
9
10 sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
11 from version import LATEST_RELEASE_VERSION, ZULIP_VERSION
12
13 on_rtd = os.environ.get("READTHEDOCS") == "True"
14
15 # General configuration
16
17 extensions = [
18 "myst_parser",
19 "sphinx_rtd_theme",
20 ]
21 templates_path = ["_templates"]
22 project = "Zulip"
23 copyright = "2012–2015 Dropbox, Inc., 2015–2021 Kandra Labs, Inc., and contributors"
24 author = "The Zulip Team"
25 version = ZULIP_VERSION
26 release = ZULIP_VERSION
27 exclude_patterns = ["_build", "README.md"]
28 suppress_warnings = [
29 "myst.header",
30 ]
31 pygments_style = "sphinx"
32
33 # Options for Markdown parser
34
35 myst_enable_extensions = [
36 "colon_fence",
37 "substitution",
38 ]
39 myst_substitutions = {
40 "LATEST_RELEASE_VERSION": LATEST_RELEASE_VERSION,
41 }
42
43 # Options for HTML output
44
45 html_theme = "sphinx_rtd_theme"
46 html_theme_options = {
47 "collapse_navigation": not on_rtd, # makes local builds much faster
48 "logo_only": True,
49 }
50 html_logo = "images/zulip-logo.svg"
51 html_static_path = ["_static"]
52
53
54 def setup(app: Any) -> None:
55 # overrides for wide tables in RTD theme
56 app.add_css_file("theme_overrides.css") # path relative to _static
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -36,6 +36,7 @@
"colon_fence",
"substitution",
]
+myst_heading_anchors = 6
myst_substitutions = {
"LATEST_RELEASE_VERSION": LATEST_RELEASE_VERSION,
}
diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -48,4 +48,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = "179.0"
+PROVISION_VERSION = "180.0"
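`myst_heading_anchors = 6` asks MyST to auto-generate GitHub-style slug anchors for headings down to level 6, which is what lets a link written as `file_name.md#anchor` resolve both on GitHub and in the Sphinx-built docs. A minimal illustration (the heading text is hypothetical):
```python
# docs/conf.py (excerpt) -- the one line the patch adds.
# With it, a Markdown heading such as "## Installing Zulip" is given the
# slug anchor "installing-zulip", so `setup.md#installing-zulip` works
# the same way a GitHub reader would expect.
myst_heading_anchors = 6  # slugify anchors for heading levels 1-6
```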
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -36,6 +36,7 @@\n \"colon_fence\",\n \"substitution\",\n ]\n+myst_heading_anchors = 6\n myst_substitutions = {\n \"LATEST_RELEASE_VERSION\": LATEST_RELEASE_VERSION,\n }\ndiff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -48,4 +48,4 @@\n # historical commits sharing the same major version, in which case a\n # minor version bump suffices.\n \n-PROVISION_VERSION = \"179.0\"\n+PROVISION_VERSION = \"180.0\"\n", "issue": "docs: make links equally browsable on both GitHub and ReadTheDocs\nOnce upstream bug https://github.com/readthedocs/recommonmark/issues/179 is fixed, we can replace the `.html` part in links of the form `file_name.html#anchor` with `.md`.\r\n\r\nThis is a followup to https://github.com/zulip/zulip/pull/13232.\n", "before_files": [{"content": "import os\n\nZULIP_VERSION = \"5.0-dev+git\"\n\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"zulip-git-version\"\n)\nlines = [ZULIP_VERSION, \"\"]\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n lines = f.readlines() + [\"\", \"\"]\nZULIP_VERSION = lines.pop(0).strip()\nZULIP_MERGE_BASE = lines.pop(0).strip()\n\nLATEST_MAJOR_VERSION = \"4.0\"\nLATEST_RELEASE_VERSION = \"4.10\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.com/2021/05/13/zulip-4-0-released/\"\n\n# Versions of the desktop app below DESKTOP_MINIMUM_VERSION will be\n# prevented from connecting to the Zulip server. Versions above\n# DESKTOP_MINIMUM_VERSION but below DESKTOP_WARNING_VERSION will have\n# a banner at the top of the page asking the user to upgrade.\nDESKTOP_MINIMUM_VERSION = \"5.2.0\"\nDESKTOP_WARNING_VERSION = \"5.4.3\"\n\n# Bump the API_FEATURE_LEVEL whenever an API change is made\n# that clients might want to condition on. If we forget at\n# the time we make the change, then bump it later as soon\n# as we notice; clients using API_FEATURE_LEVEL will just not\n# use the new feature/API until the bump.\n#\n# Changes should be accompanied by documentation explaining what the\n# new level means in templates/zerver/api/changelog.md, as well as\n# \"**Changes**\" entries in the endpoint's documentation in `zulip.yaml`.\nAPI_FEATURE_LEVEL = 117\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. 
Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = \"179.0\"\n", "path": "version.py"}, {"content": "# For documentation on Sphinx configuration options, see:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n# https://myst-parser.readthedocs.io/en/latest/sphinx/reference.html\n# https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html\n\nimport os\nimport sys\nfrom typing import Any\n\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\")))\nfrom version import LATEST_RELEASE_VERSION, ZULIP_VERSION\n\non_rtd = os.environ.get(\"READTHEDOCS\") == \"True\"\n\n# General configuration\n\nextensions = [\n \"myst_parser\",\n \"sphinx_rtd_theme\",\n]\ntemplates_path = [\"_templates\"]\nproject = \"Zulip\"\ncopyright = \"2012\u20132015 Dropbox, Inc., 2015\u20132021 Kandra Labs, Inc., and contributors\"\nauthor = \"The Zulip Team\"\nversion = ZULIP_VERSION\nrelease = ZULIP_VERSION\nexclude_patterns = [\"_build\", \"README.md\"]\nsuppress_warnings = [\n \"myst.header\",\n]\npygments_style = \"sphinx\"\n\n# Options for Markdown parser\n\nmyst_enable_extensions = [\n \"colon_fence\",\n \"substitution\",\n]\nmyst_substitutions = {\n \"LATEST_RELEASE_VERSION\": LATEST_RELEASE_VERSION,\n}\n\n# Options for HTML output\n\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_options = {\n \"collapse_navigation\": not on_rtd, # makes local builds much faster\n \"logo_only\": True,\n}\nhtml_logo = \"images/zulip-logo.svg\"\nhtml_static_path = [\"_static\"]\n\n\ndef setup(app: Any) -> None:\n # overrides for wide tables in RTD theme\n app.add_css_file(\"theme_overrides.css\") # path relative to _static\n", "path": "docs/conf.py"}], "after_files": [{"content": "import os\n\nZULIP_VERSION = \"5.0-dev+git\"\n\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"zulip-git-version\"\n)\nlines = [ZULIP_VERSION, \"\"]\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n lines = f.readlines() + [\"\", \"\"]\nZULIP_VERSION = lines.pop(0).strip()\nZULIP_MERGE_BASE = lines.pop(0).strip()\n\nLATEST_MAJOR_VERSION = \"4.0\"\nLATEST_RELEASE_VERSION = \"4.9\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.com/2021/05/13/zulip-4-0-released/\"\n\n# Versions of the desktop app below DESKTOP_MINIMUM_VERSION will be\n# prevented from connecting to the Zulip server. Versions above\n# DESKTOP_MINIMUM_VERSION but below DESKTOP_WARNING_VERSION will have\n# a banner at the top of the page asking the user to upgrade.\nDESKTOP_MINIMUM_VERSION = \"5.2.0\"\nDESKTOP_WARNING_VERSION = \"5.4.3\"\n\n# Bump the API_FEATURE_LEVEL whenever an API change is made\n# that clients might want to condition on. 
If we forget at\n# the time we make the change, then bump it later as soon\n# as we notice; clients using API_FEATURE_LEVEL will just not\n# use the new feature/API until the bump.\n#\n# Changes should be accompanied by documentation explaining what the\n# new level means in templates/zerver/api/changelog.md, as well as\n# \"**Changes**\" entries in the endpoint's documentation in `zulip.yaml`.\nAPI_FEATURE_LEVEL = 116\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = \"180.0\"\n", "path": "version.py"}, {"content": "# For documentation on Sphinx configuration options, see:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n# https://myst-parser.readthedocs.io/en/latest/sphinx/reference.html\n# https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html\n\nimport os\nimport sys\nfrom typing import Any\n\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\")))\nfrom version import LATEST_RELEASE_VERSION, ZULIP_VERSION\n\non_rtd = os.environ.get(\"READTHEDOCS\") == \"True\"\n\n# General configuration\n\nextensions = [\n \"myst_parser\",\n \"sphinx_rtd_theme\",\n]\ntemplates_path = [\"_templates\"]\nproject = \"Zulip\"\ncopyright = \"2012\u20132015 Dropbox, Inc., 2015\u20132021 Kandra Labs, Inc., and contributors\"\nauthor = \"The Zulip Team\"\nversion = ZULIP_VERSION\nrelease = ZULIP_VERSION\nexclude_patterns = [\"_build\", \"README.md\"]\nsuppress_warnings = [\n \"myst.header\",\n]\npygments_style = \"sphinx\"\n\n# Options for Markdown parser\n\nmyst_enable_extensions = [\n \"colon_fence\",\n \"substitution\",\n]\nmyst_heading_anchors = 6\nmyst_substitutions = {\n \"LATEST_RELEASE_VERSION\": LATEST_RELEASE_VERSION,\n}\n\n# Options for HTML output\n\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_options = {\n \"collapse_navigation\": not on_rtd, # makes local builds much faster\n \"logo_only\": True,\n}\nhtml_logo = \"images/zulip-logo.svg\"\nhtml_static_path = [\"_static\"]\n\n\ndef setup(app: Any) -> None:\n # overrides for wide tables in RTD theme\n app.add_css_file(\"theme_overrides.css\") # path relative to _static\n", "path": "docs/conf.py"}]} | 1,508 | 154 |
gh_patches_debug_22323 | rasdani/github-patches | git_diff | scikit-hep__pyhf-937 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document simplemodels API
# Description
In discussion today with @coolalexzb, I realized that the [`pyhf.simplemodels`](https://github.com/scikit-hep/pyhf/blob/79984be837ef6e53bdd12a82163c34d47d507dba/src/pyhf/simplemodels.py) API is not documented in our docs. Even though this isn't something we want people to really use, we still show it in our examples, so it needs documentation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/simplemodels.py`
Content:
```
1 from . import Model
2
3
4 def hepdata_like(signal_data, bkg_data, bkg_uncerts, batch_size=None):
5 spec = {
6 'channels': [
7 {
8 'name': 'singlechannel',
9 'samples': [
10 {
11 'name': 'signal',
12 'data': signal_data,
13 'modifiers': [
14 {'name': 'mu', 'type': 'normfactor', 'data': None}
15 ],
16 },
17 {
18 'name': 'background',
19 'data': bkg_data,
20 'modifiers': [
21 {
22 'name': 'uncorr_bkguncrt',
23 'type': 'shapesys',
24 'data': bkg_uncerts,
25 }
26 ],
27 },
28 ],
29 }
30 ]
31 }
32 return Model(spec, batch_size=batch_size)
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pyhf/simplemodels.py b/src/pyhf/simplemodels.py
--- a/src/pyhf/simplemodels.py
+++ b/src/pyhf/simplemodels.py
@@ -2,6 +2,38 @@
def hepdata_like(signal_data, bkg_data, bkg_uncerts, batch_size=None):
+ """
+ Construct a simple single channel :class:`~pyhf.pdf.Model` with a
+ :class:`~pyhf.modifiers.shapesys` modifier representing an uncorrelated
+ background uncertainty.
+
+ Example:
+ >>> import pyhf
+ >>> pyhf.set_backend("numpy")
+ >>> model = pyhf.simplemodels.hepdata_like(
+ ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
+ ... )
+ >>> model.schema
+ 'model.json'
+ >>> model.config.channels
+ ['singlechannel']
+ >>> model.config.samples
+ ['background', 'signal']
+ >>> model.config.parameters
+ ['mu', 'uncorr_bkguncrt']
+ >>> model.expected_data(model.config.suggested_init())
+ array([ 62. , 63. , 277.77777778, 55.18367347])
+
+ Args:
+ signal_data (`list`): The data in the signal sample
+ bkg_data (`list`): The data in the background sample
+ bkg_uncerts (`list`): The statistical uncertainty on the background sample counts
+ batch_size (`None` or `int`): Number of simultaneous (batched) Models to compute
+
+ Returns:
+ ~pyhf.pdf.Model: The statistical model adhering to the :obj:`model.json` schema
+
+ """
spec = {
'channels': [
{
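Because the added Example section is written as a doctest, the documentation stays executable. A quick way to check it, assuming `pyhf` (with the NumPy backend) is installed:
```python
# Run the doctests embedded in the module's docstrings; testmod() returns
# a standard-library TestResults(failed, attempted) tuple.
import doctest

import pyhf.simplemodels

results = doctest.testmod(pyhf.simplemodels, verbose=True)
assert results.failed == 0
```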
| {"golden_diff": "diff --git a/src/pyhf/simplemodels.py b/src/pyhf/simplemodels.py\n--- a/src/pyhf/simplemodels.py\n+++ b/src/pyhf/simplemodels.py\n@@ -2,6 +2,38 @@\n \n \n def hepdata_like(signal_data, bkg_data, bkg_uncerts, batch_size=None):\n+ \"\"\"\n+ Construct a simple single channel :class:`~pyhf.pdf.Model` with a\n+ :class:`~pyhf.modifiers.shapesys` modifier representing an uncorrelated\n+ background uncertainty.\n+\n+ Example:\n+ >>> import pyhf\n+ >>> pyhf.set_backend(\"numpy\")\n+ >>> model = pyhf.simplemodels.hepdata_like(\n+ ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n+ ... )\n+ >>> model.schema\n+ 'model.json'\n+ >>> model.config.channels\n+ ['singlechannel']\n+ >>> model.config.samples\n+ ['background', 'signal']\n+ >>> model.config.parameters\n+ ['mu', 'uncorr_bkguncrt']\n+ >>> model.expected_data(model.config.suggested_init())\n+ array([ 62. , 63. , 277.77777778, 55.18367347])\n+\n+ Args:\n+ signal_data (`list`): The data in the signal sample\n+ bkg_data (`list`): The data in the background sample\n+ bkg_uncerts (`list`): The statistical uncertainty on the background sample counts\n+ batch_size (`None` or `int`): Number of simultaneous (batched) Models to compute\n+\n+ Returns:\n+ ~pyhf.pdf.Model: The statistical model adhering to the :obj:`model.json` schema\n+\n+ \"\"\"\n spec = {\n 'channels': [\n {\n", "issue": "Document simplemodels API\n# Description\r\n\r\nIn discussion today with @coolalexzb, I realized that the [`pyhf.simplemodels`](https://github.com/scikit-hep/pyhf/blob/79984be837ef6e53bdd12a82163c34d47d507dba/src/pyhf/simplemodels.py) API is not documented in our docs. Even thought this isn't something we want people to really use, we still show it in our examples and so it needs documentation.\n", "before_files": [{"content": "from . import Model\n\n\ndef hepdata_like(signal_data, bkg_data, bkg_uncerts, batch_size=None):\n spec = {\n 'channels': [\n {\n 'name': 'singlechannel',\n 'samples': [\n {\n 'name': 'signal',\n 'data': signal_data,\n 'modifiers': [\n {'name': 'mu', 'type': 'normfactor', 'data': None}\n ],\n },\n {\n 'name': 'background',\n 'data': bkg_data,\n 'modifiers': [\n {\n 'name': 'uncorr_bkguncrt',\n 'type': 'shapesys',\n 'data': bkg_uncerts,\n }\n ],\n },\n ],\n }\n ]\n }\n return Model(spec, batch_size=batch_size)\n", "path": "src/pyhf/simplemodels.py"}], "after_files": [{"content": "from . import Model\n\n\ndef hepdata_like(signal_data, bkg_data, bkg_uncerts, batch_size=None):\n \"\"\"\n Construct a simple single channel :class:`~pyhf.pdf.Model` with a\n :class:`~pyhf.modifiers.shapesys` modifier representing an uncorrelated\n background uncertainty.\n\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... )\n >>> model.schema\n 'model.json'\n >>> model.config.channels\n ['singlechannel']\n >>> model.config.samples\n ['background', 'signal']\n >>> model.config.parameters\n ['mu', 'uncorr_bkguncrt']\n >>> model.expected_data(model.config.suggested_init())\n array([ 62. , 63. 
, 277.77777778, 55.18367347])\n\n Args:\n signal_data (`list`): The data in the signal sample\n bkg_data (`list`): The data in the background sample\n bkg_uncerts (`list`): The statistical uncertainty on the background sample counts\n batch_size (`None` or `int`): Number of simultaneous (batched) Models to compute\n\n Returns:\n ~pyhf.pdf.Model: The statistical model adhering to the :obj:`model.json` schema\n\n \"\"\"\n spec = {\n 'channels': [\n {\n 'name': 'singlechannel',\n 'samples': [\n {\n 'name': 'signal',\n 'data': signal_data,\n 'modifiers': [\n {'name': 'mu', 'type': 'normfactor', 'data': None}\n ],\n },\n {\n 'name': 'background',\n 'data': bkg_data,\n 'modifiers': [\n {\n 'name': 'uncorr_bkguncrt',\n 'type': 'shapesys',\n 'data': bkg_uncerts,\n }\n ],\n },\n ],\n }\n ]\n }\n return Model(spec, batch_size=batch_size)\n", "path": "src/pyhf/simplemodels.py"}]} | 603 | 437 |
gh_patches_debug_22770 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-334 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error log still occurs when tracer is disabled (Django)
The tracer is logging the following error when disabled:
> 2017-07-05 12:54:36,552:[none]:[ddtrace.writer:134]:ERROR cannot send services: [Errno 111] Connection refused
This occurs when the tracer is integrated with Django using the following configuration:
```python
DATADOG_TRACE = {
'ENABLED': False
}
```
The [documentation](http://pypi.datadoghq.com/trace/docs/#module-ddtrace.contrib.django) states:
> ENABLED (default: not django_settings.DEBUG): defines if the tracer is enabled or not. If set to false, the code is still instrumented but no spans are sent to the trace agent. This setting cannot be changed at runtime and a restart is required. By default the tracer is disabled when in DEBUG mode, enabled otherwise.
It seems this log should not occur. If no spans are sent to the trace agent then presumably a connection should not be established?
Package Info
------------------
> datadog==0.15.0
> ddtrace==0.8.5
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/django/apps.py`
Content:
```
1 import logging
2
3 # 3rd party
4 from django.apps import AppConfig
5
6 # project
7 from .db import patch_db
8 from .conf import settings
9 from .cache import patch_cache
10 from .templates import patch_template
11 from .middleware import insert_exception_middleware
12
13 from ...ext import AppTypes
14
15
16 log = logging.getLogger(__name__)
17
18
19 class TracerConfig(AppConfig):
20 name = 'ddtrace.contrib.django'
21 label = 'datadog_django'
22
23 def ready(self):
24 """
25 Ready is called as soon as the registry is fully populated.
26 Tracing capabilities must be enabled in this function so that
27 all Django internals are properly configured.
28 """
29 tracer = settings.TRACER
30
31 if settings.TAGS:
32 tracer.set_tags(settings.TAGS)
33
34 # define the service details
35 tracer.set_service_info(
36 app='django',
37 app_type=AppTypes.web,
38 service=settings.DEFAULT_SERVICE,
39 )
40
41 # configure the tracer instance
42 # TODO[manu]: we may use configure() but because it creates a new
43 # AgentWriter, it breaks all tests. The configure() behavior must
44 # be changed to use it in this integration
45 tracer.enabled = settings.ENABLED
46 tracer.writer.api.hostname = settings.AGENT_HOSTNAME
47 tracer.writer.api.port = settings.AGENT_PORT
48
49 if settings.AUTO_INSTRUMENT:
50 # trace Django internals
51 insert_exception_middleware()
52 try:
53 patch_db(tracer)
54 except Exception:
55 log.exception('error patching Django database connections')
56
57 try:
58 patch_template(tracer)
59 except Exception:
60 log.exception('error patching Django template rendering')
61
62 try:
63 patch_cache(tracer)
64 except Exception:
65 log.exception('error patching Django cache')
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/django/apps.py b/ddtrace/contrib/django/apps.py
--- a/ddtrace/contrib/django/apps.py
+++ b/ddtrace/contrib/django/apps.py
@@ -31,13 +31,6 @@
if settings.TAGS:
tracer.set_tags(settings.TAGS)
- # define the service details
- tracer.set_service_info(
- app='django',
- app_type=AppTypes.web,
- service=settings.DEFAULT_SERVICE,
- )
-
# configure the tracer instance
# TODO[manu]: we may use configure() but because it creates a new
# AgentWriter, it breaks all tests. The configure() behavior must
@@ -46,6 +39,13 @@
tracer.writer.api.hostname = settings.AGENT_HOSTNAME
tracer.writer.api.port = settings.AGENT_PORT
+ # define the service details
+ tracer.set_service_info(
+ app='django',
+ app_type=AppTypes.web,
+ service=settings.DEFAULT_SERVICE,
+ )
+
if settings.AUTO_INSTRUMENT:
# trace Django internals
insert_exception_middleware()
| {"golden_diff": "diff --git a/ddtrace/contrib/django/apps.py b/ddtrace/contrib/django/apps.py\n--- a/ddtrace/contrib/django/apps.py\n+++ b/ddtrace/contrib/django/apps.py\n@@ -31,13 +31,6 @@\n if settings.TAGS:\n tracer.set_tags(settings.TAGS)\n \n- # define the service details\n- tracer.set_service_info(\n- app='django',\n- app_type=AppTypes.web,\n- service=settings.DEFAULT_SERVICE,\n- )\n-\n # configure the tracer instance\n # TODO[manu]: we may use configure() but because it creates a new\n # AgentWriter, it breaks all tests. The configure() behavior must\n@@ -46,6 +39,13 @@\n tracer.writer.api.hostname = settings.AGENT_HOSTNAME\n tracer.writer.api.port = settings.AGENT_PORT\n \n+ # define the service details\n+ tracer.set_service_info(\n+ app='django',\n+ app_type=AppTypes.web,\n+ service=settings.DEFAULT_SERVICE,\n+ )\n+\n if settings.AUTO_INSTRUMENT:\n # trace Django internals\n insert_exception_middleware()\n", "issue": "Error log still occurs when tracer is disabled (Django)\nThe tracer is logging the following error when disabled:\r\n\r\n> 2017-07-05 12:54:36,552:[none]:[ddtrace.writer:134]:ERROR cannot send services: [Errno 111] Connection refused\r\n\r\nThis is occurring when integrated with Django with the following configuration:\r\n\r\n```python\r\nDATADOG_TRACE = {\r\n 'ENABLED': False\r\n}\r\n```\r\nFrom reading the [documentation](http://pypi.datadoghq.com/trace/docs/#module-ddtrace.contrib.django) which states:\r\n> ENABLED (default: not django_settings.DEBUG): defines if the tracer is enabled or not. If set to false, the code is still instrumented but no spans are sent to the trace agent. This setting cannot be changed at runtime and a restart is required. By default the tracer is disabled when in DEBUG mode, enabled otherwise.\r\n\r\nIt seems this log should not occur. If no spans are sent to the trace agent then presumably a connection should not be established?\r\n\r\nPackage Info\r\n------------------\r\n\r\n> datadog==0.15.0\r\n> ddtrace==0.8.5 \r\n\n", "before_files": [{"content": "import logging\n\n# 3rd party\nfrom django.apps import AppConfig\n\n# project\nfrom .db import patch_db\nfrom .conf import settings\nfrom .cache import patch_cache\nfrom .templates import patch_template\nfrom .middleware import insert_exception_middleware\n\nfrom ...ext import AppTypes\n\n\nlog = logging.getLogger(__name__)\n\n\nclass TracerConfig(AppConfig):\n name = 'ddtrace.contrib.django'\n label = 'datadog_django'\n\n def ready(self):\n \"\"\"\n Ready is called as soon as the registry is fully populated.\n Tracing capabilities must be enabled in this function so that\n all Django internals are properly configured.\n \"\"\"\n tracer = settings.TRACER\n\n if settings.TAGS:\n tracer.set_tags(settings.TAGS)\n\n # define the service details\n tracer.set_service_info(\n app='django',\n app_type=AppTypes.web,\n service=settings.DEFAULT_SERVICE,\n )\n\n # configure the tracer instance\n # TODO[manu]: we may use configure() but because it creates a new\n # AgentWriter, it breaks all tests. 
The configure() behavior must\n # be changed to use it in this integration\n tracer.enabled = settings.ENABLED\n tracer.writer.api.hostname = settings.AGENT_HOSTNAME\n tracer.writer.api.port = settings.AGENT_PORT\n\n if settings.AUTO_INSTRUMENT:\n # trace Django internals\n insert_exception_middleware()\n try:\n patch_db(tracer)\n except Exception:\n log.exception('error patching Django database connections')\n\n try:\n patch_template(tracer)\n except Exception:\n log.exception('error patching Django template rendering')\n\n try:\n patch_cache(tracer)\n except Exception:\n log.exception('error patching Django cache')\n", "path": "ddtrace/contrib/django/apps.py"}], "after_files": [{"content": "import logging\n\n# 3rd party\nfrom django.apps import AppConfig\n\n# project\nfrom .db import patch_db\nfrom .conf import settings\nfrom .cache import patch_cache\nfrom .templates import patch_template\nfrom .middleware import insert_exception_middleware\n\nfrom ...ext import AppTypes\n\n\nlog = logging.getLogger(__name__)\n\n\nclass TracerConfig(AppConfig):\n name = 'ddtrace.contrib.django'\n label = 'datadog_django'\n\n def ready(self):\n \"\"\"\n Ready is called as soon as the registry is fully populated.\n Tracing capabilities must be enabled in this function so that\n all Django internals are properly configured.\n \"\"\"\n tracer = settings.TRACER\n\n if settings.TAGS:\n tracer.set_tags(settings.TAGS)\n\n # configure the tracer instance\n # TODO[manu]: we may use configure() but because it creates a new\n # AgentWriter, it breaks all tests. The configure() behavior must\n # be changed to use it in this integration\n tracer.enabled = settings.ENABLED\n tracer.writer.api.hostname = settings.AGENT_HOSTNAME\n tracer.writer.api.port = settings.AGENT_PORT\n\n # define the service details\n tracer.set_service_info(\n app='django',\n app_type=AppTypes.web,\n service=settings.DEFAULT_SERVICE,\n )\n\n if settings.AUTO_INSTRUMENT:\n # trace Django internals\n insert_exception_middleware()\n try:\n patch_db(tracer)\n except Exception:\n log.exception('error patching Django database connections')\n\n try:\n patch_template(tracer)\n except Exception:\n log.exception('error patching Django template rendering')\n\n try:\n patch_cache(tracer)\n except Exception:\n log.exception('error patching Django cache')\n", "path": "ddtrace/contrib/django/apps.py"}]} | 1,031 | 254 |
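The fix above is purely an ordering change inside `TracerConfig.ready()`: `set_service_info()` used to run before the `enabled` flag was applied, so a disabled tracer still tried to flush service metadata to the agent. The corrected order, excerpted from the patched file rather than as a standalone script:

```python
# Configure the tracer first, so the enabled flag takes effect ...
tracer.enabled = settings.ENABLED
tracer.writer.api.hostname = settings.AGENT_HOSTNAME
tracer.writer.api.port = settings.AGENT_PORT

# ... and only then register the service details, which a disabled
# tracer can now skip sending.
tracer.set_service_info(
    app='django',
    app_type=AppTypes.web,
    service=settings.DEFAULT_SERVICE,
)
```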
gh_patches_debug_6739 | rasdani/github-patches | git_diff | TheAlgorithms__Python-5734 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rewrite *other* fibonacci.py
I opened issue #5665 to have `fibonacci.py` rewritten, but apparently there are multiple files with that name. The PR that @citharus made (#5677) revamps the file `dynamic_programming/fibonacci.py` (thanks for your contributions btw!), whereas this issue seeks to revamp the file `maths/fibonacci.py`.
I'm opening this as a new issue since it's technically a different algorithm file and the two `fibonacci.py` files each use different algorithms to calculate the Fibonacci sequence.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `maths/fibonacci.py`
Content:
```
1 # fibonacci.py
2 """
3 Calculates the Fibonacci sequence using iteration, recursion, and a simplified
4 form of Binet's formula
5
6 NOTE 1: the iterative and recursive functions are more accurate than the Binet's
7 formula function because the iterative function doesn't use floats
8
9 NOTE 2: the Binet's formula function is much more limited in the size of inputs
10 that it can handle due to the size limitations of Python floats
11 """
12
13 from math import sqrt
14 from time import time
15
16
17 def time_func(func, *args, **kwargs):
18 """
19 Times the execution of a function with parameters
20 """
21 start = time()
22 output = func(*args, **kwargs)
23 end = time()
24 if int(end - start) > 0:
25 print(f"{func.__name__} runtime: {(end - start):0.4f} s")
26 else:
27 print(f"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms")
28 return output
29
30
31 def fib_iterative(n: int) -> list[int]:
32 """
33 Calculates the first n (0-indexed) Fibonacci numbers using iteration
34 >>> fib_iterative(0)
35 [0]
36 >>> fib_iterative(1)
37 [0, 1]
38 >>> fib_iterative(5)
39 [0, 1, 1, 2, 3, 5]
40 >>> fib_iterative(10)
41 [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
42 >>> fib_iterative(-1)
43 Traceback (most recent call last):
44 ...
45 Exception: n is negative
46 """
47 if n < 0:
48 raise Exception("n is negative")
49 if n == 0:
50 return [0]
51 fib = [0, 1]
52 for _ in range(n - 1):
53 fib.append(fib[-1] + fib[-2])
54 return fib
55
56
57 def fib_recursive(n: int) -> list[int]:
58 """
59 Calculates the first n (0-indexed) Fibonacci numbers using recursion
60 >>> fib_iterative(0)
61 [0]
62 >>> fib_iterative(1)
63 [0, 1]
64 >>> fib_iterative(5)
65 [0, 1, 1, 2, 3, 5]
66 >>> fib_iterative(10)
67 [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
68 >>> fib_iterative(-1)
69 Traceback (most recent call last):
70 ...
71 Exception: n is negative
72 """
73
74 def fib_recursive_term(i: int) -> int:
75 """
76 Calculates the i-th (0-indexed) Fibonacci number using recursion
77 """
78 if i < 0:
79 raise Exception("n is negative")
80 if i < 2:
81 return i
82 return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
83
84 if n < 0:
85 raise Exception("n is negative")
86 return [fib_recursive_term(i) for i in range(n + 1)]
87
88
89 def fib_binet(n: int) -> list[int]:
90 """
91 Calculates the first n (0-indexed) Fibonacci numbers using a simplified form
92 of Binet's formula:
93 https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding
94
95 NOTE 1: this function diverges from fib_iterative at around n = 71, likely
96 due to compounding floating-point arithmetic errors
97
98 NOTE 2: this function overflows on n >= 1475 because of the size limitations
99 of Python floats
100 >>> fib_binet(0)
101 [0]
102 >>> fib_binet(1)
103 [0, 1]
104 >>> fib_binet(5)
105 [0, 1, 1, 2, 3, 5]
106 >>> fib_binet(10)
107 [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
108 >>> fib_binet(-1)
109 Traceback (most recent call last):
110 ...
111 Exception: n is negative
112 >>> fib_binet(1475)
113 Traceback (most recent call last):
114 ...
115 Exception: n is too large
116 """
117 if n < 0:
118 raise Exception("n is negative")
119 if n >= 1475:
120 raise Exception("n is too large")
121 sqrt_5 = sqrt(5)
122 phi = (1 + sqrt_5) / 2
123 return [round(phi ** i / sqrt_5) for i in range(n + 1)]
124
125
126 if __name__ == "__main__":
127 num = 20
128 time_func(fib_iterative, num)
129 time_func(fib_recursive, num)
130 time_func(fib_binet, num)
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/maths/fibonacci.py b/maths/fibonacci.py
--- a/maths/fibonacci.py
+++ b/maths/fibonacci.py
@@ -95,8 +95,8 @@
NOTE 1: this function diverges from fib_iterative at around n = 71, likely
due to compounding floating-point arithmetic errors
- NOTE 2: this function overflows on n >= 1475 because of the size limitations
- of Python floats
+ NOTE 2: this function doesn't accept n >= 1475 because it overflows
+ thereafter due to the size limitations of Python floats
>>> fib_binet(0)
[0]
>>> fib_binet(1)
| {"golden_diff": "diff --git a/maths/fibonacci.py b/maths/fibonacci.py\n--- a/maths/fibonacci.py\n+++ b/maths/fibonacci.py\n@@ -95,8 +95,8 @@\n NOTE 1: this function diverges from fib_iterative at around n = 71, likely\n due to compounding floating-point arithmetic errors\n \n- NOTE 2: this function overflows on n >= 1475 because of the size limitations\n- of Python floats\n+ NOTE 2: this function doesn't accept n >= 1475 because it overflows\n+ thereafter due to the size limitations of Python floats\n >>> fib_binet(0)\n [0]\n >>> fib_binet(1)\n", "issue": "Rewrite *other* fibonacci.py\nI opened issue #5665 to have `fibonacci.py` rewritten, but apparently there are multiple files with that name. The PR that @citharus made (#5677) revamps the file `dynamic_programming/fibonacci.py` (thanks for your contributions btw!) whereas this issue seeks to revamp the file `maths/fibonacci.py`.\r\n\r\nI'm opening this as a new issue since it's technically a different algorithm file and the two `fibonacci.py` files each use different algorithms to calculate the Fibonacci sequence.\n", "before_files": [{"content": "# fibonacci.py\n\"\"\"\nCalculates the Fibonacci sequence using iteration, recursion, and a simplified\nform of Binet's formula\n\nNOTE 1: the iterative and recursive functions are more accurate than the Binet's\nformula function because the iterative function doesn't use floats\n\nNOTE 2: the Binet's formula function is much more limited in the size of inputs\nthat it can handle due to the size limitations of Python floats\n\"\"\"\n\nfrom math import sqrt\nfrom time import time\n\n\ndef time_func(func, *args, **kwargs):\n \"\"\"\n Times the execution of a function with parameters\n \"\"\"\n start = time()\n output = func(*args, **kwargs)\n end = time()\n if int(end - start) > 0:\n print(f\"{func.__name__} runtime: {(end - start):0.4f} s\")\n else:\n print(f\"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms\")\n return output\n\n\ndef fib_iterative(n: int) -> list[int]:\n \"\"\"\n Calculates the first n (0-indexed) Fibonacci numbers using iteration\n >>> fib_iterative(0)\n [0]\n >>> fib_iterative(1)\n [0, 1]\n >>> fib_iterative(5)\n [0, 1, 1, 2, 3, 5]\n >>> fib_iterative(10)\n [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]\n >>> fib_iterative(-1)\n Traceback (most recent call last):\n ...\n Exception: n is negative\n \"\"\"\n if n < 0:\n raise Exception(\"n is negative\")\n if n == 0:\n return [0]\n fib = [0, 1]\n for _ in range(n - 1):\n fib.append(fib[-1] + fib[-2])\n return fib\n\n\ndef fib_recursive(n: int) -> list[int]:\n \"\"\"\n Calculates the first n (0-indexed) Fibonacci numbers using recursion\n >>> fib_iterative(0)\n [0]\n >>> fib_iterative(1)\n [0, 1]\n >>> fib_iterative(5)\n [0, 1, 1, 2, 3, 5]\n >>> fib_iterative(10)\n [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]\n >>> fib_iterative(-1)\n Traceback (most recent call last):\n ...\n Exception: n is negative\n \"\"\"\n\n def fib_recursive_term(i: int) -> int:\n \"\"\"\n Calculates the i-th (0-indexed) Fibonacci number using recursion\n \"\"\"\n if i < 0:\n raise Exception(\"n is negative\")\n if i < 2:\n return i\n return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)\n\n if n < 0:\n raise Exception(\"n is negative\")\n return [fib_recursive_term(i) for i in range(n + 1)]\n\n\ndef fib_binet(n: int) -> list[int]:\n \"\"\"\n Calculates the first n (0-indexed) Fibonacci numbers using a simplified form\n of Binet's formula:\n https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding\n\n NOTE 1: this function diverges from 
fib_iterative at around n = 71, likely\n due to compounding floating-point arithmetic errors\n\n NOTE 2: this function overflows on n >= 1475 because of the size limitations\n of Python floats\n >>> fib_binet(0)\n [0]\n >>> fib_binet(1)\n [0, 1]\n >>> fib_binet(5)\n [0, 1, 1, 2, 3, 5]\n >>> fib_binet(10)\n [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]\n >>> fib_binet(-1)\n Traceback (most recent call last):\n ...\n Exception: n is negative\n >>> fib_binet(1475)\n Traceback (most recent call last):\n ...\n Exception: n is too large\n \"\"\"\n if n < 0:\n raise Exception(\"n is negative\")\n if n >= 1475:\n raise Exception(\"n is too large\")\n sqrt_5 = sqrt(5)\n phi = (1 + sqrt_5) / 2\n return [round(phi ** i / sqrt_5) for i in range(n + 1)]\n\n\nif __name__ == \"__main__\":\n num = 20\n time_func(fib_iterative, num)\n time_func(fib_recursive, num)\n time_func(fib_binet, num)\n", "path": "maths/fibonacci.py"}], "after_files": [{"content": "# fibonacci.py\n\"\"\"\nCalculates the Fibonacci sequence using iteration, recursion, and a simplified\nform of Binet's formula\n\nNOTE 1: the iterative and recursive functions are more accurate than the Binet's\nformula function because the iterative function doesn't use floats\n\nNOTE 2: the Binet's formula function is much more limited in the size of inputs\nthat it can handle due to the size limitations of Python floats\n\"\"\"\n\nfrom math import sqrt\nfrom time import time\n\n\ndef time_func(func, *args, **kwargs):\n \"\"\"\n Times the execution of a function with parameters\n \"\"\"\n start = time()\n output = func(*args, **kwargs)\n end = time()\n if int(end - start) > 0:\n print(f\"{func.__name__} runtime: {(end - start):0.4f} s\")\n else:\n print(f\"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms\")\n return output\n\n\ndef fib_iterative(n: int) -> list[int]:\n \"\"\"\n Calculates the first n (0-indexed) Fibonacci numbers using iteration\n >>> fib_iterative(0)\n [0]\n >>> fib_iterative(1)\n [0, 1]\n >>> fib_iterative(5)\n [0, 1, 1, 2, 3, 5]\n >>> fib_iterative(10)\n [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]\n >>> fib_iterative(-1)\n Traceback (most recent call last):\n ...\n Exception: n is negative\n \"\"\"\n if n < 0:\n raise Exception(\"n is negative\")\n if n == 0:\n return [0]\n fib = [0, 1]\n for _ in range(n - 1):\n fib.append(fib[-1] + fib[-2])\n return fib\n\n\ndef fib_recursive(n: int) -> list[int]:\n \"\"\"\n Calculates the first n (0-indexed) Fibonacci numbers using recursion\n >>> fib_iterative(0)\n [0]\n >>> fib_iterative(1)\n [0, 1]\n >>> fib_iterative(5)\n [0, 1, 1, 2, 3, 5]\n >>> fib_iterative(10)\n [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]\n >>> fib_iterative(-1)\n Traceback (most recent call last):\n ...\n Exception: n is negative\n \"\"\"\n\n def fib_recursive_term(i: int) -> int:\n \"\"\"\n Calculates the i-th (0-indexed) Fibonacci number using recursion\n \"\"\"\n if i < 0:\n raise Exception(\"n is negative\")\n if i < 2:\n return i\n return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)\n\n if n < 0:\n raise Exception(\"n is negative\")\n return [fib_recursive_term(i) for i in range(n + 1)]\n\n\ndef fib_binet(n: int) -> list[int]:\n \"\"\"\n Calculates the first n (0-indexed) Fibonacci numbers using a simplified form\n of Binet's formula:\n https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding\n\n NOTE 1: this function diverges from fib_iterative at around n = 71, likely\n due to compounding floating-point arithmetic errors\n\n NOTE 2: this function doesn't accept n >= 1475 because it overflows\n 
thereafter due to the size limitations of Python floats\n >>> fib_binet(0)\n [0]\n >>> fib_binet(1)\n [0, 1]\n >>> fib_binet(5)\n [0, 1, 1, 2, 3, 5]\n >>> fib_binet(10)\n [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]\n >>> fib_binet(-1)\n Traceback (most recent call last):\n ...\n Exception: n is negative\n >>> fib_binet(1475)\n Traceback (most recent call last):\n ...\n Exception: n is too large\n \"\"\"\n if n < 0:\n raise Exception(\"n is negative\")\n if n >= 1475:\n raise Exception(\"n is too large\")\n sqrt_5 = sqrt(5)\n phi = (1 + sqrt_5) / 2\n return [round(phi ** i / sqrt_5) for i in range(n + 1)]\n\n\nif __name__ == \"__main__\":\n num = 20\n time_func(fib_iterative, num)\n time_func(fib_recursive, num)\n time_func(fib_binet, num)\n", "path": "maths/fibonacci.py"}]} | 1,783 | 166 |
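The patch above only rewords the `fib_binet` docstring; runtime behaviour is unchanged. A quick boundary check based on the module shown above (the import path assumes the script is run from the repository root, which is an assumption of this sketch):

```python
from maths.fibonacci import fib_binet

assert len(fib_binet(1474)) == 1475   # largest accepted input (0-indexed)
try:
    fib_binet(1475)                   # rejected before Binet's formula overflows a float
except Exception as exc:
    print(exc)                        # n is too large
```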
gh_patches_debug_6077 | rasdani/github-patches | git_diff | learningequality__kolibri-3759 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
remove translations for user kinds on backend
### Observed behavior
In role kinds we use the string "Classroom Assignable Coach": https://github.com/learningequality/kolibri/blob/develop/kolibri/auth/constants/role_kinds.py#L15
This string is not something that should be user-facing.
### Expected behavior
Implementation details should be hidden from the user.
### User-facing consequences
confusing, inconsistent terminology
### Context
https://crowdin.com/translate/kolibri/498/en-es#37506
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/auth/constants/role_kinds.py`
Content:
```
1 """
2 This module contains constants representing the kinds of "roles" that a user can have with respect to a Collection.
3 """
4 from __future__ import unicode_literals
5
6 from django.utils.translation import ugettext_lazy as _
7
8 ADMIN = "admin"
9 COACH = "coach"
10 ASSIGNABLE_COACH = "classroom assignable coach"
11
12 choices = (
13 (ADMIN, _("Admin")),
14 (COACH, _("Coach")),
15 (ASSIGNABLE_COACH, _("Classroom Assignable Coach")),
16 )
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/auth/constants/role_kinds.py b/kolibri/auth/constants/role_kinds.py
--- a/kolibri/auth/constants/role_kinds.py
+++ b/kolibri/auth/constants/role_kinds.py
@@ -3,14 +3,12 @@
"""
from __future__ import unicode_literals
-from django.utils.translation import ugettext_lazy as _
-
ADMIN = "admin"
COACH = "coach"
ASSIGNABLE_COACH = "classroom assignable coach"
choices = (
- (ADMIN, _("Admin")),
- (COACH, _("Coach")),
- (ASSIGNABLE_COACH, _("Classroom Assignable Coach")),
+ (ADMIN, "Admin"),
+ (COACH, "Coach"),
+ (ASSIGNABLE_COACH, "Classroom Assignable Coach"),
)
| {"golden_diff": "diff --git a/kolibri/auth/constants/role_kinds.py b/kolibri/auth/constants/role_kinds.py\n--- a/kolibri/auth/constants/role_kinds.py\n+++ b/kolibri/auth/constants/role_kinds.py\n@@ -3,14 +3,12 @@\n \"\"\"\n from __future__ import unicode_literals\n \n-from django.utils.translation import ugettext_lazy as _\n-\n ADMIN = \"admin\"\n COACH = \"coach\"\n ASSIGNABLE_COACH = \"classroom assignable coach\"\n \n choices = (\n- (ADMIN, _(\"Admin\")),\n- (COACH, _(\"Coach\")),\n- (ASSIGNABLE_COACH, _(\"Classroom Assignable Coach\")),\n+ (ADMIN, \"Admin\"),\n+ (COACH, \"Coach\"),\n+ (ASSIGNABLE_COACH, \"Classroom Assignable Coach\"),\n )\n", "issue": "remove translations for user kinds on backend\n### Observed behavior\r\n\r\nIn role kinds we use the string \"Classroom Assignable Coach\": https://github.com/learningequality/kolibri/blob/develop/kolibri/auth/constants/role_kinds.py#L15\r\n\r\nThis string is not something that should be user-facing\r\n\r\n### Expected behavior\r\n\r\nimplementation details hidden from user\r\n\r\n### User-facing consequences\r\n\r\nconfusing, inconsistent terminology\r\n\r\n\r\n### Context\r\n\r\nhttps://crowdin.com/translate/kolibri/498/en-es#37506\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nThis module contains constants representing the kinds of \"roles\" that a user can have with respect to a Collection.\n\"\"\"\nfrom __future__ import unicode_literals\n\nfrom django.utils.translation import ugettext_lazy as _\n\nADMIN = \"admin\"\nCOACH = \"coach\"\nASSIGNABLE_COACH = \"classroom assignable coach\"\n\nchoices = (\n (ADMIN, _(\"Admin\")),\n (COACH, _(\"Coach\")),\n (ASSIGNABLE_COACH, _(\"Classroom Assignable Coach\")),\n)\n", "path": "kolibri/auth/constants/role_kinds.py"}], "after_files": [{"content": "\"\"\"\nThis module contains constants representing the kinds of \"roles\" that a user can have with respect to a Collection.\n\"\"\"\nfrom __future__ import unicode_literals\n\nADMIN = \"admin\"\nCOACH = \"coach\"\nASSIGNABLE_COACH = \"classroom assignable coach\"\n\nchoices = (\n (ADMIN, \"Admin\"),\n (COACH, \"Coach\"),\n (ASSIGNABLE_COACH, \"Classroom Assignable Coach\"),\n)\n", "path": "kolibri/auth/constants/role_kinds.py"}]} | 504 | 177 |
gh_patches_debug_12296 | rasdani/github-patches | git_diff | fedora-infra__bodhi-2359 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot run database migrations on the 3.7 branch
I built a beta out of the ```HEAD``` of the ```3.7``` branch, and the migrations fail to run:
```
[root@bodhi-backend01 bowlofeggs][STG]# /usr/bin/alembic -c /etc/bodhi/alembic.ini upgrade head
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.env] Emitting SQL to allow for global DDL locking with BDR
/usr/lib/python2.7/site-packages/alembic/util/messaging.py:69: UserWarning: Revision be25565a1211 referenced from be25565a1211 -> 59c0f5fbc1b2 (head), Add a greenwave_unsatisfied_requirements column to the updates table. is not present
warnings.warn(msg)
Traceback (most recent call last):
File "/usr/bin/alembic", line 12, in <module>
sys.exit(load_entry_point('alembic', 'console_scripts', 'alembic')())
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 479, in main
CommandLine(prog=prog).main(argv=argv)
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 473, in main
self.run_cmd(cfg, options)
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 456, in run_cmd
**dict((k, getattr(options, k, None)) for k in kwarg)
File "/usr/lib/python2.7/site-packages/alembic/command.py", line 254, in upgrade
script.run_env()
File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 425, in run_env
util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in load_python_file
module = load_module_py(module_id, path)
File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 141, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "/usr/lib/python2.7/site-packages/bodhi/server/migrations/env.py", line 112, in <module>
run_migrations_online()
File "/usr/lib/python2.7/site-packages/bodhi/server/migrations/env.py", line 104, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/lib/python2.7/site-packages/alembic/runtime/environment.py", line 836, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/lib/python2.7/site-packages/alembic/runtime/migration.py", line 321, in run_migrations
for step in self._migrations_fn(heads, self):
File "/usr/lib/python2.7/site-packages/alembic/command.py", line 243, in upgrade
return script._upgrade_revs(revision, rev)
File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 334, in _upgrade_revs
revs = list(revs)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 645, in _iterate_revisions
requested_lowers = self.get_revisions(lower)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 299, in get_revisions
return sum([self.get_revisions(id_elem) for id_elem in id_], ())
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 301, in get_revisions
resolved_id, branch_label = self._resolve_revision_number(id_)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 437, in _resolve_revision_number
self._revision_map
File "/usr/lib/python2.7/site-packages/alembic/util/langhelpers.py", line 239, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 152, in _revision_map
down_revision = map_[downrev]
KeyError: 'be25565a1211'
```
It sounds like there's a migration on ```develop``` that is not on the ```3.7``` branch, and when I cherry-picked the migration back to ```3.7``` it now references a migration that does not exist. To fix this, I'll need to shuffle the order of the migrations.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py`
Content:
```
1 # Copyright (c) 2018 Red Hat, Inc.
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with this program; if not, write to the Free Software
17 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18 """
19 Add a greenwave_unsatisfied_requirements column to the updates table.
20
21 Revision ID: 59c0f5fbc1b2
22 Revises: be25565a1211
23 Create Date: 2018-05-01 15:37:07.346034
24 """
25 from alembic import op
26 import sqlalchemy as sa
27
28
29 # revision identifiers, used by Alembic.
30 revision = '59c0f5fbc1b2'
31 down_revision = 'be25565a1211'
32
33
34 def upgrade():
35 """Add a greenwave_unsatisfied_requirements to the updates table."""
36 op.add_column('updates',
37 sa.Column('greenwave_unsatisfied_requirements', sa.UnicodeText(), nullable=True))
38
39
40 def downgrade():
41 """Drop the greenwave_unsatisfied_requirements from the updates table."""
42 op.drop_column('updates', 'greenwave_unsatisfied_requirements')
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py b/bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py
--- a/bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py
+++ b/bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py
@@ -19,7 +19,7 @@
Add a greenwave_unsatisfied_requirements column to the updates table.
Revision ID: 59c0f5fbc1b2
-Revises: be25565a1211
+Revises: c21dd18b161a
Create Date: 2018-05-01 15:37:07.346034
"""
from alembic import op
@@ -28,7 +28,7 @@
# revision identifiers, used by Alembic.
revision = '59c0f5fbc1b2'
-down_revision = 'be25565a1211'
+down_revision = 'c21dd18b161a'
def upgrade():
| {"golden_diff": "diff --git a/bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py b/bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py\n--- a/bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py\n+++ b/bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py\n@@ -19,7 +19,7 @@\n Add a greenwave_unsatisfied_requirements column to the updates table.\n \n Revision ID: 59c0f5fbc1b2\n-Revises: be25565a1211\n+Revises: c21dd18b161a\n Create Date: 2018-05-01 15:37:07.346034\n \"\"\"\n from alembic import op\n@@ -28,7 +28,7 @@\n \n # revision identifiers, used by Alembic.\n revision = '59c0f5fbc1b2'\n-down_revision = 'be25565a1211'\n+down_revision = 'c21dd18b161a'\n \n \n def upgrade():\n", "issue": "Cannot run database migrations on the 3.7 branch\nI built a beta out of the ```HEAD``` of the ```3.7``` branch, and the migrations fail to run:\r\n\r\n```\r\n[root@bodhi-backend01 bowlofeggs][STG]# /usr/bin/alembic -c /etc/bodhi/alembic.ini upgrade head\r\nINFO [alembic.runtime.migration] Context impl PostgresqlImpl.\r\nINFO [alembic.runtime.migration] Will assume transactional DDL.\r\nINFO [alembic.env] Emitting SQL to allow for global DDL locking with BDR\r\n/usr/lib/python2.7/site-packages/alembic/util/messaging.py:69: UserWarning: Revision be25565a1211 referenced from be25565a1211 -> 59c0f5fbc1b2 (head), Add a greenwave_unsatisfied_requirements column to the updates table. is not present\r\n warnings.warn(msg)\r\nTraceback (most recent call last):\r\n File \"/usr/bin/alembic\", line 12, in <module>\r\n sys.exit(load_entry_point('alembic', 'console_scripts', 'alembic')())\r\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 479, in main\r\n CommandLine(prog=prog).main(argv=argv)\r\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 473, in main\r\n self.run_cmd(cfg, options)\r\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 456, in run_cmd\r\n **dict((k, getattr(options, k, None)) for k in kwarg)\r\n File \"/usr/lib/python2.7/site-packages/alembic/command.py\", line 254, in upgrade\r\n script.run_env()\r\n File \"/usr/lib/python2.7/site-packages/alembic/script/base.py\", line 425, in run_env\r\n util.load_python_file(self.dir, 'env.py')\r\n File \"/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py\", line 81, in load_python_file\r\n module = load_module_py(module_id, path)\r\n File \"/usr/lib/python2.7/site-packages/alembic/util/compat.py\", line 141, in load_module_py\r\n mod = imp.load_source(module_id, path, fp)\r\n File \"/usr/lib/python2.7/site-packages/bodhi/server/migrations/env.py\", line 112, in <module>\r\n run_migrations_online()\r\n File \"/usr/lib/python2.7/site-packages/bodhi/server/migrations/env.py\", line 104, in run_migrations_online\r\n context.run_migrations()\r\n File \"<string>\", line 8, in run_migrations\r\n File \"/usr/lib/python2.7/site-packages/alembic/runtime/environment.py\", line 836, in run_migrations\r\n self.get_context().run_migrations(**kw)\r\n File \"/usr/lib/python2.7/site-packages/alembic/runtime/migration.py\", line 321, in run_migrations\r\n for step in self._migrations_fn(heads, self):\r\n File \"/usr/lib/python2.7/site-packages/alembic/command.py\", line 243, in upgrade\r\n return script._upgrade_revs(revision, rev)\r\n File \"/usr/lib/python2.7/site-packages/alembic/script/base.py\", line 334, in _upgrade_revs\r\n revs = list(revs)\r\n File 
\"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 645, in _iterate_revisions\r\n requested_lowers = self.get_revisions(lower)\r\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 299, in get_revisions\r\n return sum([self.get_revisions(id_elem) for id_elem in id_], ())\r\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 301, in get_revisions\r\n resolved_id, branch_label = self._resolve_revision_number(id_)\r\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 437, in _resolve_revision_number\r\n self._revision_map\r\n File \"/usr/lib/python2.7/site-packages/alembic/util/langhelpers.py\", line 239, in __get__\r\n obj.__dict__[self.__name__] = result = self.fget(obj)\r\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 152, in _revision_map\r\n down_revision = map_[downrev]\r\nKeyError: 'be25565a1211'\r\n```\r\n\r\nIt sounds like there's a migration on ```develop``` that is not on the ```3.7``` branch, and when I cherry-picked the migration back to ```3.7``` it now references a migration that does not exist. To fix this, I'll need to shuffle the order of the migrations.\n", "before_files": [{"content": "# Copyright (c) 2018 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nAdd a greenwave_unsatisfied_requirements column to the updates table.\n\nRevision ID: 59c0f5fbc1b2\nRevises: be25565a1211\nCreate Date: 2018-05-01 15:37:07.346034\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '59c0f5fbc1b2'\ndown_revision = 'be25565a1211'\n\n\ndef upgrade():\n \"\"\"Add a greenwave_unsatisfied_requirements to the updates table.\"\"\"\n op.add_column('updates',\n sa.Column('greenwave_unsatisfied_requirements', sa.UnicodeText(), nullable=True))\n\n\ndef downgrade():\n \"\"\"Drop the greenwave_unsatisfied_requirements from the updates table.\"\"\"\n op.drop_column('updates', 'greenwave_unsatisfied_requirements')\n", "path": "bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py"}], "after_files": [{"content": "# Copyright (c) 2018 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nAdd a greenwave_unsatisfied_requirements column to the updates table.\n\nRevision ID: 59c0f5fbc1b2\nRevises: c21dd18b161a\nCreate Date: 2018-05-01 15:37:07.346034\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '59c0f5fbc1b2'\ndown_revision = 'c21dd18b161a'\n\n\ndef upgrade():\n \"\"\"Add a greenwave_unsatisfied_requirements to the updates table.\"\"\"\n op.add_column('updates',\n sa.Column('greenwave_unsatisfied_requirements', sa.UnicodeText(), nullable=True))\n\n\ndef downgrade():\n \"\"\"Drop the greenwave_unsatisfied_requirements from the updates table.\"\"\"\n op.drop_column('updates', 'greenwave_unsatisfied_requirements')\n", "path": "bodhi/server/migrations/versions/59c0f5fbc1b2_add_a_greenwave_unsatisfied_.py"}]} | 1,915 | 318 |
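The fix reparents the migration so that `59c0f5fbc1b2` points at `c21dd18b161a`, a revision that exists on the 3.7 branch, instead of the missing `be25565a1211`. One way to sanity-check the revision chain without touching the database is Alembic's script API; the config path below is taken from the traceback in the issue:

```python
from alembic.config import Config
from alembic.script import ScriptDirectory

script = ScriptDirectory.from_config(Config("/etc/bodhi/alembic.ini"))
rev = script.get_revision("59c0f5fbc1b2")
assert rev.down_revision == "c21dd18b161a"   # no longer the absent be25565a1211
```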
gh_patches_debug_33088 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-317 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User should be able to configure multiple databases in settings
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
Currently, the user can only configure one Mathesar database in the settings. They should be able to configure as many databases to connect to Mathesar as they want.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
The user should be able to configure multiple databases in the `.env` file.
**Additional context**
<!-- Add any other context or screenshots about the feature request here.-->
We might want to use `python-decouple`'s [built in CSV helper](https://github.com/henriquebastos/python-decouple/#built-in-csv-helper) for this.
Ideally, the user would be able to associate the database key with the connection information directly using a tuple or something like that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `config/settings.py`
Content:
```
1 """
2 Django settings for config project.
3
4 Generated by 'django-admin startproject' using Django 3.1.7.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.1/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.1/ref/settings/
11 """
12
13 import os
14 from pathlib import Path
15
16 from decouple import Csv, config as decouple_config
17 from dj_database_url import parse as db_url
18
19 # Build paths inside the project like this: BASE_DIR / 'subdir'.
20 BASE_DIR = Path(__file__).resolve().parent.parent
21
22 # Application definition
23
24 INSTALLED_APPS = [
25 "django.contrib.admin",
26 "django.contrib.auth",
27 "django.contrib.contenttypes",
28 "django.contrib.sessions",
29 "django.contrib.messages",
30 "django.contrib.staticfiles",
31 "rest_framework",
32 "django_filters",
33 "django_property_filter",
34 "mathesar",
35 ]
36
37 MIDDLEWARE = [
38 "django.middleware.security.SecurityMiddleware",
39 "django.contrib.sessions.middleware.SessionMiddleware",
40 "django.middleware.common.CommonMiddleware",
41 "django.middleware.csrf.CsrfViewMiddleware",
42 "django.contrib.auth.middleware.AuthenticationMiddleware",
43 "django.contrib.messages.middleware.MessageMiddleware",
44 "django.middleware.clickjacking.XFrameOptionsMiddleware",
45 ]
46
47 ROOT_URLCONF = "config.urls"
48
49 TEMPLATES = [
50 {
51 "BACKEND": "django.template.backends.django.DjangoTemplates",
52 "DIRS": [],
53 "APP_DIRS": True,
54 "OPTIONS": {
55 "context_processors": [
56 "config.context_processors.get_settings",
57 "django.template.context_processors.debug",
58 "django.template.context_processors.request",
59 "django.contrib.auth.context_processors.auth",
60 "django.contrib.messages.context_processors.messages",
61 ],
62 },
63 },
64 ]
65
66 WSGI_APPLICATION = "config.wsgi.application"
67
68 # Database
69 # https://docs.djangoproject.com/en/3.1/ref/settings/#databases
70
71 # TODO: Add to documentation that database keys should not be than 128 characters.
72 DATABASES = {
73 decouple_config('DJANGO_DATABASE_KEY'): decouple_config('DJANGO_DATABASE_URL', cast=db_url),
74 decouple_config('MATHESAR_DATABASE_KEY'): decouple_config('MATHESAR_DATABASE_URL', cast=db_url)
75 }
76
77 # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'
78 # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']
79 if decouple_config('TEST', default=False, cast=bool):
80 DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['TEST'] = {
81 'NAME': DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['NAME']
82 }
83
84
85 # Quick-start development settings - unsuitable for production
86 # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
87
88 # SECURITY WARNING: keep the secret key used in production secret!
89 SECRET_KEY = decouple_config('SECRET_KEY')
90
91 # SECURITY WARNING: don't run with debug turned on in production!
92 DEBUG = decouple_config('DEBUG', default=False, cast=bool)
93
94 ALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())
95
96 # Password validation
97 # https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
98
99 AUTH_PASSWORD_VALIDATORS = [
100 {
101 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
102 },
103 {
104 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
105 },
106 {
107 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
108 },
109 {
110 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
111 },
112 ]
113
114
115 # Internationalization
116 # https://docs.djangoproject.com/en/3.1/topics/i18n/
117
118 LANGUAGE_CODE = "en-us"
119
120 TIME_ZONE = "UTC"
121
122 USE_I18N = True
123
124 USE_L10N = True
125
126 USE_TZ = True
127
128
129 # Static files (CSS, JavaScript, Images)
130 # https://docs.djangoproject.com/en/3.1/howto/static-files/
131
132 STATIC_URL = "/static/"
133
134 CLIENT_DEV_URL = "http://localhost:3000"
135
136
137 # Media files (uploaded by the user)
138
139 MEDIA_ROOT = os.path.join(BASE_DIR, '.media/')
140
141 MEDIA_URL = "/media/"
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/config/settings.py b/config/settings.py
--- a/config/settings.py
+++ b/config/settings.py
@@ -16,6 +16,16 @@
from decouple import Csv, config as decouple_config
from dj_database_url import parse as db_url
+
+# We use a 'tuple' with pipes as delimiters as decople naively splits the global
+# variables on commas when casting to Csv()
+def pipe_delim(pipe_string):
+ # Remove opening and closing brackets
+ pipe_string = pipe_string[1:-1]
+ # Split on pipe delim
+ return pipe_string.split("|")
+
+
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
@@ -69,17 +79,20 @@
# https://docs.djangoproject.com/en/3.1/ref/settings/#databases
# TODO: Add to documentation that database keys should not be than 128 characters.
+
+# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'
+# See pipe_delim above for why we use pipes as delimiters
DATABASES = {
- decouple_config('DJANGO_DATABASE_KEY'): decouple_config('DJANGO_DATABASE_URL', cast=db_url),
- decouple_config('MATHESAR_DATABASE_KEY'): decouple_config('MATHESAR_DATABASE_URL', cast=db_url)
+ db_key: db_url(url_string)
+ for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))
}
+DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)
# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'
# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']
if decouple_config('TEST', default=False, cast=bool):
- DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['TEST'] = {
- 'NAME': DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['NAME']
- }
+ for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):
+ DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}
# Quick-start development settings - unsuitable for production
| {"golden_diff": "diff --git a/config/settings.py b/config/settings.py\n--- a/config/settings.py\n+++ b/config/settings.py\n@@ -16,6 +16,16 @@\n from decouple import Csv, config as decouple_config\n from dj_database_url import parse as db_url\n \n+\n+# We use a 'tuple' with pipes as delimiters as decople naively splits the global\n+# variables on commas when casting to Csv()\n+def pipe_delim(pipe_string):\n+ # Remove opening and closing brackets\n+ pipe_string = pipe_string[1:-1]\n+ # Split on pipe delim\n+ return pipe_string.split(\"|\")\n+\n+\n # Build paths inside the project like this: BASE_DIR / 'subdir'.\n BASE_DIR = Path(__file__).resolve().parent.parent\n \n@@ -69,17 +79,20 @@\n # https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n \n # TODO: Add to documentation that database keys should not be than 128 characters.\n+\n+# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'\n+# See pipe_delim above for why we use pipes as delimiters\n DATABASES = {\n- decouple_config('DJANGO_DATABASE_KEY'): decouple_config('DJANGO_DATABASE_URL', cast=db_url),\n- decouple_config('MATHESAR_DATABASE_KEY'): decouple_config('MATHESAR_DATABASE_URL', cast=db_url)\n+ db_key: db_url(url_string)\n+ for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))\n }\n+DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n \n # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\n if decouple_config('TEST', default=False, cast=bool):\n- DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['TEST'] = {\n- 'NAME': DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['NAME']\n- }\n+ for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):\n+ DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}\n \n \n # Quick-start development settings - unsuitable for production\n", "issue": "User should be able to configure multiple databases in settings\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nCurrently, the user can only configure one Mathesar database in the settings. They should be able to configure as many databases to connect to Mathesar as they want.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. 
-->\r\nThe user should be able to configure multiple databases in the `.env` file.\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\nWe might want to use `python-decouple`'s [built in CSV helper](https://github.com/henriquebastos/python-decouple/#built-in-csv-helper) for this.\r\n\r\nIdeally, the user would be able to associate the database key with the connection information directly using a tuple or something like that.\n", "before_files": [{"content": "\"\"\"\nDjango settings for config project.\n\nGenerated by 'django-admin startproject' using Django 3.1.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.1/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.1/ref/settings/\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom decouple import Csv, config as decouple_config\nfrom dj_database_url import parse as db_url\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n \"django_filters\",\n \"django_property_filter\",\n \"mathesar\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"config.context_processors.get_settings\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n\n# TODO: Add to documentation that database keys should not be than 128 characters.\nDATABASES = {\n decouple_config('DJANGO_DATABASE_KEY'): decouple_config('DJANGO_DATABASE_URL', cast=db_url),\n decouple_config('MATHESAR_DATABASE_KEY'): decouple_config('MATHESAR_DATABASE_URL', cast=db_url)\n}\n\n# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\nif decouple_config('TEST', default=False, cast=bool):\n DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['TEST'] = {\n 'NAME': DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['NAME']\n }\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = decouple_config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = decouple_config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = 
decouple_config('ALLOWED_HOSTS', cast=Csv())\n\n# Password validation\n# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.1/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.1/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\nCLIENT_DEV_URL = \"http://localhost:3000\"\n\n\n# Media files (uploaded by the user)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, '.media/')\n\nMEDIA_URL = \"/media/\"\n", "path": "config/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for config project.\n\nGenerated by 'django-admin startproject' using Django 3.1.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.1/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.1/ref/settings/\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom decouple import Csv, config as decouple_config\nfrom dj_database_url import parse as db_url\n\n\n# We use a 'tuple' with pipes as delimiters as decople naively splits the global\n# variables on commas when casting to Csv()\ndef pipe_delim(pipe_string):\n # Remove opening and closing brackets\n pipe_string = pipe_string[1:-1]\n # Split on pipe delim\n return pipe_string.split(\"|\")\n\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n \"django_filters\",\n \"django_property_filter\",\n \"mathesar\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"config.context_processors.get_settings\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n\n# TODO: Add to documentation that database keys should not be than 128 characters.\n\n# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'\n# See pipe_delim above for why we use 
pipes as delimiters\nDATABASES = {\n db_key: db_url(url_string)\n for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))\n}\nDATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n\n# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\nif decouple_config('TEST', default=False, cast=bool):\n for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):\n DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = decouple_config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = decouple_config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())\n\n# Password validation\n# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.1/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.1/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\nCLIENT_DEV_URL = \"http://localhost:3000\"\n\n\n# Media files (uploaded by the user)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, '.media/')\n\nMEDIA_URL = \"/media/\"\n", "path": "config/settings.py"}]} | 1,712 | 545 |
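For context on the `Csv` helper mentioned in the feature request above, here is a minimal, self-contained sketch of the pipe-delimited parsing idea; the `MATHESAR_DATABASES` value is hypothetical, and the snippet assumes `python-decouple` and `dj-database-url` are installed and the variable is set in the environment:

```python
from decouple import Csv, config as decouple_config
from dj_database_url import parse as db_url


def pipe_delim(pipe_string):
    # Strip the surrounding brackets, then split '(key|url)' into [key, url].
    return pipe_string[1:-1].split("|")


# Hypothetical value:
# MATHESAR_DATABASES=(mathesar_tables|postgresql://user:pass@localhost/db1)
DATABASES = {
    db_key: db_url(url_string)
    for db_key, url_string in decouple_config("MATHESAR_DATABASES", cast=Csv(pipe_delim))
}
print(DATABASES)
```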
gh_patches_debug_3923 | rasdani/github-patches | git_diff | deepset-ai__haystack-6173 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create a script for 2.0 API Reference docs
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/pydoc/renderers.py`
Content:
```
1 import os
2 import sys
3 import io
4 import dataclasses
5 import typing as t
6 import base64
7 import warnings
8 from pathlib import Path
9
10 import requests
11 import docspec
12 from pydoc_markdown.interfaces import Context, Renderer
13 from pydoc_markdown.contrib.renderers.markdown import MarkdownRenderer
14
15
16 README_FRONTMATTER = """---
17 title: {title}
18 excerpt: {excerpt}
19 category: {category}
20 slug: {slug}
21 parentDoc: {parent_doc}
22 order: {order}
23 hidden: false
24 ---
25
26 """
27
28
29 def create_headers(version: str):
30 # Utility function to create Readme.io headers.
31 # We assume the README_API_KEY env var is set since we check outside
32 # to show clearer error messages.
33 README_API_KEY = os.getenv("README_API_KEY")
34 token = base64.b64encode(f"{README_API_KEY}:".encode()).decode()
35 return {"authorization": f"Basic {token}", "x-readme-version": version}
36
37
38 @dataclasses.dataclass
39 class ReadmeRenderer(Renderer):
40 """
41 This custom Renderer is heavily based on the `MarkdownRenderer`,
42 it just prepends a front matter so that the output can be published
43 directly to readme.io.
44 """
45
46 # These settings will be used in the front matter output
47 title: str
48 category_slug: str
49 excerpt: str
50 slug: str
51 order: int
52 parent_doc_slug: str = ""
53 # Docs categories fetched from Readme.io
54 categories: t.Dict[str, str] = dataclasses.field(init=False)
55 # This exposes a special `markdown` settings value that can be used to pass
56 # parameters to the underlying `MarkdownRenderer`
57 markdown: MarkdownRenderer = dataclasses.field(default_factory=MarkdownRenderer)
58
59 def init(self, context: Context) -> None:
60 self.markdown.init(context)
61 self.version = self._doc_version()
62 self.categories = self._readme_categories(self.version)
63
64 def _doc_version(self) -> str:
65 """
66 Returns the docs version.
67 """
68 root = Path(__file__).absolute().parent.parent.parent
69 full_version = (root / "VERSION.txt").read_text()
70 major, minor = full_version.split(".")[:2]
71 if "rc0" in full_version:
72 return f"v{major}.{minor}-unstable"
73 return f"v{major}.{minor}"
74
75 def _readme_categories(self, version: str) -> t.Dict[str, str]:
76 """
77 Fetch the categories of the given version from Readme.io.
78 README_API_KEY env var must be set to correctly get the categories.
79 Returns dictionary containing all the categories slugs and their ids.
80 """
81 README_API_KEY = os.getenv("README_API_KEY")
82 if not README_API_KEY:
83 warnings.warn("README_API_KEY env var is not set, using a placeholder category ID")
84 return {"haystack-classes": "ID"}
85
86 headers = create_headers(version)
87
88 res = requests.get("https://dash.readme.com/api/v1/categories", headers=headers, timeout=60)
89
90 if not res.ok:
91 sys.exit(f"Error requesting {version} categories")
92
93 return {c["slug"]: c["id"] for c in res.json()}
94
95 def _doc_id(self, doc_slug: str, version: str) -> str:
96 """
97 Fetch the doc id of the given doc slug and version from Readme.io.
98 README_API_KEY env var must be set to correctly get the id.
99 If doc_slug is an empty string return an empty string.
100 """
101 if not doc_slug:
102 # Not all docs have a parent doc, in case we get no slug
103 # we just return an empty string.
104 return ""
105
106 README_API_KEY = os.getenv("README_API_KEY")
107 if not README_API_KEY:
108 warnings.warn("README_API_KEY env var is not set, using a placeholder doc ID")
109 return "fake-doc-id"
110
111 headers = create_headers(version)
112 res = requests.get(f"https://dash.readme.com/api/v1/docs/{doc_slug}", headers=headers, timeout=60)
113 if not res.ok:
114 sys.exit(f"Error requesting {doc_slug} doc for version {version}")
115
116 return res.json()["id"]
117
118 def render(self, modules: t.List[docspec.Module]) -> None:
119 if self.markdown.filename is None:
120 sys.stdout.write(self._frontmatter())
121 self.markdown.render_single_page(sys.stdout, modules)
122 else:
123 with io.open(self.markdown.filename, "w", encoding=self.markdown.encoding) as fp:
124 fp.write(self._frontmatter())
125 self.markdown.render_single_page(t.cast(t.TextIO, fp), modules)
126
127 def _frontmatter(self) -> str:
128 return README_FRONTMATTER.format(
129 title=self.title,
130 category=self.categories[self.category_slug],
131 parent_doc=self._doc_id(self.parent_doc_slug, self.version),
132 excerpt=self.excerpt,
133 slug=self.slug,
134 order=self.order,
135 )
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/pydoc/renderers.py b/docs/pydoc/renderers.py
--- a/docs/pydoc/renderers.py
+++ b/docs/pydoc/renderers.py
@@ -133,3 +133,16 @@
slug=self.slug,
order=self.order,
)
+
+
[email protected]
+class ReadmePreviewRenderer(ReadmeRenderer):
+ """
+ This custom Renderer behaves just like the ReadmeRenderer but renders docs with the hardcoded version 2.0 to generate correct category ids.
+ """
+
+ def _doc_version(self) -> str:
+ """
+ Returns the hardcoded docs version 2.0.
+ """
+ return "v2.0"
| {"golden_diff": "diff --git a/docs/pydoc/renderers.py b/docs/pydoc/renderers.py\n--- a/docs/pydoc/renderers.py\n+++ b/docs/pydoc/renderers.py\n@@ -133,3 +133,16 @@\n slug=self.slug,\n order=self.order,\n )\n+\n+\[email protected]\n+class ReadmePreviewRenderer(ReadmeRenderer):\n+ \"\"\"\n+ This custom Renderer behaves just like the ReadmeRenderer but renders docs with the hardcoded version 2.0 to generate correct category ids.\n+ \"\"\"\n+\n+ def _doc_version(self) -> str:\n+ \"\"\"\n+ Returns the hardcoded docs version 2.0.\n+ \"\"\"\n+ return \"v2.0\"\n", "issue": "Create a script for 2.0 API Reference docs\n\n", "before_files": [{"content": "import os\nimport sys\nimport io\nimport dataclasses\nimport typing as t\nimport base64\nimport warnings\nfrom pathlib import Path\n\nimport requests\nimport docspec\nfrom pydoc_markdown.interfaces import Context, Renderer\nfrom pydoc_markdown.contrib.renderers.markdown import MarkdownRenderer\n\n\nREADME_FRONTMATTER = \"\"\"---\ntitle: {title}\nexcerpt: {excerpt}\ncategory: {category}\nslug: {slug}\nparentDoc: {parent_doc}\norder: {order}\nhidden: false\n---\n\n\"\"\"\n\n\ndef create_headers(version: str):\n # Utility function to create Readme.io headers.\n # We assume the README_API_KEY env var is set since we check outside\n # to show clearer error messages.\n README_API_KEY = os.getenv(\"README_API_KEY\")\n token = base64.b64encode(f\"{README_API_KEY}:\".encode()).decode()\n return {\"authorization\": f\"Basic {token}\", \"x-readme-version\": version}\n\n\[email protected]\nclass ReadmeRenderer(Renderer):\n \"\"\"\n This custom Renderer is heavily based on the `MarkdownRenderer`,\n it just prepends a front matter so that the output can be published\n directly to readme.io.\n \"\"\"\n\n # These settings will be used in the front matter output\n title: str\n category_slug: str\n excerpt: str\n slug: str\n order: int\n parent_doc_slug: str = \"\"\n # Docs categories fetched from Readme.io\n categories: t.Dict[str, str] = dataclasses.field(init=False)\n # This exposes a special `markdown` settings value that can be used to pass\n # parameters to the underlying `MarkdownRenderer`\n markdown: MarkdownRenderer = dataclasses.field(default_factory=MarkdownRenderer)\n\n def init(self, context: Context) -> None:\n self.markdown.init(context)\n self.version = self._doc_version()\n self.categories = self._readme_categories(self.version)\n\n def _doc_version(self) -> str:\n \"\"\"\n Returns the docs version.\n \"\"\"\n root = Path(__file__).absolute().parent.parent.parent\n full_version = (root / \"VERSION.txt\").read_text()\n major, minor = full_version.split(\".\")[:2]\n if \"rc0\" in full_version:\n return f\"v{major}.{minor}-unstable\"\n return f\"v{major}.{minor}\"\n\n def _readme_categories(self, version: str) -> t.Dict[str, str]:\n \"\"\"\n Fetch the categories of the given version from Readme.io.\n README_API_KEY env var must be set to correctly get the categories.\n Returns dictionary containing all the categories slugs and their ids.\n \"\"\"\n README_API_KEY = os.getenv(\"README_API_KEY\")\n if not README_API_KEY:\n warnings.warn(\"README_API_KEY env var is not set, using a placeholder category ID\")\n return {\"haystack-classes\": \"ID\"}\n\n headers = create_headers(version)\n\n res = requests.get(\"https://dash.readme.com/api/v1/categories\", headers=headers, timeout=60)\n\n if not res.ok:\n sys.exit(f\"Error requesting {version} categories\")\n\n return {c[\"slug\"]: c[\"id\"] for c in res.json()}\n\n def _doc_id(self, doc_slug: str, version: 
str) -> str:\n \"\"\"\n Fetch the doc id of the given doc slug and version from Readme.io.\n README_API_KEY env var must be set to correctly get the id.\n If doc_slug is an empty string return an empty string.\n \"\"\"\n if not doc_slug:\n # Not all docs have a parent doc, in case we get no slug\n # we just return an empty string.\n return \"\"\n\n README_API_KEY = os.getenv(\"README_API_KEY\")\n if not README_API_KEY:\n warnings.warn(\"README_API_KEY env var is not set, using a placeholder doc ID\")\n return \"fake-doc-id\"\n\n headers = create_headers(version)\n res = requests.get(f\"https://dash.readme.com/api/v1/docs/{doc_slug}\", headers=headers, timeout=60)\n if not res.ok:\n sys.exit(f\"Error requesting {doc_slug} doc for version {version}\")\n\n return res.json()[\"id\"]\n\n def render(self, modules: t.List[docspec.Module]) -> None:\n if self.markdown.filename is None:\n sys.stdout.write(self._frontmatter())\n self.markdown.render_single_page(sys.stdout, modules)\n else:\n with io.open(self.markdown.filename, \"w\", encoding=self.markdown.encoding) as fp:\n fp.write(self._frontmatter())\n self.markdown.render_single_page(t.cast(t.TextIO, fp), modules)\n\n def _frontmatter(self) -> str:\n return README_FRONTMATTER.format(\n title=self.title,\n category=self.categories[self.category_slug],\n parent_doc=self._doc_id(self.parent_doc_slug, self.version),\n excerpt=self.excerpt,\n slug=self.slug,\n order=self.order,\n )\n", "path": "docs/pydoc/renderers.py"}], "after_files": [{"content": "import os\nimport sys\nimport io\nimport dataclasses\nimport typing as t\nimport base64\nimport warnings\nfrom pathlib import Path\n\nimport requests\nimport docspec\nfrom pydoc_markdown.interfaces import Context, Renderer\nfrom pydoc_markdown.contrib.renderers.markdown import MarkdownRenderer\n\n\nREADME_FRONTMATTER = \"\"\"---\ntitle: {title}\nexcerpt: {excerpt}\ncategory: {category}\nslug: {slug}\nparentDoc: {parent_doc}\norder: {order}\nhidden: false\n---\n\n\"\"\"\n\n\ndef create_headers(version: str):\n # Utility function to create Readme.io headers.\n # We assume the README_API_KEY env var is set since we check outside\n # to show clearer error messages.\n README_API_KEY = os.getenv(\"README_API_KEY\")\n token = base64.b64encode(f\"{README_API_KEY}:\".encode()).decode()\n return {\"authorization\": f\"Basic {token}\", \"x-readme-version\": version}\n\n\[email protected]\nclass ReadmeRenderer(Renderer):\n \"\"\"\n This custom Renderer is heavily based on the `MarkdownRenderer`,\n it just prepends a front matter so that the output can be published\n directly to readme.io.\n \"\"\"\n\n # These settings will be used in the front matter output\n title: str\n category_slug: str\n excerpt: str\n slug: str\n order: int\n parent_doc_slug: str = \"\"\n # Docs categories fetched from Readme.io\n categories: t.Dict[str, str] = dataclasses.field(init=False)\n # This exposes a special `markdown` settings value that can be used to pass\n # parameters to the underlying `MarkdownRenderer`\n markdown: MarkdownRenderer = dataclasses.field(default_factory=MarkdownRenderer)\n\n def init(self, context: Context) -> None:\n self.markdown.init(context)\n self.version = self._doc_version()\n self.categories = self._readme_categories(self.version)\n\n def _doc_version(self) -> str:\n \"\"\"\n Returns the docs version.\n \"\"\"\n root = Path(__file__).absolute().parent.parent.parent\n full_version = (root / \"VERSION.txt\").read_text()\n major, minor = full_version.split(\".\")[:2]\n if \"rc0\" in full_version:\n return 
f\"v{major}.{minor}-unstable\"\n return f\"v{major}.{minor}\"\n\n def _readme_categories(self, version: str) -> t.Dict[str, str]:\n \"\"\"\n Fetch the categories of the given version from Readme.io.\n README_API_KEY env var must be set to correctly get the categories.\n Returns dictionary containing all the categories slugs and their ids.\n \"\"\"\n README_API_KEY = os.getenv(\"README_API_KEY\")\n if not README_API_KEY:\n warnings.warn(\"README_API_KEY env var is not set, using a placeholder category ID\")\n return {\"haystack-classes\": \"ID\"}\n\n headers = create_headers(version)\n\n res = requests.get(\"https://dash.readme.com/api/v1/categories\", headers=headers, timeout=60)\n\n if not res.ok:\n sys.exit(f\"Error requesting {version} categories\")\n\n return {c[\"slug\"]: c[\"id\"] for c in res.json()}\n\n def _doc_id(self, doc_slug: str, version: str) -> str:\n \"\"\"\n Fetch the doc id of the given doc slug and version from Readme.io.\n README_API_KEY env var must be set to correctly get the id.\n If doc_slug is an empty string return an empty string.\n \"\"\"\n if not doc_slug:\n # Not all docs have a parent doc, in case we get no slug\n # we just return an empty string.\n return \"\"\n\n README_API_KEY = os.getenv(\"README_API_KEY\")\n if not README_API_KEY:\n warnings.warn(\"README_API_KEY env var is not set, using a placeholder doc ID\")\n return \"fake-doc-id\"\n\n headers = create_headers(version)\n res = requests.get(f\"https://dash.readme.com/api/v1/docs/{doc_slug}\", headers=headers, timeout=60)\n if not res.ok:\n sys.exit(f\"Error requesting {doc_slug} doc for version {version}\")\n\n return res.json()[\"id\"]\n\n def render(self, modules: t.List[docspec.Module]) -> None:\n if self.markdown.filename is None:\n sys.stdout.write(self._frontmatter())\n self.markdown.render_single_page(sys.stdout, modules)\n else:\n with io.open(self.markdown.filename, \"w\", encoding=self.markdown.encoding) as fp:\n fp.write(self._frontmatter())\n self.markdown.render_single_page(t.cast(t.TextIO, fp), modules)\n\n def _frontmatter(self) -> str:\n return README_FRONTMATTER.format(\n title=self.title,\n category=self.categories[self.category_slug],\n parent_doc=self._doc_id(self.parent_doc_slug, self.version),\n excerpt=self.excerpt,\n slug=self.slug,\n order=self.order,\n )\n\n\[email protected]\nclass ReadmePreviewRenderer(ReadmeRenderer):\n \"\"\"\n This custom Renderer behaves just like the ReadmeRenderer but renders docs with the hardcoded version 2.0 to generate correct category ids.\n \"\"\"\n\n def _doc_version(self) -> str:\n \"\"\"\n Returns the hardcoded docs version 2.0.\n \"\"\"\n return \"v2.0\"\n", "path": "docs/pydoc/renderers.py"}]} | 1,650 | 157 |
gh_patches_debug_22565 | rasdani/github-patches | git_diff | pre-commit__pre-commit-874 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Concurrent execution results in uneven work per thread
I'm running `pre-commit` from current `master` to test the concurrency feature introduced with #851. While it seems to work in general, work is distributed pretty unevenly. One hook we run is [`prospector`](https://github.com/guykisel/prospector-mirror), which is nice for testing because it takes a relatively long time and prints the time taken in its output.
Running `pre-commit run -a --verbose prospector | grep "Time Taken"` on a medium-sized project (~100 Python files) results in the following distribution of work across the available 4 logical CPU cores:
```
Time Taken: 17.10 seconds
Time Taken: 8.70 seconds
Time Taken: 18.68 seconds
Time Taken: 108.02 seconds
```
Especially compared to running it with concurrency disabled (using `PRE_COMMIT_NO_CONCURRENCY`), it's pretty obvious that concurrency doesn't provide any real benefit here:
```
Time Taken: 116.95 seconds
```
I'd be happy to help debug this further. Just tell me what other information you need. :slightly_smiling_face:
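For intuition, here is a small self-contained sketch; the per-file costs are invented and this is not pre-commit's actual code. It shows why contiguous, xargs-style partitioning of a sorted file list can leave one worker with almost all the work, and how a deterministically seeded shuffle spreads it out:

```python
import random

# Hypothetical per-file hook costs: a quarter of the files dominate the
# runtime, and they sit next to each other in the (sorted) file list.
costs = [1] * 75 + [20] * 25

def partition_costs(files, n_workers=4):
    # xargs-style contiguous chunking: each worker gets an equal-length slice.
    size = len(files) // n_workers
    return [sum(files[i * size:(i + 1) * size]) for i in range(n_workers)]

print(partition_costs(costs))     # [25, 25, 25, 500] -> one worker does ~90%

rng = random.Random(0)            # a fixed seed keeps runs deterministic
shuffled = costs[:]
rng.shuffle(shuffled)
print(partition_costs(shuffled))  # costs are now spread roughly evenly
```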
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/helpers.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import multiprocessing
4 import os
5 import shlex
6
7 from pre_commit.util import cmd_output
8 from pre_commit.xargs import xargs
9
10
11 def run_setup_cmd(prefix, cmd):
12 cmd_output(*cmd, cwd=prefix.prefix_dir, encoding=None)
13
14
15 def environment_dir(ENVIRONMENT_DIR, language_version):
16 if ENVIRONMENT_DIR is None:
17 return None
18 else:
19 return '{}-{}'.format(ENVIRONMENT_DIR, language_version)
20
21
22 def to_cmd(hook):
23 return tuple(shlex.split(hook['entry'])) + tuple(hook['args'])
24
25
26 def assert_version_default(binary, version):
27 if version != 'default':
28 raise AssertionError(
29 'For now, pre-commit requires system-installed {}'.format(binary),
30 )
31
32
33 def assert_no_additional_deps(lang, additional_deps):
34 if additional_deps:
35 raise AssertionError(
36 'For now, pre-commit does not support '
37 'additional_dependencies for {}'.format(lang),
38 )
39
40
41 def basic_get_default_version():
42 return 'default'
43
44
45 def basic_healthy(prefix, language_version):
46 return True
47
48
49 def no_install(prefix, version, additional_dependencies):
50 raise AssertionError('This type is not installable')
51
52
53 def target_concurrency(hook):
54 if hook['require_serial'] or 'PRE_COMMIT_NO_CONCURRENCY' in os.environ:
55 return 1
56 else:
57 # Travis appears to have a bunch of CPUs, but we can't use them all.
58 if 'TRAVIS' in os.environ:
59 return 2
60 else:
61 try:
62 return multiprocessing.cpu_count()
63 except NotImplementedError:
64 return 1
65
66
67 def run_xargs(hook, cmd, file_args):
68 return xargs(cmd, file_args, target_concurrency=target_concurrency(hook))
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py
--- a/pre_commit/languages/helpers.py
+++ b/pre_commit/languages/helpers.py
@@ -2,12 +2,18 @@
import multiprocessing
import os
+import random
import shlex
+import six
+
from pre_commit.util import cmd_output
from pre_commit.xargs import xargs
+FIXED_RANDOM_SEED = 1542676186
+
+
def run_setup_cmd(prefix, cmd):
cmd_output(*cmd, cwd=prefix.prefix_dir, encoding=None)
@@ -64,5 +70,21 @@
return 1
+def _shuffled(seq):
+ """Deterministically shuffle identically under both py2 + py3."""
+ fixed_random = random.Random()
+ if six.PY2: # pragma: no cover (py2)
+ fixed_random.seed(FIXED_RANDOM_SEED)
+ else:
+ fixed_random.seed(FIXED_RANDOM_SEED, version=1)
+
+ seq = list(seq)
+ random.shuffle(seq, random=fixed_random.random)
+ return seq
+
+
def run_xargs(hook, cmd, file_args):
+ # Shuffle the files so that they more evenly fill out the xargs partitions,
+ # but do it deterministically in case a hook cares about ordering.
+ file_args = _shuffled(file_args)
return xargs(cmd, file_args, target_concurrency=target_concurrency(hook))
| {"golden_diff": "diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py\n--- a/pre_commit/languages/helpers.py\n+++ b/pre_commit/languages/helpers.py\n@@ -2,12 +2,18 @@\n \n import multiprocessing\n import os\n+import random\n import shlex\n \n+import six\n+\n from pre_commit.util import cmd_output\n from pre_commit.xargs import xargs\n \n \n+FIXED_RANDOM_SEED = 1542676186\n+\n+\n def run_setup_cmd(prefix, cmd):\n cmd_output(*cmd, cwd=prefix.prefix_dir, encoding=None)\n \n@@ -64,5 +70,21 @@\n return 1\n \n \n+def _shuffled(seq):\n+ \"\"\"Deterministically shuffle identically under both py2 + py3.\"\"\"\n+ fixed_random = random.Random()\n+ if six.PY2: # pragma: no cover (py2)\n+ fixed_random.seed(FIXED_RANDOM_SEED)\n+ else:\n+ fixed_random.seed(FIXED_RANDOM_SEED, version=1)\n+\n+ seq = list(seq)\n+ random.shuffle(seq, random=fixed_random.random)\n+ return seq\n+\n+\n def run_xargs(hook, cmd, file_args):\n+ # Shuffle the files so that they more evenly fill out the xargs partitions,\n+ # but do it deterministically in case a hook cares about ordering.\n+ file_args = _shuffled(file_args)\n return xargs(cmd, file_args, target_concurrency=target_concurrency(hook))\n", "issue": "Concurrent execution results in uneven work per thread\nI'm running `pre-commit` from current `master` to test the concurrency feature introduced with #851. While it in general seems to work, work is distributed pretty uneven. One hook we run is [`prospector`](https://github.com/guykisel/prospector-mirror) which is nice for testing, because it takes a relatively long time and it prints the time taken in its output.\r\n\r\nRunning `pre-commit run -a --verbose prospector | grep \"Time Taken\"` on a medium sized project (~100 Python files) results in the following distribution of work to the available 4 logical CPU cores:\r\n```\r\nTime Taken: 17.10 seconds\r\nTime Taken: 8.70 seconds\r\nTime Taken: 18.68 seconds\r\nTime Taken: 108.02 seconds\r\n```\r\n\r\nEspecially compared to running it with concurrency disabled (using `PRE_COMMIT_NO_CONCURRENCY`), it's pretty obvious that concurrency doesn't provide any real benefit here:\r\n```\r\nTime Taken: 116.95 seconds\r\n```\r\n\r\nI'd be happy to help debugging this further. Just tell me what other information you need. 
:slightly_smiling_face: \n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport multiprocessing\nimport os\nimport shlex\n\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\ndef run_setup_cmd(prefix, cmd):\n cmd_output(*cmd, cwd=prefix.prefix_dir, encoding=None)\n\n\ndef environment_dir(ENVIRONMENT_DIR, language_version):\n if ENVIRONMENT_DIR is None:\n return None\n else:\n return '{}-{}'.format(ENVIRONMENT_DIR, language_version)\n\n\ndef to_cmd(hook):\n return tuple(shlex.split(hook['entry'])) + tuple(hook['args'])\n\n\ndef assert_version_default(binary, version):\n if version != 'default':\n raise AssertionError(\n 'For now, pre-commit requires system-installed {}'.format(binary),\n )\n\n\ndef assert_no_additional_deps(lang, additional_deps):\n if additional_deps:\n raise AssertionError(\n 'For now, pre-commit does not support '\n 'additional_dependencies for {}'.format(lang),\n )\n\n\ndef basic_get_default_version():\n return 'default'\n\n\ndef basic_healthy(prefix, language_version):\n return True\n\n\ndef no_install(prefix, version, additional_dependencies):\n raise AssertionError('This type is not installable')\n\n\ndef target_concurrency(hook):\n if hook['require_serial'] or 'PRE_COMMIT_NO_CONCURRENCY' in os.environ:\n return 1\n else:\n # Travis appears to have a bunch of CPUs, but we can't use them all.\n if 'TRAVIS' in os.environ:\n return 2\n else:\n try:\n return multiprocessing.cpu_count()\n except NotImplementedError:\n return 1\n\n\ndef run_xargs(hook, cmd, file_args):\n return xargs(cmd, file_args, target_concurrency=target_concurrency(hook))\n", "path": "pre_commit/languages/helpers.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport multiprocessing\nimport os\nimport random\nimport shlex\n\nimport six\n\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\nFIXED_RANDOM_SEED = 1542676186\n\n\ndef run_setup_cmd(prefix, cmd):\n cmd_output(*cmd, cwd=prefix.prefix_dir, encoding=None)\n\n\ndef environment_dir(ENVIRONMENT_DIR, language_version):\n if ENVIRONMENT_DIR is None:\n return None\n else:\n return '{}-{}'.format(ENVIRONMENT_DIR, language_version)\n\n\ndef to_cmd(hook):\n return tuple(shlex.split(hook['entry'])) + tuple(hook['args'])\n\n\ndef assert_version_default(binary, version):\n if version != 'default':\n raise AssertionError(\n 'For now, pre-commit requires system-installed {}'.format(binary),\n )\n\n\ndef assert_no_additional_deps(lang, additional_deps):\n if additional_deps:\n raise AssertionError(\n 'For now, pre-commit does not support '\n 'additional_dependencies for {}'.format(lang),\n )\n\n\ndef basic_get_default_version():\n return 'default'\n\n\ndef basic_healthy(prefix, language_version):\n return True\n\n\ndef no_install(prefix, version, additional_dependencies):\n raise AssertionError('This type is not installable')\n\n\ndef target_concurrency(hook):\n if hook['require_serial'] or 'PRE_COMMIT_NO_CONCURRENCY' in os.environ:\n return 1\n else:\n # Travis appears to have a bunch of CPUs, but we can't use them all.\n if 'TRAVIS' in os.environ:\n return 2\n else:\n try:\n return multiprocessing.cpu_count()\n except NotImplementedError:\n return 1\n\n\ndef _shuffled(seq):\n \"\"\"Deterministically shuffle identically under both py2 + py3.\"\"\"\n fixed_random = random.Random()\n if six.PY2: # pragma: no cover (py2)\n fixed_random.seed(FIXED_RANDOM_SEED)\n else:\n fixed_random.seed(FIXED_RANDOM_SEED, version=1)\n\n seq = list(seq)\n 
random.shuffle(seq, random=fixed_random.random)\n return seq\n\n\ndef run_xargs(hook, cmd, file_args):\n # Shuffle the files so that they more evenly fill out the xargs partitions,\n # but do it deterministically in case a hook cares about ordering.\n file_args = _shuffled(file_args)\n return xargs(cmd, file_args, target_concurrency=target_concurrency(hook))\n", "path": "pre_commit/languages/helpers.py"}]} | 1,040 | 333 |
gh_patches_debug_9732 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-9452 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CT-3190] Pinning detective work
### Housekeeping
- [X] I am a maintainer of dbt-core
### Short description
We recently pinned `types-requests<2.31.0` because it had a dependency conflict with `urllib3`, which we have pinned to `~=1.0` because of another conflict with `requests` requiring `openssl`.
This ticket is to look into whether those pins are still required and to clean them up if not.
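One lightweight way to run that check is sketched below; it is not part of dbt-core, and it assumes the packages have been re-installed in a scratch venv *without* the old constraints (pair it with `pip check` to surface conflicts). The idea is to ask `packaging` whether the freely resolved versions would still satisfy the old pins:

```python
from importlib.metadata import version
from packaging.requirements import Requirement

# The pins under discussion.
old_pins = ["types-requests<2.31.0", "urllib3~=1.0"]

for spec in old_pins:
    req = Requirement(spec)
    installed = version(req.name)
    if installed in req.specifier:
        print(f"{req.name} {installed}: old pin '{req.specifier}' is satisfied anyway")
    else:
        print(f"{req.name} {installed}: old pin '{req.specifier}' still constrains the resolver")
```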
### Acceptance criteria
We have confirmed that the pins are
- required to continue to work
_or_
- not required and we have re-pinned appropriately
### Impact to Other Teams
adapters: based on the notes, it seems like `urllib3` is pinned for the snowflake adapter as well, so we will want to ensure changing the dependencies does not adversely affect them
### Will backports be required?
no
### Context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 8):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.8 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.8.0a1"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": ["dbt = dbt.cli.main:cli"],
47 },
48 install_requires=[
49 # ----
50 # dbt-core uses these packages deeply, throughout the codebase, and there have been breaking changes in past patch releases (even though these are major-version-one).
51 # Pin to the patch or minor version, and bump in each new minor version of dbt-core.
52 "agate~=1.7.0",
53 "Jinja2~=3.1.2",
54 "mashumaro[msgpack]~=3.9",
55 # ----
56 # Legacy: This package has not been updated since 2019, and it is unused in dbt's logging system (since v1.0)
57 # The dependency here will be removed along with the removal of 'legacy logging', in a future release of dbt-core
58 "logbook>=1.5,<1.6",
59 # ----
60 # dbt-core uses these packages in standard ways. Pin to the major version, and check compatibility
61 # with major versions in each new minor version of dbt-core.
62 "click>=8.0.2,<9",
63 "networkx>=2.3,<4",
64 # ----
65 # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)
66 # and check compatibility / bump in each new minor version of dbt-core.
67 "pathspec>=0.9,<0.12",
68 "sqlparse>=0.2.3,<0.5",
69 # ----
70 # These are major-version-0 packages also maintained by dbt-labs. Accept patches.
71 "dbt-extractor~=0.5.0",
72 "minimal-snowplow-tracker~=0.0.2",
73 "dbt-semantic-interfaces~=0.5.0a2",
74 "dbt-common~=0.1.0",
75 "dbt-adapters~=0.1.0a2",
76 # ----
77 # Expect compatibility with all new versions of these packages, so lower bounds only.
78 "packaging>20.9",
79 "protobuf>=4.0.0",
80 "pytz>=2015.7",
81 "pyyaml>=6.0",
82 "daff>=1.3.46",
83 "typing-extensions>=4.4",
84 # ----
85 ],
86 zip_safe=False,
87 classifiers=[
88 "Development Status :: 5 - Production/Stable",
89 "License :: OSI Approved :: Apache Software License",
90 "Operating System :: Microsoft :: Windows",
91 "Operating System :: MacOS :: MacOS X",
92 "Operating System :: POSIX :: Linux",
93 "Programming Language :: Python :: 3.8",
94 "Programming Language :: Python :: 3.9",
95 "Programming Language :: Python :: 3.10",
96 "Programming Language :: Python :: 3.11",
97 ],
98 python_requires=">=3.8",
99 )
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -61,6 +61,7 @@
# with major versions in each new minor version of dbt-core.
"click>=8.0.2,<9",
"networkx>=2.3,<4",
+ "requests<3.0.0", # should match dbt-common
# ----
# These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)
# and check compatibility / bump in each new minor version of dbt-core.
| {"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -61,6 +61,7 @@\n # with major versions in each new minor version of dbt-core.\n \"click>=8.0.2,<9\",\n \"networkx>=2.3,<4\",\n+ \"requests<3.0.0\", # should match dbt-common\n # ----\n # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)\n # and check compatibility / bump in each new minor version of dbt-core.\n", "issue": "[CT-3190] Pinning detective work\n### Housekeeping\n\n- [X] I am a maintainer of dbt-core\n\n### Short description\n\nWe recently pinned `types-requests<2.31.0` because it had a dependency conflict with `urllib3` which we have pinned to `~=1.0` because of another conflict with `requests` requiring `openssl`.\r\n\r\nThis ticket is to look into if those pins are still required and clean them up if not. \n\n### Acceptance criteria\n\nWe have confirmed that the pins are\r\n- required to continue to work\r\n_or_\r\n- not required and we have re-pinned appropriately\n\n### Impact to Other Teams\n\nadapters - based on the notes it seems like `urllib3` is pinned for the snowflake adapter as well so we will want to ensure changing the dependencies does not adversely affect them\n\n### Will backports be required?\n\nno\n\n### Context\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 8):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.8 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.8.0a1\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\"dbt = dbt.cli.main:cli\"],\n },\n install_requires=[\n # ----\n # dbt-core uses these packages deeply, throughout the codebase, and there have been breaking changes in past patch releases (even though these are major-version-one).\n # Pin to the patch or minor version, and bump in each new minor version of dbt-core.\n \"agate~=1.7.0\",\n \"Jinja2~=3.1.2\",\n \"mashumaro[msgpack]~=3.9\",\n # ----\n # Legacy: This package has not been updated since 2019, and it is unused in dbt's logging system (since v1.0)\n # The dependency here will be removed along with the removal of 'legacy logging', in a future release of dbt-core\n \"logbook>=1.5,<1.6\",\n # ----\n # dbt-core uses these packages in standard ways. 
Pin to the major version, and check compatibility\n # with major versions in each new minor version of dbt-core.\n \"click>=8.0.2,<9\",\n \"networkx>=2.3,<4\",\n # ----\n # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)\n # and check compatibility / bump in each new minor version of dbt-core.\n \"pathspec>=0.9,<0.12\",\n \"sqlparse>=0.2.3,<0.5\",\n # ----\n # These are major-version-0 packages also maintained by dbt-labs. Accept patches.\n \"dbt-extractor~=0.5.0\",\n \"minimal-snowplow-tracker~=0.0.2\",\n \"dbt-semantic-interfaces~=0.5.0a2\",\n \"dbt-common~=0.1.0\",\n \"dbt-adapters~=0.1.0a2\",\n # ----\n # Expect compatibility with all new versions of these packages, so lower bounds only.\n \"packaging>20.9\",\n \"protobuf>=4.0.0\",\n \"pytz>=2015.7\",\n \"pyyaml>=6.0\",\n \"daff>=1.3.46\",\n \"typing-extensions>=4.4\",\n # ----\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n python_requires=\">=3.8\",\n)\n", "path": "core/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 8):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.8 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.8.0a1\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\"dbt = dbt.cli.main:cli\"],\n },\n install_requires=[\n # ----\n # dbt-core uses these packages deeply, throughout the codebase, and there have been breaking changes in past patch releases (even though these are major-version-one).\n # Pin to the patch or minor version, and bump in each new minor version of dbt-core.\n \"agate~=1.7.0\",\n \"Jinja2~=3.1.2\",\n \"mashumaro[msgpack]~=3.9\",\n # ----\n # Legacy: This package has not been updated since 2019, and it is unused in dbt's logging system (since v1.0)\n # The dependency here will be removed along with the removal of 'legacy logging', in a future release of dbt-core\n \"logbook>=1.5,<1.6\",\n # ----\n # dbt-core uses these packages in standard ways. 
Pin to the major version, and check compatibility\n # with major versions in each new minor version of dbt-core.\n \"click>=8.0.2,<9\",\n \"networkx>=2.3,<4\",\n \"requests<3.0.0\", # should match dbt-common\n # ----\n # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)\n # and check compatibility / bump in each new minor version of dbt-core.\n \"pathspec>=0.9,<0.12\",\n \"sqlparse>=0.2.3,<0.5\",\n # ----\n # These are major-version-0 packages also maintained by dbt-labs. Accept patches.\n \"dbt-extractor~=0.5.0\",\n \"minimal-snowplow-tracker~=0.0.2\",\n \"dbt-semantic-interfaces~=0.5.0a2\",\n \"dbt-common~=0.1.0\",\n \"dbt-adapters~=0.1.0a2\",\n # ----\n # Expect compatibility with all new versions of these packages, so lower bounds only.\n \"packaging>20.9\",\n \"protobuf>=4.0.0\",\n \"pytz>=2015.7\",\n \"pyyaml>=6.0\",\n \"daff>=1.3.46\",\n \"typing-extensions>=4.4\",\n # ----\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n python_requires=\">=3.8\",\n)\n", "path": "core/setup.py"}]} | 1,608 | 138 |
gh_patches_debug_40781 | rasdani/github-patches | git_diff | bokeh__bokeh-6812 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inline, Minified Resources do not work in classic notebooks
This is due to an interaction with the classic notebook's use of jQuery when output is published as `text/html`. The new notebook code publishes a div and a script together as `text/html`. I propose to solve this by publishing a single script as `application/javascript` (which should work) that creates the necessary div itself.
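A minimal sketch of that proposal follows; it is illustrative only, not Bokeh's eventual implementation, and the JS attachment logic is an assumption about where the output script executes. The single `application/javascript` payload creates its own div, sidestepping the `text/html` path entirely:

```python
from IPython.display import publish_display_data
from bokeh.util.serialization import make_id

element_id = make_id()
js = """
(function() {
  var el = document.createElement("div");
  el.id = "%s";
  el.textContent = "Loading BokehJS ...";
  // Assumption: the snippet runs inside the cell's output area, so the
  // executing script's parent is a reasonable attachment point; fall back
  // to document.body if the script cannot be located.
  var script = document.currentScript || document.scripts[document.scripts.length - 1];
  (script ? script.parentNode : document.body).appendChild(el);
})();
""" % element_id

# Publishing one JS payload avoids the classic notebook's jQuery handling of text/html.
publish_display_data({"application/javascript": js})
```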
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/util/notebook.py`
Content:
```
1 ''' Functions useful for loading Bokeh code and data in Jupyter/Zeppelin notebooks.
2
3 '''
4 from __future__ import absolute_import
5
6 from IPython.display import publish_display_data
7
8 from ..embed import _wrap_in_script_tag
9
10 LOAD_MIME_TYPE = 'application/vnd.bokehjs_load.v0+json'
11 EXEC_MIME_TYPE = 'application/vnd.bokehjs_exec.v0+json'
12
13 _notebook_loaded = None
14
15 # TODO (bev) notebook_type and zeppelin bits should be removed after external zeppelin hook available
16 def load_notebook(resources=None, verbose=False, hide_banner=False, load_timeout=5000, notebook_type='jupyter'):
17 ''' Prepare the IPython notebook for displaying Bokeh plots.
18
19 Args:
20 resources (Resource, optional) :
21 how and where to load BokehJS from (default: CDN)
22
23 verbose (bool, optional) :
24 whether to report detailed settings (default: False)
25
26 hide_banner (bool, optional):
27 whether to hide the Bokeh banner (default: False)
28
29 load_timeout (int, optional) :
30 Timeout in milliseconds when plots assume load timed out (default: 5000)
31
32 notebook_type (string):
33 notebook_type (default: jupyter)
34
35 .. warning::
36 Clearing the output cell containing the published BokehJS
37 resources HTML code may cause Bokeh CSS styling to be removed.
38
39 Returns:
40 None
41
42 '''
43 nb_html, nb_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout)
44 lab_html, lab_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout, register_mimetype=False)
45 if notebook_type=='jupyter':
46 publish_display_data({'text/html': nb_html + _wrap_in_script_tag(nb_js),
47 LOAD_MIME_TYPE: {"script": lab_js, "div": lab_html}})
48 else:
49 _publish_zeppelin_data(lab_html, lab_js)
50
51
52 FINALIZE_JS = """
53 document.getElementById("%s").textContent = "BokehJS is loading...";
54 """
55
56 # TODO (bev) This will eventually go away
57 def _publish_zeppelin_data(html, js):
58 print('%html ' + html)
59 print('%html ' + '<script type="text/javascript">' + js + "</script>")
60
61 def _load_notebook_html(resources=None, verbose=False, hide_banner=False,
62 load_timeout=5000, register_mimetype=True):
63 global _notebook_loaded
64
65 from .. import __version__
66 from ..core.templates import AUTOLOAD_NB_JS, NOTEBOOK_LOAD
67 from ..util.serialization import make_id
68 from ..util.compiler import bundle_all_models
69 from ..resources import CDN
70
71 if resources is None:
72 resources = CDN
73
74 if resources.mode == 'inline':
75 js_info = 'inline'
76 css_info = 'inline'
77 else:
78 js_info = resources.js_files[0] if len(resources.js_files) == 1 else resources.js_files
79 css_info = resources.css_files[0] if len(resources.css_files) == 1 else resources.css_files
80
81 warnings = ["Warning: " + msg['text'] for msg in resources.messages if msg['type'] == 'warn']
82
83 if _notebook_loaded and verbose:
84 warnings.append('Warning: BokehJS previously loaded')
85
86 _notebook_loaded = resources
87
88 element_id = make_id()
89
90 html = NOTEBOOK_LOAD.render(
91 element_id = element_id,
92 verbose = verbose,
93 js_info = js_info,
94 css_info = css_info,
95 bokeh_version = __version__,
96 warnings = warnings,
97 hide_banner = hide_banner,
98 )
99
100 custom_models_js = bundle_all_models()
101
102 js = AUTOLOAD_NB_JS.render(
103 elementid = '' if hide_banner else element_id,
104 js_urls = resources.js_files,
105 css_urls = resources.css_files,
106 js_raw = resources.js_raw + [custom_models_js] + ([] if hide_banner else [FINALIZE_JS % element_id]),
107 css_raw = resources.css_raw_str,
108 force = True,
109 timeout = load_timeout,
110 register_mimetype = register_mimetype
111 )
112
113 return html, js
114
115 def get_comms(target_name):
116 ''' Create a Jupyter comms object for a specific target, that can
117 be used to update Bokeh documents in the Jupyter notebook.
118
119 Args:
120 target_name (str) : the target name the Comms object should connect to
121
122 Returns
123 Jupyter Comms
124
125 '''
126 from ipykernel.comm import Comm
127 return Comm(target_name=target_name, data={})
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bokeh/util/notebook.py b/bokeh/util/notebook.py
--- a/bokeh/util/notebook.py
+++ b/bokeh/util/notebook.py
@@ -5,8 +5,7 @@
from IPython.display import publish_display_data
-from ..embed import _wrap_in_script_tag
-
+JS_MIME_TYPE = 'application/javascript'
LOAD_MIME_TYPE = 'application/vnd.bokehjs_load.v0+json'
EXEC_MIME_TYPE = 'application/vnd.bokehjs_exec.v0+json'
@@ -40,33 +39,14 @@
None
'''
- nb_html, nb_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout)
- lab_html, lab_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout, register_mimetype=False)
- if notebook_type=='jupyter':
- publish_display_data({'text/html': nb_html + _wrap_in_script_tag(nb_js),
- LOAD_MIME_TYPE: {"script": lab_js, "div": lab_html}})
- else:
- _publish_zeppelin_data(lab_html, lab_js)
-
-FINALIZE_JS = """
-document.getElementById("%s").textContent = "BokehJS is loading...";
-"""
-
-# TODO (bev) This will eventually go away
-def _publish_zeppelin_data(html, js):
- print('%html ' + html)
- print('%html ' + '<script type="text/javascript">' + js + "</script>")
-
-def _load_notebook_html(resources=None, verbose=False, hide_banner=False,
- load_timeout=5000, register_mimetype=True):
global _notebook_loaded
from .. import __version__
- from ..core.templates import AUTOLOAD_NB_JS, NOTEBOOK_LOAD
+ from ..core.templates import NOTEBOOK_LOAD
from ..util.serialization import make_id
- from ..util.compiler import bundle_all_models
from ..resources import CDN
+ from ..util.compiler import bundle_all_models
if resources is None:
resources = CDN
@@ -99,18 +79,48 @@
custom_models_js = bundle_all_models()
+ nb_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=True)
+ jl_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=False)
+
+ if notebook_type=='jupyter':
+
+ if not hide_banner:
+ publish_display_data({'text/html': html})
+
+ publish_display_data({
+ JS_MIME_TYPE : nb_js,
+ LOAD_MIME_TYPE : {"script": jl_js}
+ })
+
+ else:
+ _publish_zeppelin_data(html, jl_js)
+
+
+FINALIZE_JS = """
+document.getElementById("%s").textContent = "BokehJS is loading...";
+"""
+
+# TODO (bev) This will eventually go away
+def _publish_zeppelin_data(html, js):
+ print('%html ' + html)
+ print('%html ' + '<script type="text/javascript">' + js + "</script>")
+
+def _loading_js(resources, element_id, custom_models_js, load_timeout=5000, register_mime=True):
+
+ from ..core.templates import AUTOLOAD_NB_JS
+
js = AUTOLOAD_NB_JS.render(
- elementid = '' if hide_banner else element_id,
- js_urls = resources.js_files,
- css_urls = resources.css_files,
- js_raw = resources.js_raw + [custom_models_js] + ([] if hide_banner else [FINALIZE_JS % element_id]),
- css_raw = resources.css_raw_str,
- force = True,
- timeout = load_timeout,
- register_mimetype = register_mimetype
+ elementid = element_id,
+ js_urls = resources.js_files,
+ css_urls = resources.css_files,
+ js_raw = resources.js_raw + [custom_models_js] + [FINALIZE_JS % element_id],
+ css_raw = resources.css_raw_str,
+ force = True,
+ timeout = load_timeout,
+ register_mime = register_mime
)
- return html, js
+ return js
def get_comms(target_name):
''' Create a Jupyter comms object for a specific target, that can
| {"golden_diff": "diff --git a/bokeh/util/notebook.py b/bokeh/util/notebook.py\n--- a/bokeh/util/notebook.py\n+++ b/bokeh/util/notebook.py\n@@ -5,8 +5,7 @@\n \n from IPython.display import publish_display_data\n \n-from ..embed import _wrap_in_script_tag\n-\n+JS_MIME_TYPE = 'application/javascript'\n LOAD_MIME_TYPE = 'application/vnd.bokehjs_load.v0+json'\n EXEC_MIME_TYPE = 'application/vnd.bokehjs_exec.v0+json'\n \n@@ -40,33 +39,14 @@\n None\n \n '''\n- nb_html, nb_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout)\n- lab_html, lab_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout, register_mimetype=False)\n- if notebook_type=='jupyter':\n- publish_display_data({'text/html': nb_html + _wrap_in_script_tag(nb_js),\n- LOAD_MIME_TYPE: {\"script\": lab_js, \"div\": lab_html}})\n- else:\n- _publish_zeppelin_data(lab_html, lab_js)\n \n-\n-FINALIZE_JS = \"\"\"\n-document.getElementById(\"%s\").textContent = \"BokehJS is loading...\";\n-\"\"\"\n-\n-# TODO (bev) This will eventually go away\n-def _publish_zeppelin_data(html, js):\n- print('%html ' + html)\n- print('%html ' + '<script type=\"text/javascript\">' + js + \"</script>\")\n-\n-def _load_notebook_html(resources=None, verbose=False, hide_banner=False,\n- load_timeout=5000, register_mimetype=True):\n global _notebook_loaded\n \n from .. import __version__\n- from ..core.templates import AUTOLOAD_NB_JS, NOTEBOOK_LOAD\n+ from ..core.templates import NOTEBOOK_LOAD\n from ..util.serialization import make_id\n- from ..util.compiler import bundle_all_models\n from ..resources import CDN\n+ from ..util.compiler import bundle_all_models\n \n if resources is None:\n resources = CDN\n@@ -99,18 +79,48 @@\n \n custom_models_js = bundle_all_models()\n \n+ nb_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=True)\n+ jl_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=False)\n+\n+ if notebook_type=='jupyter':\n+\n+ if not hide_banner:\n+ publish_display_data({'text/html': html})\n+\n+ publish_display_data({\n+ JS_MIME_TYPE : nb_js,\n+ LOAD_MIME_TYPE : {\"script\": jl_js}\n+ })\n+\n+ else:\n+ _publish_zeppelin_data(html, jl_js)\n+\n+\n+FINALIZE_JS = \"\"\"\n+document.getElementById(\"%s\").textContent = \"BokehJS is loading...\";\n+\"\"\"\n+\n+# TODO (bev) This will eventually go away\n+def _publish_zeppelin_data(html, js):\n+ print('%html ' + html)\n+ print('%html ' + '<script type=\"text/javascript\">' + js + \"</script>\")\n+\n+def _loading_js(resources, element_id, custom_models_js, load_timeout=5000, register_mime=True):\n+\n+ from ..core.templates import AUTOLOAD_NB_JS\n+\n js = AUTOLOAD_NB_JS.render(\n- elementid = '' if hide_banner else element_id,\n- js_urls = resources.js_files,\n- css_urls = resources.css_files,\n- js_raw = resources.js_raw + [custom_models_js] + ([] if hide_banner else [FINALIZE_JS % element_id]),\n- css_raw = resources.css_raw_str,\n- force = True,\n- timeout = load_timeout,\n- register_mimetype = register_mimetype\n+ elementid = element_id,\n+ js_urls = resources.js_files,\n+ css_urls = resources.css_files,\n+ js_raw = resources.js_raw + [custom_models_js] + [FINALIZE_JS % element_id],\n+ css_raw = resources.css_raw_str,\n+ force = True,\n+ timeout = load_timeout,\n+ register_mime = register_mime\n )\n \n- return html, js\n+ return js\n \n def get_comms(target_name):\n ''' Create a Jupyter comms object for a specific target, that can\n", "issue": "Inline, Minified Resources do not work in classic notebooks\nThis 
is due to an interaction with the classic notebooks use of JQuery, when output is published as `text/html`. New notebook code published a div and a script together as `text/html`. Propose to solve by publishing a single script as `application/javascript` (which should work) that creates the necessary div itself \n", "before_files": [{"content": "''' Functions useful for loading Bokeh code and data in Jupyter/Zeppelin notebooks.\n\n'''\nfrom __future__ import absolute_import\n\nfrom IPython.display import publish_display_data\n\nfrom ..embed import _wrap_in_script_tag\n\nLOAD_MIME_TYPE = 'application/vnd.bokehjs_load.v0+json'\nEXEC_MIME_TYPE = 'application/vnd.bokehjs_exec.v0+json'\n\n_notebook_loaded = None\n\n# TODO (bev) notebook_type and zeppelin bits should be removed after external zeppelin hook available\ndef load_notebook(resources=None, verbose=False, hide_banner=False, load_timeout=5000, notebook_type='jupyter'):\n ''' Prepare the IPython notebook for displaying Bokeh plots.\n\n Args:\n resources (Resource, optional) :\n how and where to load BokehJS from (default: CDN)\n\n verbose (bool, optional) :\n whether to report detailed settings (default: False)\n\n hide_banner (bool, optional):\n whether to hide the Bokeh banner (default: False)\n\n load_timeout (int, optional) :\n Timeout in milliseconds when plots assume load timed out (default: 5000)\n\n notebook_type (string):\n notebook_type (default: jupyter)\n\n .. warning::\n Clearing the output cell containing the published BokehJS\n resources HTML code may cause Bokeh CSS styling to be removed.\n\n Returns:\n None\n\n '''\n nb_html, nb_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout)\n lab_html, lab_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout, register_mimetype=False)\n if notebook_type=='jupyter':\n publish_display_data({'text/html': nb_html + _wrap_in_script_tag(nb_js),\n LOAD_MIME_TYPE: {\"script\": lab_js, \"div\": lab_html}})\n else:\n _publish_zeppelin_data(lab_html, lab_js)\n\n\nFINALIZE_JS = \"\"\"\ndocument.getElementById(\"%s\").textContent = \"BokehJS is loading...\";\n\"\"\"\n\n# TODO (bev) This will eventually go away\ndef _publish_zeppelin_data(html, js):\n print('%html ' + html)\n print('%html ' + '<script type=\"text/javascript\">' + js + \"</script>\")\n\ndef _load_notebook_html(resources=None, verbose=False, hide_banner=False,\n load_timeout=5000, register_mimetype=True):\n global _notebook_loaded\n\n from .. 
import __version__\n from ..core.templates import AUTOLOAD_NB_JS, NOTEBOOK_LOAD\n from ..util.serialization import make_id\n from ..util.compiler import bundle_all_models\n from ..resources import CDN\n\n if resources is None:\n resources = CDN\n\n if resources.mode == 'inline':\n js_info = 'inline'\n css_info = 'inline'\n else:\n js_info = resources.js_files[0] if len(resources.js_files) == 1 else resources.js_files\n css_info = resources.css_files[0] if len(resources.css_files) == 1 else resources.css_files\n\n warnings = [\"Warning: \" + msg['text'] for msg in resources.messages if msg['type'] == 'warn']\n\n if _notebook_loaded and verbose:\n warnings.append('Warning: BokehJS previously loaded')\n\n _notebook_loaded = resources\n\n element_id = make_id()\n\n html = NOTEBOOK_LOAD.render(\n element_id = element_id,\n verbose = verbose,\n js_info = js_info,\n css_info = css_info,\n bokeh_version = __version__,\n warnings = warnings,\n hide_banner = hide_banner,\n )\n\n custom_models_js = bundle_all_models()\n\n js = AUTOLOAD_NB_JS.render(\n elementid = '' if hide_banner else element_id,\n js_urls = resources.js_files,\n css_urls = resources.css_files,\n js_raw = resources.js_raw + [custom_models_js] + ([] if hide_banner else [FINALIZE_JS % element_id]),\n css_raw = resources.css_raw_str,\n force = True,\n timeout = load_timeout,\n register_mimetype = register_mimetype\n )\n\n return html, js\n\ndef get_comms(target_name):\n ''' Create a Jupyter comms object for a specific target, that can\n be used to update Bokeh documents in the Jupyter notebook.\n\n Args:\n target_name (str) : the target name the Comms object should connect to\n\n Returns\n Jupyter Comms\n\n '''\n from ipykernel.comm import Comm\n return Comm(target_name=target_name, data={})\n", "path": "bokeh/util/notebook.py"}], "after_files": [{"content": "''' Functions useful for loading Bokeh code and data in Jupyter/Zeppelin notebooks.\n\n'''\nfrom __future__ import absolute_import\n\nfrom IPython.display import publish_display_data\n\nJS_MIME_TYPE = 'application/javascript'\nLOAD_MIME_TYPE = 'application/vnd.bokehjs_load.v0+json'\nEXEC_MIME_TYPE = 'application/vnd.bokehjs_exec.v0+json'\n\n_notebook_loaded = None\n\n# TODO (bev) notebook_type and zeppelin bits should be removed after external zeppelin hook available\ndef load_notebook(resources=None, verbose=False, hide_banner=False, load_timeout=5000, notebook_type='jupyter'):\n ''' Prepare the IPython notebook for displaying Bokeh plots.\n\n Args:\n resources (Resource, optional) :\n how and where to load BokehJS from (default: CDN)\n\n verbose (bool, optional) :\n whether to report detailed settings (default: False)\n\n hide_banner (bool, optional):\n whether to hide the Bokeh banner (default: False)\n\n load_timeout (int, optional) :\n Timeout in milliseconds when plots assume load timed out (default: 5000)\n\n notebook_type (string):\n notebook_type (default: jupyter)\n\n .. warning::\n Clearing the output cell containing the published BokehJS\n resources HTML code may cause Bokeh CSS styling to be removed.\n\n Returns:\n None\n\n '''\n\n global _notebook_loaded\n\n from .. 
import __version__\n from ..core.templates import NOTEBOOK_LOAD\n from ..util.serialization import make_id\n from ..resources import CDN\n from ..util.compiler import bundle_all_models\n\n if resources is None:\n resources = CDN\n\n if resources.mode == 'inline':\n js_info = 'inline'\n css_info = 'inline'\n else:\n js_info = resources.js_files[0] if len(resources.js_files) == 1 else resources.js_files\n css_info = resources.css_files[0] if len(resources.css_files) == 1 else resources.css_files\n\n warnings = [\"Warning: \" + msg['text'] for msg in resources.messages if msg['type'] == 'warn']\n\n if _notebook_loaded and verbose:\n warnings.append('Warning: BokehJS previously loaded')\n\n _notebook_loaded = resources\n\n element_id = make_id()\n\n html = NOTEBOOK_LOAD.render(\n element_id = element_id,\n verbose = verbose,\n js_info = js_info,\n css_info = css_info,\n bokeh_version = __version__,\n warnings = warnings,\n hide_banner = hide_banner,\n )\n\n custom_models_js = bundle_all_models()\n\n nb_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=True)\n jl_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=False)\n\n if notebook_type=='jupyter':\n\n if not hide_banner:\n publish_display_data({'text/html': html})\n\n publish_display_data({\n JS_MIME_TYPE : nb_js,\n LOAD_MIME_TYPE : {\"script\": jl_js}\n })\n\n else:\n _publish_zeppelin_data(html, jl_js)\n\n\nFINALIZE_JS = \"\"\"\ndocument.getElementById(\"%s\").textContent = \"BokehJS is loading...\";\n\"\"\"\n\n# TODO (bev) This will eventually go away\ndef _publish_zeppelin_data(html, js):\n print('%html ' + html)\n print('%html ' + '<script type=\"text/javascript\">' + js + \"</script>\")\n\ndef _loading_js(resources, element_id, custom_models_js, load_timeout=5000, register_mime=True):\n\n from ..core.templates import AUTOLOAD_NB_JS\n\n js = AUTOLOAD_NB_JS.render(\n elementid = element_id,\n js_urls = resources.js_files,\n css_urls = resources.css_files,\n js_raw = resources.js_raw + [custom_models_js] + [FINALIZE_JS % element_id],\n css_raw = resources.css_raw_str,\n force = True,\n timeout = load_timeout,\n register_mime = register_mime\n )\n\n return js\n\ndef get_comms(target_name):\n ''' Create a Jupyter comms object for a specific target, that can\n be used to update Bokeh documents in the Jupyter notebook.\n\n Args:\n target_name (str) : the target name the Comms object should connect to\n\n Returns\n Jupyter Comms\n\n '''\n from ipykernel.comm import Comm\n return Comm(target_name=target_name, data={})\n", "path": "bokeh/util/notebook.py"}]} | 1,623 | 963 |
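The golden diff in the record above replaces bokeh's combined HTML-plus-`<script>` publish with separate, MIME-typed `publish_display_data` calls, so the classic notebook executes the loader JS directly instead of injecting it through jQuery. Below is a minimal sketch of that publishing pattern; it assumes an active IPython/Jupyter kernel, and the HTML/JS strings are placeholders rather than Bokeh's real `NOTEBOOK_LOAD` / `AUTOLOAD_NB_JS` templates.

```python
# Minimal sketch of the MIME-typed publishing pattern from the diff above.
# Assumes an active IPython/Jupyter kernel; the HTML/JS strings below are
# placeholders, not Bokeh's real NOTEBOOK_LOAD / AUTOLOAD_NB_JS templates.
from IPython.display import publish_display_data

JS_MIME_TYPE = "application/javascript"
LOAD_MIME_TYPE = "application/vnd.bokehjs_load.v0+json"

banner_html = "<div id='bk-banner'>BokehJS is loading...</div>"
notebook_js = "console.log('classic notebook loader ran');"
jupyterlab_js = "console.log('jupyterlab loader consumed');"

# The banner is published on its own as text/html ...
publish_display_data({"text/html": banner_html})

# ... while the loader JS goes out as application/javascript, so the classic
# notebook executes it directly instead of injecting a <script> tag via jQuery.
publish_display_data({
    JS_MIME_TYPE: notebook_js,
    LOAD_MIME_TYPE: {"script": jupyterlab_js},
})
```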
gh_patches_debug_59717 | rasdani/github-patches | git_diff | pytorch__audio-1339 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Making `AudioMetaData` print friendly
The `AudioMetaData` class reports metadata about an audio source. It is, however, not print-friendly.
```python
print(torchaudio.info(src))
>>> <torchaudio.backend.common.AudioMetaData object at 0x7f1bc5cd2890>
```
It is nice if we can simply print the attributes like `dataclass` objects do.
```python
print(torchaudio.info(src))
>>> AudioMetaData(sample_rate=900, encoding="PCM", ...)
```
## Steps
There are two approaches I can think of:
1. Add `__str__` method.
2. Use `dataclasses.dataclass`
For 2, the `info` function has to be TorchScript-compatible. This means that its return type `AudioMetaData` has to be TorchScript-able. For this reason, `dataclass` might not be applicable. This can be checked with the following test:
```bash
(cd test && pytest torchaudio_unittest/backend/sox_io/torchscript_test.py)
```
## Build and test
Please refer to the [contribution guide](https://github.com/pytorch/audio/blob/master/CONTRIBUTING.md) for how to set up a development environment.
To test,
```bash
(cd test && pytest torchaudio_unittest/backend/sox_io/torchscript_test.py torchaudio_unittest/backend/sox_io/info_test.py torchaudio_unittest/backend/soundfile_io/info_test.py)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchaudio/backend/common.py`
Content:
```
1 class AudioMetaData:
2 """Return type of ``torchaudio.info`` function.
3
4 This class is used by :ref:`"sox_io" backend<sox_io_backend>` and
5 :ref:`"soundfile" backend with the new interface<soundfile_backend>`.
6
7 :ivar int sample_rate: Sample rate
8 :ivar int num_frames: The number of frames
9 :ivar int num_channels: The number of channels
10 :ivar int bits_per_sample: The number of bits per sample. This is 0 for lossy formats,
11 or when it cannot be accurately inferred.
12 :ivar str encoding: Audio encoding
13 The values encoding can take are one of the following:
14
15 * ``PCM_S``: Signed integer linear PCM
16 * ``PCM_U``: Unsigned integer linear PCM
17 * ``PCM_F``: Floating point linear PCM
18 * ``FLAC``: Flac, Free Lossless Audio Codec
19 * ``ULAW``: Mu-law
20 * ``ALAW``: A-law
21 * ``MP3`` : MP3, MPEG-1 Audio Layer III
22 * ``VORBIS``: OGG Vorbis
23 * ``AMR_WB``: Adaptive Multi-Rate
24 * ``AMR_NB``: Adaptive Multi-Rate Wideband
25 * ``OPUS``: Opus
26 * ``UNKNOWN`` : None of above
27 """
28 def __init__(
29 self,
30 sample_rate: int,
31 num_frames: int,
32 num_channels: int,
33 bits_per_sample: int,
34 encoding: str,
35 ):
36 self.sample_rate = sample_rate
37 self.num_frames = num_frames
38 self.num_channels = num_channels
39 self.bits_per_sample = bits_per_sample
40 self.encoding = encoding
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchaudio/backend/common.py b/torchaudio/backend/common.py
--- a/torchaudio/backend/common.py
+++ b/torchaudio/backend/common.py
@@ -38,3 +38,14 @@
self.num_channels = num_channels
self.bits_per_sample = bits_per_sample
self.encoding = encoding
+
+ def __str__(self):
+ return (
+ f"AudioMetaData("
+ f"sample_rate={self.sample_rate}, "
+ f"num_frames={self.num_frames}, "
+ f"num_channels={self.num_channels}, "
+ f"bits_per_sample={self.bits_per_sample}, "
+ f"encoding={self.encoding}"
+ f")"
+ )
| {"golden_diff": "diff --git a/torchaudio/backend/common.py b/torchaudio/backend/common.py\n--- a/torchaudio/backend/common.py\n+++ b/torchaudio/backend/common.py\n@@ -38,3 +38,14 @@\n self.num_channels = num_channels\n self.bits_per_sample = bits_per_sample\n self.encoding = encoding\n+\n+ def __str__(self):\n+ return (\n+ f\"AudioMetaData(\"\n+ f\"sample_rate={self.sample_rate}, \"\n+ f\"num_frames={self.num_frames}, \"\n+ f\"num_channels={self.num_channels}, \"\n+ f\"bits_per_sample={self.bits_per_sample}, \"\n+ f\"encoding={self.encoding}\"\n+ f\")\"\n+ )\n", "issue": "Making `AudioMetaData` print friendly\n`AudioMetaData` class reports meta-data of audio source. It is however not print friendly.\r\n\r\n```python\r\nprint(torchaudio.info(src))\r\n>>> <torchaudio.backend.common.AudioMetaData object at 0x7f1bc5cd2890>\r\n```\r\n\r\nIt is nice if we can simply print the attributes like `dataclass` objects do.\r\n\r\n```python\r\nprint(torchaudio.info(src))\r\n>>> AudioMetaData(sample_rate=900, encoding=\"PCM\", ...)\r\n```\r\n\r\n## Steps\r\n\r\nThere are two approaches I can think of\r\n1. Add `__str__` method.\r\n2. Use `dataclasses.dataclass`\r\n\r\nFor 2, the `info` function has to be TorchScript-compatible. This means that its return type `AudioMetaData` has to be TorchScript-able. For this reason, `dataclass` might not be applicable. This can be checked with the following test;\r\n\r\n```bash\r\n(cd test && pytest torchaudio_unittest/backend/sox_io/torchscript_test.py)\r\n```\r\n\r\n## Build and test\r\n\r\nPlease refer to the [contribution guide](https://github.com/pytorch/audio/blob/master/CONTRIBUTING.md) for how to setup development environment.\r\n\r\nTo test, \r\n\r\n```bash\r\n(cd test && pytest torchaudio_unittest/backend/sox_io/torchscript_test.py torchaudio_unittest/backend/sox_io/info_test.py torchaudio_unittest/backend/soundfile_io/info_test.py)\r\n```\n", "before_files": [{"content": "class AudioMetaData:\n \"\"\"Return type of ``torchaudio.info`` function.\n\n This class is used by :ref:`\"sox_io\" backend<sox_io_backend>` and\n :ref:`\"soundfile\" backend with the new interface<soundfile_backend>`.\n\n :ivar int sample_rate: Sample rate\n :ivar int num_frames: The number of frames\n :ivar int num_channels: The number of channels\n :ivar int bits_per_sample: The number of bits per sample. 
This is 0 for lossy formats,\n or when it cannot be accurately inferred.\n :ivar str encoding: Audio encoding\n The values encoding can take are one of the following:\n\n * ``PCM_S``: Signed integer linear PCM\n * ``PCM_U``: Unsigned integer linear PCM\n * ``PCM_F``: Floating point linear PCM\n * ``FLAC``: Flac, Free Lossless Audio Codec\n * ``ULAW``: Mu-law\n * ``ALAW``: A-law\n * ``MP3`` : MP3, MPEG-1 Audio Layer III\n * ``VORBIS``: OGG Vorbis\n * ``AMR_WB``: Adaptive Multi-Rate\n * ``AMR_NB``: Adaptive Multi-Rate Wideband\n * ``OPUS``: Opus\n * ``UNKNOWN`` : None of above\n \"\"\"\n def __init__(\n self,\n sample_rate: int,\n num_frames: int,\n num_channels: int,\n bits_per_sample: int,\n encoding: str,\n ):\n self.sample_rate = sample_rate\n self.num_frames = num_frames\n self.num_channels = num_channels\n self.bits_per_sample = bits_per_sample\n self.encoding = encoding\n", "path": "torchaudio/backend/common.py"}], "after_files": [{"content": "class AudioMetaData:\n \"\"\"Return type of ``torchaudio.info`` function.\n\n This class is used by :ref:`\"sox_io\" backend<sox_io_backend>` and\n :ref:`\"soundfile\" backend with the new interface<soundfile_backend>`.\n\n :ivar int sample_rate: Sample rate\n :ivar int num_frames: The number of frames\n :ivar int num_channels: The number of channels\n :ivar int bits_per_sample: The number of bits per sample. This is 0 for lossy formats,\n or when it cannot be accurately inferred.\n :ivar str encoding: Audio encoding\n The values encoding can take are one of the following:\n\n * ``PCM_S``: Signed integer linear PCM\n * ``PCM_U``: Unsigned integer linear PCM\n * ``PCM_F``: Floating point linear PCM\n * ``FLAC``: Flac, Free Lossless Audio Codec\n * ``ULAW``: Mu-law\n * ``ALAW``: A-law\n * ``MP3`` : MP3, MPEG-1 Audio Layer III\n * ``VORBIS``: OGG Vorbis\n * ``AMR_WB``: Adaptive Multi-Rate\n * ``AMR_NB``: Adaptive Multi-Rate Wideband\n * ``OPUS``: Opus\n * ``UNKNOWN`` : None of above\n \"\"\"\n def __init__(\n self,\n sample_rate: int,\n num_frames: int,\n num_channels: int,\n bits_per_sample: int,\n encoding: str,\n ):\n self.sample_rate = sample_rate\n self.num_frames = num_frames\n self.num_channels = num_channels\n self.bits_per_sample = bits_per_sample\n self.encoding = encoding\n\n def __str__(self):\n return (\n f\"AudioMetaData(\"\n f\"sample_rate={self.sample_rate}, \"\n f\"num_frames={self.num_frames}, \"\n f\"num_channels={self.num_channels}, \"\n f\"bits_per_sample={self.bits_per_sample}, \"\n f\"encoding={self.encoding}\"\n f\")\"\n )\n", "path": "torchaudio/backend/common.py"}]} | 1,034 | 163 |
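The record above settles on approach 1 from the issue (an explicit `__str__`), since TorchScript must be able to script code that returns `AudioMetaData`, which rules out `dataclasses.dataclass`. Here is a self-contained copy of the patched class, trimmed to the constructor and the new method, so the print behaviour can be checked without installing torchaudio:

```python
# Self-contained copy of the patched class (fields plus the new __str__),
# so the print behaviour can be verified without torchaudio installed.
class AudioMetaData:
    def __init__(self, sample_rate: int, num_frames: int, num_channels: int,
                 bits_per_sample: int, encoding: str):
        self.sample_rate = sample_rate
        self.num_frames = num_frames
        self.num_channels = num_channels
        self.bits_per_sample = bits_per_sample
        self.encoding = encoding

    def __str__(self):
        # Mirrors the golden diff: attribute dump in repr-like form.
        return (
            f"AudioMetaData("
            f"sample_rate={self.sample_rate}, "
            f"num_frames={self.num_frames}, "
            f"num_channels={self.num_channels}, "
            f"bits_per_sample={self.bits_per_sample}, "
            f"encoding={self.encoding}"
            f")"
        )

print(AudioMetaData(44100, 16000, 2, 16, "PCM_S"))
# -> AudioMetaData(sample_rate=44100, num_frames=16000, num_channels=2,
#    bits_per_sample=16, encoding=PCM_S)
```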
gh_patches_debug_4769 | rasdani/github-patches | git_diff | spotify__luigi-1447 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scheduler only hosts on unix socket when run in the background
Support for hosting the central scheduler on a unix socket was added, which is nice, but the scheduler ignores the `--unix-socket` argument from the command line when `--background` is not also supplied.
This will work properly, and the scheduler will listen on the provided unix socket:
```
luigid --unix-socket /path/to/socket --background
```
With this command, the scheduler will still listen on the default port (8082):
```
luigid --unix-socket /path/to/socket
```
Fixing this would be a simple matter of passing the `unix_socket` argument on to the call to `server.run` in the case where the server is not daemonized, but was there a reason this functionality was left out in the first place? If so, it probably ought to be in the documentation; as is, reading it gives me the impression that I should be able to tell the scheduler to listen on a unix socket regardless of whether it's running in the background.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `luigi/cmdline.py`
Content:
```
1 import os
2 import argparse
3 import logging
4 import sys
5
6 from luigi.retcodes import run_with_retcodes
7
8
9 def luigi_run(argv=sys.argv[1:]):
10 run_with_retcodes(argv)
11
12
13 def luigid(argv=sys.argv[1:]):
14 import luigi.server
15 import luigi.process
16 import luigi.configuration
17 parser = argparse.ArgumentParser(description=u'Central luigi server')
18 parser.add_argument(u'--background', help=u'Run in background mode', action='store_true')
19 parser.add_argument(u'--pidfile', help=u'Write pidfile')
20 parser.add_argument(u'--logdir', help=u'log directory')
21 parser.add_argument(u'--state-path', help=u'Pickled state file')
22 parser.add_argument(u'--address', help=u'Listening interface')
23 parser.add_argument(u'--unix-socket', help=u'Unix socket path')
24 parser.add_argument(u'--port', default=8082, help=u'Listening port')
25
26 opts = parser.parse_args(argv)
27
28 if opts.state_path:
29 config = luigi.configuration.get_config()
30 config.set('scheduler', 'state_path', opts.state_path)
31
32 if opts.background:
33 # daemonize sets up logging to spooled log files
34 logging.getLogger().setLevel(logging.INFO)
35 luigi.process.daemonize(luigi.server.run, api_port=opts.port,
36 address=opts.address, pidfile=opts.pidfile,
37 logdir=opts.logdir, unix_socket=opts.unix_socket)
38 else:
39 if opts.logdir:
40 logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format(),
41 filename=os.path.join(opts.logdir, "luigi-server.log"))
42 else:
43 logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format())
44 luigi.server.run(api_port=opts.port, address=opts.address)
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/luigi/cmdline.py b/luigi/cmdline.py
--- a/luigi/cmdline.py
+++ b/luigi/cmdline.py
@@ -41,4 +41,4 @@
filename=os.path.join(opts.logdir, "luigi-server.log"))
else:
logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format())
- luigi.server.run(api_port=opts.port, address=opts.address)
+ luigi.server.run(api_port=opts.port, address=opts.address, unix_socket=opts.unix_socket)
| {"golden_diff": "diff --git a/luigi/cmdline.py b/luigi/cmdline.py\n--- a/luigi/cmdline.py\n+++ b/luigi/cmdline.py\n@@ -41,4 +41,4 @@\n filename=os.path.join(opts.logdir, \"luigi-server.log\"))\n else:\n logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format())\n- luigi.server.run(api_port=opts.port, address=opts.address)\n+ luigi.server.run(api_port=opts.port, address=opts.address, unix_socket=opts.unix_socket)\n", "issue": "Scheduler only hosts on unix socket when run in the background\nSupport for hosting the central scheduler on a unix socket was added, which is nice, but the scheduler ignores the `--unix-socket` argument from the command line when `--background` is not also supplied. \n\nThis will work properly, and the scheduler will listen on the provided unix socket:\n\n```\nluigid --unix-socket /path/to/socket --background\n```\n\nWith this command, the scheduler will still listen on the default port (8082):\n\n```\nluigid --unix-socket /path/to/socket\n```\n\nFixing this would be a simple matter of passing the `unix_socket` argument onto the call to `server.run` in the case where the server is not daemonized, but was there a reason this functionality was left out in the first place? If so, it probably ought to be in the documentation; as is, reading it gives me the impression that I should be able to tell the scheduler to listen on a unix socket regardless of whether it's running in the background.\n\n", "before_files": [{"content": "import os\nimport argparse\nimport logging\nimport sys\n\nfrom luigi.retcodes import run_with_retcodes\n\n\ndef luigi_run(argv=sys.argv[1:]):\n run_with_retcodes(argv)\n\n\ndef luigid(argv=sys.argv[1:]):\n import luigi.server\n import luigi.process\n import luigi.configuration\n parser = argparse.ArgumentParser(description=u'Central luigi server')\n parser.add_argument(u'--background', help=u'Run in background mode', action='store_true')\n parser.add_argument(u'--pidfile', help=u'Write pidfile')\n parser.add_argument(u'--logdir', help=u'log directory')\n parser.add_argument(u'--state-path', help=u'Pickled state file')\n parser.add_argument(u'--address', help=u'Listening interface')\n parser.add_argument(u'--unix-socket', help=u'Unix socket path')\n parser.add_argument(u'--port', default=8082, help=u'Listening port')\n\n opts = parser.parse_args(argv)\n\n if opts.state_path:\n config = luigi.configuration.get_config()\n config.set('scheduler', 'state_path', opts.state_path)\n\n if opts.background:\n # daemonize sets up logging to spooled log files\n logging.getLogger().setLevel(logging.INFO)\n luigi.process.daemonize(luigi.server.run, api_port=opts.port,\n address=opts.address, pidfile=opts.pidfile,\n logdir=opts.logdir, unix_socket=opts.unix_socket)\n else:\n if opts.logdir:\n logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format(),\n filename=os.path.join(opts.logdir, \"luigi-server.log\"))\n else:\n logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format())\n luigi.server.run(api_port=opts.port, address=opts.address)\n", "path": "luigi/cmdline.py"}], "after_files": [{"content": "import os\nimport argparse\nimport logging\nimport sys\n\nfrom luigi.retcodes import run_with_retcodes\n\n\ndef luigi_run(argv=sys.argv[1:]):\n run_with_retcodes(argv)\n\n\ndef luigid(argv=sys.argv[1:]):\n import luigi.server\n import luigi.process\n import luigi.configuration\n parser = argparse.ArgumentParser(description=u'Central luigi server')\n parser.add_argument(u'--background', help=u'Run in background 
mode', action='store_true')\n parser.add_argument(u'--pidfile', help=u'Write pidfile')\n parser.add_argument(u'--logdir', help=u'log directory')\n parser.add_argument(u'--state-path', help=u'Pickled state file')\n parser.add_argument(u'--address', help=u'Listening interface')\n parser.add_argument(u'--unix-socket', help=u'Unix socket path')\n parser.add_argument(u'--port', default=8082, help=u'Listening port')\n\n opts = parser.parse_args(argv)\n\n if opts.state_path:\n config = luigi.configuration.get_config()\n config.set('scheduler', 'state_path', opts.state_path)\n\n if opts.background:\n # daemonize sets up logging to spooled log files\n logging.getLogger().setLevel(logging.INFO)\n luigi.process.daemonize(luigi.server.run, api_port=opts.port,\n address=opts.address, pidfile=opts.pidfile,\n logdir=opts.logdir, unix_socket=opts.unix_socket)\n else:\n if opts.logdir:\n logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format(),\n filename=os.path.join(opts.logdir, \"luigi-server.log\"))\n else:\n logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format())\n luigi.server.run(api_port=opts.port, address=opts.address, unix_socket=opts.unix_socket)\n", "path": "luigi/cmdline.py"}]} | 968 | 124 |
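The luigi fix is a one-argument change: forward `opts.unix_socket` into `luigi.server.run` on the foreground path, exactly as the daemonized path already did. For context, here is a hedged sketch of how a Tornado-based `run()` can dispatch between a unix socket and a TCP port; this is a simplification for illustration, not luigi's real server module.

```python
# Hedged sketch of the dispatch the one-line fix relies on. This is a
# simplification for illustration, not luigi's real server module.
import tornado.httpserver
import tornado.ioloop
import tornado.netutil
import tornado.web

def run(api_port=8082, address=None, unix_socket=None):
    app = tornado.web.Application([])  # placeholder for luigi's handlers
    server = tornado.httpserver.HTTPServer(app)
    if unix_socket is not None:
        # Bind the unix domain socket; no TCP port is opened in this case.
        server.add_socket(tornado.netutil.bind_unix_socket(unix_socket))
    else:
        server.listen(api_port, address=address or "0.0.0.0")
    tornado.ioloop.IOLoop.current().start()
```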
gh_patches_debug_20043 | rasdani/github-patches | git_diff | archlinux__archinstall-66 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue installing package groups (kde-applications for instance)
As mentioned in #61, support for package groups doesn't work.
The idea here is that they should be supported; we simply never verified that the [archinstall.find_package()](https://github.com/Torxed/archinstall/blob/master/archinstall/lib/packages.py#L7-L17) function can validate them, and apparently it can't. So we have to use another API endpoint (or several) to support this.
*The backplane supports it already, as the packages are sent as an unfiltered string to `pacman -S`, more or less.*
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/packages.py`
Content:
```
1 import urllib.request, urllib.parse
2 import ssl, json
3 from .exceptions import *
4
5 BASE_URL = 'https://www.archlinux.org/packages/search/json/?name={package}'
6
7 def find_package(name):
8 """
9 Finds a specific package via the package database.
10 It makes a simple web-request, which might be a bit slow.
11 """
12 ssl_context = ssl.create_default_context()
13 ssl_context.check_hostname = False
14 ssl_context.verify_mode = ssl.CERT_NONE
15 response = urllib.request.urlopen(BASE_URL.format(package=name), context=ssl_context)
16 data = response.read().decode('UTF-8')
17 return json.loads(data)
18
19 def find_packages(*names):
20 """
21 This function returns the search results for many packages.
22 	The function itself is rather slow, so consider not sending too
23 many packages to the search query.
24 """
25 result = {}
26 for package in names:
27 result[package] = find_package(package)
28 return result
29
30 def validate_package_list(packages :list):
31 """
32 Validates a list of given packages.
33 Raises `RequirementError` if one or more packages are not found.
34 """
35 invalid_packages = []
36 for package in packages:
37 if not find_package(package)['results']:
38 invalid_packages.append(package)
39
40 if invalid_packages:
41 raise RequirementError(f"Invalid package names: {invalid_packages}")
42
43 return True
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/archinstall/lib/packages.py b/archinstall/lib/packages.py
--- a/archinstall/lib/packages.py
+++ b/archinstall/lib/packages.py
@@ -3,6 +3,23 @@
from .exceptions import *
BASE_URL = 'https://www.archlinux.org/packages/search/json/?name={package}'
+BASE_GROUP_URL = 'https://www.archlinux.org/groups/x86_64/{group}/'
+
+def find_group(name):
+ ssl_context = ssl.create_default_context()
+ ssl_context.check_hostname = False
+ ssl_context.verify_mode = ssl.CERT_NONE
+ try:
+ response = urllib.request.urlopen(BASE_GROUP_URL.format(group=name), context=ssl_context)
+ except urllib.error.HTTPError as err:
+ if err.code == 404:
+ return False
+ else:
+ raise err
+
+ # Just to be sure some code didn't slip through the exception
+ if response.code == 200:
+ return True
def find_package(name):
"""
@@ -34,7 +51,7 @@
"""
invalid_packages = []
for package in packages:
- if not find_package(package)['results']:
+ if not find_package(package)['results'] and not find_group(package):
invalid_packages.append(package)
if invalid_packages:
| {"golden_diff": "diff --git a/archinstall/lib/packages.py b/archinstall/lib/packages.py\n--- a/archinstall/lib/packages.py\n+++ b/archinstall/lib/packages.py\n@@ -3,6 +3,23 @@\n from .exceptions import *\n \n BASE_URL = 'https://www.archlinux.org/packages/search/json/?name={package}'\n+BASE_GROUP_URL = 'https://www.archlinux.org/groups/x86_64/{group}/'\n+\n+def find_group(name):\n+\tssl_context = ssl.create_default_context()\n+\tssl_context.check_hostname = False\n+\tssl_context.verify_mode = ssl.CERT_NONE\n+\ttry:\n+\t\tresponse = urllib.request.urlopen(BASE_GROUP_URL.format(group=name), context=ssl_context)\n+\texcept urllib.error.HTTPError as err:\n+\t\tif err.code == 404:\n+\t\t\treturn False\n+\t\telse:\n+\t\t\traise err\n+\t\n+\t# Just to be sure some code didn't slip through the exception\n+\tif response.code == 200:\n+\t\treturn True\n \n def find_package(name):\n \t\"\"\"\n@@ -34,7 +51,7 @@\n \t\"\"\"\n \tinvalid_packages = []\n \tfor package in packages:\n-\t\tif not find_package(package)['results']:\n+\t\tif not find_package(package)['results'] and not find_group(package):\n \t\t\tinvalid_packages.append(package)\n \t\n \tif invalid_packages:\n", "issue": "Issue installing package groups (kde-applications for instance)\nAs mentioned in #61, support for package groups doesn't work.\r\nThe idea here is that it should be supported, we simply never verified that the [archinstall.find_package()](https://github.com/Torxed/archinstall/blob/master/archinstall/lib/packages.py#L7-L17) function can verify those, and apparently it can't. So we have to use another API endpoint or multiple to support this.\r\n\r\n*The backplane supports it already, as the packages are sent as a unfiltered string to `pacman -S` more or less.*\n", "before_files": [{"content": "import urllib.request, urllib.parse\nimport ssl, json\nfrom .exceptions import *\n\nBASE_URL = 'https://www.archlinux.org/packages/search/json/?name={package}'\n\ndef find_package(name):\n\t\"\"\"\n\tFinds a specific package via the package database.\n\tIt makes a simple web-request, which might be a bit slow.\n\t\"\"\"\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\tresponse = urllib.request.urlopen(BASE_URL.format(package=name), context=ssl_context)\n\tdata = response.read().decode('UTF-8')\n\treturn json.loads(data)\n\ndef find_packages(*names):\n\t\"\"\"\n\tThis function returns the search results for many packages.\n\tThe function itself is rather slow, so consider not sending to\n\tmany packages to the search query.\n\t\"\"\"\n\tresult = {}\n\tfor package in names:\n\t\tresult[package] = find_package(package)\n\treturn result\n\ndef validate_package_list(packages :list):\n\t\"\"\"\n\tValidates a list of given packages.\n\tRaises `RequirementError` if one or more packages are not found.\n\t\"\"\"\n\tinvalid_packages = []\n\tfor package in packages:\n\t\tif not find_package(package)['results']:\n\t\t\tinvalid_packages.append(package)\n\t\n\tif invalid_packages:\n\t\traise RequirementError(f\"Invalid package names: {invalid_packages}\")\n\n\treturn True", "path": "archinstall/lib/packages.py"}], "after_files": [{"content": "import urllib.request, urllib.parse\nimport ssl, json\nfrom .exceptions import *\n\nBASE_URL = 'https://www.archlinux.org/packages/search/json/?name={package}'\nBASE_GROUP_URL = 'https://www.archlinux.org/groups/x86_64/{group}/'\n\ndef find_group(name):\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = 
False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\ttry:\n\t\tresponse = urllib.request.urlopen(BASE_GROUP_URL.format(group=name), context=ssl_context)\n\texcept urllib.error.HTTPError as err:\n\t\tif err.code == 404:\n\t\t\treturn False\n\t\telse:\n\t\t\traise err\n\t\n\t# Just to be sure some code didn't slip through the exception\n\tif response.code == 200:\n\t\treturn True\n\ndef find_package(name):\n\t\"\"\"\n\tFinds a specific package via the package database.\n\tIt makes a simple web-request, which might be a bit slow.\n\t\"\"\"\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\tresponse = urllib.request.urlopen(BASE_URL.format(package=name), context=ssl_context)\n\tdata = response.read().decode('UTF-8')\n\treturn json.loads(data)\n\ndef find_packages(*names):\n\t\"\"\"\n\tThis function returns the search results for many packages.\n\tThe function itself is rather slow, so consider not sending to\n\tmany packages to the search query.\n\t\"\"\"\n\tresult = {}\n\tfor package in names:\n\t\tresult[package] = find_package(package)\n\treturn result\n\ndef validate_package_list(packages :list):\n\t\"\"\"\n\tValidates a list of given packages.\n\tRaises `RequirementError` if one or more packages are not found.\n\t\"\"\"\n\tinvalid_packages = []\n\tfor package in packages:\n\t\tif not find_package(package)['results'] and not find_group(package):\n\t\t\tinvalid_packages.append(package)\n\t\n\tif invalid_packages:\n\t\traise RequirementError(f\"Invalid package names: {invalid_packages}\")\n\n\treturn True", "path": "archinstall/lib/packages.py"}]} | 768 | 291 |
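The archinstall patch treats an HTTP 200 from the Arch group page as "this name is a package group" and a 404 as "it is not", then consults `find_group` as a fallback inside `validate_package_list`. A usage sketch follows, assuming the patched `archinstall.lib.packages` module is importable and archlinux.org is reachable; the `kde-applications` name comes from the issue.

```python
# Usage sketch, assuming the patched archinstall.lib.packages is importable
# and archlinux.org is reachable; 'kde-applications' comes from the issue.
from archinstall.lib.packages import find_group, validate_package_list

# find_package('kde-applications') returns no results (it is a group, not a
# package), but https://www.archlinux.org/groups/x86_64/kde-applications/
# answers 200, so find_group() reports True and validation now passes.
assert find_group("kde-applications")
validate_package_list(["base", "kde-applications"])  # no RequirementError
```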
gh_patches_debug_63087 | rasdani/github-patches | git_diff | translate__pootle-5160 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ensure tests can be run with `--reuse-db`
When iterating on a test that requires DB access (or a few of them), a site-wide setup is currently performed, which in that scenario ends up being relatively time-consuming and tedious.
Ideally one could use [pytest-django's `--reuse-db` flag](http://pytest-django.readthedocs.org/en/latest/database.html#reuse-db-reuse-the-testing-database-between-test-runs) to considerably reduce setup time across test iterations; however, in the current state of things this feature cannot be used due to the way the Pootle test DB environment is set up.
Let's try to fix that so we can benefit from `--reuse-db`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytest_pootle/plugin.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import os
10 import shutil
11 from pkgutil import iter_modules
12
13 import pytest
14
15 from . import fixtures
16 from .env import PootleTestEnv
17 from .fixtures import models as fixtures_models
18 from .fixtures.core import management as fixtures_core_management
19 from .fixtures.core import utils as fixtures_core_utils
20 from .fixtures import formats as fixtures_formats
21 from .fixtures import pootle_fs as fixtures_fs
22
23
24 def _load_fixtures(*modules):
25 for mod in modules:
26 path = mod.__path__
27 prefix = '%s.' % mod.__name__
28
29 for loader_, name, is_pkg in iter_modules(path, prefix):
30 if not is_pkg:
31 yield name
32
33
34 @pytest.fixture
35 def po_test_dir(request, tmpdir):
36 po_dir = str(tmpdir.mkdir("po"))
37
38 def rm_po_dir():
39 if os.path.exists(po_dir):
40 shutil.rmtree(po_dir)
41
42 request.addfinalizer(rm_po_dir)
43 return po_dir
44
45
46 @pytest.fixture
47 def po_directory(request, po_test_dir, settings):
48 """Sets up a tmp directory for PO files."""
49 from pootle_store.models import fs
50
51 translation_directory = settings.POOTLE_TRANSLATION_DIRECTORY
52
53 # Adjust locations
54 settings.POOTLE_TRANSLATION_DIRECTORY = po_test_dir
55 fs.location = po_test_dir
56
57 def _cleanup():
58 settings.POOTLE_TRANSLATION_DIRECTORY = translation_directory
59
60 request.addfinalizer(_cleanup)
61
62
63 @pytest.fixture(scope='session')
64 def tests_use_db(request):
65 return bool(
66 [item for item in request.node.items
67 if item.get_marker('django_db')])
68
69
70 @pytest.fixture(scope='session')
71 def tests_use_vfolders(request):
72 return bool(
73 [item for item in request.node.items
74 if item.get_marker('pootle_vfolders')])
75
76
77 @pytest.fixture(scope='session')
78 def tests_use_migration(request, tests_use_db):
79 return bool(
80 tests_use_db
81 and [item for item in request.node.items
82 if item.get_marker('django_migration')])
83
84
85 @pytest.fixture(autouse=True, scope='session')
86 def setup_db_if_needed(request, tests_use_db):
87 """Sets up the site DB only if tests requested to use the DB (autouse)."""
88 if tests_use_db:
89 return request.getfuncargvalue('post_db_setup')
90
91
92 @pytest.fixture(scope='session')
93 def post_db_setup(translations_directory, django_db_setup, django_db_blocker,
94 tests_use_db, tests_use_vfolders, request):
95 """Sets up the site DB for the test session."""
96 if tests_use_db:
97 with django_db_blocker.unblock():
98 PootleTestEnv().setup(
99 vfolders=tests_use_vfolders)
100
101
102 @pytest.fixture(scope='session')
103 def django_db_use_migrations(tests_use_migration):
104 return tests_use_migration
105
106
107 pytest_plugins = tuple(
108 _load_fixtures(
109 fixtures,
110 fixtures_core_management,
111 fixtures_core_utils,
112 fixtures_formats,
113 fixtures_models,
114 fixtures_fs))
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytest_pootle/plugin.py b/pytest_pootle/plugin.py
--- a/pytest_pootle/plugin.py
+++ b/pytest_pootle/plugin.py
@@ -85,7 +85,7 @@
@pytest.fixture(autouse=True, scope='session')
def setup_db_if_needed(request, tests_use_db):
"""Sets up the site DB only if tests requested to use the DB (autouse)."""
- if tests_use_db:
+ if tests_use_db and not request.config.getvalue('reuse_db'):
return request.getfuncargvalue('post_db_setup')
| {"golden_diff": "diff --git a/pytest_pootle/plugin.py b/pytest_pootle/plugin.py\n--- a/pytest_pootle/plugin.py\n+++ b/pytest_pootle/plugin.py\n@@ -85,7 +85,7 @@\n @pytest.fixture(autouse=True, scope='session')\n def setup_db_if_needed(request, tests_use_db):\n \"\"\"Sets up the site DB only if tests requested to use the DB (autouse).\"\"\"\n- if tests_use_db:\n+ if tests_use_db and not request.config.getvalue('reuse_db'):\n return request.getfuncargvalue('post_db_setup')\n", "issue": "Ensure tests can be run with `--reuse-db`\nWhen iterating over a test that require DB access (or a few of them), currently a site-wide setup is made which in such scenario ends up being relatively time-consuming and tedious.\n\nIdeally one could use [pytest-django's `--reuse-db` flag](http://pytest-django.readthedocs.org/en/latest/database.html#reuse-db-reuse-the-testing-database-between-test-runs) to considerably reduce setup time on test iterations, however at the current state of things such feature cannot be used due to the way the Pootle test DB environment is setup.\n\nLet's try to fix that so we can benefit from `--reuse-db`.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\nimport shutil\nfrom pkgutil import iter_modules\n\nimport pytest\n\nfrom . import fixtures\nfrom .env import PootleTestEnv\nfrom .fixtures import models as fixtures_models\nfrom .fixtures.core import management as fixtures_core_management\nfrom .fixtures.core import utils as fixtures_core_utils\nfrom .fixtures import formats as fixtures_formats\nfrom .fixtures import pootle_fs as fixtures_fs\n\n\ndef _load_fixtures(*modules):\n for mod in modules:\n path = mod.__path__\n prefix = '%s.' 
% mod.__name__\n\n for loader_, name, is_pkg in iter_modules(path, prefix):\n if not is_pkg:\n yield name\n\n\[email protected]\ndef po_test_dir(request, tmpdir):\n po_dir = str(tmpdir.mkdir(\"po\"))\n\n def rm_po_dir():\n if os.path.exists(po_dir):\n shutil.rmtree(po_dir)\n\n request.addfinalizer(rm_po_dir)\n return po_dir\n\n\[email protected]\ndef po_directory(request, po_test_dir, settings):\n \"\"\"Sets up a tmp directory for PO files.\"\"\"\n from pootle_store.models import fs\n\n translation_directory = settings.POOTLE_TRANSLATION_DIRECTORY\n\n # Adjust locations\n settings.POOTLE_TRANSLATION_DIRECTORY = po_test_dir\n fs.location = po_test_dir\n\n def _cleanup():\n settings.POOTLE_TRANSLATION_DIRECTORY = translation_directory\n\n request.addfinalizer(_cleanup)\n\n\[email protected](scope='session')\ndef tests_use_db(request):\n return bool(\n [item for item in request.node.items\n if item.get_marker('django_db')])\n\n\[email protected](scope='session')\ndef tests_use_vfolders(request):\n return bool(\n [item for item in request.node.items\n if item.get_marker('pootle_vfolders')])\n\n\[email protected](scope='session')\ndef tests_use_migration(request, tests_use_db):\n return bool(\n tests_use_db\n and [item for item in request.node.items\n if item.get_marker('django_migration')])\n\n\[email protected](autouse=True, scope='session')\ndef setup_db_if_needed(request, tests_use_db):\n \"\"\"Sets up the site DB only if tests requested to use the DB (autouse).\"\"\"\n if tests_use_db:\n return request.getfuncargvalue('post_db_setup')\n\n\[email protected](scope='session')\ndef post_db_setup(translations_directory, django_db_setup, django_db_blocker,\n tests_use_db, tests_use_vfolders, request):\n \"\"\"Sets up the site DB for the test session.\"\"\"\n if tests_use_db:\n with django_db_blocker.unblock():\n PootleTestEnv().setup(\n vfolders=tests_use_vfolders)\n\n\[email protected](scope='session')\ndef django_db_use_migrations(tests_use_migration):\n return tests_use_migration\n\n\npytest_plugins = tuple(\n _load_fixtures(\n fixtures,\n fixtures_core_management,\n fixtures_core_utils,\n fixtures_formats,\n fixtures_models,\n fixtures_fs))\n", "path": "pytest_pootle/plugin.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\nimport shutil\nfrom pkgutil import iter_modules\n\nimport pytest\n\nfrom . import fixtures\nfrom .env import PootleTestEnv\nfrom .fixtures import models as fixtures_models\nfrom .fixtures.core import management as fixtures_core_management\nfrom .fixtures.core import utils as fixtures_core_utils\nfrom .fixtures import formats as fixtures_formats\nfrom .fixtures import pootle_fs as fixtures_fs\n\n\ndef _load_fixtures(*modules):\n for mod in modules:\n path = mod.__path__\n prefix = '%s.' 
% mod.__name__\n\n for loader_, name, is_pkg in iter_modules(path, prefix):\n if not is_pkg:\n yield name\n\n\[email protected]\ndef po_test_dir(request, tmpdir):\n po_dir = str(tmpdir.mkdir(\"po\"))\n\n def rm_po_dir():\n if os.path.exists(po_dir):\n shutil.rmtree(po_dir)\n\n request.addfinalizer(rm_po_dir)\n return po_dir\n\n\[email protected]\ndef po_directory(request, po_test_dir, settings):\n \"\"\"Sets up a tmp directory for PO files.\"\"\"\n from pootle_store.models import fs\n\n translation_directory = settings.POOTLE_TRANSLATION_DIRECTORY\n\n # Adjust locations\n settings.POOTLE_TRANSLATION_DIRECTORY = po_test_dir\n fs.location = po_test_dir\n\n def _cleanup():\n settings.POOTLE_TRANSLATION_DIRECTORY = translation_directory\n\n request.addfinalizer(_cleanup)\n\n\[email protected](scope='session')\ndef tests_use_db(request):\n return bool(\n [item for item in request.node.items\n if item.get_marker('django_db')])\n\n\[email protected](scope='session')\ndef tests_use_vfolders(request):\n return bool(\n [item for item in request.node.items\n if item.get_marker('pootle_vfolders')])\n\n\[email protected](scope='session')\ndef tests_use_migration(request, tests_use_db):\n return bool(\n tests_use_db\n and [item for item in request.node.items\n if item.get_marker('django_migration')])\n\n\[email protected](autouse=True, scope='session')\ndef setup_db_if_needed(request, tests_use_db):\n \"\"\"Sets up the site DB only if tests requested to use the DB (autouse).\"\"\"\n if tests_use_db and not request.config.getvalue('reuse_db'):\n return request.getfuncargvalue('post_db_setup')\n\n\[email protected](scope='session')\ndef post_db_setup(translations_directory, django_db_setup, django_db_blocker,\n tests_use_db, tests_use_vfolders, request):\n \"\"\"Sets up the site DB for the test session.\"\"\"\n if tests_use_db:\n with django_db_blocker.unblock():\n PootleTestEnv().setup(\n vfolders=tests_use_vfolders)\n\n\[email protected](scope='session')\ndef django_db_use_migrations(tests_use_migration):\n return tests_use_migration\n\n\npytest_plugins = tuple(\n _load_fixtures(\n fixtures,\n fixtures_core_management,\n fixtures_core_utils,\n fixtures_formats,\n fixtures_models,\n fixtures_fs))\n", "path": "pytest_pootle/plugin.py"}]} | 1,361 | 129 |
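The pootle fix gates the expensive site-wide DB setup on pytest-django's `--reuse-db` flag, read through `request.config.getvalue('reuse_db')`. A stripped-down sketch of that gating pattern is below; `tests_use_db` and `expensive_db_setup` stand in for Pootle's real fixtures, and `getfuncargvalue` mirrors the older pytest API the repository itself uses.

```python
# Stripped-down sketch of the gating pattern from the diff. 'tests_use_db'
# and 'expensive_db_setup' stand in for Pootle's real fixtures, and
# getfuncargvalue mirrors the older pytest API the repository itself uses.
import pytest

@pytest.fixture(autouse=True, scope="session")
def setup_db_if_needed(request, tests_use_db):
    """Only pay the site-wide setup cost when not reusing the test DB."""
    if tests_use_db and not request.config.getvalue("reuse_db"):
        return request.getfuncargvalue("expensive_db_setup")
```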
gh_patches_debug_15855 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-10668 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Django: adapt admin code for 3.x
It seems that we missed an upgrade to make it fully compatible with Django 3.x
We are still using `admin.ACTION_CHECKBOX_NAME`, which was deprecated and has already been removed:
> The compatibility import of django.contrib.admin.helpers.ACTION_CHECKBOX_NAME in django.contrib.admin is removed.
(from https://docs.djangoproject.com/en/4.0/releases/3.1/#id1)
The code lives at https://github.com/readthedocs/readthedocs.org/blob/e94c26074e9abdf7056b4e6502c52f8a6b128055/readthedocs/notifications/views.py#L48
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/notifications/views.py`
Content:
```
1 """Django views for the notifications app."""
2 from django.contrib import admin, messages
3 from django.http import HttpResponseRedirect
4 from django.views.generic import FormView
5
6 from .forms import SendNotificationForm
7
8
9 class SendNotificationView(FormView):
10
11 """
12 Form view for sending notifications to users from admin pages.
13
14 Accepts the following additional parameters:
15
16 :param queryset: Queryset to use to determine the users to send emails to
17 :param action_name: Name of the action to pass to the form template,
18 determines the action to pass back to the admin view
19 :param notification_classes: List of :py:class:`Notification` classes to
20 display in the form
21 """
22
23 form_class = SendNotificationForm
24 template_name = "notifications/send_notification_form.html"
25 action_name = "send_email"
26 notification_classes = []
27
28 def get_form_kwargs(self):
29 """
30 Override form kwargs based on input fields.
31
32 The admin posts to this view initially, so detect the send button on
33 form post variables. Drop additional fields if we see the send button.
34 """
35 kwargs = super().get_form_kwargs()
36 kwargs["notification_classes"] = self.notification_classes
37 if "send" not in self.request.POST:
38 kwargs.pop("data", None)
39 kwargs.pop("files", None)
40 return kwargs
41
42 def get_initial(self):
43 """Add selected ids to initial form data."""
44 initial = super().get_initial()
45 initial["_selected_action"] = self.request.POST.getlist(
46 admin.ACTION_CHECKBOX_NAME,
47 )
48 return initial
49
50 def form_valid(self, form):
51 """If form is valid, send notification to recipients."""
52 count = 0
53 notification_cls = form.cleaned_data["source"]
54 for obj in self.get_queryset().all():
55 for recipient in self.get_object_recipients(obj):
56 notification = notification_cls(
57 context_object=obj,
58 request=self.request,
59 user=recipient,
60 )
61 notification.send()
62 count += 1
63 if count == 0:
64 self.message_user("No recipients to send to", level=messages.ERROR)
65 else:
66 self.message_user("Queued {} messages".format(count))
67 return HttpResponseRedirect(self.request.get_full_path())
68
69 def get_object_recipients(self, obj):
70 """
71 Iterate over queryset objects and return User objects.
72
73 This allows for non-User querysets to pass back a list of Users to send
74 to. By default, assume we're working with :py:class:`User` objects and
75 just yield the single object.
76
77 For example, this could be made to return project owners with::
78
79 for owner in AdminPermission.members(project):
80 yield owner
81
82 :param obj: object from queryset, type is dependent on model class
83 :rtype: django.contrib.auth.models.User
84 """
85 yield obj
86
87 def get_queryset(self):
88 return self.kwargs.get("queryset")
89
90 def get_context_data(self, **kwargs):
91 """Return queryset in context."""
92 context = super().get_context_data(**kwargs)
93 recipients = []
94 for obj in self.get_queryset().all():
95 recipients.extend(self.get_object_recipients(obj))
96 context["recipients"] = recipients
97 context["action_name"] = self.action_name
98 return context
99
100 def message_user(
101 self,
102 message,
103 level=messages.INFO,
104 extra_tags="",
105 fail_silently=False,
106 ):
107 """
108 Implementation of.
109
110 :py:meth:`django.contrib.admin.options.ModelAdmin.message_user`
111
112 Send message through messages framework
113 """
114 # TODO generalize this or check if implementation in ModelAdmin is
115 # usable here
116 messages.add_message(
117 self.request,
118 level,
119 message,
120 extra_tags=extra_tags,
121 fail_silently=fail_silently,
122 )
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/readthedocs/notifications/views.py b/readthedocs/notifications/views.py
--- a/readthedocs/notifications/views.py
+++ b/readthedocs/notifications/views.py
@@ -1,5 +1,5 @@
"""Django views for the notifications app."""
-from django.contrib import admin, messages
+from django.contrib import messages
from django.http import HttpResponseRedirect
from django.views.generic import FormView
@@ -42,9 +42,7 @@
def get_initial(self):
"""Add selected ids to initial form data."""
initial = super().get_initial()
- initial["_selected_action"] = self.request.POST.getlist(
- admin.ACTION_CHECKBOX_NAME,
- )
+ initial["_selected_action"] = self.request.POST.getlist("_selected_action")
return initial
def form_valid(self, form):
| {"golden_diff": "diff --git a/readthedocs/notifications/views.py b/readthedocs/notifications/views.py\n--- a/readthedocs/notifications/views.py\n+++ b/readthedocs/notifications/views.py\n@@ -1,5 +1,5 @@\n \"\"\"Django views for the notifications app.\"\"\"\n-from django.contrib import admin, messages\n+from django.contrib import messages\n from django.http import HttpResponseRedirect\n from django.views.generic import FormView\n \n@@ -42,9 +42,7 @@\n def get_initial(self):\n \"\"\"Add selected ids to initial form data.\"\"\"\n initial = super().get_initial()\n- initial[\"_selected_action\"] = self.request.POST.getlist(\n- admin.ACTION_CHECKBOX_NAME,\n- )\n+ initial[\"_selected_action\"] = self.request.POST.getlist(\"_selected_action\")\n return initial\n \n def form_valid(self, form):\n", "issue": "Django: adapt admin code for 3.x\nIt seems that we missed an upgrade to make it fully compatible with Django 3.x\r\n\r\nWe are using `admin.ACTION_CHECKBOX_NAME` when it was deprecated and it was removed already:\r\n\r\n> The compatibility import of django.contrib.admin.helpers.ACTION_CHECKBOX_NAME in django.contrib.admin is removed.\r\n\r\n(from https://docs.djangoproject.com/en/4.0/releases/3.1/#id1)\r\n\r\nThe code lives at https://github.com/readthedocs/readthedocs.org/blob/e94c26074e9abdf7056b4e6502c52f8a6b128055/readthedocs/notifications/views.py#L48\n", "before_files": [{"content": "\"\"\"Django views for the notifications app.\"\"\"\nfrom django.contrib import admin, messages\nfrom django.http import HttpResponseRedirect\nfrom django.views.generic import FormView\n\nfrom .forms import SendNotificationForm\n\n\nclass SendNotificationView(FormView):\n\n \"\"\"\n Form view for sending notifications to users from admin pages.\n\n Accepts the following additional parameters:\n\n :param queryset: Queryset to use to determine the users to send emails to\n :param action_name: Name of the action to pass to the form template,\n determines the action to pass back to the admin view\n :param notification_classes: List of :py:class:`Notification` classes to\n display in the form\n \"\"\"\n\n form_class = SendNotificationForm\n template_name = \"notifications/send_notification_form.html\"\n action_name = \"send_email\"\n notification_classes = []\n\n def get_form_kwargs(self):\n \"\"\"\n Override form kwargs based on input fields.\n\n The admin posts to this view initially, so detect the send button on\n form post variables. 
Drop additional fields if we see the send button.\n \"\"\"\n kwargs = super().get_form_kwargs()\n kwargs[\"notification_classes\"] = self.notification_classes\n if \"send\" not in self.request.POST:\n kwargs.pop(\"data\", None)\n kwargs.pop(\"files\", None)\n return kwargs\n\n def get_initial(self):\n \"\"\"Add selected ids to initial form data.\"\"\"\n initial = super().get_initial()\n initial[\"_selected_action\"] = self.request.POST.getlist(\n admin.ACTION_CHECKBOX_NAME,\n )\n return initial\n\n def form_valid(self, form):\n \"\"\"If form is valid, send notification to recipients.\"\"\"\n count = 0\n notification_cls = form.cleaned_data[\"source\"]\n for obj in self.get_queryset().all():\n for recipient in self.get_object_recipients(obj):\n notification = notification_cls(\n context_object=obj,\n request=self.request,\n user=recipient,\n )\n notification.send()\n count += 1\n if count == 0:\n self.message_user(\"No recipients to send to\", level=messages.ERROR)\n else:\n self.message_user(\"Queued {} messages\".format(count))\n return HttpResponseRedirect(self.request.get_full_path())\n\n def get_object_recipients(self, obj):\n \"\"\"\n Iterate over queryset objects and return User objects.\n\n This allows for non-User querysets to pass back a list of Users to send\n to. By default, assume we're working with :py:class:`User` objects and\n just yield the single object.\n\n For example, this could be made to return project owners with::\n\n for owner in AdminPermission.members(project):\n yield owner\n\n :param obj: object from queryset, type is dependent on model class\n :rtype: django.contrib.auth.models.User\n \"\"\"\n yield obj\n\n def get_queryset(self):\n return self.kwargs.get(\"queryset\")\n\n def get_context_data(self, **kwargs):\n \"\"\"Return queryset in context.\"\"\"\n context = super().get_context_data(**kwargs)\n recipients = []\n for obj in self.get_queryset().all():\n recipients.extend(self.get_object_recipients(obj))\n context[\"recipients\"] = recipients\n context[\"action_name\"] = self.action_name\n return context\n\n def message_user(\n self,\n message,\n level=messages.INFO,\n extra_tags=\"\",\n fail_silently=False,\n ):\n \"\"\"\n Implementation of.\n\n :py:meth:`django.contrib.admin.options.ModelAdmin.message_user`\n\n Send message through messages framework\n \"\"\"\n # TODO generalize this or check if implementation in ModelAdmin is\n # usable here\n messages.add_message(\n self.request,\n level,\n message,\n extra_tags=extra_tags,\n fail_silently=fail_silently,\n )\n", "path": "readthedocs/notifications/views.py"}], "after_files": [{"content": "\"\"\"Django views for the notifications app.\"\"\"\nfrom django.contrib import messages\nfrom django.http import HttpResponseRedirect\nfrom django.views.generic import FormView\n\nfrom .forms import SendNotificationForm\n\n\nclass SendNotificationView(FormView):\n\n \"\"\"\n Form view for sending notifications to users from admin pages.\n\n Accepts the following additional parameters:\n\n :param queryset: Queryset to use to determine the users to send emails to\n :param action_name: Name of the action to pass to the form template,\n determines the action to pass back to the admin view\n :param notification_classes: List of :py:class:`Notification` classes to\n display in the form\n \"\"\"\n\n form_class = SendNotificationForm\n template_name = \"notifications/send_notification_form.html\"\n action_name = \"send_email\"\n notification_classes = []\n\n def get_form_kwargs(self):\n \"\"\"\n Override form kwargs based on 
input fields.\n\n The admin posts to this view initially, so detect the send button on\n form post variables. Drop additional fields if we see the send button.\n \"\"\"\n kwargs = super().get_form_kwargs()\n kwargs[\"notification_classes\"] = self.notification_classes\n if \"send\" not in self.request.POST:\n kwargs.pop(\"data\", None)\n kwargs.pop(\"files\", None)\n return kwargs\n\n def get_initial(self):\n \"\"\"Add selected ids to initial form data.\"\"\"\n initial = super().get_initial()\n initial[\"_selected_action\"] = self.request.POST.getlist(\"_selected_action\")\n return initial\n\n def form_valid(self, form):\n \"\"\"If form is valid, send notification to recipients.\"\"\"\n count = 0\n notification_cls = form.cleaned_data[\"source\"]\n for obj in self.get_queryset().all():\n for recipient in self.get_object_recipients(obj):\n notification = notification_cls(\n context_object=obj,\n request=self.request,\n user=recipient,\n )\n notification.send()\n count += 1\n if count == 0:\n self.message_user(\"No recipients to send to\", level=messages.ERROR)\n else:\n self.message_user(\"Queued {} messages\".format(count))\n return HttpResponseRedirect(self.request.get_full_path())\n\n def get_object_recipients(self, obj):\n \"\"\"\n Iterate over queryset objects and return User objects.\n\n This allows for non-User querysets to pass back a list of Users to send\n to. By default, assume we're working with :py:class:`User` objects and\n just yield the single object.\n\n For example, this could be made to return project owners with::\n\n for owner in AdminPermission.members(project):\n yield owner\n\n :param obj: object from queryset, type is dependent on model class\n :rtype: django.contrib.auth.models.User\n \"\"\"\n yield obj\n\n def get_queryset(self):\n return self.kwargs.get(\"queryset\")\n\n def get_context_data(self, **kwargs):\n \"\"\"Return queryset in context.\"\"\"\n context = super().get_context_data(**kwargs)\n recipients = []\n for obj in self.get_queryset().all():\n recipients.extend(self.get_object_recipients(obj))\n context[\"recipients\"] = recipients\n context[\"action_name\"] = self.action_name\n return context\n\n def message_user(\n self,\n message,\n level=messages.INFO,\n extra_tags=\"\",\n fail_silently=False,\n ):\n \"\"\"\n Implementation of.\n\n :py:meth:`django.contrib.admin.options.ModelAdmin.message_user`\n\n Send message through messages framework\n \"\"\"\n # TODO generalize this or check if implementation in ModelAdmin is\n # usable here\n messages.add_message(\n self.request,\n level,\n message,\n extra_tags=extra_tags,\n fail_silently=fail_silently,\n )\n", "path": "readthedocs/notifications/views.py"}]} | 1,492 | 178 |
gh_patches_debug_27494 | rasdani/github-patches | git_diff | shuup__shuup-1977 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Admin UI: Fix media browser file upload
An exception is raised when you manually select a file to upload. To reproduce:
- Go to Products
- Select/Create a product
- Go to the Files section
- Click on the dropzone area
- In the media browser window, click Upload
- Select a file and check the console (an error is raised)

--- END ISSUE ---
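For context, a hypothetical sketch of the kind of hardening this points at (it mirrors the permission check used in the patch shown further below; `has_permission` is assumed to come from shuup's admin permission utilities) is to advertise the media-browser URL only when the requesting user may actually use it, so the upload widget never targets a forbidden endpoint:
```python
# Hypothetical sketch of a hardened DefaultBrowserConfigProvider method --
# not necessarily the final fix.
from shuup.admin.utils.permissions import has_permission

class DefaultBrowserConfigProvider(BaseBrowserConfigProvider):
    @classmethod
    def get_browser_urls(cls, request, **kwargs):
        return {
            "edit": "shuup_admin:edit",
            "select": "shuup_admin:select",
            # Only expose the media browser when the user is allowed to
            # browse media; the frontend treats None as "feature hidden".
            "media": (
                "shuup_admin:media.browse"
                if has_permission(request.user, "shuup_admin:media.browse")
                else None
            ),
        }
```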
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shuup/admin/browser_config.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This file is part of Shuup.
3 #
4 # Copyright (c) 2012-2019, Shoop Commerce Ltd. All rights reserved.
5 #
6 # This source code is licensed under the OSL-3.0 license found in the
7 # LICENSE file in the root directory of this source tree.
8 from django.conf import settings
9
10 from shuup.utils.i18n import get_current_babel_locale
11
12
13 class BaseBrowserConfigProvider(object):
14 @classmethod
15 def get_browser_urls(cls, request, **kwargs):
16 return {}
17
18 @classmethod
19 def get_gettings(cls, request, **kwargs):
20 return {}
21
22
23 class DefaultBrowserConfigProvider(BaseBrowserConfigProvider):
24 @classmethod
25 def get_browser_urls(cls, request, **kwargs):
26 return {
27 "edit": "shuup_admin:edit",
28 "select": "shuup_admin:select",
29 "media": "shuup_admin:media.browse",
30 "product": "shuup_admin:shop_product.list",
31 "contact": "shuup_admin:contact.list",
32 "setLanguage": "shuup_admin:set-language",
33 "tour": "shuup_admin:tour",
34 "menu_toggle": "shuup_admin:menu_toggle"
35 }
36
37 @classmethod
38 def get_gettings(cls, request, **kwargs):
39 return {
40 "minSearchInputLength": settings.SHUUP_ADMIN_MINIMUM_INPUT_LENGTH_SEARCH or 1,
41 "dateInputFormat": settings.SHUUP_ADMIN_DATE_INPUT_FORMAT,
42 "datetimeInputFormat": settings.SHUUP_ADMIN_DATETIME_INPUT_FORMAT,
43 "timeInputFormat": settings.SHUUP_ADMIN_TIME_INPUT_FORMAT,
44 "datetimeInputStep": settings.SHUUP_ADMIN_DATETIME_INPUT_STEP,
45 "dateInputLocale": get_current_babel_locale().language
46 }
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shuup/admin/browser_config.py b/shuup/admin/browser_config.py
--- a/shuup/admin/browser_config.py
+++ b/shuup/admin/browser_config.py
@@ -7,6 +7,7 @@
# LICENSE file in the root directory of this source tree.
from django.conf import settings
+from shuup.admin.utils.permissions import has_permission
from shuup.utils.i18n import get_current_babel_locale
@@ -26,7 +27,7 @@
return {
"edit": "shuup_admin:edit",
"select": "shuup_admin:select",
- "media": "shuup_admin:media.browse",
+ "media": ("shuup_admin:media.browse" if has_permission(request.user, "shuup_admin:media.browse") else None),
"product": "shuup_admin:shop_product.list",
"contact": "shuup_admin:contact.list",
"setLanguage": "shuup_admin:set-language",
@@ -42,5 +43,6 @@
"datetimeInputFormat": settings.SHUUP_ADMIN_DATETIME_INPUT_FORMAT,
"timeInputFormat": settings.SHUUP_ADMIN_TIME_INPUT_FORMAT,
"datetimeInputStep": settings.SHUUP_ADMIN_DATETIME_INPUT_STEP,
- "dateInputLocale": get_current_babel_locale().language
+ "dateInputLocale": get_current_babel_locale().language,
+ "staticPrefix": settings.STATIC_URL,
}
| {"golden_diff": "diff --git a/shuup/admin/browser_config.py b/shuup/admin/browser_config.py\n--- a/shuup/admin/browser_config.py\n+++ b/shuup/admin/browser_config.py\n@@ -7,6 +7,7 @@\n # LICENSE file in the root directory of this source tree.\n from django.conf import settings\n \n+from shuup.admin.utils.permissions import has_permission\n from shuup.utils.i18n import get_current_babel_locale\n \n \n@@ -26,7 +27,7 @@\n return {\n \"edit\": \"shuup_admin:edit\",\n \"select\": \"shuup_admin:select\",\n- \"media\": \"shuup_admin:media.browse\",\n+ \"media\": (\"shuup_admin:media.browse\" if has_permission(request.user, \"shuup_admin:media.browse\") else None),\n \"product\": \"shuup_admin:shop_product.list\",\n \"contact\": \"shuup_admin:contact.list\",\n \"setLanguage\": \"shuup_admin:set-language\",\n@@ -42,5 +43,6 @@\n \"datetimeInputFormat\": settings.SHUUP_ADMIN_DATETIME_INPUT_FORMAT,\n \"timeInputFormat\": settings.SHUUP_ADMIN_TIME_INPUT_FORMAT,\n \"datetimeInputStep\": settings.SHUUP_ADMIN_DATETIME_INPUT_STEP,\n- \"dateInputLocale\": get_current_babel_locale().language\n+ \"dateInputLocale\": get_current_babel_locale().language,\n+ \"staticPrefix\": settings.STATIC_URL,\n }\n", "issue": " Admin UI: Fix media browser file upload\nAn exception is raised when you manually select the file while uploading it. To reproduce:\r\n- Go to Products\r\n- Select/Create a product\r\n- Go to Files section\r\n- Click over the dropzone area\r\n- In the media browser window, click Upload\r\n- Select a file and check the console (error)\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2019, Shoop Commerce Ltd. All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\nfrom django.conf import settings\n\nfrom shuup.utils.i18n import get_current_babel_locale\n\n\nclass BaseBrowserConfigProvider(object):\n @classmethod\n def get_browser_urls(cls, request, **kwargs):\n return {}\n\n @classmethod\n def get_gettings(cls, request, **kwargs):\n return {}\n\n\nclass DefaultBrowserConfigProvider(BaseBrowserConfigProvider):\n @classmethod\n def get_browser_urls(cls, request, **kwargs):\n return {\n \"edit\": \"shuup_admin:edit\",\n \"select\": \"shuup_admin:select\",\n \"media\": \"shuup_admin:media.browse\",\n \"product\": \"shuup_admin:shop_product.list\",\n \"contact\": \"shuup_admin:contact.list\",\n \"setLanguage\": \"shuup_admin:set-language\",\n \"tour\": \"shuup_admin:tour\",\n \"menu_toggle\": \"shuup_admin:menu_toggle\"\n }\n\n @classmethod\n def get_gettings(cls, request, **kwargs):\n return {\n \"minSearchInputLength\": settings.SHUUP_ADMIN_MINIMUM_INPUT_LENGTH_SEARCH or 1,\n \"dateInputFormat\": settings.SHUUP_ADMIN_DATE_INPUT_FORMAT,\n \"datetimeInputFormat\": settings.SHUUP_ADMIN_DATETIME_INPUT_FORMAT,\n \"timeInputFormat\": settings.SHUUP_ADMIN_TIME_INPUT_FORMAT,\n \"datetimeInputStep\": settings.SHUUP_ADMIN_DATETIME_INPUT_STEP,\n \"dateInputLocale\": get_current_babel_locale().language\n }\n", "path": "shuup/admin/browser_config.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2019, Shoop Commerce Ltd. 
All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\nfrom django.conf import settings\n\nfrom shuup.admin.utils.permissions import has_permission\nfrom shuup.utils.i18n import get_current_babel_locale\n\n\nclass BaseBrowserConfigProvider(object):\n @classmethod\n def get_browser_urls(cls, request, **kwargs):\n return {}\n\n @classmethod\n def get_gettings(cls, request, **kwargs):\n return {}\n\n\nclass DefaultBrowserConfigProvider(BaseBrowserConfigProvider):\n @classmethod\n def get_browser_urls(cls, request, **kwargs):\n return {\n \"edit\": \"shuup_admin:edit\",\n \"select\": \"shuup_admin:select\",\n \"media\": (\"shuup_admin:media.browse\" if has_permission(request.user, \"shuup_admin:media.browse\") else None),\n \"product\": \"shuup_admin:shop_product.list\",\n \"contact\": \"shuup_admin:contact.list\",\n \"setLanguage\": \"shuup_admin:set-language\",\n \"tour\": \"shuup_admin:tour\",\n \"menu_toggle\": \"shuup_admin:menu_toggle\"\n }\n\n @classmethod\n def get_gettings(cls, request, **kwargs):\n return {\n \"minSearchInputLength\": settings.SHUUP_ADMIN_MINIMUM_INPUT_LENGTH_SEARCH or 1,\n \"dateInputFormat\": settings.SHUUP_ADMIN_DATE_INPUT_FORMAT,\n \"datetimeInputFormat\": settings.SHUUP_ADMIN_DATETIME_INPUT_FORMAT,\n \"timeInputFormat\": settings.SHUUP_ADMIN_TIME_INPUT_FORMAT,\n \"datetimeInputStep\": settings.SHUUP_ADMIN_DATETIME_INPUT_STEP,\n \"dateInputLocale\": get_current_babel_locale().language,\n \"staticPrefix\": settings.STATIC_URL,\n }\n", "path": "shuup/admin/browser_config.py"}]} | 887 | 327 |
gh_patches_debug_12240 | rasdani/github-patches | git_diff | GPflow__GPflow-1355 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
setup.py depends on external dataclasses package for python >= 3.8
Setup.py has a check
```python
is_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7
```
and adds the PyPI `dataclasses` package to the requirements when `not is_py37`. (`dataclasses` was incorporated into the stdlib in Python 3.7.) With Python 3.8 released, this check is inaccurate: setup.py now adds the `dataclasses` dependency on Python 3.8 and later as well, rather than only when the version is below 3.7.
--- END ISSUE ---
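A minimal sketch of a corrected guard (matching the intent described above; the snippet assumes it runs inside setup.py, where `requirements` is already defined):
```python
import sys

# dataclasses entered the standard library in Python 3.7, so the PyPI
# backport is only needed on older interpreters.
if sys.version_info < (3, 7):
    requirements.append("dataclasses")
```
Comparing the full version tuple also keeps the guard correct for any future 3.x release, unlike an equality test against a single minor version.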
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # pylint: skip-file
5
6 import os
7 import sys
8 from pathlib import Path
9
10 from pkg_resources import parse_version
11 from setuptools import find_packages, setup
12
13 is_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7
14 on_rtd = os.environ.get("READTHEDOCS", None) == "True" # copied from the docs
15
16 # Dependencies of GPflow
17 requirements = ["numpy>=1.10.0", "scipy>=0.18.0", "multipledispatch>=0.4.9", "tabulate"]
18
19 if not is_py37:
20 requirements.append("dataclasses")
21
22 if not on_rtd:
23 requirements.append("tensorflow-probability>=0.9")
24
25 min_tf_version = "2.1.0"
26 tf_cpu = "tensorflow"
27 tf_gpu = "tensorflow-gpu"
28
29
30 # for latest_version() [see https://github.com/GPflow/GPflow/issues/1348]:
31 def latest_version(package_name):
32 import json
33 from urllib import request
34 import re
35
36 url = f"https://pypi.python.org/pypi/{package_name}/json"
37 data = json.load(request.urlopen(url))
38 # filter out rc and beta releases and, more generally, any releases that
39 # do not contain exclusively numbers and dots.
40 versions = [parse_version(v) for v in data["releases"].keys() if re.match("^[0-9.]+$", v)]
41 versions.sort()
42 return versions[-1] # return latest version
43
44
45 # Only detect TF if not installed or outdated. If not, do not do not list as
46 # requirement to avoid installing over e.g. tensorflow-gpu
47 # To avoid this, rely on importing rather than the package name (like pip).
48
49 try:
50 # If tf not installed, import raises ImportError
51 import tensorflow as tf
52
53 if parse_version(tf.__version__) < parse_version(min_tf_version):
54 # TF pre-installed, but below the minimum required version
55 raise DeprecationWarning("TensorFlow version below minimum requirement")
56 except (ImportError, DeprecationWarning):
57 # Add TensorFlow to dependencies to trigger installation/update
58 if not on_rtd:
59 # Do not add TF if we are installing GPflow on readthedocs
60 requirements.append(tf_cpu)
61 gast_requirement = (
62 "gast>=0.2.2,<0.3"
63 if latest_version("tensorflow") < parse_version("2.2")
64 else "gast>=0.3.3"
65 )
66 requirements.append(gast_requirement)
67
68
69 with open(str(Path(".", "VERSION").absolute())) as version_file:
70 version = version_file.read().strip()
71
72 packages = find_packages(".", exclude=["tests"])
73
74 setup(
75 name="gpflow",
76 version=version,
77 author="James Hensman, Alex Matthews",
78 author_email="[email protected]",
79 description="Gaussian process methods in TensorFlow",
80 license="Apache License 2.0",
81 keywords="machine-learning gaussian-processes kernels tensorflow",
82 url="http://github.com/GPflow/GPflow",
83 packages=packages,
84 include_package_data=True,
85 install_requires=requirements,
86 extras_require={"Tensorflow with GPU": [tf_gpu]},
87 python_requires=">=3.6",
88 classifiers=[
89 "License :: OSI Approved :: Apache Software License",
90 "Natural Language :: English",
91 "Operating System :: MacOS :: MacOS X",
92 "Operating System :: Microsoft :: Windows",
93 "Operating System :: POSIX :: Linux",
94 "Programming Language :: Python :: 3.6",
95 "Topic :: Scientific/Engineering :: Artificial Intelligence",
96 ],
97 )
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,13 +10,13 @@
from pkg_resources import parse_version
from setuptools import find_packages, setup
-is_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7
on_rtd = os.environ.get("READTHEDOCS", None) == "True" # copied from the docs
# Dependencies of GPflow
requirements = ["numpy>=1.10.0", "scipy>=0.18.0", "multipledispatch>=0.4.9", "tabulate"]
-if not is_py37:
+if sys.version_info < (3, 7):
+ # became part of stdlib in python 3.7
requirements.append("dataclasses")
if not on_rtd:
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,13 +10,13 @@\n from pkg_resources import parse_version\n from setuptools import find_packages, setup\n \n-is_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7\n on_rtd = os.environ.get(\"READTHEDOCS\", None) == \"True\" # copied from the docs\n \n # Dependencies of GPflow\n requirements = [\"numpy>=1.10.0\", \"scipy>=0.18.0\", \"multipledispatch>=0.4.9\", \"tabulate\"]\n \n-if not is_py37:\n+if sys.version_info < (3, 7):\n+ # became part of stdlib in python 3.7\n requirements.append(\"dataclasses\")\n \n if not on_rtd:\n", "issue": "setup.py depends on external dataclasses package for python >= 3.8\nSetup.py has a check\r\n```python\r\nis_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7\r\n```\r\nand adds the PyPI `dataclasses` package to the requirements when `not is_py37`. (`dataclasses` has been incorporated in the stdlib in python 3.7.) With python 3.8 released, this check is inaccurate, as setup.py currently adds the dependency on dataclasses when the python version is 3.8 or later, not just when it's less than 3.7.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# pylint: skip-file\n\nimport os\nimport sys\nfrom pathlib import Path\n\nfrom pkg_resources import parse_version\nfrom setuptools import find_packages, setup\n\nis_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7\non_rtd = os.environ.get(\"READTHEDOCS\", None) == \"True\" # copied from the docs\n\n# Dependencies of GPflow\nrequirements = [\"numpy>=1.10.0\", \"scipy>=0.18.0\", \"multipledispatch>=0.4.9\", \"tabulate\"]\n\nif not is_py37:\n requirements.append(\"dataclasses\")\n\nif not on_rtd:\n requirements.append(\"tensorflow-probability>=0.9\")\n\nmin_tf_version = \"2.1.0\"\ntf_cpu = \"tensorflow\"\ntf_gpu = \"tensorflow-gpu\"\n\n\n# for latest_version() [see https://github.com/GPflow/GPflow/issues/1348]:\ndef latest_version(package_name):\n import json\n from urllib import request\n import re\n\n url = f\"https://pypi.python.org/pypi/{package_name}/json\"\n data = json.load(request.urlopen(url))\n # filter out rc and beta releases and, more generally, any releases that\n # do not contain exclusively numbers and dots.\n versions = [parse_version(v) for v in data[\"releases\"].keys() if re.match(\"^[0-9.]+$\", v)]\n versions.sort()\n return versions[-1] # return latest version\n\n\n# Only detect TF if not installed or outdated. If not, do not do not list as\n# requirement to avoid installing over e.g. 
tensorflow-gpu\n# To avoid this, rely on importing rather than the package name (like pip).\n\ntry:\n # If tf not installed, import raises ImportError\n import tensorflow as tf\n\n if parse_version(tf.__version__) < parse_version(min_tf_version):\n # TF pre-installed, but below the minimum required version\n raise DeprecationWarning(\"TensorFlow version below minimum requirement\")\nexcept (ImportError, DeprecationWarning):\n # Add TensorFlow to dependencies to trigger installation/update\n if not on_rtd:\n # Do not add TF if we are installing GPflow on readthedocs\n requirements.append(tf_cpu)\n gast_requirement = (\n \"gast>=0.2.2,<0.3\"\n if latest_version(\"tensorflow\") < parse_version(\"2.2\")\n else \"gast>=0.3.3\"\n )\n requirements.append(gast_requirement)\n\n\nwith open(str(Path(\".\", \"VERSION\").absolute())) as version_file:\n version = version_file.read().strip()\n\npackages = find_packages(\".\", exclude=[\"tests\"])\n\nsetup(\n name=\"gpflow\",\n version=version,\n author=\"James Hensman, Alex Matthews\",\n author_email=\"[email protected]\",\n description=\"Gaussian process methods in TensorFlow\",\n license=\"Apache License 2.0\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"http://github.com/GPflow/GPflow\",\n packages=packages,\n include_package_data=True,\n install_requires=requirements,\n extras_require={\"Tensorflow with GPU\": [tf_gpu]},\n python_requires=\">=3.6\",\n classifiers=[\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# pylint: skip-file\n\nimport os\nimport sys\nfrom pathlib import Path\n\nfrom pkg_resources import parse_version\nfrom setuptools import find_packages, setup\n\non_rtd = os.environ.get(\"READTHEDOCS\", None) == \"True\" # copied from the docs\n\n# Dependencies of GPflow\nrequirements = [\"numpy>=1.10.0\", \"scipy>=0.18.0\", \"multipledispatch>=0.4.9\", \"tabulate\"]\n\nif sys.version_info < (3, 7):\n # became part of stdlib in python 3.7\n requirements.append(\"dataclasses\")\n\nif not on_rtd:\n requirements.append(\"tensorflow-probability>=0.9\")\n\nmin_tf_version = \"2.1.0\"\ntf_cpu = \"tensorflow\"\ntf_gpu = \"tensorflow-gpu\"\n\n\n# for latest_version() [see https://github.com/GPflow/GPflow/issues/1348]:\ndef latest_version(package_name):\n import json\n from urllib import request\n import re\n\n url = f\"https://pypi.python.org/pypi/{package_name}/json\"\n data = json.load(request.urlopen(url))\n # filter out rc and beta releases and, more generally, any releases that\n # do not contain exclusively numbers and dots.\n versions = [parse_version(v) for v in data[\"releases\"].keys() if re.match(\"^[0-9.]+$\", v)]\n versions.sort()\n return versions[-1] # return latest version\n\n\n# Only detect TF if not installed or outdated. If not, do not do not list as\n# requirement to avoid installing over e.g. 
tensorflow-gpu\n# To avoid this, rely on importing rather than the package name (like pip).\n\ntry:\n # If tf not installed, import raises ImportError\n import tensorflow as tf\n\n if parse_version(tf.__version__) < parse_version(min_tf_version):\n # TF pre-installed, but below the minimum required version\n raise DeprecationWarning(\"TensorFlow version below minimum requirement\")\nexcept (ImportError, DeprecationWarning):\n # Add TensorFlow to dependencies to trigger installation/update\n if not on_rtd:\n # Do not add TF if we are installing GPflow on readthedocs\n requirements.append(tf_cpu)\n gast_requirement = (\n \"gast>=0.2.2,<0.3\"\n if latest_version(\"tensorflow\") < parse_version(\"2.2\")\n else \"gast>=0.3.3\"\n )\n requirements.append(gast_requirement)\n\n\nwith open(str(Path(\".\", \"VERSION\").absolute())) as version_file:\n version = version_file.read().strip()\n\npackages = find_packages(\".\", exclude=[\"tests\"])\n\nsetup(\n name=\"gpflow\",\n version=version,\n author=\"James Hensman, Alex Matthews\",\n author_email=\"[email protected]\",\n description=\"Gaussian process methods in TensorFlow\",\n license=\"Apache License 2.0\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"http://github.com/GPflow/GPflow\",\n packages=packages,\n include_package_data=True,\n install_requires=requirements,\n extras_require={\"Tensorflow with GPU\": [tf_gpu]},\n python_requires=\">=3.6\",\n classifiers=[\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "setup.py"}]} | 1,389 | 192 |
gh_patches_debug_9864 | rasdani/github-patches | git_diff | getpelican__pelican-2720 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fixing some warnings and errors in the sample content.
The sample content is out of date with respect to Pelican's current behavior.
This will help newcomers better understand how Pelican works.
* More valid articles.
* More translations.
* Images are now correctly displayed.
--- END ISSUE ---
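As a rough sketch of the kind of configuration the sample is expected to end up with (directory names assumed from the issue; the actual patch appears below):
```python
# Static paths are copied verbatim; the sample articles reference 'images',
# so that directory (not 'pictures') must be listed here.
STATIC_PATHS = [
    'images',
    'extra/robots.txt',
]

# The sample tree ships no standalone HTML sources, so disabling the HTML
# reader avoids spurious warnings while generating the site.
READERS = {'html': None}
```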
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `samples/pelican.conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 AUTHOR = 'Alexis Métaireau'
4 SITENAME = "Alexis' log"
5 SITESUBTITLE = 'A personal blog.'
6 SITEURL = 'http://blog.notmyidea.org'
7 TIMEZONE = "Europe/Paris"
8
9 # can be useful in development, but set to False when you're ready to publish
10 RELATIVE_URLS = True
11
12 GITHUB_URL = 'http://github.com/ametaireau/'
13 DISQUS_SITENAME = "blog-notmyidea"
14 REVERSE_CATEGORY_ORDER = True
15 LOCALE = "C"
16 DEFAULT_PAGINATION = 4
17 DEFAULT_DATE = (2012, 3, 2, 14, 1, 1)
18
19 FEED_ALL_RSS = 'feeds/all.rss.xml'
20 CATEGORY_FEED_RSS = 'feeds/{slug}.rss.xml'
21
22 LINKS = (('Biologeek', 'http://biologeek.org'),
23 ('Filyb', "http://filyb.info/"),
24 ('Libert-fr', "http://www.libert-fr.com"),
25 ('N1k0', "http://prendreuncafe.com/blog/"),
26 ('Tarek Ziadé', "http://ziade.org/blog"),
27 ('Zubin Mithra', "http://zubin71.wordpress.com/"),)
28
29 SOCIAL = (('twitter', 'http://twitter.com/ametaireau'),
30 ('lastfm', 'http://lastfm.com/user/akounet'),
31 ('github', 'http://github.com/ametaireau'),)
32
33 # global metadata to all the contents
34 DEFAULT_METADATA = {'yeah': 'it is'}
35
36 # path-specific metadata
37 EXTRA_PATH_METADATA = {
38 'extra/robots.txt': {'path': 'robots.txt'},
39 }
40
41 # static paths will be copied without parsing their contents
42 STATIC_PATHS = [
43 'pictures',
44 'extra/robots.txt',
45 ]
46
47 # custom page generated with a jinja2 template
48 TEMPLATE_PAGES = {'pages/jinja2_template.html': 'jinja2_template.html'}
49
50 # code blocks with line numbers
51 PYGMENTS_RST_OPTIONS = {'linenos': 'table'}
52
53 # foobar will not be used, because it's not in caps. All configuration keys
54 # have to be in caps
55 foobar = "barbaz"
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/samples/pelican.conf.py b/samples/pelican.conf.py
--- a/samples/pelican.conf.py
+++ b/samples/pelican.conf.py
@@ -40,13 +40,16 @@
# static paths will be copied without parsing their contents
STATIC_PATHS = [
- 'pictures',
+ 'images',
'extra/robots.txt',
]
# custom page generated with a jinja2 template
TEMPLATE_PAGES = {'pages/jinja2_template.html': 'jinja2_template.html'}
+# there is no other HTML content
+READERS = {'html': None}
+
# code blocks with line numbers
PYGMENTS_RST_OPTIONS = {'linenos': 'table'}
| {"golden_diff": "diff --git a/samples/pelican.conf.py b/samples/pelican.conf.py\n--- a/samples/pelican.conf.py\n+++ b/samples/pelican.conf.py\n@@ -40,13 +40,16 @@\n \n # static paths will be copied without parsing their contents\n STATIC_PATHS = [\n- 'pictures',\n+ 'images',\n 'extra/robots.txt',\n ]\n \n # custom page generated with a jinja2 template\n TEMPLATE_PAGES = {'pages/jinja2_template.html': 'jinja2_template.html'}\n \n+# there is no other HTML content\n+READERS = {'html': None}\n+\n # code blocks with line numbers\n PYGMENTS_RST_OPTIONS = {'linenos': 'table'}\n", "issue": "Fixing some warnings and errors in the sample content.\nThe current sample content is not up-to-date with the current Pelican mechanism.\r\n\r\nThis will help new comers to understand better how Pelican works.\r\n\r\n* More valid articles.\r\n* More translations.\r\n* Images are now correctly displayed.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nAUTHOR = 'Alexis M\u00e9taireau'\nSITENAME = \"Alexis' log\"\nSITESUBTITLE = 'A personal blog.'\nSITEURL = 'http://blog.notmyidea.org'\nTIMEZONE = \"Europe/Paris\"\n\n# can be useful in development, but set to False when you're ready to publish\nRELATIVE_URLS = True\n\nGITHUB_URL = 'http://github.com/ametaireau/'\nDISQUS_SITENAME = \"blog-notmyidea\"\nREVERSE_CATEGORY_ORDER = True\nLOCALE = \"C\"\nDEFAULT_PAGINATION = 4\nDEFAULT_DATE = (2012, 3, 2, 14, 1, 1)\n\nFEED_ALL_RSS = 'feeds/all.rss.xml'\nCATEGORY_FEED_RSS = 'feeds/{slug}.rss.xml'\n\nLINKS = (('Biologeek', 'http://biologeek.org'),\n ('Filyb', \"http://filyb.info/\"),\n ('Libert-fr', \"http://www.libert-fr.com\"),\n ('N1k0', \"http://prendreuncafe.com/blog/\"),\n ('Tarek Ziad\u00e9', \"http://ziade.org/blog\"),\n ('Zubin Mithra', \"http://zubin71.wordpress.com/\"),)\n\nSOCIAL = (('twitter', 'http://twitter.com/ametaireau'),\n ('lastfm', 'http://lastfm.com/user/akounet'),\n ('github', 'http://github.com/ametaireau'),)\n\n# global metadata to all the contents\nDEFAULT_METADATA = {'yeah': 'it is'}\n\n# path-specific metadata\nEXTRA_PATH_METADATA = {\n 'extra/robots.txt': {'path': 'robots.txt'},\n }\n\n# static paths will be copied without parsing their contents\nSTATIC_PATHS = [\n 'pictures',\n 'extra/robots.txt',\n ]\n\n# custom page generated with a jinja2 template\nTEMPLATE_PAGES = {'pages/jinja2_template.html': 'jinja2_template.html'}\n\n# code blocks with line numbers\nPYGMENTS_RST_OPTIONS = {'linenos': 'table'}\n\n# foobar will not be used, because it's not in caps. 
All configuration keys\n# have to be in caps\nfoobar = \"barbaz\"\n", "path": "samples/pelican.conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nAUTHOR = 'Alexis M\u00e9taireau'\nSITENAME = \"Alexis' log\"\nSITESUBTITLE = 'A personal blog.'\nSITEURL = 'http://blog.notmyidea.org'\nTIMEZONE = \"Europe/Paris\"\n\n# can be useful in development, but set to False when you're ready to publish\nRELATIVE_URLS = True\n\nGITHUB_URL = 'http://github.com/ametaireau/'\nDISQUS_SITENAME = \"blog-notmyidea\"\nREVERSE_CATEGORY_ORDER = True\nLOCALE = \"C\"\nDEFAULT_PAGINATION = 4\nDEFAULT_DATE = (2012, 3, 2, 14, 1, 1)\n\nFEED_ALL_RSS = 'feeds/all.rss.xml'\nCATEGORY_FEED_RSS = 'feeds/{slug}.rss.xml'\n\nLINKS = (('Biologeek', 'http://biologeek.org'),\n ('Filyb', \"http://filyb.info/\"),\n ('Libert-fr', \"http://www.libert-fr.com\"),\n ('N1k0', \"http://prendreuncafe.com/blog/\"),\n ('Tarek Ziad\u00e9', \"http://ziade.org/blog\"),\n ('Zubin Mithra', \"http://zubin71.wordpress.com/\"),)\n\nSOCIAL = (('twitter', 'http://twitter.com/ametaireau'),\n ('lastfm', 'http://lastfm.com/user/akounet'),\n ('github', 'http://github.com/ametaireau'),)\n\n# global metadata to all the contents\nDEFAULT_METADATA = {'yeah': 'it is'}\n\n# path-specific metadata\nEXTRA_PATH_METADATA = {\n 'extra/robots.txt': {'path': 'robots.txt'},\n }\n\n# static paths will be copied without parsing their contents\nSTATIC_PATHS = [\n 'images',\n 'extra/robots.txt',\n ]\n\n# custom page generated with a jinja2 template\nTEMPLATE_PAGES = {'pages/jinja2_template.html': 'jinja2_template.html'}\n\n# there is no other HTML content\nREADERS = {'html': None}\n\n# code blocks with line numbers\nPYGMENTS_RST_OPTIONS = {'linenos': 'table'}\n\n# foobar will not be used, because it's not in caps. All configuration keys\n# have to be in caps\nfoobar = \"barbaz\"\n", "path": "samples/pelican.conf.py"}]} | 921 | 159 |
gh_patches_debug_12299 | rasdani/github-patches | git_diff | medtagger__MedTagger-145 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Race condition between API & Worker containers that updates fixtures on start
## Expected Behavior
Containers should start without any issues.
## Actual Behavior
Either of these two containers may fail to start due to unexpected exceptions raised while applying fixtures.
## Steps to Reproduce the Problem
1. Run `docker-compose up`.
2. Be lucky.
## Additional comment
Both of these containers run the script that applies fixtures. Maybe only one of them should do it (preferably the API)? Maybe the script should also be better protected against errors?
--- END ISSUE ---
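One way to make the script race-tolerant is to let the losing process swallow the resulting unique-constraint violation; a minimal sketch, assuming it lives inside the fixtures module (where `logger` and `apply_all_fixtures` are defined) and that SQLAlchemy surfaces the conflict as `IntegrityError`:
```python
from sqlalchemy.exc import IntegrityError

if __name__ == '__main__':
    try:
        apply_all_fixtures()
    except IntegrityError:
        # Another container almost certainly inserted the same fixtures
        # first; losing this race is harmless, so log it and exit cleanly.
        logger.error('Fixtures seem to have been applied concurrently by '
                     'another process; nothing left to do.')
```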
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/medtagger/database/fixtures.py`
Content:
```
1 """Insert all database fixtures."""
2 import logging.config
3
4 from sqlalchemy import exists
5
6 from medtagger.database import db_session
7 from medtagger.database.models import ScanCategory, Role
8
9 logging.config.fileConfig('logging.conf')
10 logger = logging.getLogger(__name__)
11
12 CATEGORIES = [{
13 'key': 'KIDNEYS',
14 'name': 'Kidneys',
15 'image_path': '../../../assets/icon/kidneys_category_icon.svg',
16 }, {
17 'key': 'LIVER',
18 'name': 'Liver',
19 'image_path': '../../../assets/icon/liver_category_icon.svg',
20 }, {
21 'key': 'HEART',
22 'name': 'Hearth',
23 'image_path': '../../../assets/icon/heart_category_icon.svg',
24 }, {
25 'key': 'LUNGS',
26 'name': 'Lungs',
27 'image_path': '../../../assets/icon/lungs_category_icon.svg',
28 }]
29
30 ROLES = [
31 {
32 'name': 'admin',
33 },
34 {
35 'name': 'doctor',
36 },
37 {
38 'name': 'volunteer',
39 },
40 ]
41
42
43 def insert_scan_categories() -> None:
44 """Insert all default Scan Categories if don't exist."""
45 with db_session() as session:
46 for row in CATEGORIES:
47 category_key = row.get('key', '')
48 category_exists = session.query(exists().where(ScanCategory.key == category_key)).scalar()
49 if category_exists:
50 logger.info('Scan Category exists with key "%s"', category_key)
51 continue
52
53 category = ScanCategory(**row)
54 session.add(category)
55 logger.info('Scan Category added for key "%s"', category_key)
56
57
58 def insert_user_roles() -> None:
59 """Insert default user Roles."""
60 with db_session() as session:
61 for row in ROLES:
62 role_name = row.get('name', '')
63 role_exists = session.query(exists().where(Role.name == role_name)).scalar()
64 if role_exists:
65 logger.info('Role exists with name "%s"', role_name)
66 continue
67
68 role = Role(**row)
69 session.add(role)
70 logger.info('Role added for name "%s"', role_name)
71
72
73 def apply_all_fixtures() -> None:
74 """Apply all available fixtures."""
75 logger.info('Applying fixtures for Scan Categories...')
76 insert_scan_categories()
77 logger.info('Applying fixtures for user Roles...')
78 insert_user_roles()
79
80
81 if __name__ == '__main__':
82 apply_all_fixtures()
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/medtagger/database/fixtures.py b/backend/medtagger/database/fixtures.py
--- a/backend/medtagger/database/fixtures.py
+++ b/backend/medtagger/database/fixtures.py
@@ -2,6 +2,7 @@
import logging.config
from sqlalchemy import exists
+from sqlalchemy.exc import IntegrityError
from medtagger.database import db_session
from medtagger.database.models import ScanCategory, Role
@@ -79,4 +80,8 @@
if __name__ == '__main__':
- apply_all_fixtures()
+ try:
+ apply_all_fixtures()
+ except IntegrityError:
+ logger.error('An error occurred while applying fixtures! It is highly possible that there was'
+ 'a race condition between multiple processes applying fixtures at the same time.')
| {"golden_diff": "diff --git a/backend/medtagger/database/fixtures.py b/backend/medtagger/database/fixtures.py\n--- a/backend/medtagger/database/fixtures.py\n+++ b/backend/medtagger/database/fixtures.py\n@@ -2,6 +2,7 @@\n import logging.config\n \n from sqlalchemy import exists\n+from sqlalchemy.exc import IntegrityError\n \n from medtagger.database import db_session\n from medtagger.database.models import ScanCategory, Role\n@@ -79,4 +80,8 @@\n \n \n if __name__ == '__main__':\n- apply_all_fixtures()\n+ try:\n+ apply_all_fixtures()\n+ except IntegrityError:\n+ logger.error('An error occurred while applying fixtures! It is highly possible that there was'\n+ 'a race condition between multiple processes applying fixtures at the same time.')\n", "issue": "Race condition between API & Worker containers that updates fixtures on start\n## Expected Behavior\r\n\r\nContainers should start without any issues.\r\n\r\n## Actual Behavior\r\n\r\nAny of these two containers may fail to start due to unexpected exceptions while applying fixtures.\r\n\r\n## Steps to Reproduce the Problem\r\n\r\n 1. Run `docker-compose up`.\r\n 2. Be lucky.\r\n\r\n## Additional comment\r\n\r\nBoth of these containers start script for applying fixtures. Maybe only one should do it (preferrably API)? Maybe this script should be better protected for errors?\r\n\n", "before_files": [{"content": "\"\"\"Insert all database fixtures.\"\"\"\nimport logging.config\n\nfrom sqlalchemy import exists\n\nfrom medtagger.database import db_session\nfrom medtagger.database.models import ScanCategory, Role\n\nlogging.config.fileConfig('logging.conf')\nlogger = logging.getLogger(__name__)\n\nCATEGORIES = [{\n 'key': 'KIDNEYS',\n 'name': 'Kidneys',\n 'image_path': '../../../assets/icon/kidneys_category_icon.svg',\n}, {\n 'key': 'LIVER',\n 'name': 'Liver',\n 'image_path': '../../../assets/icon/liver_category_icon.svg',\n}, {\n 'key': 'HEART',\n 'name': 'Hearth',\n 'image_path': '../../../assets/icon/heart_category_icon.svg',\n}, {\n 'key': 'LUNGS',\n 'name': 'Lungs',\n 'image_path': '../../../assets/icon/lungs_category_icon.svg',\n}]\n\nROLES = [\n {\n 'name': 'admin',\n },\n {\n 'name': 'doctor',\n },\n {\n 'name': 'volunteer',\n },\n]\n\n\ndef insert_scan_categories() -> None:\n \"\"\"Insert all default Scan Categories if don't exist.\"\"\"\n with db_session() as session:\n for row in CATEGORIES:\n category_key = row.get('key', '')\n category_exists = session.query(exists().where(ScanCategory.key == category_key)).scalar()\n if category_exists:\n logger.info('Scan Category exists with key \"%s\"', category_key)\n continue\n\n category = ScanCategory(**row)\n session.add(category)\n logger.info('Scan Category added for key \"%s\"', category_key)\n\n\ndef insert_user_roles() -> None:\n \"\"\"Insert default user Roles.\"\"\"\n with db_session() as session:\n for row in ROLES:\n role_name = row.get('name', '')\n role_exists = session.query(exists().where(Role.name == role_name)).scalar()\n if role_exists:\n logger.info('Role exists with name \"%s\"', role_name)\n continue\n\n role = Role(**row)\n session.add(role)\n logger.info('Role added for name \"%s\"', role_name)\n\n\ndef apply_all_fixtures() -> None:\n \"\"\"Apply all available fixtures.\"\"\"\n logger.info('Applying fixtures for Scan Categories...')\n insert_scan_categories()\n logger.info('Applying fixtures for user Roles...')\n insert_user_roles()\n\n\nif __name__ == '__main__':\n apply_all_fixtures()\n", "path": "backend/medtagger/database/fixtures.py"}], "after_files": [{"content": 
"\"\"\"Insert all database fixtures.\"\"\"\nimport logging.config\n\nfrom sqlalchemy import exists\nfrom sqlalchemy.exc import IntegrityError\n\nfrom medtagger.database import db_session\nfrom medtagger.database.models import ScanCategory, Role\n\nlogging.config.fileConfig('logging.conf')\nlogger = logging.getLogger(__name__)\n\nCATEGORIES = [{\n 'key': 'KIDNEYS',\n 'name': 'Kidneys',\n 'image_path': '../../../assets/icon/kidneys_category_icon.svg',\n}, {\n 'key': 'LIVER',\n 'name': 'Liver',\n 'image_path': '../../../assets/icon/liver_category_icon.svg',\n}, {\n 'key': 'HEART',\n 'name': 'Hearth',\n 'image_path': '../../../assets/icon/heart_category_icon.svg',\n}, {\n 'key': 'LUNGS',\n 'name': 'Lungs',\n 'image_path': '../../../assets/icon/lungs_category_icon.svg',\n}]\n\nROLES = [\n {\n 'name': 'admin',\n },\n {\n 'name': 'doctor',\n },\n {\n 'name': 'volunteer',\n },\n]\n\n\ndef insert_scan_categories() -> None:\n \"\"\"Insert all default Scan Categories if don't exist.\"\"\"\n with db_session() as session:\n for row in CATEGORIES:\n category_key = row.get('key', '')\n category_exists = session.query(exists().where(ScanCategory.key == category_key)).scalar()\n if category_exists:\n logger.info('Scan Category exists with key \"%s\"', category_key)\n continue\n\n category = ScanCategory(**row)\n session.add(category)\n logger.info('Scan Category added for key \"%s\"', category_key)\n\n\ndef insert_user_roles() -> None:\n \"\"\"Insert default user Roles.\"\"\"\n with db_session() as session:\n for row in ROLES:\n role_name = row.get('name', '')\n role_exists = session.query(exists().where(Role.name == role_name)).scalar()\n if role_exists:\n logger.info('Role exists with name \"%s\"', role_name)\n continue\n\n role = Role(**row)\n session.add(role)\n logger.info('Role added for name \"%s\"', role_name)\n\n\ndef apply_all_fixtures() -> None:\n \"\"\"Apply all available fixtures.\"\"\"\n logger.info('Applying fixtures for Scan Categories...')\n insert_scan_categories()\n logger.info('Applying fixtures for user Roles...')\n insert_user_roles()\n\n\nif __name__ == '__main__':\n try:\n apply_all_fixtures()\n except IntegrityError:\n logger.error('An error occurred while applying fixtures! It is highly possible that there was'\n 'a race condition between multiple processes applying fixtures at the same time.')\n", "path": "backend/medtagger/database/fixtures.py"}]} | 1,051 | 173 |
gh_patches_debug_20920 | rasdani/github-patches | git_diff | spyder-ide__spyder-20450 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`spyder-line-profiler` fails silently with latest 5.x due to missing import in `py3compat`
<!--- **PLEASE READ:** When submitting here, please ensure you've completed the following checklist and checked the boxes to confirm. Issue reports without it may be closed. Thanks! --->
## Problem Description
Running Spyder 5.x from `bootstrap.py` in an env with `spyder-line-profiler` 0.3.1 shows a traceback in the terminal from which Spyder was launched, and the plugin doesn't load.
Maybe related to https://github.com/spyder-ide/spyder/pull/20366; this probably also affects other external plugins that rely on the `py3compat` module.
### What steps reproduce the problem?
1. Install `spyder-line-profiler` 0.3.1
2. Run Spyder from `bootstrap.py`
### What is the expected output? What do you see instead?
The plugin should load. Instead, it fails to load and the traceback below is printed to the terminal.
### Paste Traceback/Error Below (if applicable)
<!--- Copy from error dialog or View > Panes > Internal Console --->
```python-traceback
spyder_line_profiler: cannot import name 'pickle' from 'spyder.py3compat' (e:\acer\documentos\spyder\spyder\spyder\py3compat.py)
Traceback (most recent call last):
File "e:\acer\documentos\spyder\spyder\spyder\app\find_plugins.py", line 67, in find_external_plugins
mod = importlib.import_module(entry_point.module_name)
File "C:\Users\dalth\anaconda3\envs\spyder-env-manager\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\dalth\anaconda3\envs\spyder-env-manager\lib\site-packages\spyder_line_profiler\spyder\plugin.py", line 29, in <module>
from spyder_line_profiler.spyder.confpage import SpyderLineProfilerConfigPage
File "C:\Users\dalth\anaconda3\envs\spyder-env-manager\lib\site-packages\spyder_line_profiler\spyder\confpage.py", line 19, in <module>
from .widgets import SpyderLineProfilerWidget
File "C:\Users\dalth\anaconda3\envs\spyder-env-manager\lib\site-packages\spyder_line_profiler\spyder\widgets.py", line 38, in <module>
from spyder.py3compat import to_text_string, pickle
ImportError: cannot import name 'pickle' from 'spyder.py3compat' (e:\acer\documentos\spyder\spyder\spyder\py3compat.py)
```
## Versions
<!--- You can get this information from Help > About Spyder...
or (if Spyder won't launch) the "conda list" command
from the Anaconda Prompt/Terminal/command line. --->
* Spyder version: 5.5.0.dev0 4a5d86ecd (conda)
* Python version: 3.9.15 64-bit
* Qt version: 5.15.2
* PyQt5 version: 5.15.7
* Operating System: Windows 10
--- END ISSUE ---
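Since the plugin does `from spyder.py3compat import ..., pickle`, the least invasive remedy is to re-export the stdlib module from `py3compat` again; a sketch:
```python
# spyder/py3compat.py
import pickle  # noqa: F401  -- kept only so external plugins can import it
```
The `noqa` marker documents that the import is intentionally unused within the module itself and exists purely for backward compatibility.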
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spyder/py3compat.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 """
8 spyder.py3compat
9 ----------------
10
11 Transitional module providing compatibility functions intended to help
12 migrating from Python 2 to Python 3.
13
14 This module should be fully compatible with:
15 * Python >=v2.6
16 * Python 3
17 """
18
19 import operator
20
21
22 #==============================================================================
23 # Data types
24 #==============================================================================
25 # Python 3
26 TEXT_TYPES = (str,)
27 INT_TYPES = (int,)
28
29
30 #==============================================================================
31 # Strings
32 #==============================================================================
33 def is_type_text_string(obj):
34 """Return True if `obj` is type text string, False if it is anything else,
35 like an instance of a class that extends the basestring class."""
36 return type(obj) in [str, bytes]
37
38 def is_text_string(obj):
39 """Return True if `obj` is a text string, False if it is anything else,
40 like binary data (Python 3) or QString (PyQt API #1)"""
41 return isinstance(obj, str)
42
43 def is_binary_string(obj):
44 """Return True if `obj` is a binary string, False if it is anything else"""
45 return isinstance(obj, bytes)
46
47 def is_string(obj):
48 """Return True if `obj` is a text or binary Python string object,
49 False if it is anything else, like a QString (PyQt API #1)"""
50 return is_text_string(obj) or is_binary_string(obj)
51
52 def to_text_string(obj, encoding=None):
53 """Convert `obj` to (unicode) text string"""
54 if encoding is None:
55 return str(obj)
56 elif isinstance(obj, str):
57 # In case this function is not used properly, this could happen
58 return obj
59 else:
60 return str(obj, encoding)
61
62 def to_binary_string(obj, encoding='utf-8'):
63 """Convert `obj` to binary string (bytes)"""
64 return bytes(obj, encoding)
65
66 #==============================================================================
67 # Misc.
68 #==============================================================================
69 # Python 3
70
71 def qbytearray_to_str(qba):
72 """Convert QByteArray object to str in a way compatible with Python 3"""
73 return str(bytes(qba.toHex().data()).decode())
74
75 # =============================================================================
76 # Dict funcs
77 # =============================================================================
78 def iterkeys(d, **kw):
79 return iter(d.keys(**kw))
80
81 def itervalues(d, **kw):
82 return iter(d.values(**kw))
83
84 def iteritems(d, **kw):
85 return iter(d.items(**kw))
86
87 def iterlists(d, **kw):
88 return iter(d.lists(**kw))
89
90 viewkeys = operator.methodcaller("keys")
91
92 viewvalues = operator.methodcaller("values")
93
94 viewitems = operator.methodcaller("items")
95
96
97 if __name__ == '__main__':
98 pass
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spyder/py3compat.py b/spyder/py3compat.py
--- a/spyder/py3compat.py
+++ b/spyder/py3compat.py
@@ -10,13 +10,10 @@
Transitional module providing compatibility functions intended to help
migrating from Python 2 to Python 3.
-
-This module should be fully compatible with:
- * Python >=v2.6
- * Python 3
"""
import operator
+import pickle # noqa. For compatibility with spyder-line-profiler
#==============================================================================
@@ -66,8 +63,6 @@
#==============================================================================
# Misc.
#==============================================================================
-# Python 3
-
def qbytearray_to_str(qba):
"""Convert QByteArray object to str in a way compatible with Python 3"""
return str(bytes(qba.toHex().data()).decode())
| {"golden_diff": "diff --git a/spyder/py3compat.py b/spyder/py3compat.py\n--- a/spyder/py3compat.py\n+++ b/spyder/py3compat.py\n@@ -10,13 +10,10 @@\n \n Transitional module providing compatibility functions intended to help\n migrating from Python 2 to Python 3.\n-\n-This module should be fully compatible with:\n- * Python >=v2.6\n- * Python 3\n \"\"\"\n \n import operator\n+import pickle # noqa. For compatibility with spyder-line-profiler\n \n \n #==============================================================================\n@@ -66,8 +63,6 @@\n #==============================================================================\n # Misc.\n #==============================================================================\n-# Python 3\n-\n def qbytearray_to_str(qba):\n \"\"\"Convert QByteArray object to str in a way compatible with Python 3\"\"\"\n return str(bytes(qba.toHex().data()).decode())\n", "issue": "`spyder-line-profiler` fails silently with latest 5.x due to missing import in `py3compat`\n<!--- **PLEASE READ:** When submitting here, please ensure you've completed the following checklist and checked the boxes to confirm. Issue reports without it may be closed. Thanks! --->\r\n\r\n## Problem Description\r\n\r\nRunning Spyder 5.x from `bootstrap.py` in an env with `spyder-line-profiler` 0.3.1 shows a traceback in the cmd/terminal from where Spyder was launched and the plugin doesn't load.\r\n\r\nMaybe related with https://github.com/spyder-ide/spyder/pull/20366 and probably also affects other external plugins which rely on the `py3compat` module\r\n\r\n### What steps reproduce the problem?\r\n\r\n1. Install `spyder-line-profiler` 0.3.1 \r\n2. Run Spyder from `bootstrap.py`\r\n\r\n### What is the expected output? What do you see instead?\r\n\r\nThe plugin to be able to load\r\n\r\n### Paste Traceback/Error Below (if applicable)\r\n<!--- Copy from error dialog or View > Panes > Internal Console --->\r\n\r\n```python-traceback\r\nspyder_line_profiler: cannot import name 'pickle' from 'spyder.py3compat' (e:\\acer\\documentos\\spyder\\spyder\\spyder\\py3compat.py)\r\nTraceback (most recent call last):\r\n File \"e:\\acer\\documentos\\spyder\\spyder\\spyder\\app\\find_plugins.py\", line 67, in find_external_plugins\r\n mod = importlib.import_module(entry_point.module_name)\r\n File \"C:\\Users\\dalth\\anaconda3\\envs\\spyder-env-manager\\lib\\importlib\\__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\n File \"C:\\Users\\dalth\\anaconda3\\envs\\spyder-env-manager\\lib\\site-packages\\spyder_line_profiler\\spyder\\plugin.py\", line 29, in <module>\r\n from spyder_line_profiler.spyder.confpage import SpyderLineProfilerConfigPage\r\n File \"C:\\Users\\dalth\\anaconda3\\envs\\spyder-env-manager\\lib\\site-packages\\spyder_line_profiler\\spyder\\confpage.py\", line 19, in <module>\r\n from .widgets import SpyderLineProfilerWidget\r\n File \"C:\\Users\\dalth\\anaconda3\\envs\\spyder-env-manager\\lib\\site-packages\\spyder_line_profiler\\spyder\\widgets.py\", line 38, in <module>\r\n from 
spyder.py3compat import to_text_string, pickle\r\nImportError: cannot import name 'pickle' from 'spyder.py3compat' (e:\\acer\\documentos\\spyder\\spyder\\spyder\\py3compat.py)\r\n```\r\n\r\n## Versions\r\n<!--- You can get this information from Help > About Spyder...\r\nor (if Spyder won't launch) the \"conda list\" command\r\nfrom the Anaconda Prompt/Terminal/command line. --->\r\n\r\n* Spyder version: 5.5.0.dev0 4a5d86ecd (conda)\r\n* Python version: 3.9.15 64-bit\r\n* Qt version: 5.15.2\r\n* PyQt5 version: 5.15.7\r\n* Operating System: Windows 10\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n\"\"\"\nspyder.py3compat\n----------------\n\nTransitional module providing compatibility functions intended to help\nmigrating from Python 2 to Python 3.\n\nThis module should be fully compatible with:\n * Python >=v2.6\n * Python 3\n\"\"\"\n\nimport operator\n\n\n#==============================================================================\n# Data types\n#==============================================================================\n# Python 3\nTEXT_TYPES = (str,)\nINT_TYPES = (int,)\n\n\n#==============================================================================\n# Strings\n#==============================================================================\ndef is_type_text_string(obj):\n \"\"\"Return True if `obj` is type text string, False if it is anything else,\n like an instance of a class that extends the basestring class.\"\"\"\n return type(obj) in [str, bytes]\n\ndef is_text_string(obj):\n \"\"\"Return True if `obj` is a text string, False if it is anything else,\n like binary data (Python 3) or QString (PyQt API #1)\"\"\"\n return isinstance(obj, str)\n\ndef is_binary_string(obj):\n \"\"\"Return True if `obj` is a binary string, False if it is anything else\"\"\"\n return isinstance(obj, bytes)\n\ndef is_string(obj):\n \"\"\"Return True if `obj` is a text or binary Python string object,\n False if it is anything else, like a QString (PyQt API #1)\"\"\"\n return is_text_string(obj) or is_binary_string(obj)\n\ndef to_text_string(obj, encoding=None):\n \"\"\"Convert `obj` to (unicode) text string\"\"\"\n if encoding is None:\n return str(obj)\n elif isinstance(obj, str):\n # In case this function is not used properly, this could happen\n return obj\n else:\n return str(obj, encoding)\n\ndef to_binary_string(obj, encoding='utf-8'):\n \"\"\"Convert `obj` to binary string (bytes)\"\"\"\n return bytes(obj, encoding)\n\n#==============================================================================\n# Misc.\n#==============================================================================\n# Python 3\n\ndef qbytearray_to_str(qba):\n \"\"\"Convert QByteArray object to str in a way compatible with Python 3\"\"\"\n return str(bytes(qba.toHex().data()).decode())\n\n# =============================================================================\n# Dict funcs\n# =============================================================================\ndef iterkeys(d, **kw):\n return iter(d.keys(**kw))\n\ndef itervalues(d, **kw):\n return iter(d.values(**kw))\n\ndef iteritems(d, **kw):\n return iter(d.items(**kw))\n\ndef iterlists(d, **kw):\n return iter(d.lists(**kw))\n\nviewkeys = operator.methodcaller(\"keys\")\n\nviewvalues = operator.methodcaller(\"values\")\n\nviewitems = operator.methodcaller(\"items\")\n\n\nif __name__ == '__main__':\n pass\n", 
"path": "spyder/py3compat.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n\"\"\"\nspyder.py3compat\n----------------\n\nTransitional module providing compatibility functions intended to help\nmigrating from Python 2 to Python 3.\n\"\"\"\n\nimport operator\nimport pickle # noqa. For compatibility with spyder-line-profiler\n\n\n#==============================================================================\n# Data types\n#==============================================================================\n# Python 3\nTEXT_TYPES = (str,)\nINT_TYPES = (int,)\n\n\n#==============================================================================\n# Strings\n#==============================================================================\ndef is_type_text_string(obj):\n \"\"\"Return True if `obj` is type text string, False if it is anything else,\n like an instance of a class that extends the basestring class.\"\"\"\n return type(obj) in [str, bytes]\n\ndef is_text_string(obj):\n \"\"\"Return True if `obj` is a text string, False if it is anything else,\n like binary data (Python 3) or QString (PyQt API #1)\"\"\"\n return isinstance(obj, str)\n\ndef is_binary_string(obj):\n \"\"\"Return True if `obj` is a binary string, False if it is anything else\"\"\"\n return isinstance(obj, bytes)\n\ndef is_string(obj):\n \"\"\"Return True if `obj` is a text or binary Python string object,\n False if it is anything else, like a QString (PyQt API #1)\"\"\"\n return is_text_string(obj) or is_binary_string(obj)\n\ndef to_text_string(obj, encoding=None):\n \"\"\"Convert `obj` to (unicode) text string\"\"\"\n if encoding is None:\n return str(obj)\n elif isinstance(obj, str):\n # In case this function is not used properly, this could happen\n return obj\n else:\n return str(obj, encoding)\n\ndef to_binary_string(obj, encoding='utf-8'):\n \"\"\"Convert `obj` to binary string (bytes)\"\"\"\n return bytes(obj, encoding)\n\n#==============================================================================\n# Misc.\n#==============================================================================\ndef qbytearray_to_str(qba):\n \"\"\"Convert QByteArray object to str in a way compatible with Python 3\"\"\"\n return str(bytes(qba.toHex().data()).decode())\n\n# =============================================================================\n# Dict funcs\n# =============================================================================\ndef iterkeys(d, **kw):\n return iter(d.keys(**kw))\n\ndef itervalues(d, **kw):\n return iter(d.values(**kw))\n\ndef iteritems(d, **kw):\n return iter(d.items(**kw))\n\ndef iterlists(d, **kw):\n return iter(d.lists(**kw))\n\nviewkeys = operator.methodcaller(\"keys\")\n\nviewvalues = operator.methodcaller(\"values\")\n\nviewitems = operator.methodcaller(\"items\")\n\n\nif __name__ == '__main__':\n pass\n", "path": "spyder/py3compat.py"}]} | 1,939 | 192 |
gh_patches_debug_33821 | rasdani/github-patches | git_diff | airctic__icevision-993 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SSD model doesn't work
## 🐛 Bug
SSD model doesn't work anymore. It seems related to MMDetection updates made here:
https://github.com/open-mmlab/mmdetection/pull/5789/files
Refer to discussion on our Discord forum:
https://discord.com/channels/735877944085446747/780951885485965352/920249646964670464
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/models/mmdet/utils.py`
Content:
```
1 __all__ = [
2 "MMDetBackboneConfig",
3 "mmdet_configs_path",
4 "param_groups",
5 "MMDetBackboneConfig",
6 "create_model_config",
7 ]
8
9 from icevision.imports import *
10 from icevision.utils import *
11 from icevision.backbones import BackboneConfig
12 from icevision.models.mmdet.download_configs import download_mmdet_configs
13 from mmdet.models.detectors import *
14 from mmcv import Config
15 from mmdet.models.backbones.ssd_vgg import SSDVGG
16 from mmdet.models.backbones.csp_darknet import CSPDarknet
17
18
19 mmdet_configs_path = download_mmdet_configs()
20
21
22 class MMDetBackboneConfig(BackboneConfig):
23 def __init__(self, model_name, config_path, weights_url):
24 self.model_name = model_name
25 self.config_path = config_path
26 self.weights_url = weights_url
27 self.pretrained: bool
28
29 def __call__(self, pretrained: bool = True) -> "MMDetBackboneConfig":
30 self.pretrained = pretrained
31 return self
32
33
34 def param_groups(model):
35 body = model.backbone
36
37 layers = []
38 if isinstance(body, SSDVGG):
39 layers += [body.features]
40 layers += [body.extra, body.l2_norm]
41 elif isinstance(body, CSPDarknet):
42 layers += [body.stem.conv.conv, body.stem.conv.bn]
43 layers += [body.stage1, body.stage2, body.stage3, body.stage4]
44 layers += [model.neck]
45 else:
46 layers += [nn.Sequential(body.conv1, body.bn1)]
47 layers += [getattr(body, l) for l in body.res_layers]
48 layers += [model.neck]
49
50 if isinstance(model, SingleStageDetector):
51 layers += [model.bbox_head]
52 elif isinstance(model, TwoStageDetector):
53 layers += [nn.Sequential(model.rpn_head, model.roi_head)]
54 else:
55 raise RuntimeError(
56 "{model} must inherit either from SingleStageDetector or TwoStageDetector class"
57 )
58
59 _param_groups = [list(layer.parameters()) for layer in layers]
60 check_all_model_params_in_groups2(model, _param_groups)
61 return _param_groups
62
63
64 def create_model_config(
65 backbone: MMDetBackboneConfig,
66 pretrained: bool = True,
67 checkpoints_path: Optional[Union[str, Path]] = "checkpoints",
68 force_download=False,
69 cfg_options=None,
70 ):
71
72 model_name = backbone.model_name
73 config_path = backbone.config_path
74 weights_url = backbone.weights_url
75
76 # download weights
77 weights_path = None
78 if pretrained and weights_url:
79 save_dir = Path(checkpoints_path) / model_name
80 save_dir.mkdir(exist_ok=True, parents=True)
81
82 fname = Path(weights_url).name
83 weights_path = save_dir / fname
84
85 if not weights_path.exists() or force_download:
86 download_url(url=weights_url, save_path=str(weights_path))
87
88 cfg = Config.fromfile(config_path)
89
90 if cfg_options is not None:
91 cfg.merge_from_dict(cfg_options)
92
93 return cfg, weights_path
94
```
Path: `icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py`
Content:
```
1 __all__ = [
2 "ssd300",
3 "ssd512",
4 ]
5
6 from icevision.imports import *
7 from icevision.models.mmdet.utils import *
8
9
10 class MMDetSSDBackboneConfig(MMDetBackboneConfig):
11 def __init__(self, **kwargs):
12 super().__init__(model_name="ssd", **kwargs)
13
14
15 base_config_path = mmdet_configs_path / "ssd"
16 base_weights_url = "http://download.openmmlab.com/mmdetection/v2.0/ssd"
17
18 ssd300 = MMDetSSDBackboneConfig(
19 config_path=base_config_path / "ssd300_coco.py",
20 weights_url=f"{base_weights_url}/ssd300_coco/ssd300_coco_20200307-a92d2092.pth",
21 )
22
23 ssd512 = MMDetSSDBackboneConfig(
24 config_path=base_config_path / "ssd512_coco.py",
25 weights_url=f"{base_weights_url}/ssd512_coco/ssd512_coco_20200308-038c5591.pth",
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py b/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py
--- a/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py
+++ b/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py
@@ -1,6 +1,7 @@
__all__ = [
"ssd300",
"ssd512",
+ "ssdlite_mobilenetv2",
]
from icevision.imports import *
@@ -17,10 +18,15 @@
ssd300 = MMDetSSDBackboneConfig(
config_path=base_config_path / "ssd300_coco.py",
- weights_url=f"{base_weights_url}/ssd300_coco/ssd300_coco_20200307-a92d2092.pth",
+ weights_url=f"{base_weights_url}/ssd300_coco/ssd300_coco_20210803_015428-d231a06e.pth",
)
ssd512 = MMDetSSDBackboneConfig(
config_path=base_config_path / "ssd512_coco.py",
- weights_url=f"{base_weights_url}/ssd512_coco/ssd512_coco_20200308-038c5591.pth",
+ weights_url=f"{base_weights_url}/ssd512_coco/ssd512_coco_20210803_022849-0a47a1ca.pth",
+)
+
+ssdlite_mobilenetv2 = MMDetSSDBackboneConfig(
+ config_path=base_config_path / "ssdlite_mobilenetv2_scratch_600e_coco.py",
+ weights_url=f"{base_weights_url}/ssd512_coco/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth",
)
diff --git a/icevision/models/mmdet/utils.py b/icevision/models/mmdet/utils.py
--- a/icevision/models/mmdet/utils.py
+++ b/icevision/models/mmdet/utils.py
@@ -35,18 +35,21 @@
body = model.backbone
layers = []
+
+ # add the backbone
if isinstance(body, SSDVGG):
layers += [body.features]
- layers += [body.extra, body.l2_norm]
elif isinstance(body, CSPDarknet):
layers += [body.stem.conv.conv, body.stem.conv.bn]
layers += [body.stage1, body.stage2, body.stage3, body.stage4]
- layers += [model.neck]
else:
layers += [nn.Sequential(body.conv1, body.bn1)]
layers += [getattr(body, l) for l in body.res_layers]
- layers += [model.neck]
+ # add the neck
+ layers += [model.neck]
+
+ # add the head
if isinstance(model, SingleStageDetector):
layers += [model.bbox_head]
elif isinstance(model, TwoStageDetector):
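The patch above converges on one rule: collect backbone layers per architecture, then always append the neck and the head, and assert that every parameter landed in a group (icevision's `check_all_model_params_in_groups2`). A sketch of that invariant with a stand-in model — the `sections` split is an assumption of this sketch, not the library API:

```python
import torch.nn as nn


def param_groups(model: nn.Module, sections: list) -> list:
    """Group parameters by section and verify the groups cover the model."""
    # Assumes the sections are disjoint; shared modules would double-count.
    groups = [list(section.parameters()) for section in sections]
    grouped = sum(len(group) for group in groups)
    total = len(list(model.parameters()))
    assert grouped == total, f"{total - grouped} parameter(s) missing from groups"
    return groups


# Stand-in model: two sections that together cover every parameter.
model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))
assert len(param_groups(model, [model[0], model[1]])) == 2
```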
| {"golden_diff": "diff --git a/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py b/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py\n--- a/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py\n+++ b/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py\n@@ -1,6 +1,7 @@\n __all__ = [\n \"ssd300\",\n \"ssd512\",\n+ \"ssdlite_mobilenetv2\",\n ]\n \n from icevision.imports import *\n@@ -17,10 +18,15 @@\n \n ssd300 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssd300_coco.py\",\n- weights_url=f\"{base_weights_url}/ssd300_coco/ssd300_coco_20200307-a92d2092.pth\",\n+ weights_url=f\"{base_weights_url}/ssd300_coco/ssd300_coco_20210803_015428-d231a06e.pth\",\n )\n \n ssd512 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssd512_coco.py\",\n- weights_url=f\"{base_weights_url}/ssd512_coco/ssd512_coco_20200308-038c5591.pth\",\n+ weights_url=f\"{base_weights_url}/ssd512_coco/ssd512_coco_20210803_022849-0a47a1ca.pth\",\n+)\n+\n+ssdlite_mobilenetv2 = MMDetSSDBackboneConfig(\n+ config_path=base_config_path / \"ssdlite_mobilenetv2_scratch_600e_coco.py\",\n+ weights_url=f\"{base_weights_url}/ssd512_coco/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth\",\n )\ndiff --git a/icevision/models/mmdet/utils.py b/icevision/models/mmdet/utils.py\n--- a/icevision/models/mmdet/utils.py\n+++ b/icevision/models/mmdet/utils.py\n@@ -35,18 +35,21 @@\n body = model.backbone\n \n layers = []\n+\n+ # add the backbone\n if isinstance(body, SSDVGG):\n layers += [body.features]\n- layers += [body.extra, body.l2_norm]\n elif isinstance(body, CSPDarknet):\n layers += [body.stem.conv.conv, body.stem.conv.bn]\n layers += [body.stage1, body.stage2, body.stage3, body.stage4]\n- layers += [model.neck]\n else:\n layers += [nn.Sequential(body.conv1, body.bn1)]\n layers += [getattr(body, l) for l in body.res_layers]\n- layers += [model.neck]\n \n+ # add the neck\n+ layers += [model.neck]\n+\n+ # add the head\n if isinstance(model, SingleStageDetector):\n layers += [model.bbox_head]\n elif isinstance(model, TwoStageDetector):\n", "issue": "SSD model doesn't work\n## \ud83d\udc1b Bug\r\n\r\nSSD model doesn't work anymore. 
It seems related to MMDetection updates made here:\r\nhttps://github.com/open-mmlab/mmdetection/pull/5789/files\r\n\r\nRefer to discussion on our Discord forum:\r\nhttps://discord.com/channels/735877944085446747/780951885485965352/920249646964670464\n", "before_files": [{"content": "__all__ = [\n \"MMDetBackboneConfig\",\n \"mmdet_configs_path\",\n \"param_groups\",\n \"MMDetBackboneConfig\",\n \"create_model_config\",\n]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.backbones import BackboneConfig\nfrom icevision.models.mmdet.download_configs import download_mmdet_configs\nfrom mmdet.models.detectors import *\nfrom mmcv import Config\nfrom mmdet.models.backbones.ssd_vgg import SSDVGG\nfrom mmdet.models.backbones.csp_darknet import CSPDarknet\n\n\nmmdet_configs_path = download_mmdet_configs()\n\n\nclass MMDetBackboneConfig(BackboneConfig):\n def __init__(self, model_name, config_path, weights_url):\n self.model_name = model_name\n self.config_path = config_path\n self.weights_url = weights_url\n self.pretrained: bool\n\n def __call__(self, pretrained: bool = True) -> \"MMDetBackboneConfig\":\n self.pretrained = pretrained\n return self\n\n\ndef param_groups(model):\n body = model.backbone\n\n layers = []\n if isinstance(body, SSDVGG):\n layers += [body.features]\n layers += [body.extra, body.l2_norm]\n elif isinstance(body, CSPDarknet):\n layers += [body.stem.conv.conv, body.stem.conv.bn]\n layers += [body.stage1, body.stage2, body.stage3, body.stage4]\n layers += [model.neck]\n else:\n layers += [nn.Sequential(body.conv1, body.bn1)]\n layers += [getattr(body, l) for l in body.res_layers]\n layers += [model.neck]\n\n if isinstance(model, SingleStageDetector):\n layers += [model.bbox_head]\n elif isinstance(model, TwoStageDetector):\n layers += [nn.Sequential(model.rpn_head, model.roi_head)]\n else:\n raise RuntimeError(\n \"{model} must inherit either from SingleStageDetector or TwoStageDetector class\"\n )\n\n _param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, _param_groups)\n return _param_groups\n\n\ndef create_model_config(\n backbone: MMDetBackboneConfig,\n pretrained: bool = True,\n checkpoints_path: Optional[Union[str, Path]] = \"checkpoints\",\n force_download=False,\n cfg_options=None,\n):\n\n model_name = backbone.model_name\n config_path = backbone.config_path\n weights_url = backbone.weights_url\n\n # download weights\n weights_path = None\n if pretrained and weights_url:\n save_dir = Path(checkpoints_path) / model_name\n save_dir.mkdir(exist_ok=True, parents=True)\n\n fname = Path(weights_url).name\n weights_path = save_dir / fname\n\n if not weights_path.exists() or force_download:\n download_url(url=weights_url, save_path=str(weights_path))\n\n cfg = Config.fromfile(config_path)\n\n if cfg_options is not None:\n cfg.merge_from_dict(cfg_options)\n\n return cfg, weights_path\n", "path": "icevision/models/mmdet/utils.py"}, {"content": "__all__ = [\n \"ssd300\",\n \"ssd512\",\n]\n\nfrom icevision.imports import *\nfrom icevision.models.mmdet.utils import *\n\n\nclass MMDetSSDBackboneConfig(MMDetBackboneConfig):\n def __init__(self, **kwargs):\n super().__init__(model_name=\"ssd\", **kwargs)\n\n\nbase_config_path = mmdet_configs_path / \"ssd\"\nbase_weights_url = \"http://download.openmmlab.com/mmdetection/v2.0/ssd\"\n\nssd300 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssd300_coco.py\",\n 
weights_url=f\"{base_weights_url}/ssd300_coco/ssd300_coco_20200307-a92d2092.pth\",\n)\n\nssd512 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssd512_coco.py\",\n weights_url=f\"{base_weights_url}/ssd512_coco/ssd512_coco_20200308-038c5591.pth\",\n)\n", "path": "icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py"}], "after_files": [{"content": "__all__ = [\n \"MMDetBackboneConfig\",\n \"mmdet_configs_path\",\n \"param_groups\",\n \"MMDetBackboneConfig\",\n \"create_model_config\",\n]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.backbones import BackboneConfig\nfrom icevision.models.mmdet.download_configs import download_mmdet_configs\nfrom mmdet.models.detectors import *\nfrom mmcv import Config\nfrom mmdet.models.backbones.ssd_vgg import SSDVGG\nfrom mmdet.models.backbones.csp_darknet import CSPDarknet\n\n\nmmdet_configs_path = download_mmdet_configs()\n\n\nclass MMDetBackboneConfig(BackboneConfig):\n def __init__(self, model_name, config_path, weights_url):\n self.model_name = model_name\n self.config_path = config_path\n self.weights_url = weights_url\n self.pretrained: bool\n\n def __call__(self, pretrained: bool = True) -> \"MMDetBackboneConfig\":\n self.pretrained = pretrained\n return self\n\n\ndef param_groups(model):\n body = model.backbone\n\n layers = []\n\n # add the backbone\n if isinstance(body, SSDVGG):\n layers += [body.features]\n elif isinstance(body, CSPDarknet):\n layers += [body.stem.conv.conv, body.stem.conv.bn]\n layers += [body.stage1, body.stage2, body.stage3, body.stage4]\n else:\n layers += [nn.Sequential(body.conv1, body.bn1)]\n layers += [getattr(body, l) for l in body.res_layers]\n\n # add the neck\n layers += [model.neck]\n\n # add the head\n if isinstance(model, SingleStageDetector):\n layers += [model.bbox_head]\n elif isinstance(model, TwoStageDetector):\n layers += [nn.Sequential(model.rpn_head, model.roi_head)]\n else:\n raise RuntimeError(\n \"{model} must inherit either from SingleStageDetector or TwoStageDetector class\"\n )\n\n _param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, _param_groups)\n return _param_groups\n\n\ndef create_model_config(\n backbone: MMDetBackboneConfig,\n pretrained: bool = True,\n checkpoints_path: Optional[Union[str, Path]] = \"checkpoints\",\n force_download=False,\n cfg_options=None,\n):\n\n model_name = backbone.model_name\n config_path = backbone.config_path\n weights_url = backbone.weights_url\n\n # download weights\n weights_path = None\n if pretrained and weights_url:\n save_dir = Path(checkpoints_path) / model_name\n save_dir.mkdir(exist_ok=True, parents=True)\n\n fname = Path(weights_url).name\n weights_path = save_dir / fname\n\n if not weights_path.exists() or force_download:\n download_url(url=weights_url, save_path=str(weights_path))\n\n cfg = Config.fromfile(config_path)\n\n if cfg_options is not None:\n cfg.merge_from_dict(cfg_options)\n\n return cfg, weights_path\n", "path": "icevision/models/mmdet/utils.py"}, {"content": "__all__ = [\n \"ssd300\",\n \"ssd512\",\n \"ssdlite_mobilenetv2\",\n]\n\nfrom icevision.imports import *\nfrom icevision.models.mmdet.utils import *\n\n\nclass MMDetSSDBackboneConfig(MMDetBackboneConfig):\n def __init__(self, **kwargs):\n super().__init__(model_name=\"ssd\", **kwargs)\n\n\nbase_config_path = mmdet_configs_path / \"ssd\"\nbase_weights_url = \"http://download.openmmlab.com/mmdetection/v2.0/ssd\"\n\nssd300 = MMDetSSDBackboneConfig(\n config_path=base_config_path / 
\"ssd300_coco.py\",\n weights_url=f\"{base_weights_url}/ssd300_coco/ssd300_coco_20210803_015428-d231a06e.pth\",\n)\n\nssd512 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssd512_coco.py\",\n weights_url=f\"{base_weights_url}/ssd512_coco/ssd512_coco_20210803_022849-0a47a1ca.pth\",\n)\n\nssdlite_mobilenetv2 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssdlite_mobilenetv2_scratch_600e_coco.py\",\n weights_url=f\"{base_weights_url}/ssd512_coco/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth\",\n)\n", "path": "icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py"}]} | 1,596 | 771 |
gh_patches_debug_41100 | rasdani/github-patches | git_diff | buildbot__buildbot-422 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expose builder.tags to web templates.
This tiny PR exposes the builders' "tags" attribute to the templates, allowing them to be customized based on those tags.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/steps/master.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 import os, types, re
17 from twisted.python import runtime
18 from twisted.internet import reactor
19 from buildbot.process.buildstep import BuildStep
20 from buildbot.process.buildstep import SUCCESS, FAILURE
21 from twisted.internet.protocol import ProcessProtocol
22
23 class MasterShellCommand(BuildStep):
24 """
25 Run a shell command locally - on the buildmaster. The shell command
26 COMMAND is specified just as for a RemoteShellCommand. Note that extra
27 logfiles are not supported.
28 """
29 name='MasterShellCommand'
30 description='Running'
31 descriptionDone='Ran'
32 descriptionSuffix = None
33 renderables = [ 'command', 'env', 'description', 'descriptionDone', 'descriptionSuffix' ]
34 haltOnFailure = True
35 flunkOnFailure = True
36
37 def __init__(self, command,
38 description=None, descriptionDone=None, descriptionSuffix=None,
39 env=None, path=None, usePTY=0,
40 **kwargs):
41 BuildStep.__init__(self, **kwargs)
42
43 self.command=command
44 if description:
45 self.description = description
46 if isinstance(self.description, str):
47 self.description = [self.description]
48 if descriptionDone:
49 self.descriptionDone = descriptionDone
50 if isinstance(self.descriptionDone, str):
51 self.descriptionDone = [self.descriptionDone]
52 if descriptionSuffix:
53 self.descriptionSuffix = descriptionSuffix
54 if isinstance(self.descriptionSuffix, str):
55 self.descriptionSuffix = [self.descriptionSuffix]
56 self.env=env
57 self.path=path
58 self.usePTY=usePTY
59
60 class LocalPP(ProcessProtocol):
61 def __init__(self, step):
62 self.step = step
63
64 def outReceived(self, data):
65 self.step.stdio_log.addStdout(data)
66
67 def errReceived(self, data):
68 self.step.stdio_log.addStderr(data)
69
70 def processEnded(self, status_object):
71 self.step.stdio_log.addHeader("exit status %d\n" % status_object.value.exitCode)
72 self.step.processEnded(status_object)
73
74 def start(self):
75 # render properties
76 command = self.command
77 # set up argv
78 if type(command) in types.StringTypes:
79 if runtime.platformType == 'win32':
80 argv = os.environ['COMSPEC'].split() # allow %COMSPEC% to have args
81 if '/c' not in argv: argv += ['/c']
82 argv += [command]
83 else:
84 # for posix, use /bin/sh. for other non-posix, well, doesn't
85 # hurt to try
86 argv = ['/bin/sh', '-c', command]
87 else:
88 if runtime.platformType == 'win32':
89 argv = os.environ['COMSPEC'].split() # allow %COMSPEC% to have args
90 if '/c' not in argv: argv += ['/c']
91 argv += list(command)
92 else:
93 argv = command
94
95 self.stdio_log = stdio_log = self.addLog("stdio")
96
97 if type(command) in types.StringTypes:
98 stdio_log.addHeader(command.strip() + "\n\n")
99 else:
100 stdio_log.addHeader(" ".join(command) + "\n\n")
101 stdio_log.addHeader("** RUNNING ON BUILDMASTER **\n")
102 stdio_log.addHeader(" in dir %s\n" % os.getcwd())
103 stdio_log.addHeader(" argv: %s\n" % (argv,))
104 self.step_status.setText(self.describe())
105
106 if self.env is None:
107 env = os.environ
108 else:
109 assert isinstance(self.env, dict)
110 env = self.env
111
112 # do substitution on variable values matching pattern: ${name}
113 p = re.compile('\${([0-9a-zA-Z_]*)}')
114 def subst(match):
115 return os.environ.get(match.group(1), "")
116 newenv = {}
117 for key in env.keys():
118 if env[key] is not None:
119 newenv[key] = p.sub(subst, env[key])
120 env = newenv
121 stdio_log.addHeader(" env: %r\n" % (env,))
122
123 # TODO add a timeout?
124 reactor.spawnProcess(self.LocalPP(self), argv[0], argv,
125 path=self.path, usePTY=self.usePTY, env=env )
126 # (the LocalPP object will call processEnded for us)
127
128 def processEnded(self, status_object):
129 if status_object.value.exitCode != 0:
130 self.descriptionDone = ["failed (%d)" % status_object.value.exitCode]
131 self.step_status.setText(self.describe(done=True))
132 self.finished(FAILURE)
133 else:
134 self.step_status.setText(self.describe(done=True))
135 self.finished(SUCCESS)
136
137 def describe(self, done=False):
138 desc = self.descriptionDone if done else self.description
139 if self.descriptionSuffix:
140 desc = desc[:]
141 desc.extend(self.descriptionSuffix)
142 return desc
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/master/buildbot/steps/master.py b/master/buildbot/steps/master.py
--- a/master/buildbot/steps/master.py
+++ b/master/buildbot/steps/master.py
@@ -18,6 +18,7 @@
from twisted.internet import reactor
from buildbot.process.buildstep import BuildStep
from buildbot.process.buildstep import SUCCESS, FAILURE
+from twisted.internet import error
from twisted.internet.protocol import ProcessProtocol
class MasterShellCommand(BuildStep):
@@ -36,7 +37,7 @@
def __init__(self, command,
description=None, descriptionDone=None, descriptionSuffix=None,
- env=None, path=None, usePTY=0,
+ env=None, path=None, usePTY=0, interruptSignal="KILL",
**kwargs):
BuildStep.__init__(self, **kwargs)
@@ -56,6 +57,7 @@
self.env=env
self.path=path
self.usePTY=usePTY
+ self.interruptSignal = interruptSignal
class LocalPP(ProcessProtocol):
def __init__(self, step):
@@ -68,7 +70,10 @@
self.step.stdio_log.addStderr(data)
def processEnded(self, status_object):
- self.step.stdio_log.addHeader("exit status %d\n" % status_object.value.exitCode)
+ if status_object.value.exitCode is not None:
+ self.step.stdio_log.addHeader("exit status %d\n" % status_object.value.exitCode)
+ if status_object.value.signal is not None:
+ self.step.stdio_log.addHeader("signal %s\n" % status_object.value.signal)
self.step.processEnded(status_object)
def start(self):
@@ -121,12 +126,16 @@
stdio_log.addHeader(" env: %r\n" % (env,))
# TODO add a timeout?
- reactor.spawnProcess(self.LocalPP(self), argv[0], argv,
+ self.process = reactor.spawnProcess(self.LocalPP(self), argv[0], argv,
path=self.path, usePTY=self.usePTY, env=env )
# (the LocalPP object will call processEnded for us)
def processEnded(self, status_object):
- if status_object.value.exitCode != 0:
+ if status_object.value.signal is not None:
+ self.descriptionDone = ["killed (%s)" % status_object.value.signal]
+ self.step_status.setText(self.describe(done=True))
+ self.finished(FAILURE)
+ elif status_object.value.exitCode != 0:
self.descriptionDone = ["failed (%d)" % status_object.value.exitCode]
self.step_status.setText(self.describe(done=True))
self.finished(FAILURE)
@@ -140,3 +149,12 @@
desc = desc[:]
desc.extend(self.descriptionSuffix)
return desc
+
+ def interrupt(self, reason):
+ try:
+ self.process.signalProcess(self.interruptSignal)
+ except KeyError: # Process not started yet
+ pass
+ except error.ProcessExitedAlready:
+ pass
+ BuildStep.interrupt(self, reason)
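The new `interrupt` above guards two races when signalling the child: the process may not have started yet (`KeyError`) or may have already exited (`ProcessExitedAlready`). A sketch of that pattern in isolation — `process` is assumed to be the `IProcessTransport` returned by `reactor.spawnProcess`:

```python
from twisted.internet import error


def interrupt_process(process, signal_name="KILL"):
    """Signal a spawned child, tolerating the races the patch guards against."""
    try:
        process.signalProcess(signal_name)
    except KeyError:
        pass  # process not started yet (mirrors the patch's comment)
    except error.ProcessExitedAlready:
        pass  # child finished before the signal landed
```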
| {"golden_diff": "diff --git a/master/buildbot/steps/master.py b/master/buildbot/steps/master.py\n--- a/master/buildbot/steps/master.py\n+++ b/master/buildbot/steps/master.py\n@@ -18,6 +18,7 @@\n from twisted.internet import reactor\n from buildbot.process.buildstep import BuildStep\n from buildbot.process.buildstep import SUCCESS, FAILURE\n+from twisted.internet import error\n from twisted.internet.protocol import ProcessProtocol\n \n class MasterShellCommand(BuildStep):\n@@ -36,7 +37,7 @@\n \n def __init__(self, command,\n description=None, descriptionDone=None, descriptionSuffix=None,\n- env=None, path=None, usePTY=0,\n+ env=None, path=None, usePTY=0, interruptSignal=\"KILL\",\n **kwargs):\n BuildStep.__init__(self, **kwargs)\n \n@@ -56,6 +57,7 @@\n self.env=env\n self.path=path\n self.usePTY=usePTY\n+ self.interruptSignal = interruptSignal\n \n class LocalPP(ProcessProtocol):\n def __init__(self, step):\n@@ -68,7 +70,10 @@\n self.step.stdio_log.addStderr(data)\n \n def processEnded(self, status_object):\n- self.step.stdio_log.addHeader(\"exit status %d\\n\" % status_object.value.exitCode)\n+ if status_object.value.exitCode is not None:\n+ self.step.stdio_log.addHeader(\"exit status %d\\n\" % status_object.value.exitCode)\n+ if status_object.value.signal is not None:\n+ self.step.stdio_log.addHeader(\"signal %s\\n\" % status_object.value.signal)\n self.step.processEnded(status_object)\n \n def start(self):\n@@ -121,12 +126,16 @@\n stdio_log.addHeader(\" env: %r\\n\" % (env,))\n \n # TODO add a timeout?\n- reactor.spawnProcess(self.LocalPP(self), argv[0], argv,\n+ self.process = reactor.spawnProcess(self.LocalPP(self), argv[0], argv,\n path=self.path, usePTY=self.usePTY, env=env )\n # (the LocalPP object will call processEnded for us)\n \n def processEnded(self, status_object):\n- if status_object.value.exitCode != 0:\n+ if status_object.value.signal is not None:\n+ self.descriptionDone = [\"killed (%s)\" % status_object.value.signal]\n+ self.step_status.setText(self.describe(done=True))\n+ self.finished(FAILURE)\n+ elif status_object.value.exitCode != 0:\n self.descriptionDone = [\"failed (%d)\" % status_object.value.exitCode]\n self.step_status.setText(self.describe(done=True))\n self.finished(FAILURE)\n@@ -140,3 +149,12 @@\n desc = desc[:]\n desc.extend(self.descriptionSuffix)\n return desc\n+\n+ def interrupt(self, reason):\n+ try:\n+ self.process.signalProcess(self.interruptSignal)\n+ except KeyError: # Process not started yet\n+ pass\n+ except error.ProcessExitedAlready:\n+ pass\n+ BuildStep.interrupt(self, reason)\n", "issue": "Expose builder.tags to web templates.\nThis tiny PR exposes the builders' \"tags\" attribute to the templates, allowing me to customize the templates using that feature.\n\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport os, types, re\nfrom twisted.python import runtime\nfrom twisted.internet import reactor\nfrom buildbot.process.buildstep import BuildStep\nfrom buildbot.process.buildstep import SUCCESS, FAILURE\nfrom twisted.internet.protocol import ProcessProtocol\n\nclass MasterShellCommand(BuildStep):\n \"\"\"\n Run a shell command locally - on the buildmaster. The shell command\n COMMAND is specified just as for a RemoteShellCommand. Note that extra\n logfiles are not supported.\n \"\"\"\n name='MasterShellCommand'\n description='Running'\n descriptionDone='Ran'\n descriptionSuffix = None\n renderables = [ 'command', 'env', 'description', 'descriptionDone', 'descriptionSuffix' ]\n haltOnFailure = True\n flunkOnFailure = True\n\n def __init__(self, command,\n description=None, descriptionDone=None, descriptionSuffix=None,\n env=None, path=None, usePTY=0,\n **kwargs):\n BuildStep.__init__(self, **kwargs)\n\n self.command=command\n if description:\n self.description = description\n if isinstance(self.description, str):\n self.description = [self.description]\n if descriptionDone:\n self.descriptionDone = descriptionDone\n if isinstance(self.descriptionDone, str):\n self.descriptionDone = [self.descriptionDone]\n if descriptionSuffix:\n self.descriptionSuffix = descriptionSuffix\n if isinstance(self.descriptionSuffix, str):\n self.descriptionSuffix = [self.descriptionSuffix]\n self.env=env\n self.path=path\n self.usePTY=usePTY\n\n class LocalPP(ProcessProtocol):\n def __init__(self, step):\n self.step = step\n\n def outReceived(self, data):\n self.step.stdio_log.addStdout(data)\n\n def errReceived(self, data):\n self.step.stdio_log.addStderr(data)\n\n def processEnded(self, status_object):\n self.step.stdio_log.addHeader(\"exit status %d\\n\" % status_object.value.exitCode)\n self.step.processEnded(status_object)\n\n def start(self):\n # render properties\n command = self.command\n # set up argv\n if type(command) in types.StringTypes:\n if runtime.platformType == 'win32':\n argv = os.environ['COMSPEC'].split() # allow %COMSPEC% to have args\n if '/c' not in argv: argv += ['/c']\n argv += [command]\n else:\n # for posix, use /bin/sh. 
for other non-posix, well, doesn't\n # hurt to try\n argv = ['/bin/sh', '-c', command]\n else:\n if runtime.platformType == 'win32':\n argv = os.environ['COMSPEC'].split() # allow %COMSPEC% to have args\n if '/c' not in argv: argv += ['/c']\n argv += list(command)\n else:\n argv = command\n\n self.stdio_log = stdio_log = self.addLog(\"stdio\")\n\n if type(command) in types.StringTypes:\n stdio_log.addHeader(command.strip() + \"\\n\\n\")\n else:\n stdio_log.addHeader(\" \".join(command) + \"\\n\\n\")\n stdio_log.addHeader(\"** RUNNING ON BUILDMASTER **\\n\")\n stdio_log.addHeader(\" in dir %s\\n\" % os.getcwd())\n stdio_log.addHeader(\" argv: %s\\n\" % (argv,))\n self.step_status.setText(self.describe())\n\n if self.env is None:\n env = os.environ\n else:\n assert isinstance(self.env, dict)\n env = self.env\n\n # do substitution on variable values matching pattern: ${name}\n p = re.compile('\\${([0-9a-zA-Z_]*)}')\n def subst(match):\n return os.environ.get(match.group(1), \"\")\n newenv = {}\n for key in env.keys():\n if env[key] is not None:\n newenv[key] = p.sub(subst, env[key])\n env = newenv\n stdio_log.addHeader(\" env: %r\\n\" % (env,))\n\n # TODO add a timeout?\n reactor.spawnProcess(self.LocalPP(self), argv[0], argv,\n path=self.path, usePTY=self.usePTY, env=env )\n # (the LocalPP object will call processEnded for us)\n\n def processEnded(self, status_object):\n if status_object.value.exitCode != 0:\n self.descriptionDone = [\"failed (%d)\" % status_object.value.exitCode]\n self.step_status.setText(self.describe(done=True))\n self.finished(FAILURE)\n else:\n self.step_status.setText(self.describe(done=True))\n self.finished(SUCCESS)\n\n def describe(self, done=False):\n desc = self.descriptionDone if done else self.description\n if self.descriptionSuffix:\n desc = desc[:]\n desc.extend(self.descriptionSuffix)\n return desc\n", "path": "master/buildbot/steps/master.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport os, types, re\nfrom twisted.python import runtime\nfrom twisted.internet import reactor\nfrom buildbot.process.buildstep import BuildStep\nfrom buildbot.process.buildstep import SUCCESS, FAILURE\nfrom twisted.internet import error\nfrom twisted.internet.protocol import ProcessProtocol\n\nclass MasterShellCommand(BuildStep):\n \"\"\"\n Run a shell command locally - on the buildmaster. The shell command\n COMMAND is specified just as for a RemoteShellCommand. 
Note that extra\n logfiles are not supported.\n \"\"\"\n name='MasterShellCommand'\n description='Running'\n descriptionDone='Ran'\n descriptionSuffix = None\n renderables = [ 'command', 'env', 'description', 'descriptionDone', 'descriptionSuffix' ]\n haltOnFailure = True\n flunkOnFailure = True\n\n def __init__(self, command,\n description=None, descriptionDone=None, descriptionSuffix=None,\n env=None, path=None, usePTY=0, interruptSignal=\"KILL\",\n **kwargs):\n BuildStep.__init__(self, **kwargs)\n\n self.command=command\n if description:\n self.description = description\n if isinstance(self.description, str):\n self.description = [self.description]\n if descriptionDone:\n self.descriptionDone = descriptionDone\n if isinstance(self.descriptionDone, str):\n self.descriptionDone = [self.descriptionDone]\n if descriptionSuffix:\n self.descriptionSuffix = descriptionSuffix\n if isinstance(self.descriptionSuffix, str):\n self.descriptionSuffix = [self.descriptionSuffix]\n self.env=env\n self.path=path\n self.usePTY=usePTY\n self.interruptSignal = interruptSignal\n\n class LocalPP(ProcessProtocol):\n def __init__(self, step):\n self.step = step\n\n def outReceived(self, data):\n self.step.stdio_log.addStdout(data)\n\n def errReceived(self, data):\n self.step.stdio_log.addStderr(data)\n\n def processEnded(self, status_object):\n if status_object.value.exitCode is not None:\n self.step.stdio_log.addHeader(\"exit status %d\\n\" % status_object.value.exitCode)\n if status_object.value.signal is not None:\n self.step.stdio_log.addHeader(\"signal %s\\n\" % status_object.value.signal)\n self.step.processEnded(status_object)\n\n def start(self):\n # render properties\n command = self.command\n # set up argv\n if type(command) in types.StringTypes:\n if runtime.platformType == 'win32':\n argv = os.environ['COMSPEC'].split() # allow %COMSPEC% to have args\n if '/c' not in argv: argv += ['/c']\n argv += [command]\n else:\n # for posix, use /bin/sh. 
for other non-posix, well, doesn't\n # hurt to try\n argv = ['/bin/sh', '-c', command]\n else:\n if runtime.platformType == 'win32':\n argv = os.environ['COMSPEC'].split() # allow %COMSPEC% to have args\n if '/c' not in argv: argv += ['/c']\n argv += list(command)\n else:\n argv = command\n\n self.stdio_log = stdio_log = self.addLog(\"stdio\")\n\n if type(command) in types.StringTypes:\n stdio_log.addHeader(command.strip() + \"\\n\\n\")\n else:\n stdio_log.addHeader(\" \".join(command) + \"\\n\\n\")\n stdio_log.addHeader(\"** RUNNING ON BUILDMASTER **\\n\")\n stdio_log.addHeader(\" in dir %s\\n\" % os.getcwd())\n stdio_log.addHeader(\" argv: %s\\n\" % (argv,))\n self.step_status.setText(self.describe())\n\n if self.env is None:\n env = os.environ\n else:\n assert isinstance(self.env, dict)\n env = self.env\n\n # do substitution on variable values matching pattern: ${name}\n p = re.compile('\\${([0-9a-zA-Z_]*)}')\n def subst(match):\n return os.environ.get(match.group(1), \"\")\n newenv = {}\n for key in env.keys():\n if env[key] is not None:\n newenv[key] = p.sub(subst, env[key])\n env = newenv\n stdio_log.addHeader(\" env: %r\\n\" % (env,))\n\n # TODO add a timeout?\n self.process = reactor.spawnProcess(self.LocalPP(self), argv[0], argv,\n path=self.path, usePTY=self.usePTY, env=env )\n # (the LocalPP object will call processEnded for us)\n\n def processEnded(self, status_object):\n if status_object.value.signal is not None:\n self.descriptionDone = [\"killed (%s)\" % status_object.value.signal]\n self.step_status.setText(self.describe(done=True))\n self.finished(FAILURE)\n elif status_object.value.exitCode != 0:\n self.descriptionDone = [\"failed (%d)\" % status_object.value.exitCode]\n self.step_status.setText(self.describe(done=True))\n self.finished(FAILURE)\n else:\n self.step_status.setText(self.describe(done=True))\n self.finished(SUCCESS)\n\n def describe(self, done=False):\n desc = self.descriptionDone if done else self.description\n if self.descriptionSuffix:\n desc = desc[:]\n desc.extend(self.descriptionSuffix)\n return desc\n\n def interrupt(self, reason):\n try:\n self.process.signalProcess(self.interruptSignal)\n except KeyError: # Process not started yet\n pass\n except error.ProcessExitedAlready:\n pass\n BuildStep.interrupt(self, reason)\n", "path": "master/buildbot/steps/master.py"}]} | 1,829 | 689 |
gh_patches_debug_2569 | rasdani/github-patches | git_diff | ephios-dev__ephios-1244 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API: `/api/users/by_email` returns 404 error for email addresses with dots before the @
**Describe the bug**
Requests to `/api/users/by_email/<email>/` return a 404 for existing users whose address contains a dot before the @.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to `[ephios-url]/api/users/by_email/[email protected]/`
**Expected behaviour**
Assuming the user exists, the information about the user should be returned.
**Screenshots**
Instead, the page returns a 404 (see screenshot below).
<img width="1511" alt="Screenshot 2024-03-27 at 18 54 08" src="https://github.com/ephios-dev/ephios/assets/2546622/1383feee-28b0-4825-a31e-c39e2cc3f2ab">
**Environment**
State which device, operating system, browser and browser version you are using.
MacOS 14.2.1 (23C71), Version 17.2.1 (19617.1.17.11.12)
**Additional context**
* The problem does not appear for the test emails `usaaa@localhost/`, `admin@localhost/` or `[email protected]`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/api/views/users.py`
Content:
```
1 from django.db.models import Q
2 from django.utils import timezone
3 from django_filters.rest_framework import DjangoFilterBackend
4 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
5 from rest_framework import viewsets
6 from rest_framework.exceptions import PermissionDenied
7 from rest_framework.fields import SerializerMethodField
8 from rest_framework.filters import SearchFilter
9 from rest_framework.generics import RetrieveAPIView
10 from rest_framework.mixins import RetrieveModelMixin
11 from rest_framework.permissions import DjangoObjectPermissions
12 from rest_framework.relations import SlugRelatedField
13 from rest_framework.schemas.openapi import AutoSchema
14 from rest_framework.serializers import ModelSerializer
15 from rest_framework.viewsets import GenericViewSet
16 from rest_framework_guardian.filters import ObjectPermissionsFilter
17
18 from ephios.api.views.events import ParticipationSerializer
19 from ephios.core.models import LocalParticipation, Qualification, UserProfile
20 from ephios.core.services.qualification import collect_all_included_qualifications
21
22
23 class QualificationSerializer(ModelSerializer):
24 category = SlugRelatedField(slug_field="uuid", read_only=True)
25 includes = SerializerMethodField()
26
27 class Meta:
28 model = Qualification
29 fields = [
30 "uuid",
31 "title",
32 "abbreviation",
33 "category",
34 "includes",
35 ]
36
37 def get_includes(self, obj):
38 return [q.uuid for q in collect_all_included_qualifications(obj.includes.all())]
39
40
41 class UserProfileSerializer(ModelSerializer):
42 qualifications = SerializerMethodField()
43
44 class Meta:
45 model = UserProfile
46 fields = [
47 "id",
48 "display_name",
49 "date_of_birth",
50 "email",
51 "qualifications",
52 ]
53
54 def get_qualifications(self, obj):
55 return QualificationSerializer(
56 Qualification.objects.filter(
57 Q(grants__user=obj)
58 & (Q(grants__expires__gte=timezone.now()) | Q(grants__expires__isnull=True))
59 ),
60 many=True,
61 ).data
62
63
64 class UserProfileMeView(RetrieveAPIView):
65 serializer_class = UserProfileSerializer
66 queryset = UserProfile.objects.all()
67 permission_classes = [IsAuthenticatedOrTokenHasScope]
68 required_scopes = ["ME_READ"]
69 schema = AutoSchema(operation_id_base="OwnUserProfile")
70
71 def get_object(self):
72 if self.request.user is None:
73 raise PermissionDenied()
74 return self.request.user
75
76
77 class UserViewSet(viewsets.ReadOnlyModelViewSet):
78 serializer_class = UserProfileSerializer
79 queryset = UserProfile.objects.all()
80 permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]
81 required_scopes = ["CONFIDENTIAL_READ"]
82 search_fields = ["display_name", "email"]
83
84 filter_backends = [
85 DjangoFilterBackend,
86 SearchFilter,
87 ObjectPermissionsFilter,
88 ]
89
90
91 class UserByMailView(RetrieveModelMixin, GenericViewSet):
92 serializer_class = UserProfileSerializer
93 queryset = UserProfile.objects.all()
94 permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]
95 required_scopes = ["CONFIDENTIAL_READ"]
96 filter_backends = [ObjectPermissionsFilter]
97 lookup_url_kwarg = "email"
98 lookup_field = "email"
99 schema = AutoSchema(operation_id_base="UserProfileByMail")
100
101
102 class UserParticipationView(viewsets.ReadOnlyModelViewSet):
103 serializer_class = ParticipationSerializer
104 permission_classes = [IsAuthenticatedOrTokenHasScope]
105 filter_backends = [ObjectPermissionsFilter, DjangoFilterBackend]
106 filterset_fields = ["state"]
107 required_scopes = ["CONFIDENTIAL_READ"]
108
109 def get_queryset(self):
110 return LocalParticipation.objects.filter(user=self.kwargs.get("user"))
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ephios/api/views/users.py b/ephios/api/views/users.py
--- a/ephios/api/views/users.py
+++ b/ephios/api/views/users.py
@@ -96,6 +96,7 @@
filter_backends = [ObjectPermissionsFilter]
lookup_url_kwarg = "email"
lookup_field = "email"
+ lookup_value_regex = "[^/]+" # customize to allow dots (".") in the lookup value
schema = AutoSchema(operation_id_base="UserProfileByMail")
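Why one line fixes the 404: Django REST framework's router builds the detail URL from the viewset's `lookup_value_regex`, which defaults to `[^/.]+` — a pattern that rejects any dot, so dotted email addresses never match the URL and the view is never reached. Widening it to `[^/]+`, as the patch does, admits dots. A self-contained check (the address is illustrative):

```python
import re

ROUTER_DEFAULT = re.compile(r"^[^/.]+$")  # DRF's default lookup_value_regex
PATCHED = re.compile(r"^[^/]+$")          # value the patch sets on the viewset

email = "jane.doe@example.com"            # illustrative address only
assert ROUTER_DEFAULT.match(email) is None   # no URL match -> 404
assert PATCHED.match(email) is not None      # dots now allowed in the lookup
```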
| {"golden_diff": "diff --git a/ephios/api/views/users.py b/ephios/api/views/users.py\n--- a/ephios/api/views/users.py\n+++ b/ephios/api/views/users.py\n@@ -96,6 +96,7 @@\n filter_backends = [ObjectPermissionsFilter]\n lookup_url_kwarg = \"email\"\n lookup_field = \"email\"\n+ lookup_value_regex = \"[^/]+\" # customize to allow dots (\".\") in the lookup value\n schema = AutoSchema(operation_id_base=\"UserProfileByMail\")\n", "issue": "API: `/api/users/by_email` returns 404 error for email addresses with dots before the @\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to `[ephios-url]/api/users/by_email/[email protected]/`\r\n\r\n**Expected behaviour**\r\nAssuming the user exists, the information about the user should be returned.\r\n\r\n**Screenshots**\r\nInstead the page 404s.\r\n\r\n<img width=\"1511\" alt=\"Screenshot 2024-03-27 at 18 54 08\" src=\"https://github.com/ephios-dev/ephios/assets/2546622/1383feee-28b0-4825-a31e-c39e2cc3f2ab\">\r\n\r\n**Environment**\r\nState which device, operating system, browser and browser version you are using.\r\nMacOS 14.2.1 (23C71), Version 17.2.1 (19617.1.17.11.12)\r\n\r\n**Additional context**\r\n* The problem does not appear for the test emails `usaaa@localhost/`, `admin@localhost/` or `[email protected]`.\n", "before_files": [{"content": "from django.db.models import Q\nfrom django.utils import timezone\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework import viewsets\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.fields import SerializerMethodField\nfrom rest_framework.filters import SearchFilter\nfrom rest_framework.generics import RetrieveAPIView\nfrom rest_framework.mixins import RetrieveModelMixin\nfrom rest_framework.permissions import DjangoObjectPermissions\nfrom rest_framework.relations import SlugRelatedField\nfrom rest_framework.schemas.openapi import AutoSchema\nfrom rest_framework.serializers import ModelSerializer\nfrom rest_framework.viewsets import GenericViewSet\nfrom rest_framework_guardian.filters import ObjectPermissionsFilter\n\nfrom ephios.api.views.events import ParticipationSerializer\nfrom ephios.core.models import LocalParticipation, Qualification, UserProfile\nfrom ephios.core.services.qualification import collect_all_included_qualifications\n\n\nclass QualificationSerializer(ModelSerializer):\n category = SlugRelatedField(slug_field=\"uuid\", read_only=True)\n includes = SerializerMethodField()\n\n class Meta:\n model = Qualification\n fields = [\n \"uuid\",\n \"title\",\n \"abbreviation\",\n \"category\",\n \"includes\",\n ]\n\n def get_includes(self, obj):\n return [q.uuid for q in collect_all_included_qualifications(obj.includes.all())]\n\n\nclass UserProfileSerializer(ModelSerializer):\n qualifications = SerializerMethodField()\n\n class Meta:\n model = UserProfile\n fields = [\n \"id\",\n \"display_name\",\n \"date_of_birth\",\n \"email\",\n \"qualifications\",\n ]\n\n def get_qualifications(self, obj):\n return QualificationSerializer(\n Qualification.objects.filter(\n Q(grants__user=obj)\n & (Q(grants__expires__gte=timezone.now()) | Q(grants__expires__isnull=True))\n ),\n many=True,\n ).data\n\n\nclass UserProfileMeView(RetrieveAPIView):\n serializer_class = UserProfileSerializer\n queryset = UserProfile.objects.all()\n permission_classes = 
[IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"ME_READ\"]\n schema = AutoSchema(operation_id_base=\"OwnUserProfile\")\n\n def get_object(self):\n if self.request.user is None:\n raise PermissionDenied()\n return self.request.user\n\n\nclass UserViewSet(viewsets.ReadOnlyModelViewSet):\n serializer_class = UserProfileSerializer\n queryset = UserProfile.objects.all()\n permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]\n required_scopes = [\"CONFIDENTIAL_READ\"]\n search_fields = [\"display_name\", \"email\"]\n\n filter_backends = [\n DjangoFilterBackend,\n SearchFilter,\n ObjectPermissionsFilter,\n ]\n\n\nclass UserByMailView(RetrieveModelMixin, GenericViewSet):\n serializer_class = UserProfileSerializer\n queryset = UserProfile.objects.all()\n permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]\n required_scopes = [\"CONFIDENTIAL_READ\"]\n filter_backends = [ObjectPermissionsFilter]\n lookup_url_kwarg = \"email\"\n lookup_field = \"email\"\n schema = AutoSchema(operation_id_base=\"UserProfileByMail\")\n\n\nclass UserParticipationView(viewsets.ReadOnlyModelViewSet):\n serializer_class = ParticipationSerializer\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n filter_backends = [ObjectPermissionsFilter, DjangoFilterBackend]\n filterset_fields = [\"state\"]\n required_scopes = [\"CONFIDENTIAL_READ\"]\n\n def get_queryset(self):\n return LocalParticipation.objects.filter(user=self.kwargs.get(\"user\"))\n", "path": "ephios/api/views/users.py"}], "after_files": [{"content": "from django.db.models import Q\nfrom django.utils import timezone\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework import viewsets\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.fields import SerializerMethodField\nfrom rest_framework.filters import SearchFilter\nfrom rest_framework.generics import RetrieveAPIView\nfrom rest_framework.mixins import RetrieveModelMixin\nfrom rest_framework.permissions import DjangoObjectPermissions\nfrom rest_framework.relations import SlugRelatedField\nfrom rest_framework.schemas.openapi import AutoSchema\nfrom rest_framework.serializers import ModelSerializer\nfrom rest_framework.viewsets import GenericViewSet\nfrom rest_framework_guardian.filters import ObjectPermissionsFilter\n\nfrom ephios.api.views.events import ParticipationSerializer\nfrom ephios.core.models import LocalParticipation, Qualification, UserProfile\nfrom ephios.core.services.qualification import collect_all_included_qualifications\n\n\nclass QualificationSerializer(ModelSerializer):\n category = SlugRelatedField(slug_field=\"uuid\", read_only=True)\n includes = SerializerMethodField()\n\n class Meta:\n model = Qualification\n fields = [\n \"uuid\",\n \"title\",\n \"abbreviation\",\n \"category\",\n \"includes\",\n ]\n\n def get_includes(self, obj):\n return [q.uuid for q in collect_all_included_qualifications(obj.includes.all())]\n\n\nclass UserProfileSerializer(ModelSerializer):\n qualifications = SerializerMethodField()\n\n class Meta:\n model = UserProfile\n fields = [\n \"id\",\n \"display_name\",\n \"date_of_birth\",\n \"email\",\n \"qualifications\",\n ]\n\n def get_qualifications(self, obj):\n return QualificationSerializer(\n Qualification.objects.filter(\n Q(grants__user=obj)\n & (Q(grants__expires__gte=timezone.now()) | Q(grants__expires__isnull=True))\n ),\n many=True,\n ).data\n\n\nclass 
UserProfileMeView(RetrieveAPIView):\n serializer_class = UserProfileSerializer\n queryset = UserProfile.objects.all()\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"ME_READ\"]\n schema = AutoSchema(operation_id_base=\"OwnUserProfile\")\n\n def get_object(self):\n if self.request.user is None:\n raise PermissionDenied()\n return self.request.user\n\n\nclass UserViewSet(viewsets.ReadOnlyModelViewSet):\n serializer_class = UserProfileSerializer\n queryset = UserProfile.objects.all()\n permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]\n required_scopes = [\"CONFIDENTIAL_READ\"]\n search_fields = [\"display_name\", \"email\"]\n\n filter_backends = [\n DjangoFilterBackend,\n SearchFilter,\n ObjectPermissionsFilter,\n ]\n\n\nclass UserByMailView(RetrieveModelMixin, GenericViewSet):\n serializer_class = UserProfileSerializer\n queryset = UserProfile.objects.all()\n permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]\n required_scopes = [\"CONFIDENTIAL_READ\"]\n filter_backends = [ObjectPermissionsFilter]\n lookup_url_kwarg = \"email\"\n lookup_field = \"email\"\n lookup_value_regex = \"[^/]+\" # customize to allow dots (\".\") in the lookup value\n schema = AutoSchema(operation_id_base=\"UserProfileByMail\")\n\n\nclass UserParticipationView(viewsets.ReadOnlyModelViewSet):\n serializer_class = ParticipationSerializer\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n filter_backends = [ObjectPermissionsFilter, DjangoFilterBackend]\n filterset_fields = [\"state\"]\n required_scopes = [\"CONFIDENTIAL_READ\"]\n\n def get_queryset(self):\n return LocalParticipation.objects.filter(user=self.kwargs.get(\"user\"))\n", "path": "ephios/api/views/users.py"}]} | 1,533 | 115 |
gh_patches_debug_8988 | rasdani/github-patches | git_diff | beeware__toga-1454 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Your first Toga app" (helloworld) does not work as shown in the docs
I copy-pasted the code [found here](https://toga.readthedocs.io/en/latest/tutorial/tutorial-0.html). When I ran it, I got this error message:
```
$ python -m helloworld
Traceback (most recent call last):
File "C:\Users\brendan\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\brendan\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\bjk\tmp\beeware-toga\helloworld.py", line 2, in <module>
from tutorial import __version__
ModuleNotFoundError: No module named 'tutorial'
```
If I comment out the line `from tutorial import __version__` and delete the kwarg `version=__version__` in the call to `toga.App`, the module does run; i.e., the GUI window pops up and seems to work. However, during the run, I get a warning:
```
$ python -m helloworld
C:\Users\brendan\AppData\Local\Programs\Python\Python39\lib\site-packages\clr_loader\wrappers.py:20: DeprecationWarning:
builtin type GC Offset Base has no __module__ attribute
return self._callable(ffi.cast("void*", buf_arr), len(buf_arr))
```
Maybe it's just out-of-date documentation?
P.S. FWIW, a straight copy-paste of the second tutorial, "[A slightly less toy example](https://toga.readthedocs.io/en/latest/tutorial/tutorial-1.html)" works as is, although it does produce the same DeprecationWarning.
P.P.S. Ditto the fourth tutorial, "[Let's build a browser!](https://toga.readthedocs.io/en/latest/tutorial/tutorial-3.html)"
--- END ISSUE ---
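For reference, the minimal working version the reporter converges on — drop the `tutorial` import and the `version` kwarg — which is also where the golden diff below lands:

```python
import toga


def button_handler(widget):
    print("hello")


def build(app):
    box = toga.Box()
    button = toga.Button('Hello world', on_press=button_handler)
    button.style.padding = 50
    button.style.flex = 1
    box.add(button)
    return box


def main():
    # The app id and the startup callable are all the tutorial needs here.
    return toga.App('First App', 'org.beeware.helloworld', startup=build)


if __name__ == '__main__':
    main().main_loop()
```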
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/tutorial0/tutorial/app.py`
Content:
```
1 import toga
2 from tutorial import __version__
3
4
5 def button_handler(widget):
6 print("hello")
7
8
9 def build(app):
10 box = toga.Box()
11
12 button = toga.Button('Hello world', on_press=button_handler)
13 button.style.padding = 50
14 button.style.flex = 1
15 box.add(button)
16
17 return box
18
19
20 def main():
21 return toga.App(
22 'First App',
23 'org.beeware.helloworld',
24 author='Tiberius Yak',
25 description="A testing app",
26 version=__version__,
27 home_page="https://beeware.org",
28 startup=build
29 )
30
31
32 if __name__ == '__main__':
33 main().main_loop()
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/tutorial0/tutorial/app.py b/examples/tutorial0/tutorial/app.py
--- a/examples/tutorial0/tutorial/app.py
+++ b/examples/tutorial0/tutorial/app.py
@@ -1,5 +1,4 @@
import toga
-from tutorial import __version__
def button_handler(widget):
@@ -18,15 +17,7 @@
def main():
- return toga.App(
- 'First App',
- 'org.beeware.helloworld',
- author='Tiberius Yak',
- description="A testing app",
- version=__version__,
- home_page="https://beeware.org",
- startup=build
- )
+ return toga.App('First App', 'org.beeware.helloworld', startup=build)
if __name__ == '__main__':
| {"golden_diff": "diff --git a/examples/tutorial0/tutorial/app.py b/examples/tutorial0/tutorial/app.py\n--- a/examples/tutorial0/tutorial/app.py\n+++ b/examples/tutorial0/tutorial/app.py\n@@ -1,5 +1,4 @@\n import toga\n-from tutorial import __version__\n \n \n def button_handler(widget):\n@@ -18,15 +17,7 @@\n \n \n def main():\n- return toga.App(\n- 'First App',\n- 'org.beeware.helloworld',\n- author='Tiberius Yak',\n- description=\"A testing app\",\n- version=__version__,\n- home_page=\"https://beeware.org\",\n- startup=build\n- )\n+ return toga.App('First App', 'org.beeware.helloworld', startup=build)\n \n \n if __name__ == '__main__':\n", "issue": "\"Your first Toga app\" (helloworld) does not work as shown in the docs\nI copy-pasted the code [found here](https://toga.readthedocs.io/en/latest/tutorial/tutorial-0.html). When I ran it, I got this error message:\r\n\r\n```\r\n$ python -m helloworld\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\brendan\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\brendan\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\bjk\\tmp\\beeware-toga\\helloworld.py\", line 2, in <module>\r\n from tutorial import __version__\r\nModuleNotFoundError: No module named 'tutorial'\r\n```\r\n\r\nIf I comment out the line `from tutorial import __version__` and delete the kwarg `version=__version__` in the call to `toga.App`, the module does run; i.e., the GUI window pops up and seems to work. However, during the run, I get a warning:\r\n\r\n```\r\n$ python -m helloworld\r\nC:\\Users\\brendan\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\clr_loader\\wrappers.py:20: DeprecationWarning:\r\n builtin type GC Offset Base has no __module__ attribute\r\n return self._callable(ffi.cast(\"void*\", buf_arr), len(buf_arr))\r\n```\r\n\r\nMaybe it's just out-of-date documentation?\r\n\r\nP.S. FWIW, a straight copy-paste of the second tutorial, \"[A slightly less toy example](https://toga.readthedocs.io/en/latest/tutorial/tutorial-1.html)\" works as is, although it does produce the same DeprecationWarning.\r\n\r\nP.P.S. Ditto the fourth tutorial, \"[Let\u2019s build a browser!](https://toga.readthedocs.io/en/latest/tutorial/tutorial-3.html)\"\n", "before_files": [{"content": "import toga\nfrom tutorial import __version__\n\n\ndef button_handler(widget):\n print(\"hello\")\n\n\ndef build(app):\n box = toga.Box()\n\n button = toga.Button('Hello world', on_press=button_handler)\n button.style.padding = 50\n button.style.flex = 1\n box.add(button)\n\n return box\n\n\ndef main():\n return toga.App(\n 'First App',\n 'org.beeware.helloworld',\n author='Tiberius Yak',\n description=\"A testing app\",\n version=__version__,\n home_page=\"https://beeware.org\",\n startup=build\n )\n\n\nif __name__ == '__main__':\n main().main_loop()\n", "path": "examples/tutorial0/tutorial/app.py"}], "after_files": [{"content": "import toga\n\n\ndef button_handler(widget):\n print(\"hello\")\n\n\ndef build(app):\n box = toga.Box()\n\n button = toga.Button('Hello world', on_press=button_handler)\n button.style.padding = 50\n button.style.flex = 1\n box.add(button)\n\n return box\n\n\ndef main():\n return toga.App('First App', 'org.beeware.helloworld', startup=build)\n\n\nif __name__ == '__main__':\n main().main_loop()\n", "path": "examples/tutorial0/tutorial/app.py"}]} | 936 | 174 |
gh_patches_debug_3755 | rasdani/github-patches | git_diff | dotkom__onlineweb4-486 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API should not show marks publicly
The API shows all marks for all users publicly. The marks resources should be unregistered from the API unless they are strictly required by some client-side AJAX call.
--- END ISSUE ---
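An alternative to unregistering the resources (which is what the golden diff below does) would be to lock them down instead; a hedged sketch using tastypie's stock auth classes — the `Meta` options shown are additions, while the rest of `MarkResource` stays as defined in `apps/api/v0/marks.py`:

```python
from tastypie.authentication import SessionAuthentication
from tastypie.authorization import DjangoAuthorization
from tastypie.resources import ModelResource


class MarkResource(ModelResource):
    class Meta:
        # Require a logged-in session and Django model permissions
        # instead of exposing every user's marks publicly.
        authentication = SessionAuthentication()
        authorization = DjangoAuthorization()
```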
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/api/v0/urls.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from django.conf.urls import patterns, url, include
4
5 from tastypie.api import Api
6
7 from apps.api.v0.article import ArticleResource, ArticleLatestResource
8 from apps.api.v0.authentication import UserResource
9 from apps.api.v0.events import EventResource, AttendanceEventResource, AttendeeResource, CompanyResource, CompanyEventResource
10 from apps.api.v0.marks import MarkResource, EntryResource, MyMarksResource, MyActiveMarksResource
11 from apps.api.v0.offline import IssueResource
12
13 v0_api = Api(api_name='v0')
14
15 # users
16 v0_api.register(UserResource())
17
18 # event
19 v0_api.register(EventResource())
20 v0_api.register(AttendanceEventResource())
21 v0_api.register(CompanyResource())
22 v0_api.register(CompanyEventResource())
23
24 # article
25 v0_api.register(ArticleResource())
26 v0_api.register(ArticleLatestResource())
27
28 # marks
29 v0_api.register(MarkResource())
30 v0_api.register(EntryResource())
31 v0_api.register(MyMarksResource())
32 v0_api.register(MyActiveMarksResource())
33
34 # offline
35 v0_api.register(IssueResource())
36
37 # Set the urls to be included.
38 urlpatterns = patterns('',
39 url(r'^', include(v0_api.urls)),
40 )
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/api/v0/urls.py b/apps/api/v0/urls.py
--- a/apps/api/v0/urls.py
+++ b/apps/api/v0/urls.py
@@ -26,10 +26,10 @@
v0_api.register(ArticleLatestResource())
# marks
-v0_api.register(MarkResource())
-v0_api.register(EntryResource())
-v0_api.register(MyMarksResource())
-v0_api.register(MyActiveMarksResource())
+#v0_api.register(MarkResource())
+#v0_api.register(EntryResource())
+#v0_api.register(MyMarksResource())
+#v0_api.register(MyActiveMarksResource())
# offline
v0_api.register(IssueResource())
| {"golden_diff": "diff --git a/apps/api/v0/urls.py b/apps/api/v0/urls.py\n--- a/apps/api/v0/urls.py\n+++ b/apps/api/v0/urls.py\n@@ -26,10 +26,10 @@\n v0_api.register(ArticleLatestResource())\n \n # marks\n-v0_api.register(MarkResource())\n-v0_api.register(EntryResource())\n-v0_api.register(MyMarksResource())\n-v0_api.register(MyActiveMarksResource())\n+#v0_api.register(MarkResource())\n+#v0_api.register(EntryResource())\n+#v0_api.register(MyMarksResource())\n+#v0_api.register(MyActiveMarksResource())\n \n # offline\n v0_api.register(IssueResource())\n", "issue": "API should not show marks publicly\nThe API shows all marks for all users publicly. Should be unregistered from API if it is not utterly necessary by some client-side ajax call.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django.conf.urls import patterns, url, include\n\nfrom tastypie.api import Api\n\nfrom apps.api.v0.article import ArticleResource, ArticleLatestResource\nfrom apps.api.v0.authentication import UserResource\nfrom apps.api.v0.events import EventResource, AttendanceEventResource, AttendeeResource, CompanyResource, CompanyEventResource\nfrom apps.api.v0.marks import MarkResource, EntryResource, MyMarksResource, MyActiveMarksResource\nfrom apps.api.v0.offline import IssueResource\n\nv0_api = Api(api_name='v0')\n\n# users\nv0_api.register(UserResource())\n\n# event\nv0_api.register(EventResource())\nv0_api.register(AttendanceEventResource())\nv0_api.register(CompanyResource())\nv0_api.register(CompanyEventResource())\n\n# article\nv0_api.register(ArticleResource())\nv0_api.register(ArticleLatestResource())\n\n# marks\nv0_api.register(MarkResource())\nv0_api.register(EntryResource())\nv0_api.register(MyMarksResource())\nv0_api.register(MyActiveMarksResource())\n\n# offline\nv0_api.register(IssueResource())\n\n# Set the urls to be included.\nurlpatterns = patterns('',\n url(r'^', include(v0_api.urls)),\n)\n", "path": "apps/api/v0/urls.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django.conf.urls import patterns, url, include\n\nfrom tastypie.api import Api\n\nfrom apps.api.v0.article import ArticleResource, ArticleLatestResource\nfrom apps.api.v0.authentication import UserResource\nfrom apps.api.v0.events import EventResource, AttendanceEventResource, AttendeeResource, CompanyResource, CompanyEventResource\nfrom apps.api.v0.marks import MarkResource, EntryResource, MyMarksResource, MyActiveMarksResource\nfrom apps.api.v0.offline import IssueResource\n\nv0_api = Api(api_name='v0')\n\n# users\nv0_api.register(UserResource())\n\n# event\nv0_api.register(EventResource())\nv0_api.register(AttendanceEventResource())\nv0_api.register(CompanyResource())\nv0_api.register(CompanyEventResource())\n\n# article\nv0_api.register(ArticleResource())\nv0_api.register(ArticleLatestResource())\n\n# marks\n#v0_api.register(MarkResource())\n#v0_api.register(EntryResource())\n#v0_api.register(MyMarksResource())\n#v0_api.register(MyActiveMarksResource())\n\n# offline\nv0_api.register(IssueResource())\n\n# Set the urls to be included.\nurlpatterns = patterns('',\n url(r'^', include(v0_api.urls)),\n)\n", "path": "apps/api/v0/urls.py"}]} | 640 | 149 |
gh_patches_debug_5292 | rasdani/github-patches | git_diff | freedomofpress__securedrop-6881 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release SecureDrop 2.6.0
This is a tracking issue for the release of SecureDrop 2.6.0
Tentatively scheduled as follows:
**Pre-release announcement:** 06-15-2023
**Release date:** 06-22-2023
**Release manager:** @legoktm
**Deputy release manager:** @zenmonkeykstop
**Localization manager:** @cfm
**Communications manager:** @nathandyer
_SecureDrop maintainers and testers:_ As you QA 2.6.0, please report back your testing results as comments on this ticket. File GitHub issues for any problems found, tag them "QA: Release".
Test debian packages will be posted on https://apt-test.freedom.press signed with [the test key](https://gist.githubusercontent.com/conorsch/ec4008b111bc3142fca522693f3cce7e/raw/2968621e8ad92db4505a31fcc5776422d7d26729/apt-test%2520apt%2520pubkey).
# [QA Matrix for 2.6.0](https://docs.google.com/spreadsheets/d/1j-F9e45O9TWkbWZVzKUdbyYIP69yfALzlDx9bR49UHI/edit#gid=361662860)
# [Test Plan for 2.6.0](https://github.com/freedomofpress/securedrop/wiki/2.6.0-Test-Plan)
# Prepare release candidate (2.6.0~rc1)
- [ ] Link to latest version of Tails, including release candidates, to test against during QA
- [x] Prepare 2.6.0~rc1 release changelog
- [x] Branch off release/2.6.0 from develop
- [x] Prepare 2.6.0
- [x] Build debs, preserving build log, and put up `2.6.0~rc1` on test apt server
- [x] Commit build log.
After each test, please update the QA matrix and post details for Basic Server Testing, Application Acceptance Testing and release-specific testing below in comments to this ticket.
# Final release
- [x] Ensure builder in release branch is updated and/or update builder image
- [x] Push signed tag
- [x] Pre-Flight: Test updater logic in Tails (apt-qa tracks the `release` branch in the LFS repo)
- [x] Build final Debian packages(and preserve build log)
- [x] Commit package build log to https://github.com/freedomofpress/build-logs
- [x] Pre-Flight: Test that install and upgrade from 2.5.2 to 2.6.0 works w/ prod repo debs (apt-qa.freedom.press polls the `release` branch in the LFS repo for the debs)
- [x] Flip apt QA server to prod status (merge to `main` in the LFS repo)
- [x] Merge Docs branch changes to ``main`` and verify new docs build in securedrop-docs repo
- [x] Prepare release messaging
# Post release
- [x] Create GitHub release object
- [x] Once release object is created, update versions in `securedrop-docs` and Wagtail
- [x] Verify new docs show up on https://docs.securedrop.org
- [x] Publish announcements
- [ ] Merge changelog back to `develop`
- [ ] Update roadmap wiki page: https://github.com/freedomofpress/securedrop/wiki/Development-Roadmap
--- END ISSUE ---
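The only code change this release checklist implies is the version bump in the two files below. A sketch of a helper applying the same bump as the golden diff — the real project does this through its release tooling:

```python
from pathlib import Path

# Bump both hard-coded version strings in one pass.
for name in ("securedrop/version.py", "securedrop/setup.py"):
    path = Path(name)
    path.write_text(path.read_text().replace("2.6.0~rc1", "2.7.0~rc1"))
```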
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/version.py`
Content:
```
1 __version__ = "2.6.0~rc1"
2
```
Path: `securedrop/setup.py`
Content:
```
1 import setuptools
2
3 long_description = "The SecureDrop whistleblower platform."
4
5 setuptools.setup(
6 name="securedrop-app-code",
7 version="2.6.0~rc1",
8 author="Freedom of the Press Foundation",
9 author_email="[email protected]",
10 description="SecureDrop Server",
11 long_description=long_description,
12 long_description_content_type="text/markdown",
13 license="AGPLv3+",
14 python_requires=">=3.8",
15 url="https://github.com/freedomofpress/securedrop",
16 classifiers=[
17 "Development Status :: 5 - Stable",
18 "Programming Language :: Python :: 3",
19 "Topic :: Software Development :: Libraries :: Python Modules",
20 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
21 "Intended Audience :: Developers",
22 "Operating System :: OS Independent",
23 ],
24 )
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/setup.py b/securedrop/setup.py
--- a/securedrop/setup.py
+++ b/securedrop/setup.py
@@ -4,7 +4,7 @@
setuptools.setup(
name="securedrop-app-code",
- version="2.6.0~rc1",
+ version="2.7.0~rc1",
author="Freedom of the Press Foundation",
author_email="[email protected]",
description="SecureDrop Server",
diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = "2.6.0~rc1"
+__version__ = "2.7.0~rc1"
| {"golden_diff": "diff --git a/securedrop/setup.py b/securedrop/setup.py\n--- a/securedrop/setup.py\n+++ b/securedrop/setup.py\n@@ -4,7 +4,7 @@\n \n setuptools.setup(\n name=\"securedrop-app-code\",\n- version=\"2.6.0~rc1\",\n+ version=\"2.7.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\ndiff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = \"2.6.0~rc1\"\n+__version__ = \"2.7.0~rc1\"\n", "issue": "Release SecureDrop 2.6.0\nThis is a tracking issue for the release of SecureDrop 2.6.0\r\n\r\nTentatively scheduled as follows:\r\n\r\n**Pre-release announcement:** 06-15-2023\r\n**Release date:** 06-22-2023\r\n\r\n**Release manager:** @legoktm \r\n**Deputy release manager:** @zenmonkeykstop \r\n**Localization manager:** @cfm\r\n**Communications manager:** @nathandyer \r\n\r\n_SecureDrop maintainers and testers:_ As you QA 2.6.0, please report back your testing results as comments on this ticket. File GitHub issues for any problems found, tag them \"QA: Release\".\r\n\r\nTest debian packages will be posted on https://apt-test.freedom.press signed with [the test key](https://gist.githubusercontent.com/conorsch/ec4008b111bc3142fca522693f3cce7e/raw/2968621e8ad92db4505a31fcc5776422d7d26729/apt-test%2520apt%2520pubkey).\r\n\r\n# [QA Matrix for 2.6.0](https://docs.google.com/spreadsheets/d/1j-F9e45O9TWkbWZVzKUdbyYIP69yfALzlDx9bR49UHI/edit#gid=361662860)\r\n# [Test Plan for 2.6.0](https://github.com/freedomofpress/securedrop/wiki/2.6.0-Test-Plan)\r\n\r\n# Prepare release candidate (2.6.0~rc1)\r\n- [ ] Link to latest version of Tails, including release candidates, to test against during QA\r\n- [x] Prepare 2.6.0~rc1 release changelog\r\n- [x] Branch off release/2.6.0 from develop\r\n- [x] Prepare 2.6.0\r\n- [x] Build debs, preserving build log, and put up `2.6.0~rc1` on test apt server\r\n- [x] Commit build log.\r\n\r\nAfter each test, please update the QA matrix and post details for Basic Server Testing, Application Acceptance Testing and release-specific testing below in comments to this ticket.\r\n\r\n# Final release\r\n- [x] Ensure builder in release branch is updated and/or update builder image\n- [x] Push signed tag \n- [x] Pre-Flight: Test updater logic in Tails (apt-qa tracks the `release` branch in the LFS repo)\n- [x] Build final Debian packages(and preserve build log)\n- [x] Commit package build log to https://github.com/freedomofpress/build-logs\n- [x] Pre-Flight: Test that install and upgrade from 2.5.2 to 2.6.0 works w/ prod repo debs (apt-qa.freedom.press polls the `release` branch in the LFS repo for the debs)\n- [x] Flip apt QA server to prod status (merge to `main` in the LFS repo)\n- [x] Merge Docs branch changes to ``main`` and verify new docs build in securedrop-docs repo\n- [x] Prepare release messaging\n\r\n# Post release\r\n- [x] Create GitHub release object \n- [x] Once release object is created, update versions in `securedrop-docs` and Wagtail\r\n- [x] Verify new docs show up on https://docs.securedrop.org\r\n- [x] Publish announcements\r\n- [ ] Merge changelog back to `develop`\r\n- [ ] Update roadmap wiki page: https://github.com/freedomofpress/securedrop/wiki/Development-Roadmap\n", "before_files": [{"content": "__version__ = \"2.6.0~rc1\"\n", "path": "securedrop/version.py"}, {"content": "import setuptools\n\nlong_description = \"The SecureDrop whistleblower platform.\"\n\nsetuptools.setup(\n 
name=\"securedrop-app-code\",\n version=\"2.6.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"AGPLv3+\",\n python_requires=\">=3.8\",\n url=\"https://github.com/freedomofpress/securedrop\",\n classifiers=[\n \"Development Status :: 5 - Stable\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Operating System :: OS Independent\",\n ],\n)\n", "path": "securedrop/setup.py"}], "after_files": [{"content": "__version__ = \"2.7.0~rc1\"\n", "path": "securedrop/version.py"}, {"content": "import setuptools\n\nlong_description = \"The SecureDrop whistleblower platform.\"\n\nsetuptools.setup(\n name=\"securedrop-app-code\",\n version=\"2.7.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"AGPLv3+\",\n python_requires=\">=3.8\",\n url=\"https://github.com/freedomofpress/securedrop\",\n classifiers=[\n \"Development Status :: 5 - Stable\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Operating System :: OS Independent\",\n ],\n)\n", "path": "securedrop/setup.py"}]} | 1,352 | 175 |
gh_patches_debug_2507 | rasdani/github-patches | git_diff | spotify__luigi-1494 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python 3.5 support
Luigi may already work with Python 3.5, but since the README doesn't mention it I thought I'd ask.
Does Luigi support Python 3.5?
--- END ISSUE ---
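Declared Python-version support lives in a package's trove classifiers, which is exactly what the golden diff below extends. A quick way to inspect them from an installed copy (requires Python 3.8+ for `importlib.metadata`):

```python
from importlib.metadata import metadata

# Print the Python-version classifiers the installed luigi advertises.
for classifier in metadata("luigi").get_all("Classifier") or []:
    if classifier.startswith("Programming Language :: Python ::"):
        print(classifier)
```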
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (c) 2012 Spotify AB
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may not
4 # use this file except in compliance with the License. You may obtain a copy of
5 # the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations under
13 # the License.
14
15 import os
16
17 from setuptools import setup
18
19
20 def get_static_files(path):
21 return [os.path.join(dirpath.replace("luigi/", ""), ext)
22 for (dirpath, dirnames, filenames) in os.walk(path)
23 for ext in ["*.html", "*.js", "*.css", "*.png",
24 "*.eot", "*.svg", "*.ttf", "*.woff", "*.woff2"]]
25
26
27 luigi_package_data = sum(map(get_static_files, ["luigi/static", "luigi/templates"]), [])
28
29 readme_note = """\
30 .. note::
31
32 For the latest source, discussion, etc, please visit the
33 `GitHub repository <https://github.com/spotify/luigi>`_\n\n
34 """
35
36 with open('README.rst') as fobj:
37 long_description = readme_note + fobj.read()
38
39 install_requires = [
40 'tornado>=4.0,<5',
41 'python-daemon<3.0',
42 ]
43
44 if os.environ.get('READTHEDOCS', None) == 'True':
45 # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla
46 install_requires.append('sqlalchemy')
47 # readthedocs don't like python-daemon, see #1342
48 install_requires.remove('python-daemon<3.0')
49
50 setup(
51 name='luigi',
52 version='2.0.1',
53 description='Workflow mgmgt + task scheduling + dependency resolution',
54 long_description=long_description,
55 author='Erik Bernhardsson',
56 url='https://github.com/spotify/luigi',
57 license='Apache License 2.0',
58 packages=[
59 'luigi',
60 'luigi.contrib',
61 'luigi.contrib.hdfs',
62 'luigi.tools'
63 ],
64 package_data={
65 'luigi': luigi_package_data
66 },
67 entry_points={
68 'console_scripts': [
69 'luigi = luigi.cmdline:luigi_run',
70 'luigid = luigi.cmdline:luigid',
71 'luigi-grep = luigi.tools.luigi_grep:main',
72 'luigi-deps = luigi.tools.deps:main',
73 'luigi-migrate = luigi.tools.migrate:main'
74 ]
75 },
76 install_requires=install_requires,
77 classifiers=[
78 'Development Status :: 5 - Production/Stable',
79 'Environment :: Console',
80 'Environment :: Web Environment',
81 'Intended Audience :: Developers',
82 'Intended Audience :: System Administrators',
83 'License :: OSI Approved :: Apache Software License',
84 'Programming Language :: Python :: 2.7',
85 'Programming Language :: Python :: 3.3',
86 'Programming Language :: Python :: 3.4',
87 'Topic :: System :: Monitoring',
88 ],
89 )
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -84,6 +84,7 @@
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
+ 'Programming Language :: Python :: 3.5',
'Topic :: System :: Monitoring',
],
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -84,6 +84,7 @@\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n+ 'Programming Language :: Python :: 3.5',\n 'Topic :: System :: Monitoring',\n ],\n )\n", "issue": "Python 3.5 support\nLuigi may already work with Python 3.5, but since the README doesn't mention it I thought I'd ask.\n\nDoes Luigi support Python 3.5?\n\n", "before_files": [{"content": "# Copyright (c) 2012 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n# use this file except in compliance with the License. You may obtain a copy of\n# the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under\n# the License.\n\nimport os\n\nfrom setuptools import setup\n\n\ndef get_static_files(path):\n return [os.path.join(dirpath.replace(\"luigi/\", \"\"), ext)\n for (dirpath, dirnames, filenames) in os.walk(path)\n for ext in [\"*.html\", \"*.js\", \"*.css\", \"*.png\",\n \"*.eot\", \"*.svg\", \"*.ttf\", \"*.woff\", \"*.woff2\"]]\n\n\nluigi_package_data = sum(map(get_static_files, [\"luigi/static\", \"luigi/templates\"]), [])\n\nreadme_note = \"\"\"\\\n.. note::\n\n For the latest source, discussion, etc, please visit the\n `GitHub repository <https://github.com/spotify/luigi>`_\\n\\n\n\"\"\"\n\nwith open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n\ninstall_requires = [\n 'tornado>=4.0,<5',\n 'python-daemon<3.0',\n]\n\nif os.environ.get('READTHEDOCS', None) == 'True':\n # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla\n install_requires.append('sqlalchemy')\n # readthedocs don't like python-daemon, see #1342\n install_requires.remove('python-daemon<3.0')\n\nsetup(\n name='luigi',\n version='2.0.1',\n description='Workflow mgmgt + task scheduling + dependency resolution',\n long_description=long_description,\n author='Erik Bernhardsson',\n url='https://github.com/spotify/luigi',\n license='Apache License 2.0',\n packages=[\n 'luigi',\n 'luigi.contrib',\n 'luigi.contrib.hdfs',\n 'luigi.tools'\n ],\n package_data={\n 'luigi': luigi_package_data\n },\n entry_points={\n 'console_scripts': [\n 'luigi = luigi.cmdline:luigi_run',\n 'luigid = luigi.cmdline:luigid',\n 'luigi-grep = luigi.tools.luigi_grep:main',\n 'luigi-deps = luigi.tools.deps:main',\n 'luigi-migrate = luigi.tools.migrate:main'\n ]\n },\n install_requires=install_requires,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: System :: Monitoring',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright (c) 2012 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n# use this file except in compliance with the License. 
You may obtain a copy of\n# the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under\n# the License.\n\nimport os\n\nfrom setuptools import setup\n\n\ndef get_static_files(path):\n return [os.path.join(dirpath.replace(\"luigi/\", \"\"), ext)\n for (dirpath, dirnames, filenames) in os.walk(path)\n for ext in [\"*.html\", \"*.js\", \"*.css\", \"*.png\",\n \"*.eot\", \"*.svg\", \"*.ttf\", \"*.woff\", \"*.woff2\"]]\n\n\nluigi_package_data = sum(map(get_static_files, [\"luigi/static\", \"luigi/templates\"]), [])\n\nreadme_note = \"\"\"\\\n.. note::\n\n For the latest source, discussion, etc, please visit the\n `GitHub repository <https://github.com/spotify/luigi>`_\\n\\n\n\"\"\"\n\nwith open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n\ninstall_requires = [\n 'tornado>=4.0,<5',\n 'python-daemon<3.0',\n]\n\nif os.environ.get('READTHEDOCS', None) == 'True':\n # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla\n install_requires.append('sqlalchemy')\n # readthedocs don't like python-daemon, see #1342\n install_requires.remove('python-daemon<3.0')\n\nsetup(\n name='luigi',\n version='2.0.1',\n description='Workflow mgmgt + task scheduling + dependency resolution',\n long_description=long_description,\n author='Erik Bernhardsson',\n url='https://github.com/spotify/luigi',\n license='Apache License 2.0',\n packages=[\n 'luigi',\n 'luigi.contrib',\n 'luigi.contrib.hdfs',\n 'luigi.tools'\n ],\n package_data={\n 'luigi': luigi_package_data\n },\n entry_points={\n 'console_scripts': [\n 'luigi = luigi.cmdline:luigi_run',\n 'luigid = luigi.cmdline:luigid',\n 'luigi-grep = luigi.tools.luigi_grep:main',\n 'luigi-deps = luigi.tools.deps:main',\n 'luigi-migrate = luigi.tools.migrate:main'\n ]\n },\n install_requires=install_requires,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: System :: Monitoring',\n ],\n)\n", "path": "setup.py"}]} | 1,204 | 92 |
gh_patches_debug_12339 | rasdani/github-patches | git_diff | nextcloud__appstore-73 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Nightly support
- nightlies don't have a separate version number but a flag:
```
curl -X POST -u "user:password" http://localhost:8000/api/v1/apps/releases -H "Content-Type: application/json" -d '{"download":"https://example.com/release.tar.gz", "nightly":true }'
```
- this is also listed in the "get all apps" API with a `nightly: true` attribute https://nextcloudappstore.readthedocs.io/en/latest/restapi.html#get-all-apps-and-releases
- upload of a new nightly will delete the previous one for that app
- this allows upgrading to a nightly (must be invoked by the admin and can be undone -> the next regular release of the app will then be installed)
--- END ISSUE ---
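Combined with the route change in the golden diff below, a nightly is addressed by suffixing `-nightly` to the version in the release route. A hedged sketch of the round trip — the app name `news` and version `9.0.0` are invented for illustration:

```python
import requests

auth = ("user", "password")
base = "http://localhost:8000/api/v1"

# Upload a nightly — mirrors the curl call from the issue.
requests.post(
    f"{base}/apps/releases",
    auth=auth,
    json={"download": "https://example.com/release.tar.gz", "nightly": True},
)

# Delete it again via the "<version>-nightly" form of the release route.
requests.delete(f"{base}/apps/news/releases/9.0.0-nightly", auth=auth)
```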
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nextcloudappstore/core/api/v1/urls.py`
Content:
```
1 from django.conf.urls import url
2 from django.views.decorators.http import etag
3 from nextcloudappstore.core.api.v1.views import Apps, AppReleases, \
4 app_api_etag, Categories, category_api_etag
5
6 urlpatterns = [
7 url(r'^platform/(?P<version>\d+\.\d+\.\d+)/apps\.json$',
8 etag(app_api_etag)(Apps.as_view()), name='apps'),
9 url(r'^apps/releases/?$', AppReleases.as_view(),
10 name='app-release-create'),
11 url(r'^apps/(?P<pk>[a-z_]+)/?$', Apps.as_view(), name='app-delete'),
12 url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\d+\.\d+\.\d+)/?$',
13 AppReleases.as_view(), name='app-release-delete'),
14 url(r'^categories.json$',
15 etag(category_api_etag)(Categories.as_view()), name='categories'),
16 ]
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nextcloudappstore/core/api/v1/urls.py b/nextcloudappstore/core/api/v1/urls.py
--- a/nextcloudappstore/core/api/v1/urls.py
+++ b/nextcloudappstore/core/api/v1/urls.py
@@ -9,7 +9,8 @@
url(r'^apps/releases/?$', AppReleases.as_view(),
name='app-release-create'),
url(r'^apps/(?P<pk>[a-z_]+)/?$', Apps.as_view(), name='app-delete'),
- url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\d+\.\d+\.\d+)/?$',
+ url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\d+\.\d+\.\d+'
+ r'(?:-nightly)?)/?$',
AppReleases.as_view(), name='app-release-delete'),
url(r'^categories.json$',
etag(category_api_etag)(Categories.as_view()), name='categories'),
| {"golden_diff": "diff --git a/nextcloudappstore/core/api/v1/urls.py b/nextcloudappstore/core/api/v1/urls.py\n--- a/nextcloudappstore/core/api/v1/urls.py\n+++ b/nextcloudappstore/core/api/v1/urls.py\n@@ -9,7 +9,8 @@\n url(r'^apps/releases/?$', AppReleases.as_view(),\n name='app-release-create'),\n url(r'^apps/(?P<pk>[a-z_]+)/?$', Apps.as_view(), name='app-delete'),\n- url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\\d+\\.\\d+\\.\\d+)/?$',\n+ url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\\d+\\.\\d+\\.\\d+'\n+ r'(?:-nightly)?)/?$',\n AppReleases.as_view(), name='app-release-delete'),\n url(r'^categories.json$',\n etag(category_api_etag)(Categories.as_view()), name='categories'),\n", "issue": "Nightly support\n- nightlies don't have a separate version number but a flag:\n\n```\ncurl -X POST -u \"user:password\" http://localhost:8000/api/v1/apps/releases -H \"Content-Type: application/json\" -d '{\"download\":\"https://example.com/release.tar.gz\", \"nightly\":true }'\n```\n- this is also listed in the \"get all apps\" API with a `nightly: true` attribute https://nextcloudappstore.readthedocs.io/en/latest/restapi.html#get-all-apps-and-releases\n- upload of a new nightly will delete the previous one for that app\n- this allows to upgrade to a nightly (needs to be invoked by the admin and can be undone -> next regular release of the app will be installed)\n\n", "before_files": [{"content": "from django.conf.urls import url\nfrom django.views.decorators.http import etag\nfrom nextcloudappstore.core.api.v1.views import Apps, AppReleases, \\\n app_api_etag, Categories, category_api_etag\n\nurlpatterns = [\n url(r'^platform/(?P<version>\\d+\\.\\d+\\.\\d+)/apps\\.json$',\n etag(app_api_etag)(Apps.as_view()), name='apps'),\n url(r'^apps/releases/?$', AppReleases.as_view(),\n name='app-release-create'),\n url(r'^apps/(?P<pk>[a-z_]+)/?$', Apps.as_view(), name='app-delete'),\n url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\\d+\\.\\d+\\.\\d+)/?$',\n AppReleases.as_view(), name='app-release-delete'),\n url(r'^categories.json$',\n etag(category_api_etag)(Categories.as_view()), name='categories'),\n]\n", "path": "nextcloudappstore/core/api/v1/urls.py"}], "after_files": [{"content": "from django.conf.urls import url\nfrom django.views.decorators.http import etag\nfrom nextcloudappstore.core.api.v1.views import Apps, AppReleases, \\\n app_api_etag, Categories, category_api_etag\n\nurlpatterns = [\n url(r'^platform/(?P<version>\\d+\\.\\d+\\.\\d+)/apps\\.json$',\n etag(app_api_etag)(Apps.as_view()), name='apps'),\n url(r'^apps/releases/?$', AppReleases.as_view(),\n name='app-release-create'),\n url(r'^apps/(?P<pk>[a-z_]+)/?$', Apps.as_view(), name='app-delete'),\n url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\\d+\\.\\d+\\.\\d+'\n r'(?:-nightly)?)/?$',\n AppReleases.as_view(), name='app-release-delete'),\n url(r'^categories.json$',\n etag(category_api_etag)(Categories.as_view()), name='categories'),\n]\n", "path": "nextcloudappstore/core/api/v1/urls.py"}]} | 661 | 228 |
gh_patches_debug_19020 | rasdani/github-patches | git_diff | iterative__dvc-5888 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
list: Add --show-json (or similar flag)
In the VS Code project we have a view that uses `dvc list . --dvc-only` to show all paths that are tracked by DVC in a tree view. Reasons for this view, some discussion around it, and a short demo are shown here: https://github.com/iterative/vscode-dvc/issues/318.
At the moment we take the stdout from the command, split the string into a list (using `\n` as a delimiter), and then post-process to work out whether or not the paths relate to files or directories. I can see from the output of the command that directories are already highlighted:

From the above I assume that the work to determine what each path is (file or dir) has already been done by the CLI. Rather than working out this information again, it would be ideal if the CLI could pass us JSON that contains the aforementioned information.
This will reduce the amount of code required in the extension and should increase performance (ever so slightly).
Please let me know if any of the above is unclear.
Thanks
--- END ISSUE ---
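A sketch of how a consumer such as the VS Code extension could use the requested flag; the `path`/`isdir` keys are assumptions inferred from the entry dicts `Repo.ls` produces in this codebase:

```python
import json
import subprocess

# Parse machine-readable output instead of splitting stdout on "\n".
raw = subprocess.check_output(
    ["dvc", "list", ".", "--dvc-only", "--show-json"], text=True
)
for entry in json.loads(raw):
    kind = "dir" if entry.get("isdir") else "file"
    print(f"{kind}\t{entry['path']}")
```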
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/command/ls/__init__.py`
Content:
```
1 import argparse
2 import logging
3 import sys
4
5 from dvc.command import completion
6 from dvc.command.base import CmdBaseNoRepo, append_doc_link
7 from dvc.command.ls.ls_colors import LsColors
8 from dvc.exceptions import DvcException
9
10 logger = logging.getLogger(__name__)
11
12
13 def _prettify(entries, with_color=False):
14 if with_color:
15 ls_colors = LsColors()
16 fmt = ls_colors.format
17 else:
18
19 def fmt(entry):
20 return entry["path"]
21
22 return [fmt(entry) for entry in entries]
23
24
25 class CmdList(CmdBaseNoRepo):
26 def run(self):
27 from dvc.repo import Repo
28
29 try:
30 entries = Repo.ls(
31 self.args.url,
32 self.args.path,
33 rev=self.args.rev,
34 recursive=self.args.recursive,
35 dvc_only=self.args.dvc_only,
36 )
37 if entries:
38 entries = _prettify(entries, sys.stdout.isatty())
39 logger.info("\n".join(entries))
40 return 0
41 except DvcException:
42 logger.exception(f"failed to list '{self.args.url}'")
43 return 1
44
45
46 def add_parser(subparsers, parent_parser):
47 LIST_HELP = (
48 "List repository contents, including files"
49 " and directories tracked by DVC and by Git."
50 )
51 list_parser = subparsers.add_parser(
52 "list",
53 parents=[parent_parser],
54 description=append_doc_link(LIST_HELP, "list"),
55 help=LIST_HELP,
56 formatter_class=argparse.RawTextHelpFormatter,
57 )
58 list_parser.add_argument("url", help="Location of DVC repository to list")
59 list_parser.add_argument(
60 "-R",
61 "--recursive",
62 action="store_true",
63 help="Recursively list files.",
64 )
65 list_parser.add_argument(
66 "--dvc-only", action="store_true", help="Show only DVC outputs."
67 )
68 list_parser.add_argument(
69 "--rev",
70 nargs="?",
71 help="Git revision (e.g. SHA, branch, tag)",
72 metavar="<commit>",
73 )
74 list_parser.add_argument(
75 "path",
76 nargs="?",
77 help="Path to directory within the repository to list outputs for",
78 ).complete = completion.DIR
79 list_parser.set_defaults(func=CmdList)
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/command/ls/__init__.py b/dvc/command/ls/__init__.py
--- a/dvc/command/ls/__init__.py
+++ b/dvc/command/ls/__init__.py
@@ -34,7 +34,11 @@
recursive=self.args.recursive,
dvc_only=self.args.dvc_only,
)
- if entries:
+ if self.args.show_json:
+ import json
+
+ logger.info(json.dumps(entries))
+ elif entries:
entries = _prettify(entries, sys.stdout.isatty())
logger.info("\n".join(entries))
return 0
@@ -65,6 +69,9 @@
list_parser.add_argument(
"--dvc-only", action="store_true", help="Show only DVC outputs."
)
+ list_parser.add_argument(
+ "--show-json", action="store_true", help="Show output in JSON format."
+ )
list_parser.add_argument(
"--rev",
nargs="?",
| {"golden_diff": "diff --git a/dvc/command/ls/__init__.py b/dvc/command/ls/__init__.py\n--- a/dvc/command/ls/__init__.py\n+++ b/dvc/command/ls/__init__.py\n@@ -34,7 +34,11 @@\n recursive=self.args.recursive,\n dvc_only=self.args.dvc_only,\n )\n- if entries:\n+ if self.args.show_json:\n+ import json\n+\n+ logger.info(json.dumps(entries))\n+ elif entries:\n entries = _prettify(entries, sys.stdout.isatty())\n logger.info(\"\\n\".join(entries))\n return 0\n@@ -65,6 +69,9 @@\n list_parser.add_argument(\n \"--dvc-only\", action=\"store_true\", help=\"Show only DVC outputs.\"\n )\n+ list_parser.add_argument(\n+ \"--show-json\", action=\"store_true\", help=\"Show output in JSON format.\"\n+ )\n list_parser.add_argument(\n \"--rev\",\n nargs=\"?\",\n", "issue": "list: Add --show-json (or similar flag)\nIn the vs code project we have a view that uses `dvc list . --dvc-only` to show all paths that are tracked by DVC in a tree view. Reasons for this view, some discussion around it and a short demo are shown here: https://github.com/iterative/vscode-dvc/issues/318.\r\n\r\nAt the moment we take the stdout from the command, split the string into a list (using `\\n` as a delimiter) and then post process to work out whether or not the paths relate to files or directories. I can see from the output of the command that directories are already highlighted: \r\n\r\n\r\n\r\nFrom the above I assume that the work to determine what the path is (file or dir) has already been done by the cli. Rather than working out this information again it would be ideal if the cli could pass us json that contains the aforementioned information.\r\n\r\nThis will reduce the amount of code required in the extension and should increase performance (ever so slightly).\r\n\r\nPlease let me know if any of the above is unclear.\r\n\r\nThanks\n", "before_files": [{"content": "import argparse\nimport logging\nimport sys\n\nfrom dvc.command import completion\nfrom dvc.command.base import CmdBaseNoRepo, append_doc_link\nfrom dvc.command.ls.ls_colors import LsColors\nfrom dvc.exceptions import DvcException\n\nlogger = logging.getLogger(__name__)\n\n\ndef _prettify(entries, with_color=False):\n if with_color:\n ls_colors = LsColors()\n fmt = ls_colors.format\n else:\n\n def fmt(entry):\n return entry[\"path\"]\n\n return [fmt(entry) for entry in entries]\n\n\nclass CmdList(CmdBaseNoRepo):\n def run(self):\n from dvc.repo import Repo\n\n try:\n entries = Repo.ls(\n self.args.url,\n self.args.path,\n rev=self.args.rev,\n recursive=self.args.recursive,\n dvc_only=self.args.dvc_only,\n )\n if entries:\n entries = _prettify(entries, sys.stdout.isatty())\n logger.info(\"\\n\".join(entries))\n return 0\n except DvcException:\n logger.exception(f\"failed to list '{self.args.url}'\")\n return 1\n\n\ndef add_parser(subparsers, parent_parser):\n LIST_HELP = (\n \"List repository contents, including files\"\n \" and directories tracked by DVC and by Git.\"\n )\n list_parser = subparsers.add_parser(\n \"list\",\n parents=[parent_parser],\n description=append_doc_link(LIST_HELP, \"list\"),\n help=LIST_HELP,\n formatter_class=argparse.RawTextHelpFormatter,\n )\n list_parser.add_argument(\"url\", help=\"Location of DVC repository to list\")\n list_parser.add_argument(\n \"-R\",\n \"--recursive\",\n action=\"store_true\",\n help=\"Recursively list files.\",\n )\n list_parser.add_argument(\n \"--dvc-only\", action=\"store_true\", help=\"Show only DVC outputs.\"\n )\n list_parser.add_argument(\n \"--rev\",\n nargs=\"?\",\n help=\"Git revision (e.g. 
SHA, branch, tag)\",\n metavar=\"<commit>\",\n )\n list_parser.add_argument(\n \"path\",\n nargs=\"?\",\n help=\"Path to directory within the repository to list outputs for\",\n ).complete = completion.DIR\n list_parser.set_defaults(func=CmdList)\n", "path": "dvc/command/ls/__init__.py"}], "after_files": [{"content": "import argparse\nimport logging\nimport sys\n\nfrom dvc.command import completion\nfrom dvc.command.base import CmdBaseNoRepo, append_doc_link\nfrom dvc.command.ls.ls_colors import LsColors\nfrom dvc.exceptions import DvcException\n\nlogger = logging.getLogger(__name__)\n\n\ndef _prettify(entries, with_color=False):\n if with_color:\n ls_colors = LsColors()\n fmt = ls_colors.format\n else:\n\n def fmt(entry):\n return entry[\"path\"]\n\n return [fmt(entry) for entry in entries]\n\n\nclass CmdList(CmdBaseNoRepo):\n def run(self):\n from dvc.repo import Repo\n\n try:\n entries = Repo.ls(\n self.args.url,\n self.args.path,\n rev=self.args.rev,\n recursive=self.args.recursive,\n dvc_only=self.args.dvc_only,\n )\n if self.args.show_json:\n import json\n\n logger.info(json.dumps(entries))\n elif entries:\n entries = _prettify(entries, sys.stdout.isatty())\n logger.info(\"\\n\".join(entries))\n return 0\n except DvcException:\n logger.exception(f\"failed to list '{self.args.url}'\")\n return 1\n\n\ndef add_parser(subparsers, parent_parser):\n LIST_HELP = (\n \"List repository contents, including files\"\n \" and directories tracked by DVC and by Git.\"\n )\n list_parser = subparsers.add_parser(\n \"list\",\n parents=[parent_parser],\n description=append_doc_link(LIST_HELP, \"list\"),\n help=LIST_HELP,\n formatter_class=argparse.RawTextHelpFormatter,\n )\n list_parser.add_argument(\"url\", help=\"Location of DVC repository to list\")\n list_parser.add_argument(\n \"-R\",\n \"--recursive\",\n action=\"store_true\",\n help=\"Recursively list files.\",\n )\n list_parser.add_argument(\n \"--dvc-only\", action=\"store_true\", help=\"Show only DVC outputs.\"\n )\n list_parser.add_argument(\n \"--show-json\", action=\"store_true\", help=\"Show output in JSON format.\"\n )\n list_parser.add_argument(\n \"--rev\",\n nargs=\"?\",\n help=\"Git revision (e.g. SHA, branch, tag)\",\n metavar=\"<commit>\",\n )\n list_parser.add_argument(\n \"path\",\n nargs=\"?\",\n help=\"Path to directory within the repository to list outputs for\",\n ).complete = completion.DIR\n list_parser.set_defaults(func=CmdList)\n", "path": "dvc/command/ls/__init__.py"}]} | 1,198 | 222 |
gh_patches_debug_3062 | rasdani/github-patches | git_diff | facebookresearch__hydra-1281 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release new version of Hydra
# 🚀 Feature Request
I would like you to release a version of Hydra that includes this PR: https://github.com/facebookresearch/hydra/pull/1197
## Motivation
Currently I am using Python 3.9 and I can't run Hydra due to a bug that is fixed in the above PR.
--- END ISSUE ---
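A quick sanity check for which Hydra is installed; the attribute is the single source of truth shown in `hydra/__init__.py` below:

```python
import hydra

print(hydra.__version__)  # "1.0.4" before this release, "1.0.5" after
```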
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/__init__.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
3 # Source of truth for Hydra's version
4 __version__ = "1.0.4"
5 from hydra import utils
6 from hydra.errors import MissingConfigException
7 from hydra.main import main
8 from hydra.types import TaskFunction
9
10 __all__ = ["__version__", "MissingConfigException", "main", "utils", "TaskFunction"]
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/__init__.py b/hydra/__init__.py
--- a/hydra/__init__.py
+++ b/hydra/__init__.py
@@ -1,7 +1,7 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# Source of truth for Hydra's version
-__version__ = "1.0.4"
+__version__ = "1.0.5"
from hydra import utils
from hydra.errors import MissingConfigException
from hydra.main import main
| {"golden_diff": "diff --git a/hydra/__init__.py b/hydra/__init__.py\n--- a/hydra/__init__.py\n+++ b/hydra/__init__.py\n@@ -1,7 +1,7 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n \n # Source of truth for Hydra's version\n-__version__ = \"1.0.4\"\n+__version__ = \"1.0.5\"\n from hydra import utils\n from hydra.errors import MissingConfigException\n from hydra.main import main\n", "issue": "Release new version of Hydra\n# \ud83d\ude80 Feature Request\r\n\r\nI would like you to release Hydra that includes this PR: https://github.com/facebookresearch/hydra/pull/1197\r\n\r\n## Motivation\r\n\r\ncurrently I am using python 3.9 and I can't run Hydra due to a bug that is solved in above PR\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n# Source of truth for Hydra's version\n__version__ = \"1.0.4\"\nfrom hydra import utils\nfrom hydra.errors import MissingConfigException\nfrom hydra.main import main\nfrom hydra.types import TaskFunction\n\n__all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\", \"TaskFunction\"]\n", "path": "hydra/__init__.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n# Source of truth for Hydra's version\n__version__ = \"1.0.5\"\nfrom hydra import utils\nfrom hydra.errors import MissingConfigException\nfrom hydra.main import main\nfrom hydra.types import TaskFunction\n\n__all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\", \"TaskFunction\"]\n", "path": "hydra/__init__.py"}]} | 440 | 122 |
gh_patches_debug_47930 | rasdani/github-patches | git_diff | liqd__a4-opin-614 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
project page header: more vertical space for byline
The byline in the project page's header area, which shows the project's organization, is vertically too close to the project's headline.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/organisations/views.py`
Content:
```
1 from django.views import generic
2
3 from . import models
4
5
6 class OrganisationDetailView(generic.DetailView):
7 model = models.Organisation
8
9 def visible_projects(self):
10 if self.request.user in self.object.initiators.all():
11 return self.object.project_set.all()
12 else:
13 return self.object.project_set.filter(is_draft=False)
14
15
16 class OrganisationListView(generic.ListView):
17 model = models.Organisation
18 paginate_by = 10
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/euth/organisations/views.py b/euth/organisations/views.py
--- a/euth/organisations/views.py
+++ b/euth/organisations/views.py
@@ -15,4 +15,4 @@
class OrganisationListView(generic.ListView):
model = models.Organisation
- paginate_by = 10
+ paginate_by = 12
| {"golden_diff": "diff --git a/euth/organisations/views.py b/euth/organisations/views.py\n--- a/euth/organisations/views.py\n+++ b/euth/organisations/views.py\n@@ -15,4 +15,4 @@\n \n class OrganisationListView(generic.ListView):\n model = models.Organisation\n- paginate_by = 10\n+ paginate_by = 12\n", "issue": "project page header: more vertical space for byline\nThe byline in the project page\u2019s header area, which show\u2019s the project\u2019s organization is vertically too close to the headline of the project. \r\n\r\n\n", "before_files": [{"content": "from django.views import generic\n\nfrom . import models\n\n\nclass OrganisationDetailView(generic.DetailView):\n model = models.Organisation\n\n def visible_projects(self):\n if self.request.user in self.object.initiators.all():\n return self.object.project_set.all()\n else:\n return self.object.project_set.filter(is_draft=False)\n\n\nclass OrganisationListView(generic.ListView):\n model = models.Organisation\n paginate_by = 10\n", "path": "euth/organisations/views.py"}], "after_files": [{"content": "from django.views import generic\n\nfrom . import models\n\n\nclass OrganisationDetailView(generic.DetailView):\n model = models.Organisation\n\n def visible_projects(self):\n if self.request.user in self.object.initiators.all():\n return self.object.project_set.all()\n else:\n return self.object.project_set.filter(is_draft=False)\n\n\nclass OrganisationListView(generic.ListView):\n model = models.Organisation\n paginate_by = 12\n", "path": "euth/organisations/views.py"}]} | 514 | 86 |
gh_patches_debug_26170 | rasdani/github-patches | git_diff | zulip__zulip-22270 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
management: `rename_stream` management command does not work
`rename_stream` uses the `do_rename_stream` function to rename the stream. However, it accesses a non-existent attribute when calling it.
```
do_rename_stream(stream, new_name, self.user_profile) # self.user_profile does not exist
```
To replicate this, run:
```
python manage.py rename_stream Denmark bar -r zulip
```
and you should see:
```
AttributeError: 'Command' object has no attribute 'user_profile'
```
You might want to look at `zerver/management/commands/rename_stream.py` and `zerver/actions/streams.py`.
The fix should refactor `do_rename_stream` to accept `user_profile: Optional[UserProfile]` with the `None` default, and correctly handle what should happen for the notification message that might be sent when the stream is renamed (which currently mentions the name of the acting user that renames it).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zerver/management/commands/rename_stream.py`
Content:
```
1 from argparse import ArgumentParser
2 from typing import Any
3
4 from zerver.actions.streams import do_rename_stream
5 from zerver.lib.management import ZulipBaseCommand
6 from zerver.models import get_stream
7
8
9 class Command(ZulipBaseCommand):
10 help = """Change the stream name for a realm."""
11
12 def add_arguments(self, parser: ArgumentParser) -> None:
13 parser.add_argument("old_name", metavar="<old name>", help="name of stream to be renamed")
14 parser.add_argument(
15 "new_name", metavar="<new name>", help="new name to rename the stream to"
16 )
17 self.add_realm_args(parser, required=True)
18
19 def handle(self, *args: Any, **options: str) -> None:
20 realm = self.get_realm(options)
21 assert realm is not None # Should be ensured by parser
22 old_name = options["old_name"]
23 new_name = options["new_name"]
24
25 stream = get_stream(old_name, realm)
26 do_rename_stream(stream, new_name, self.user_profile)
27
```
Path: `zilencer/management/commands/migrate_stream_notifications.py`
Content:
```
1 from typing import Any
2
3 from django.core.management.base import BaseCommand
4
5 from zerver.models import Subscription
6
7
8 class Command(BaseCommand):
9 help = """One-off script to migration users' stream notification settings."""
10
11 def handle(self, *args: Any, **options: Any) -> None:
12 for subscription in Subscription.objects.all():
13 subscription.desktop_notifications = subscription.notifications
14 subscription.audible_notifications = subscription.notifications
15 subscription.save(update_fields=["desktop_notifications", "audible_notifications"])
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zerver/management/commands/rename_stream.py b/zerver/management/commands/rename_stream.py
deleted file mode 100644
--- a/zerver/management/commands/rename_stream.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from argparse import ArgumentParser
-from typing import Any
-
-from zerver.actions.streams import do_rename_stream
-from zerver.lib.management import ZulipBaseCommand
-from zerver.models import get_stream
-
-
-class Command(ZulipBaseCommand):
- help = """Change the stream name for a realm."""
-
- def add_arguments(self, parser: ArgumentParser) -> None:
- parser.add_argument("old_name", metavar="<old name>", help="name of stream to be renamed")
- parser.add_argument(
- "new_name", metavar="<new name>", help="new name to rename the stream to"
- )
- self.add_realm_args(parser, required=True)
-
- def handle(self, *args: Any, **options: str) -> None:
- realm = self.get_realm(options)
- assert realm is not None # Should be ensured by parser
- old_name = options["old_name"]
- new_name = options["new_name"]
-
- stream = get_stream(old_name, realm)
- do_rename_stream(stream, new_name, self.user_profile)
diff --git a/zilencer/management/commands/migrate_stream_notifications.py b/zilencer/management/commands/migrate_stream_notifications.py
deleted file mode 100644
--- a/zilencer/management/commands/migrate_stream_notifications.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from typing import Any
-
-from django.core.management.base import BaseCommand
-
-from zerver.models import Subscription
-
-
-class Command(BaseCommand):
- help = """One-off script to migration users' stream notification settings."""
-
- def handle(self, *args: Any, **options: Any) -> None:
- for subscription in Subscription.objects.all():
- subscription.desktop_notifications = subscription.notifications
- subscription.audible_notifications = subscription.notifications
- subscription.save(update_fields=["desktop_notifications", "audible_notifications"])
| {"golden_diff": "diff --git a/zerver/management/commands/rename_stream.py b/zerver/management/commands/rename_stream.py\ndeleted file mode 100644\n--- a/zerver/management/commands/rename_stream.py\n+++ /dev/null\n@@ -1,26 +0,0 @@\n-from argparse import ArgumentParser\n-from typing import Any\n-\n-from zerver.actions.streams import do_rename_stream\n-from zerver.lib.management import ZulipBaseCommand\n-from zerver.models import get_stream\n-\n-\n-class Command(ZulipBaseCommand):\n- help = \"\"\"Change the stream name for a realm.\"\"\"\n-\n- def add_arguments(self, parser: ArgumentParser) -> None:\n- parser.add_argument(\"old_name\", metavar=\"<old name>\", help=\"name of stream to be renamed\")\n- parser.add_argument(\n- \"new_name\", metavar=\"<new name>\", help=\"new name to rename the stream to\"\n- )\n- self.add_realm_args(parser, required=True)\n-\n- def handle(self, *args: Any, **options: str) -> None:\n- realm = self.get_realm(options)\n- assert realm is not None # Should be ensured by parser\n- old_name = options[\"old_name\"]\n- new_name = options[\"new_name\"]\n-\n- stream = get_stream(old_name, realm)\n- do_rename_stream(stream, new_name, self.user_profile)\ndiff --git a/zilencer/management/commands/migrate_stream_notifications.py b/zilencer/management/commands/migrate_stream_notifications.py\ndeleted file mode 100644\n--- a/zilencer/management/commands/migrate_stream_notifications.py\n+++ /dev/null\n@@ -1,15 +0,0 @@\n-from typing import Any\n-\n-from django.core.management.base import BaseCommand\n-\n-from zerver.models import Subscription\n-\n-\n-class Command(BaseCommand):\n- help = \"\"\"One-off script to migration users' stream notification settings.\"\"\"\n-\n- def handle(self, *args: Any, **options: Any) -> None:\n- for subscription in Subscription.objects.all():\n- subscription.desktop_notifications = subscription.notifications\n- subscription.audible_notifications = subscription.notifications\n- subscription.save(update_fields=[\"desktop_notifications\", \"audible_notifications\"])\n", "issue": "management: `rename_stream` management command does not work\n`rename_stream` uses the `do_rename_stream` function to rename the stream. 
However, it accesses a non-existent attribute when calling it.\r\n\r\n```\r\ndo_rename_stream(stream, new_name, self.user_profile) # self.user_profile does not exist\r\n```\r\n\r\nTo replicate this, run:\r\n```\r\npython manage.py rename_stream Denmark bar -r zulip\r\n```\r\nand you should see:\r\n```\r\nAttributeError: 'Command' object has no attribute 'user_profile'\r\n```\r\nYou might want to look at `zerver/management/commands/rename_stream.py` and `zerver/actions/streams.py`.\r\n\r\nThe fix should refactor `do_rename_stream` to accept `user_profile: Optional[UserProfile]` with the `None` default, and correctly handle what should happen for the notification message that might be sent when the stream is renamed (which currently mentions the name of the acting user that renames it).\n", "before_files": [{"content": "from argparse import ArgumentParser\nfrom typing import Any\n\nfrom zerver.actions.streams import do_rename_stream\nfrom zerver.lib.management import ZulipBaseCommand\nfrom zerver.models import get_stream\n\n\nclass Command(ZulipBaseCommand):\n help = \"\"\"Change the stream name for a realm.\"\"\"\n\n def add_arguments(self, parser: ArgumentParser) -> None:\n parser.add_argument(\"old_name\", metavar=\"<old name>\", help=\"name of stream to be renamed\")\n parser.add_argument(\n \"new_name\", metavar=\"<new name>\", help=\"new name to rename the stream to\"\n )\n self.add_realm_args(parser, required=True)\n\n def handle(self, *args: Any, **options: str) -> None:\n realm = self.get_realm(options)\n assert realm is not None # Should be ensured by parser\n old_name = options[\"old_name\"]\n new_name = options[\"new_name\"]\n\n stream = get_stream(old_name, realm)\n do_rename_stream(stream, new_name, self.user_profile)\n", "path": "zerver/management/commands/rename_stream.py"}, {"content": "from typing import Any\n\nfrom django.core.management.base import BaseCommand\n\nfrom zerver.models import Subscription\n\n\nclass Command(BaseCommand):\n help = \"\"\"One-off script to migration users' stream notification settings.\"\"\"\n\n def handle(self, *args: Any, **options: Any) -> None:\n for subscription in Subscription.objects.all():\n subscription.desktop_notifications = subscription.notifications\n subscription.audible_notifications = subscription.notifications\n subscription.save(update_fields=[\"desktop_notifications\", \"audible_notifications\"])\n", "path": "zilencer/management/commands/migrate_stream_notifications.py"}], "after_files": [{"content": null, "path": "zerver/management/commands/rename_stream.py"}, {"content": null, "path": "zilencer/management/commands/migrate_stream_notifications.py"}]} | 884 | 486 |
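The merged patch deletes the broken command outright, but the issue text asks for a refactor instead. A sketch of what that alternative could look like, assuming Zulip's `Stream` and `UserProfile` models; the function body, helper name, and message wording below are illustrative, not the repository's actual implementation:

```python
from typing import Optional

from zerver.models import Stream, UserProfile  # assumed import path


def do_rename_stream(
    stream: Stream, new_name: str, acting_user: Optional[UserProfile] = None
) -> None:
    old_name = stream.name
    stream.name = new_name
    stream.save(update_fields=["name"])
    # The notification must tolerate acting_user=None: a management command
    # has no request user, which is exactly the AttributeError in the issue.
    if acting_user is not None:
        notice = f"{acting_user.full_name} renamed #{old_name} to #{new_name}."
    else:
        notice = f"#{old_name} was renamed to #{new_name}."
    send_stream_rename_notification(stream, notice)  # hypothetical helper
```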
gh_patches_debug_12810 | rasdani/github-patches | git_diff | joke2k__faker-626 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Uk mobile number
It seems like the uk mobile number is not in the right format
it's completely not valid
some examples of them:
+44(0)9128 405119
(01414) 35336
01231052134
Uk mobile number
It seems like the uk mobile number is not in the right format
it's completely not valid
some examples of them:
+44(0)9128 405119
(01414) 35336
01231052134
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/phone_number/en_GB/__init__.py`
Content:
```
1 from __future__ import unicode_literals
2 from .. import Provider as PhoneNumberProvider
3
4
5 class Provider(PhoneNumberProvider):
6 formats = (
7 '+44(0)##########',
8 '+44(0)#### ######',
9 '+44(0)#########',
10 '+44(0)#### #####',
11 '0##########',
12 '0#########',
13 '0#### ######',
14 '0#### #####',
15 '(0####) ######',
16 '(0####) #####',
17 )
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/phone_number/en_GB/__init__.py b/faker/providers/phone_number/en_GB/__init__.py
--- a/faker/providers/phone_number/en_GB/__init__.py
+++ b/faker/providers/phone_number/en_GB/__init__.py
@@ -3,6 +3,15 @@
class Provider(PhoneNumberProvider):
+ # Source: https://en.wikipedia.org/wiki/Telephone_numbers_in_the_United_Kingdom
+
+ cellphone_formats = (
+ '+44 7### ######',
+ '+44 7#########',
+ '07### ######',
+ '07#########',
+ )
+
formats = (
'+44(0)##########',
'+44(0)#### ######',
@@ -15,3 +24,7 @@
'(0####) ######',
'(0####) #####',
)
+
+ def cellphone_number(self):
+ pattern = self.random_element(self.cellphone_formats)
+ return self.numerify(self.generator.parse(pattern))
| {"golden_diff": "diff --git a/faker/providers/phone_number/en_GB/__init__.py b/faker/providers/phone_number/en_GB/__init__.py\n--- a/faker/providers/phone_number/en_GB/__init__.py\n+++ b/faker/providers/phone_number/en_GB/__init__.py\n@@ -3,6 +3,15 @@\n \n \n class Provider(PhoneNumberProvider):\n+ # Source: https://en.wikipedia.org/wiki/Telephone_numbers_in_the_United_Kingdom\n+\n+ cellphone_formats = (\n+ '+44 7### ######',\n+ '+44 7#########',\n+ '07### ######',\n+ '07#########',\n+ )\n+\n formats = (\n '+44(0)##########',\n '+44(0)#### ######',\n@@ -15,3 +24,7 @@\n '(0####) ######',\n '(0####) #####',\n )\n+\n+ def cellphone_number(self):\n+ pattern = self.random_element(self.cellphone_formats)\n+ return self.numerify(self.generator.parse(pattern))\n", "issue": "Uk mobile number\nIt seems like the uk mobile number is not in the right format \r\nit's completely not valid\r\nsome examples of them: \r\n+44(0)9128 405119\r\n(01414) 35336\r\n01231052134\nUk mobile number\nIt seems like the uk mobile number is not in the right format \r\nit's completely not valid\r\nsome examples of them: \r\n+44(0)9128 405119\r\n(01414) 35336\r\n01231052134\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom .. import Provider as PhoneNumberProvider\n\n\nclass Provider(PhoneNumberProvider):\n formats = (\n '+44(0)##########',\n '+44(0)#### ######',\n '+44(0)#########',\n '+44(0)#### #####',\n '0##########',\n '0#########',\n '0#### ######',\n '0#### #####',\n '(0####) ######',\n '(0####) #####',\n )\n", "path": "faker/providers/phone_number/en_GB/__init__.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom .. import Provider as PhoneNumberProvider\n\n\nclass Provider(PhoneNumberProvider):\n # Source: https://en.wikipedia.org/wiki/Telephone_numbers_in_the_United_Kingdom\n\n cellphone_formats = (\n '+44 7### ######',\n '+44 7#########',\n '07### ######',\n '07#########',\n )\n\n formats = (\n '+44(0)##########',\n '+44(0)#### ######',\n '+44(0)#########',\n '+44(0)#### #####',\n '0##########',\n '0#########',\n '0#### ######',\n '0#### #####',\n '(0####) ######',\n '(0####) #####',\n )\n\n def cellphone_number(self):\n pattern = self.random_element(self.cellphone_formats)\n return self.numerify(self.generator.parse(pattern))\n", "path": "faker/providers/phone_number/en_GB/__init__.py"}]} | 553 | 235 |
gh_patches_debug_2835 | rasdani/github-patches | git_diff | Cloud-CV__EvalAI-2012 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect Fields in Jobs serializer
*Observed code:* [here](https://github.com/Cloud-CV/EvalAI/blob/master/apps/jobs/serializers.py/#L54)
```
class Meta:
model = LeaderboardData
fields = "__all__"
fields = ('id', 'participant_team_name', 'challenge_phase_split', 'leaderboard_schema', 'result')
```
*Expected Code:*
```
class Meta:
model = LeaderboardData
fields = ('id', 'participant_team_name', 'challenge_phase_split', 'leaderboard_schema', 'result')
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/jobs/serializers.py`
Content:
```
1 from django.contrib.auth.models import User
2
3 from rest_framework import serializers
4
5 from challenges.models import LeaderboardData
6 from participants.models import Participant, ParticipantTeam
7
8 from .models import Submission
9
10
11 class SubmissionSerializer(serializers.ModelSerializer):
12
13 participant_team_name = serializers.SerializerMethodField()
14 execution_time = serializers.SerializerMethodField()
15
16 def __init__(self, *args, **kwargs):
17 context = kwargs.get('context')
18 if context and context.get('request').method == 'POST':
19 created_by = context.get('request').user
20 kwargs['data']['created_by'] = created_by.pk
21
22 participant_team = context.get('participant_team').pk
23 kwargs['data']['participant_team'] = participant_team
24
25 challenge_phase = context.get('challenge_phase').pk
26 kwargs['data']['challenge_phase'] = challenge_phase
27
28 super(SubmissionSerializer, self).__init__(*args, **kwargs)
29
30 class Meta:
31 model = Submission
32 fields = ('id', 'participant_team', 'participant_team_name', 'execution_time', 'challenge_phase',
33 'created_by', 'status', 'input_file', 'stdout_file', 'stderr_file', 'submitted_at',
34 'method_name', 'method_description', 'project_url', 'publication_url', 'is_public',
35 'submission_result_file', 'when_made_public',)
36
37 def get_participant_team_name(self, obj):
38 return obj.participant_team.team_name
39
40 def get_execution_time(self, obj):
41 return obj.execution_time
42
43
44 class LeaderboardDataSerializer(serializers.ModelSerializer):
45
46 participant_team_name = serializers.SerializerMethodField()
47 leaderboard_schema = serializers.SerializerMethodField()
48
49 def __init__(self, *args, **kwargs):
50 super(LeaderboardDataSerializer, self).__init__(*args, **kwargs)
51
52 class Meta:
53 model = LeaderboardData
54 fields = "__all__"
55 fields = ('id', 'participant_team_name', 'challenge_phase_split', 'leaderboard_schema', 'result')
56
57 def get_participant_team_name(self, obj):
58 return obj.submission.participant_team.team_name
59
60 def get_leaderboard_schema(self, obj):
61 return obj.leaderboard.schema
62
63
64 class ChallengeSubmissionManagementSerializer(serializers.ModelSerializer):
65
66 participant_team = serializers.SerializerMethodField()
67 challenge_phase = serializers.SerializerMethodField()
68 created_by = serializers.SerializerMethodField()
69 participant_team_members_email_ids = serializers.SerializerMethodField()
70 created_at = serializers.SerializerMethodField()
71 participant_team_members = serializers.SerializerMethodField()
72
73 class Meta:
74 model = Submission
75 fields = ('id', 'participant_team', 'challenge_phase', 'created_by', 'status', 'is_public',
76 'submission_number', 'submitted_at', 'execution_time', 'input_file', 'stdout_file',
77 'stderr_file', 'submission_result_file', 'submission_metadata_file',
78 'participant_team_members_email_ids', 'created_at', 'method_name', 'participant_team_members',)
79
80 def get_participant_team(self, obj):
81 return obj.participant_team.team_name
82
83 def get_challenge_phase(self, obj):
84 return obj.challenge_phase.name
85
86 def get_created_by(self, obj):
87 return obj.created_by.username
88
89 def get_participant_team_members_email_ids(self, obj):
90 try:
91 participant_team = ParticipantTeam.objects.get(team_name=obj.participant_team.team_name)
92 except ParticipantTeam.DoesNotExist:
93 return 'Participant team does not exist'
94
95 participant_ids = Participant.objects.filter(team=participant_team).values_list('user_id', flat=True)
96 return list(User.objects.filter(id__in=participant_ids).values_list('email', flat=True))
97
98 def get_created_at(self, obj):
99 return obj.created_at
100
101 def get_participant_team_members(self, obj):
102 try:
103 participant_team = ParticipantTeam.objects.get(team_name=obj.participant_team.team_name)
104 except ParticipantTeam.DoesNotExist:
105 return 'Participant team does not exist'
106
107 participant_ids = Participant.objects.filter(team=participant_team).values_list('user_id', flat=True)
108 return list(User.objects.filter(id__in=participant_ids).values('username', 'email'))
109
110
111 class SubmissionCount(object):
112 def __init__(self, submission_count):
113 self.submission_count = submission_count
114
115
116 class SubmissionCountSerializer(serializers.Serializer):
117 submission_count = serializers.IntegerField()
118
119
120 class LastSubmissionDateTime(object):
121 def __init__(self, last_submission_datetime):
122 self.last_submission_datetime = last_submission_datetime
123
124
125 class LastSubmissionDateTimeSerializer(serializers.Serializer):
126 last_submission_datetime = serializers.DateTimeField()
127
128
129 class CreateLeaderboardDataSerializer(serializers.ModelSerializer):
130
131 def __init__(self, *args, **kwargs):
132 context = kwargs.get('context')
133 if context and context.get('request').method == 'PUT':
134 challenge_phase_split = context.get('challenge_phase_split')
135 kwargs['data']['challenge_phase_split'] = challenge_phase_split.pk
136
137 submission = context.get('submission').pk
138 kwargs['data']['submission'] = submission
139
140 kwargs['data']['leaderboard'] = challenge_phase_split.leaderboard.pk
141
142 super(CreateLeaderboardDataSerializer, self).__init__(*args, **kwargs)
143
144 class Meta:
145 model = LeaderboardData
146 fields = ('challenge_phase_split', 'submission', 'result', 'leaderboard')
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/jobs/serializers.py b/apps/jobs/serializers.py
--- a/apps/jobs/serializers.py
+++ b/apps/jobs/serializers.py
@@ -51,7 +51,6 @@
class Meta:
model = LeaderboardData
- fields = "__all__"
fields = ('id', 'participant_team_name', 'challenge_phase_split', 'leaderboard_schema', 'result')
def get_participant_team_name(self, obj):
| {"golden_diff": "diff --git a/apps/jobs/serializers.py b/apps/jobs/serializers.py\n--- a/apps/jobs/serializers.py\n+++ b/apps/jobs/serializers.py\n@@ -51,7 +51,6 @@\n \n class Meta:\n model = LeaderboardData\n- fields = \"__all__\"\n fields = ('id', 'participant_team_name', 'challenge_phase_split', 'leaderboard_schema', 'result')\n \n def get_participant_team_name(self, obj):\n", "issue": "Incorrect Fields in Jobs serializer\n*Observed code:* [here](https://github.com/Cloud-CV/EvalAI/blob/master/apps/jobs/serializers.py/#L54)\r\n```\r\nclass Meta:\r\n model = LeaderboardData\r\n fields = \"__all__\"\r\n fields = ('id', 'participant_team_name', 'challenge_phase_split', 'leaderboard_schema', 'result')\r\n```\r\n*Expected Code:*\r\n```\r\nclass Meta:\r\n model = LeaderboardData\r\n fields = ('id', 'participant_team_name', 'challenge_phase_split', 'leaderboard_schema', 'result')\r\n```\n", "before_files": [{"content": "from django.contrib.auth.models import User\n\nfrom rest_framework import serializers\n\nfrom challenges.models import LeaderboardData\nfrom participants.models import Participant, ParticipantTeam\n\nfrom .models import Submission\n\n\nclass SubmissionSerializer(serializers.ModelSerializer):\n\n participant_team_name = serializers.SerializerMethodField()\n execution_time = serializers.SerializerMethodField()\n\n def __init__(self, *args, **kwargs):\n context = kwargs.get('context')\n if context and context.get('request').method == 'POST':\n created_by = context.get('request').user\n kwargs['data']['created_by'] = created_by.pk\n\n participant_team = context.get('participant_team').pk\n kwargs['data']['participant_team'] = participant_team\n\n challenge_phase = context.get('challenge_phase').pk\n kwargs['data']['challenge_phase'] = challenge_phase\n\n super(SubmissionSerializer, self).__init__(*args, **kwargs)\n\n class Meta:\n model = Submission\n fields = ('id', 'participant_team', 'participant_team_name', 'execution_time', 'challenge_phase',\n 'created_by', 'status', 'input_file', 'stdout_file', 'stderr_file', 'submitted_at',\n 'method_name', 'method_description', 'project_url', 'publication_url', 'is_public',\n 'submission_result_file', 'when_made_public',)\n\n def get_participant_team_name(self, obj):\n return obj.participant_team.team_name\n\n def get_execution_time(self, obj):\n return obj.execution_time\n\n\nclass LeaderboardDataSerializer(serializers.ModelSerializer):\n\n participant_team_name = serializers.SerializerMethodField()\n leaderboard_schema = serializers.SerializerMethodField()\n\n def __init__(self, *args, **kwargs):\n super(LeaderboardDataSerializer, self).__init__(*args, **kwargs)\n\n class Meta:\n model = LeaderboardData\n fields = \"__all__\"\n fields = ('id', 'participant_team_name', 'challenge_phase_split', 'leaderboard_schema', 'result')\n\n def get_participant_team_name(self, obj):\n return obj.submission.participant_team.team_name\n\n def get_leaderboard_schema(self, obj):\n return obj.leaderboard.schema\n\n\nclass ChallengeSubmissionManagementSerializer(serializers.ModelSerializer):\n\n participant_team = serializers.SerializerMethodField()\n challenge_phase = serializers.SerializerMethodField()\n created_by = serializers.SerializerMethodField()\n participant_team_members_email_ids = serializers.SerializerMethodField()\n created_at = serializers.SerializerMethodField()\n participant_team_members = serializers.SerializerMethodField()\n\n class Meta:\n model = Submission\n fields = ('id', 'participant_team', 'challenge_phase', 'created_by', 
'status', 'is_public',\n 'submission_number', 'submitted_at', 'execution_time', 'input_file', 'stdout_file',\n 'stderr_file', 'submission_result_file', 'submission_metadata_file',\n 'participant_team_members_email_ids', 'created_at', 'method_name', 'participant_team_members',)\n\n def get_participant_team(self, obj):\n return obj.participant_team.team_name\n\n def get_challenge_phase(self, obj):\n return obj.challenge_phase.name\n\n def get_created_by(self, obj):\n return obj.created_by.username\n\n def get_participant_team_members_email_ids(self, obj):\n try:\n participant_team = ParticipantTeam.objects.get(team_name=obj.participant_team.team_name)\n except ParticipantTeam.DoesNotExist:\n return 'Participant team does not exist'\n\n participant_ids = Participant.objects.filter(team=participant_team).values_list('user_id', flat=True)\n return list(User.objects.filter(id__in=participant_ids).values_list('email', flat=True))\n\n def get_created_at(self, obj):\n return obj.created_at\n\n def get_participant_team_members(self, obj):\n try:\n participant_team = ParticipantTeam.objects.get(team_name=obj.participant_team.team_name)\n except ParticipantTeam.DoesNotExist:\n return 'Participant team does not exist'\n\n participant_ids = Participant.objects.filter(team=participant_team).values_list('user_id', flat=True)\n return list(User.objects.filter(id__in=participant_ids).values('username', 'email'))\n\n\nclass SubmissionCount(object):\n def __init__(self, submission_count):\n self.submission_count = submission_count\n\n\nclass SubmissionCountSerializer(serializers.Serializer):\n submission_count = serializers.IntegerField()\n\n\nclass LastSubmissionDateTime(object):\n def __init__(self, last_submission_datetime):\n self.last_submission_datetime = last_submission_datetime\n\n\nclass LastSubmissionDateTimeSerializer(serializers.Serializer):\n last_submission_datetime = serializers.DateTimeField()\n\n\nclass CreateLeaderboardDataSerializer(serializers.ModelSerializer):\n\n def __init__(self, *args, **kwargs):\n context = kwargs.get('context')\n if context and context.get('request').method == 'PUT':\n challenge_phase_split = context.get('challenge_phase_split')\n kwargs['data']['challenge_phase_split'] = challenge_phase_split.pk\n\n submission = context.get('submission').pk\n kwargs['data']['submission'] = submission\n\n kwargs['data']['leaderboard'] = challenge_phase_split.leaderboard.pk\n\n super(CreateLeaderboardDataSerializer, self).__init__(*args, **kwargs)\n\n class Meta:\n model = LeaderboardData\n fields = ('challenge_phase_split', 'submission', 'result', 'leaderboard')\n", "path": "apps/jobs/serializers.py"}], "after_files": [{"content": "from django.contrib.auth.models import User\n\nfrom rest_framework import serializers\n\nfrom challenges.models import LeaderboardData\nfrom participants.models import Participant, ParticipantTeam\n\nfrom .models import Submission\n\n\nclass SubmissionSerializer(serializers.ModelSerializer):\n\n participant_team_name = serializers.SerializerMethodField()\n execution_time = serializers.SerializerMethodField()\n\n def __init__(self, *args, **kwargs):\n context = kwargs.get('context')\n if context and context.get('request').method == 'POST':\n created_by = context.get('request').user\n kwargs['data']['created_by'] = created_by.pk\n\n participant_team = context.get('participant_team').pk\n kwargs['data']['participant_team'] = participant_team\n\n challenge_phase = context.get('challenge_phase').pk\n kwargs['data']['challenge_phase'] = challenge_phase\n\n 
super(SubmissionSerializer, self).__init__(*args, **kwargs)\n\n class Meta:\n model = Submission\n fields = ('id', 'participant_team', 'participant_team_name', 'execution_time', 'challenge_phase',\n 'created_by', 'status', 'input_file', 'stdout_file', 'stderr_file', 'submitted_at',\n 'method_name', 'method_description', 'project_url', 'publication_url', 'is_public',\n 'submission_result_file', 'when_made_public',)\n\n def get_participant_team_name(self, obj):\n return obj.participant_team.team_name\n\n def get_execution_time(self, obj):\n return obj.execution_time\n\n\nclass LeaderboardDataSerializer(serializers.ModelSerializer):\n\n participant_team_name = serializers.SerializerMethodField()\n leaderboard_schema = serializers.SerializerMethodField()\n\n def __init__(self, *args, **kwargs):\n super(LeaderboardDataSerializer, self).__init__(*args, **kwargs)\n\n class Meta:\n model = LeaderboardData\n fields = ('id', 'participant_team_name', 'challenge_phase_split', 'leaderboard_schema', 'result')\n\n def get_participant_team_name(self, obj):\n return obj.submission.participant_team.team_name\n\n def get_leaderboard_schema(self, obj):\n return obj.leaderboard.schema\n\n\nclass ChallengeSubmissionManagementSerializer(serializers.ModelSerializer):\n\n participant_team = serializers.SerializerMethodField()\n challenge_phase = serializers.SerializerMethodField()\n created_by = serializers.SerializerMethodField()\n participant_team_members_email_ids = serializers.SerializerMethodField()\n created_at = serializers.SerializerMethodField()\n participant_team_members = serializers.SerializerMethodField()\n\n class Meta:\n model = Submission\n fields = ('id', 'participant_team', 'challenge_phase', 'created_by', 'status', 'is_public',\n 'submission_number', 'submitted_at', 'execution_time', 'input_file', 'stdout_file',\n 'stderr_file', 'submission_result_file', 'submission_metadata_file',\n 'participant_team_members_email_ids', 'created_at', 'method_name', 'participant_team_members',)\n\n def get_participant_team(self, obj):\n return obj.participant_team.team_name\n\n def get_challenge_phase(self, obj):\n return obj.challenge_phase.name\n\n def get_created_by(self, obj):\n return obj.created_by.username\n\n def get_participant_team_members_email_ids(self, obj):\n try:\n participant_team = ParticipantTeam.objects.get(team_name=obj.participant_team.team_name)\n except ParticipantTeam.DoesNotExist:\n return 'Participant team does not exist'\n\n participant_ids = Participant.objects.filter(team=participant_team).values_list('user_id', flat=True)\n return list(User.objects.filter(id__in=participant_ids).values_list('email', flat=True))\n\n def get_created_at(self, obj):\n return obj.created_at\n\n def get_participant_team_members(self, obj):\n try:\n participant_team = ParticipantTeam.objects.get(team_name=obj.participant_team.team_name)\n except ParticipantTeam.DoesNotExist:\n return 'Participant team does not exist'\n\n participant_ids = Participant.objects.filter(team=participant_team).values_list('user_id', flat=True)\n return list(User.objects.filter(id__in=participant_ids).values('username', 'email'))\n\n\nclass SubmissionCount(object):\n def __init__(self, submission_count):\n self.submission_count = submission_count\n\n\nclass SubmissionCountSerializer(serializers.Serializer):\n submission_count = serializers.IntegerField()\n\n\nclass LastSubmissionDateTime(object):\n def __init__(self, last_submission_datetime):\n self.last_submission_datetime = last_submission_datetime\n\n\nclass 
LastSubmissionDateTimeSerializer(serializers.Serializer):\n last_submission_datetime = serializers.DateTimeField()\n\n\nclass CreateLeaderboardDataSerializer(serializers.ModelSerializer):\n\n def __init__(self, *args, **kwargs):\n context = kwargs.get('context')\n if context and context.get('request').method == 'PUT':\n challenge_phase_split = context.get('challenge_phase_split')\n kwargs['data']['challenge_phase_split'] = challenge_phase_split.pk\n\n submission = context.get('submission').pk\n kwargs['data']['submission'] = submission\n\n kwargs['data']['leaderboard'] = challenge_phase_split.leaderboard.pk\n\n super(CreateLeaderboardDataSerializer, self).__init__(*args, **kwargs)\n\n class Meta:\n model = LeaderboardData\n fields = ('challenge_phase_split', 'submission', 'result', 'leaderboard')\n", "path": "apps/jobs/serializers.py"}]} | 1,851 | 108 |
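The duplicated assignment is a real bug and not just noise because a class body executes top to bottom: the second `fields` binding silently replaces the first, so `"__all__"` was dead code that only misled readers about the serializer's output. A minimal demonstration of the rebinding:

```python
class Meta:
    fields = "__all__"  # dead code: immediately rebound on the next line
    fields = ("id", "result")


print(Meta.fields)  # ('id', 'result') -- only the last assignment survives
```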
gh_patches_debug_3977 | rasdani/github-patches | git_diff | activeloopai__deeplake-1513 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Pytorch dataloader because of transforms and shuffle=false
## 🐛🐛 Bug Report
### ⚗️ Current Behavior
A clear and concise description of the behavior.
```python
import hub
ds = hub.load("hub://activeloop/mnist-test")
dataloader = ds.pytorch(batch_size=2, num_workers=2, shuffle=False, transform={"images": None, "labels": None})
for (images, labels) in dataloader:
print(images.shape, labels.shape)
break
```
```
Opening dataset in read-only mode as you don't have write permissions.
hub://activeloop/mnist-test loaded successfully.
This dataset can be visualized at https://app.activeloop.ai/activeloop/mnist-test.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-21-22b652d8dbed>](https://localhost:8080/#) in <module>()
4 dataloader = ds.pytorch(batch_size=2, num_workers=2, shuffle=False, transform={"images": None, "labels": None})
5 for (images, labels) in dataloader:
----> 6 print(images.shape, labels.shape)
7 break
AttributeError: 'str' object has no attribute 'shape'
```
but when you remove the argument `transform` from dataloader that script works.
### ⚙️ Environment
- Google colab
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hub/integrations/pytorch/common.py`
Content:
```
1 from typing import Callable, Dict, List, Optional
2 from hub.util.iterable_ordered_dict import IterableOrderedDict
3 import numpy as np
4
5
6 def collate_fn(batch):
7 import torch
8
9 elem = batch[0]
10
11 if isinstance(elem, IterableOrderedDict):
12 return IterableOrderedDict(
13 (key, collate_fn([d[key] for d in batch])) for key in elem.keys()
14 )
15
16 if isinstance(elem, np.ndarray) and elem.size > 0 and isinstance(elem[0], str):
17 batch = [it[0] for it in batch]
18 return torch.utils.data._utils.collate.default_collate(batch)
19
20
21 def convert_fn(data):
22 import torch
23
24 if isinstance(data, IterableOrderedDict):
25 return IterableOrderedDict((k, convert_fn(v)) for k, v in data.items())
26 if isinstance(data, np.ndarray) and data.size > 0 and isinstance(data[0], str):
27 data = data[0]
28
29 return torch.utils.data._utils.collate.default_convert(data)
30
31
32 class PytorchTransformFunction:
33 def __init__(
34 self,
35 transform_dict: Optional[Dict[str, Optional[Callable]]] = None,
36 composite_transform: Optional[Callable] = None,
37 tensors: List[str] = None,
38 ) -> None:
39 self.composite_transform = composite_transform
40 self.transform_dict = transform_dict
41 tensors = tensors or []
42
43 if transform_dict is not None:
44 for tensor in transform_dict:
45 if tensor not in tensors:
46 raise ValueError(f"Invalid transform. Tensor {tensor} not found.")
47
48 def __call__(self, data_in: Dict) -> Dict:
49 if self.composite_transform is not None:
50 return self.composite_transform(data_in)
51 elif self.transform_dict is not None:
52 data_out = {}
53 for tensor, fn in self.transform_dict.items():
54 value = data_in[tensor]
55 data_out[tensor] = value if fn is None else fn(value)
56 return data_out
57 return data_in
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hub/integrations/pytorch/common.py b/hub/integrations/pytorch/common.py
--- a/hub/integrations/pytorch/common.py
+++ b/hub/integrations/pytorch/common.py
@@ -53,5 +53,6 @@
for tensor, fn in self.transform_dict.items():
value = data_in[tensor]
data_out[tensor] = value if fn is None else fn(value)
+ data_out = IterableOrderedDict(data_out)
return data_out
return data_in
| {"golden_diff": "diff --git a/hub/integrations/pytorch/common.py b/hub/integrations/pytorch/common.py\n--- a/hub/integrations/pytorch/common.py\n+++ b/hub/integrations/pytorch/common.py\n@@ -53,5 +53,6 @@\n for tensor, fn in self.transform_dict.items():\n value = data_in[tensor]\n data_out[tensor] = value if fn is None else fn(value)\n+ data_out = IterableOrderedDict(data_out)\n return data_out\n return data_in\n", "issue": "[BUG] Pytorch dataloader because of transforms and shuffle=false\n## \ud83d\udc1b\ud83d\udc1b Bug Report\r\n\r\n\r\n### \u2697\ufe0f Current Behavior\r\nA clear and concise description of the behavior.\r\n\r\n```python\r\nimport hub\r\nds = hub.load(\"hub://activeloop/mnist-test\")\r\n\r\ndataloader = ds.pytorch(batch_size=2, num_workers=2, shuffle=False, transform={\"images\": None, \"labels\": None})\r\nfor (images, labels) in dataloader:\r\n print(images.shape, labels.shape)\r\n break\r\n```\r\n\r\n```\r\nOpening dataset in read-only mode as you don't have write permissions.\r\nhub://activeloop/mnist-test loaded successfully.\r\nThis dataset can be visualized at https://app.activeloop.ai/activeloop/mnist-test.\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n[<ipython-input-21-22b652d8dbed>](https://localhost:8080/#) in <module>()\r\n 4 dataloader = ds.pytorch(batch_size=2, num_workers=2, shuffle=False, transform={\"images\": None, \"labels\": None})\r\n 5 for (images, labels) in dataloader:\r\n----> 6 print(images.shape, labels.shape)\r\n 7 break\r\n\r\nAttributeError: 'str' object has no attribute 'shape'\r\n```\r\n\r\nbut when you remove the argument `transform` from dataloader that script works.\r\n\r\n### \u2699\ufe0f Environment\r\n\r\n- Google colab\r\n\n", "before_files": [{"content": "from typing import Callable, Dict, List, Optional\nfrom hub.util.iterable_ordered_dict import IterableOrderedDict\nimport numpy as np\n\n\ndef collate_fn(batch):\n import torch\n\n elem = batch[0]\n\n if isinstance(elem, IterableOrderedDict):\n return IterableOrderedDict(\n (key, collate_fn([d[key] for d in batch])) for key in elem.keys()\n )\n\n if isinstance(elem, np.ndarray) and elem.size > 0 and isinstance(elem[0], str):\n batch = [it[0] for it in batch]\n return torch.utils.data._utils.collate.default_collate(batch)\n\n\ndef convert_fn(data):\n import torch\n\n if isinstance(data, IterableOrderedDict):\n return IterableOrderedDict((k, convert_fn(v)) for k, v in data.items())\n if isinstance(data, np.ndarray) and data.size > 0 and isinstance(data[0], str):\n data = data[0]\n\n return torch.utils.data._utils.collate.default_convert(data)\n\n\nclass PytorchTransformFunction:\n def __init__(\n self,\n transform_dict: Optional[Dict[str, Optional[Callable]]] = None,\n composite_transform: Optional[Callable] = None,\n tensors: List[str] = None,\n ) -> None:\n self.composite_transform = composite_transform\n self.transform_dict = transform_dict\n tensors = tensors or []\n\n if transform_dict is not None:\n for tensor in transform_dict:\n if tensor not in tensors:\n raise ValueError(f\"Invalid transform. 
Tensor {tensor} not found.\")\n\n def __call__(self, data_in: Dict) -> Dict:\n if self.composite_transform is not None:\n return self.composite_transform(data_in)\n elif self.transform_dict is not None:\n data_out = {}\n for tensor, fn in self.transform_dict.items():\n value = data_in[tensor]\n data_out[tensor] = value if fn is None else fn(value)\n return data_out\n return data_in\n", "path": "hub/integrations/pytorch/common.py"}], "after_files": [{"content": "from typing import Callable, Dict, List, Optional\nfrom hub.util.iterable_ordered_dict import IterableOrderedDict\nimport numpy as np\n\n\ndef collate_fn(batch):\n import torch\n\n elem = batch[0]\n\n if isinstance(elem, IterableOrderedDict):\n return IterableOrderedDict(\n (key, collate_fn([d[key] for d in batch])) for key in elem.keys()\n )\n\n if isinstance(elem, np.ndarray) and elem.size > 0 and isinstance(elem[0], str):\n batch = [it[0] for it in batch]\n return torch.utils.data._utils.collate.default_collate(batch)\n\n\ndef convert_fn(data):\n import torch\n\n if isinstance(data, IterableOrderedDict):\n return IterableOrderedDict((k, convert_fn(v)) for k, v in data.items())\n if isinstance(data, np.ndarray) and data.size > 0 and isinstance(data[0], str):\n data = data[0]\n\n return torch.utils.data._utils.collate.default_convert(data)\n\n\nclass PytorchTransformFunction:\n def __init__(\n self,\n transform_dict: Optional[Dict[str, Optional[Callable]]] = None,\n composite_transform: Optional[Callable] = None,\n tensors: List[str] = None,\n ) -> None:\n self.composite_transform = composite_transform\n self.transform_dict = transform_dict\n tensors = tensors or []\n\n if transform_dict is not None:\n for tensor in transform_dict:\n if tensor not in tensors:\n raise ValueError(f\"Invalid transform. Tensor {tensor} not found.\")\n\n def __call__(self, data_in: Dict) -> Dict:\n if self.composite_transform is not None:\n return self.composite_transform(data_in)\n elif self.transform_dict is not None:\n data_out = {}\n for tensor, fn in self.transform_dict.items():\n value = data_in[tensor]\n data_out[tensor] = value if fn is None else fn(value)\n data_out = IterableOrderedDict(data_out)\n return data_out\n return data_in\n", "path": "hub/integrations/pytorch/common.py"}]} | 1,127 | 116 |
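The one-line fix works because the traceback comes from tuple-unpacking a plain `dict`, which iterates over its keys, so `images` becomes the string `"images"`. A self-contained sketch using a stand-in class, assuming hub's `IterableOrderedDict` yields values on iteration (which is why `collate_fn` above special-cases it):

```python
from collections import OrderedDict

import numpy as np


class IterableOrderedDict(OrderedDict):  # stand-in mirroring hub's assumed behaviour
    def __iter__(self):
        yield from self.values()


img, lbl = np.zeros((28, 28, 1)), np.zeros((1,))

images, labels = {"images": img, "labels": lbl}  # plain dict unpacks KEYS
print(type(images))  # <class 'str'> -> 'str' object has no attribute 'shape'

images, labels = IterableOrderedDict(images=img, labels=lbl)  # unpacks VALUES
print(images.shape)  # (28, 28, 1)
```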
gh_patches_debug_2155 | rasdani/github-patches | git_diff | wright-group__WrightTools-878 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pcov TypeError in kit._leastsq
In kit._leastsq, if the line 62 if statement is not passed, the consequent else statement makes pcov data type float, triggering"TypeError: 'int' object is not subscriptable" in line 72-73:
72: try:
73: error.append(np.absolute(pcov[i][i]) ** 0.5)
Line 74 picks up index out of bound errors, not sure if it was meant to catch the type error.
74: except IndexError:
75: error.append(0.00)
Error is bypassed if I put a 2D array into line 68, but have not spent the time considering what this array should look like.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `WrightTools/kit/_leastsq.py`
Content:
```
1 """Least-square fitting tools."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 from ._utilities import Timer
8
9 import numpy as np
10
11 from scipy import optimize as scipy_optimize
12
13
14 # --- define --------------------------------------------------------------------------------------
15
16
17 __all__ = ["leastsqfitter"]
18
19
20 # --- functions -----------------------------------------------------------------------------------
21
22
23 def leastsqfitter(p0, datax, datay, function, verbose=False, cov_verbose=False):
24 """Conveniently call scipy.optmize.leastsq().
25
26 Returns fit parameters and their errors.
27
28 Parameters
29 ----------
30 p0 : list
31 list of guess parameters to pass to function
32 datax : array
33 array of independent values
34 datay : array
35 array of dependent values
36 function : function
37 function object to fit data to. Must be of the callable form function(p, x)
38 verbose : bool
39 toggles printing of fit time, fit params, and fit param errors
40 cov_verbose : bool
41 toggles printing of covarience matrix
42
43 Returns
44 -------
45 pfit_leastsq : list
46 list of fit parameters. s.t. the error between datay and function(p, datax) is minimized
47 perr_leastsq : list
48 list of fit parameter errors (1 std)
49 """
50 timer = Timer(verbose=False)
51 with timer:
52 # define error function
53 def errfunc(p, x, y):
54 return y - function(p, x)
55
56 # run optimization
57 pfit_leastsq, pcov, infodict, errmsg, success = scipy_optimize.leastsq(
58 errfunc, p0, args=(datax, datay), full_output=1, epsfcn=0.0001
59 )
60 # calculate covarience matrix
61 # original idea https://stackoverflow.com/a/21844726
62 if (len(datay) > len(p0)) and pcov is not None:
63 s_sq = (errfunc(pfit_leastsq, datax, datay) ** 2).sum() / (len(datay) - len(p0))
64 pcov = pcov * s_sq
65 if cov_verbose:
66 print(pcov)
67 else:
68 pcov = np.inf
69 # calculate and write errors
70 error = []
71 for i in range(len(pfit_leastsq)):
72 try:
73 error.append(np.absolute(pcov[i][i]) ** 0.5)
74 except IndexError:
75 error.append(0.00)
76 perr_leastsq = np.array(error)
77 # exit
78 if verbose:
79 print("fit params: ", pfit_leastsq)
80 print("fit params error: ", perr_leastsq)
81 print("fitting done in %f seconds" % timer.interval)
82 return pfit_leastsq, perr_leastsq
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/WrightTools/kit/_leastsq.py b/WrightTools/kit/_leastsq.py
--- a/WrightTools/kit/_leastsq.py
+++ b/WrightTools/kit/_leastsq.py
@@ -65,7 +65,7 @@
if cov_verbose:
print(pcov)
else:
- pcov = np.inf
+ pcov = np.array(np.inf)
# calculate and write errors
error = []
for i in range(len(pfit_leastsq)):
| {"golden_diff": "diff --git a/WrightTools/kit/_leastsq.py b/WrightTools/kit/_leastsq.py\n--- a/WrightTools/kit/_leastsq.py\n+++ b/WrightTools/kit/_leastsq.py\n@@ -65,7 +65,7 @@\n if cov_verbose:\n print(pcov)\n else:\n- pcov = np.inf\n+ pcov = np.array(np.inf)\n # calculate and write errors\n error = []\n for i in range(len(pfit_leastsq)):\n", "issue": "pcov TypeError in kit._leastsq\nIn kit._leastsq, if the line 62 if statement is not passed, the consequent else statement makes pcov data type float, triggering\"TypeError: 'int' object is not subscriptable\" in line 72-73:\r\n\r\n72: try:\r\n73: error.append(np.absolute(pcov[i][i]) ** 0.5)\r\n\r\nLine 74 picks up index out of bound errors, not sure if it was meant to catch the type error.\r\n\r\n74: except IndexError:\r\n75: error.append(0.00)\r\n\r\nError is bypassed if I put a 2D array into line 68, but have not spent the time considering what this array should look like.\n", "before_files": [{"content": "\"\"\"Least-square fitting tools.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nfrom ._utilities import Timer\n\nimport numpy as np\n\nfrom scipy import optimize as scipy_optimize\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = [\"leastsqfitter\"]\n\n\n# --- functions -----------------------------------------------------------------------------------\n\n\ndef leastsqfitter(p0, datax, datay, function, verbose=False, cov_verbose=False):\n \"\"\"Conveniently call scipy.optmize.leastsq().\n\n Returns fit parameters and their errors.\n\n Parameters\n ----------\n p0 : list\n list of guess parameters to pass to function\n datax : array\n array of independent values\n datay : array\n array of dependent values\n function : function\n function object to fit data to. Must be of the callable form function(p, x)\n verbose : bool\n toggles printing of fit time, fit params, and fit param errors\n cov_verbose : bool\n toggles printing of covarience matrix\n\n Returns\n -------\n pfit_leastsq : list\n list of fit parameters. s.t. 
the error between datay and function(p, datax) is minimized\n perr_leastsq : list\n list of fit parameter errors (1 std)\n \"\"\"\n timer = Timer(verbose=False)\n with timer:\n # define error function\n def errfunc(p, x, y):\n return y - function(p, x)\n\n # run optimization\n pfit_leastsq, pcov, infodict, errmsg, success = scipy_optimize.leastsq(\n errfunc, p0, args=(datax, datay), full_output=1, epsfcn=0.0001\n )\n # calculate covarience matrix\n # original idea https://stackoverflow.com/a/21844726\n if (len(datay) > len(p0)) and pcov is not None:\n s_sq = (errfunc(pfit_leastsq, datax, datay) ** 2).sum() / (len(datay) - len(p0))\n pcov = pcov * s_sq\n if cov_verbose:\n print(pcov)\n else:\n pcov = np.inf\n # calculate and write errors\n error = []\n for i in range(len(pfit_leastsq)):\n try:\n error.append(np.absolute(pcov[i][i]) ** 0.5)\n except IndexError:\n error.append(0.00)\n perr_leastsq = np.array(error)\n # exit\n if verbose:\n print(\"fit params: \", pfit_leastsq)\n print(\"fit params error: \", perr_leastsq)\n print(\"fitting done in %f seconds\" % timer.interval)\n return pfit_leastsq, perr_leastsq\n", "path": "WrightTools/kit/_leastsq.py"}], "after_files": [{"content": "\"\"\"Least-square fitting tools.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nfrom ._utilities import Timer\n\nimport numpy as np\n\nfrom scipy import optimize as scipy_optimize\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = [\"leastsqfitter\"]\n\n\n# --- functions -----------------------------------------------------------------------------------\n\n\ndef leastsqfitter(p0, datax, datay, function, verbose=False, cov_verbose=False):\n \"\"\"Conveniently call scipy.optmize.leastsq().\n\n Returns fit parameters and their errors.\n\n Parameters\n ----------\n p0 : list\n list of guess parameters to pass to function\n datax : array\n array of independent values\n datay : array\n array of dependent values\n function : function\n function object to fit data to. Must be of the callable form function(p, x)\n verbose : bool\n toggles printing of fit time, fit params, and fit param errors\n cov_verbose : bool\n toggles printing of covarience matrix\n\n Returns\n -------\n pfit_leastsq : list\n list of fit parameters. s.t. 
the error between datay and function(p, datax) is minimized\n perr_leastsq : list\n list of fit parameter errors (1 std)\n \"\"\"\n timer = Timer(verbose=False)\n with timer:\n # define error function\n def errfunc(p, x, y):\n return y - function(p, x)\n\n # run optimization\n pfit_leastsq, pcov, infodict, errmsg, success = scipy_optimize.leastsq(\n errfunc, p0, args=(datax, datay), full_output=1, epsfcn=0.0001\n )\n # calculate covarience matrix\n # original idea https://stackoverflow.com/a/21844726\n if (len(datay) > len(p0)) and pcov is not None:\n s_sq = (errfunc(pfit_leastsq, datax, datay) ** 2).sum() / (len(datay) - len(p0))\n pcov = pcov * s_sq\n if cov_verbose:\n print(pcov)\n else:\n pcov = np.array(np.inf)\n # calculate and write errors\n error = []\n for i in range(len(pfit_leastsq)):\n try:\n error.append(np.absolute(pcov[i][i]) ** 0.5)\n except IndexError:\n error.append(0.00)\n perr_leastsq = np.array(error)\n # exit\n if verbose:\n print(\"fit params: \", pfit_leastsq)\n print(\"fit params error: \", perr_leastsq)\n print(\"fitting done in %f seconds\" % timer.interval)\n return pfit_leastsq, perr_leastsq\n", "path": "WrightTools/kit/_leastsq.py"}]} | 1,207 | 118 |
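The patch is subtle: indexing a bare Python float raises `TypeError`, which the `except IndexError` guard never catches, while indexing a 0-dimensional numpy array raises `IndexError` and lands in the intended `error.append(0.00)` fallback. A minimal reproduction:

```python
import numpy as np

pcov = np.inf                 # the old failure path (np.inf is a plain float)
try:
    pcov[0][0]
except TypeError as exc:
    print(exc)                # 'float' object is not subscriptable (escapes the guard)

pcov = np.array(np.inf)       # the patched value: a 0-d array
try:
    pcov[0][0]
except IndexError:
    print("caught: falls into the 0.00 fallback as intended")
```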
gh_patches_debug_1593 | rasdani/github-patches | git_diff | ethereum__web3.py-3090 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `eth_chainId` to retry middleware whitelist
### What was wrong?
I enabled the `http_retry_request_middleware`, but an idempotent method that is called frequently (`eth_chainId`) is missing from the retry whitelist
### How can it be fixed?
Add this method to the retry method whitelist in the code
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `web3/middleware/exception_retry_request.py`
Content:
```
1 import asyncio
2 from typing import (
3 TYPE_CHECKING,
4 Any,
5 Callable,
6 Collection,
7 Optional,
8 Type,
9 )
10
11 import aiohttp
12 from requests.exceptions import (
13 ConnectionError,
14 HTTPError,
15 Timeout,
16 TooManyRedirects,
17 )
18
19 from web3.types import (
20 AsyncMiddlewareCoroutine,
21 RPCEndpoint,
22 RPCResponse,
23 )
24
25 if TYPE_CHECKING:
26 from web3 import ( # noqa: F401
27 AsyncWeb3,
28 Web3,
29 )
30
31 whitelist = [
32 "admin",
33 "miner",
34 "net",
35 "txpool",
36 "testing",
37 "evm",
38 "eth_protocolVersion",
39 "eth_syncing",
40 "eth_coinbase",
41 "eth_mining",
42 "eth_hashrate",
43 "eth_chainId",
44 "eth_gasPrice",
45 "eth_accounts",
46 "eth_blockNumber",
47 "eth_getBalance",
48 "eth_getStorageAt",
49 "eth_getProof",
50 "eth_getCode",
51 "eth_getBlockByNumber",
52 "eth_getBlockByHash",
53 "eth_getBlockTransactionCountByNumber",
54 "eth_getBlockTransactionCountByHash",
55 "eth_getUncleCountByBlockNumber",
56 "eth_getUncleCountByBlockHash",
57 "eth_getTransactionByHash",
58 "eth_getTransactionByBlockHashAndIndex",
59 "eth_getTransactionByBlockNumberAndIndex",
60 "eth_getTransactionReceipt",
61 "eth_getTransactionCount",
62 "eth_getRawTransactionByHash",
63 "eth_call",
64 "eth_estimateGas",
65 "eth_newBlockFilter",
66 "eth_newPendingTransactionFilter",
67 "eth_newFilter",
68 "eth_getFilterChanges",
69 "eth_getFilterLogs",
70 "eth_getLogs",
71 "eth_uninstallFilter",
72 "eth_getCompilers",
73 "eth_getWork",
74 "eth_sign",
75 "eth_signTypedData",
76 "eth_sendRawTransaction",
77 "personal_importRawKey",
78 "personal_newAccount",
79 "personal_listAccounts",
80 "personal_listWallets",
81 "personal_lockAccount",
82 "personal_unlockAccount",
83 "personal_ecRecover",
84 "personal_sign",
85 "personal_signTypedData",
86 ]
87
88
89 def check_if_retry_on_failure(method: RPCEndpoint) -> bool:
90 root = method.split("_")[0]
91 if root in whitelist:
92 return True
93 elif method in whitelist:
94 return True
95 else:
96 return False
97
98
99 def exception_retry_middleware(
100 make_request: Callable[[RPCEndpoint, Any], RPCResponse],
101 _w3: "Web3",
102 errors: Collection[Type[BaseException]],
103 retries: int = 5,
104 ) -> Callable[[RPCEndpoint, Any], RPCResponse]:
105 """
106 Creates middleware that retries failed HTTP requests. Is a default
107 middleware for HTTPProvider.
108 """
109
110 def middleware(method: RPCEndpoint, params: Any) -> Optional[RPCResponse]:
111 if check_if_retry_on_failure(method):
112 for i in range(retries):
113 try:
114 return make_request(method, params)
115 except tuple(errors):
116 if i < retries - 1:
117 continue
118 else:
119 raise
120 return None
121 else:
122 return make_request(method, params)
123
124 return middleware
125
126
127 def http_retry_request_middleware(
128 make_request: Callable[[RPCEndpoint, Any], Any], w3: "Web3"
129 ) -> Callable[[RPCEndpoint, Any], Any]:
130 return exception_retry_middleware(
131 make_request, w3, (ConnectionError, HTTPError, Timeout, TooManyRedirects)
132 )
133
134
135 async def async_exception_retry_middleware(
136 make_request: Callable[[RPCEndpoint, Any], Any],
137 _async_w3: "AsyncWeb3",
138 errors: Collection[Type[BaseException]],
139 retries: int = 5,
140 backoff_factor: float = 0.3,
141 ) -> AsyncMiddlewareCoroutine:
142 """
143 Creates middleware that retries failed HTTP requests.
144 Is a default middleware for AsyncHTTPProvider.
145 """
146
147 async def middleware(method: RPCEndpoint, params: Any) -> Optional[RPCResponse]:
148 if check_if_retry_on_failure(method):
149 for i in range(retries):
150 try:
151 return await make_request(method, params)
152 except tuple(errors):
153 if i < retries - 1:
154 await asyncio.sleep(backoff_factor)
155 continue
156 else:
157 raise
158 return None
159 else:
160 return await make_request(method, params)
161
162 return middleware
163
164
165 async def async_http_retry_request_middleware(
166 make_request: Callable[[RPCEndpoint, Any], Any], async_w3: "AsyncWeb3"
167 ) -> Callable[[RPCEndpoint, Any], Any]:
168 return await async_exception_retry_middleware(
169 make_request,
170 async_w3,
171 (TimeoutError, aiohttp.ClientError),
172 )
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/web3/middleware/exception_retry_request.py b/web3/middleware/exception_retry_request.py
--- a/web3/middleware/exception_retry_request.py
+++ b/web3/middleware/exception_retry_request.py
@@ -62,6 +62,7 @@
"eth_getRawTransactionByHash",
"eth_call",
"eth_estimateGas",
+ "eth_maxPriorityFeePerGas",
"eth_newBlockFilter",
"eth_newPendingTransactionFilter",
"eth_newFilter",
| {"golden_diff": "diff --git a/web3/middleware/exception_retry_request.py b/web3/middleware/exception_retry_request.py\n--- a/web3/middleware/exception_retry_request.py\n+++ b/web3/middleware/exception_retry_request.py\n@@ -62,6 +62,7 @@\n \"eth_getRawTransactionByHash\",\n \"eth_call\",\n \"eth_estimateGas\",\n+ \"eth_maxPriorityFeePerGas\",\n \"eth_newBlockFilter\",\n \"eth_newPendingTransactionFilter\",\n \"eth_newFilter\",\n", "issue": "Add `eth_chainId` to retry middleware whitelist\n### What was wrong?\r\n\r\nI enabled the `http_retry_request_middleware`, but an idempotent method that is called frequently (`eth_chainId`) is missing from the retry whitelist\r\n\r\n\r\n### How can it be fixed?\r\n\r\nAdd this method to the retry method whitelist in the code\r\n\n", "before_files": [{"content": "import asyncio\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Collection,\n Optional,\n Type,\n)\n\nimport aiohttp\nfrom requests.exceptions import (\n ConnectionError,\n HTTPError,\n Timeout,\n TooManyRedirects,\n)\n\nfrom web3.types import (\n AsyncMiddlewareCoroutine,\n RPCEndpoint,\n RPCResponse,\n)\n\nif TYPE_CHECKING:\n from web3 import ( # noqa: F401\n AsyncWeb3,\n Web3,\n )\n\nwhitelist = [\n \"admin\",\n \"miner\",\n \"net\",\n \"txpool\",\n \"testing\",\n \"evm\",\n \"eth_protocolVersion\",\n \"eth_syncing\",\n \"eth_coinbase\",\n \"eth_mining\",\n \"eth_hashrate\",\n \"eth_chainId\",\n \"eth_gasPrice\",\n \"eth_accounts\",\n \"eth_blockNumber\",\n \"eth_getBalance\",\n \"eth_getStorageAt\",\n \"eth_getProof\",\n \"eth_getCode\",\n \"eth_getBlockByNumber\",\n \"eth_getBlockByHash\",\n \"eth_getBlockTransactionCountByNumber\",\n \"eth_getBlockTransactionCountByHash\",\n \"eth_getUncleCountByBlockNumber\",\n \"eth_getUncleCountByBlockHash\",\n \"eth_getTransactionByHash\",\n \"eth_getTransactionByBlockHashAndIndex\",\n \"eth_getTransactionByBlockNumberAndIndex\",\n \"eth_getTransactionReceipt\",\n \"eth_getTransactionCount\",\n \"eth_getRawTransactionByHash\",\n \"eth_call\",\n \"eth_estimateGas\",\n \"eth_newBlockFilter\",\n \"eth_newPendingTransactionFilter\",\n \"eth_newFilter\",\n \"eth_getFilterChanges\",\n \"eth_getFilterLogs\",\n \"eth_getLogs\",\n \"eth_uninstallFilter\",\n \"eth_getCompilers\",\n \"eth_getWork\",\n \"eth_sign\",\n \"eth_signTypedData\",\n \"eth_sendRawTransaction\",\n \"personal_importRawKey\",\n \"personal_newAccount\",\n \"personal_listAccounts\",\n \"personal_listWallets\",\n \"personal_lockAccount\",\n \"personal_unlockAccount\",\n \"personal_ecRecover\",\n \"personal_sign\",\n \"personal_signTypedData\",\n]\n\n\ndef check_if_retry_on_failure(method: RPCEndpoint) -> bool:\n root = method.split(\"_\")[0]\n if root in whitelist:\n return True\n elif method in whitelist:\n return True\n else:\n return False\n\n\ndef exception_retry_middleware(\n make_request: Callable[[RPCEndpoint, Any], RPCResponse],\n _w3: \"Web3\",\n errors: Collection[Type[BaseException]],\n retries: int = 5,\n) -> Callable[[RPCEndpoint, Any], RPCResponse]:\n \"\"\"\n Creates middleware that retries failed HTTP requests. 
Is a default\n middleware for HTTPProvider.\n \"\"\"\n\n def middleware(method: RPCEndpoint, params: Any) -> Optional[RPCResponse]:\n if check_if_retry_on_failure(method):\n for i in range(retries):\n try:\n return make_request(method, params)\n except tuple(errors):\n if i < retries - 1:\n continue\n else:\n raise\n return None\n else:\n return make_request(method, params)\n\n return middleware\n\n\ndef http_retry_request_middleware(\n make_request: Callable[[RPCEndpoint, Any], Any], w3: \"Web3\"\n) -> Callable[[RPCEndpoint, Any], Any]:\n return exception_retry_middleware(\n make_request, w3, (ConnectionError, HTTPError, Timeout, TooManyRedirects)\n )\n\n\nasync def async_exception_retry_middleware(\n make_request: Callable[[RPCEndpoint, Any], Any],\n _async_w3: \"AsyncWeb3\",\n errors: Collection[Type[BaseException]],\n retries: int = 5,\n backoff_factor: float = 0.3,\n) -> AsyncMiddlewareCoroutine:\n \"\"\"\n Creates middleware that retries failed HTTP requests.\n Is a default middleware for AsyncHTTPProvider.\n \"\"\"\n\n async def middleware(method: RPCEndpoint, params: Any) -> Optional[RPCResponse]:\n if check_if_retry_on_failure(method):\n for i in range(retries):\n try:\n return await make_request(method, params)\n except tuple(errors):\n if i < retries - 1:\n await asyncio.sleep(backoff_factor)\n continue\n else:\n raise\n return None\n else:\n return await make_request(method, params)\n\n return middleware\n\n\nasync def async_http_retry_request_middleware(\n make_request: Callable[[RPCEndpoint, Any], Any], async_w3: \"AsyncWeb3\"\n) -> Callable[[RPCEndpoint, Any], Any]:\n return await async_exception_retry_middleware(\n make_request,\n async_w3,\n (TimeoutError, aiohttp.ClientError),\n )\n", "path": "web3/middleware/exception_retry_request.py"}], "after_files": [{"content": "import asyncio\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Collection,\n Optional,\n Type,\n)\n\nimport aiohttp\nfrom requests.exceptions import (\n ConnectionError,\n HTTPError,\n Timeout,\n TooManyRedirects,\n)\n\nfrom web3.types import (\n AsyncMiddlewareCoroutine,\n RPCEndpoint,\n RPCResponse,\n)\n\nif TYPE_CHECKING:\n from web3 import ( # noqa: F401\n AsyncWeb3,\n Web3,\n )\n\nwhitelist = [\n \"admin\",\n \"miner\",\n \"net\",\n \"txpool\",\n \"testing\",\n \"evm\",\n \"eth_protocolVersion\",\n \"eth_syncing\",\n \"eth_coinbase\",\n \"eth_mining\",\n \"eth_hashrate\",\n \"eth_chainId\",\n \"eth_gasPrice\",\n \"eth_accounts\",\n \"eth_blockNumber\",\n \"eth_getBalance\",\n \"eth_getStorageAt\",\n \"eth_getProof\",\n \"eth_getCode\",\n \"eth_getBlockByNumber\",\n \"eth_getBlockByHash\",\n \"eth_getBlockTransactionCountByNumber\",\n \"eth_getBlockTransactionCountByHash\",\n \"eth_getUncleCountByBlockNumber\",\n \"eth_getUncleCountByBlockHash\",\n \"eth_getTransactionByHash\",\n \"eth_getTransactionByBlockHashAndIndex\",\n \"eth_getTransactionByBlockNumberAndIndex\",\n \"eth_getTransactionReceipt\",\n \"eth_getTransactionCount\",\n \"eth_getRawTransactionByHash\",\n \"eth_call\",\n \"eth_estimateGas\",\n \"eth_maxPriorityFeePerGas\",\n \"eth_newBlockFilter\",\n \"eth_newPendingTransactionFilter\",\n \"eth_newFilter\",\n \"eth_getFilterChanges\",\n \"eth_getFilterLogs\",\n \"eth_getLogs\",\n \"eth_uninstallFilter\",\n \"eth_getCompilers\",\n \"eth_getWork\",\n \"eth_sign\",\n \"eth_signTypedData\",\n \"eth_sendRawTransaction\",\n \"personal_importRawKey\",\n \"personal_newAccount\",\n \"personal_listAccounts\",\n \"personal_listWallets\",\n \"personal_lockAccount\",\n 
\"personal_unlockAccount\",\n \"personal_ecRecover\",\n \"personal_sign\",\n \"personal_signTypedData\",\n]\n\n\ndef check_if_retry_on_failure(method: RPCEndpoint) -> bool:\n root = method.split(\"_\")[0]\n if root in whitelist:\n return True\n elif method in whitelist:\n return True\n else:\n return False\n\n\ndef exception_retry_middleware(\n make_request: Callable[[RPCEndpoint, Any], RPCResponse],\n _w3: \"Web3\",\n errors: Collection[Type[BaseException]],\n retries: int = 5,\n) -> Callable[[RPCEndpoint, Any], RPCResponse]:\n \"\"\"\n Creates middleware that retries failed HTTP requests. Is a default\n middleware for HTTPProvider.\n \"\"\"\n\n def middleware(method: RPCEndpoint, params: Any) -> Optional[RPCResponse]:\n if check_if_retry_on_failure(method):\n for i in range(retries):\n try:\n return make_request(method, params)\n except tuple(errors):\n if i < retries - 1:\n continue\n else:\n raise\n return None\n else:\n return make_request(method, params)\n\n return middleware\n\n\ndef http_retry_request_middleware(\n make_request: Callable[[RPCEndpoint, Any], Any], w3: \"Web3\"\n) -> Callable[[RPCEndpoint, Any], Any]:\n return exception_retry_middleware(\n make_request, w3, (ConnectionError, HTTPError, Timeout, TooManyRedirects)\n )\n\n\nasync def async_exception_retry_middleware(\n make_request: Callable[[RPCEndpoint, Any], Any],\n _async_w3: \"AsyncWeb3\",\n errors: Collection[Type[BaseException]],\n retries: int = 5,\n backoff_factor: float = 0.3,\n) -> AsyncMiddlewareCoroutine:\n \"\"\"\n Creates middleware that retries failed HTTP requests.\n Is a default middleware for AsyncHTTPProvider.\n \"\"\"\n\n async def middleware(method: RPCEndpoint, params: Any) -> Optional[RPCResponse]:\n if check_if_retry_on_failure(method):\n for i in range(retries):\n try:\n return await make_request(method, params)\n except tuple(errors):\n if i < retries - 1:\n await asyncio.sleep(backoff_factor)\n continue\n else:\n raise\n return None\n else:\n return await make_request(method, params)\n\n return middleware\n\n\nasync def async_http_retry_request_middleware(\n make_request: Callable[[RPCEndpoint, Any], Any], async_w3: \"AsyncWeb3\"\n) -> Callable[[RPCEndpoint, Any], Any]:\n return await async_exception_retry_middleware(\n make_request,\n async_w3,\n (TimeoutError, aiohttp.ClientError),\n )\n", "path": "web3/middleware/exception_retry_request.py"}]} | 1,807 | 111 |
gh_patches_debug_20829 | rasdani/github-patches | git_diff | litestar-org__litestar-1114 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation: "older version" warning present on latest
Every page under https://starlite-api.github.io/starlite/latest/ has the "You are viewing the documentation for an older version of Starlite. Click here to get to the latest version" warning, which links back to the welcome page.
The message is not present in https://starlite-api.github.io/starlite/1.50/, https://starlite-api.github.io/starlite/1.49/, or https://starlite-api.github.io/starlite/1.47/.
Documentation: "older version" warning present on latest
Every page under https://starlite-api.github.io/starlite/latest/ has the "You are viewing the documentation for an older version of Starlite. Click here to get to the latest version" warning, which links back to the welcome page.
The message is not present in https://starlite-api.github.io/starlite/1.50/, https://starlite-api.github.io/starlite/1.49/, or https://starlite-api.github.io/starlite/1.47/.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/publish_docs.py`
Content:
```
1 import importlib.metadata
2 import json
3 import shutil
4 import subprocess
5 from pathlib import Path
6 import argparse
7 import shutil
8
9 parser = argparse.ArgumentParser()
10 parser.add_argument("--version", required=False)
11 parser.add_argument("--push", action="store_true")
12 parser.add_argument("--latest", action="store_true")
13
14
15 def update_versions_file(version: str) -> None:
16 versions_file = Path("versions.json")
17 versions = []
18 if versions_file.exists():
19 versions = json.loads(versions_file.read_text())
20
21 new_version_spec = {"version": version, "title": version, "aliases": []}
22 if any(v["version"] == version for v in versions):
23 versions = [v if v["version"] != version else new_version_spec for v in versions]
24 else:
25 versions.insert(0, new_version_spec)
26
27 versions_file.write_text(json.dumps(versions))
28
29
30 def make_version(version: str, push: bool, latest: bool) -> None:
31 subprocess.run(["make", "docs"], check=True)
32
33 subprocess.run(["git", "checkout", "gh-pages"], check=True)
34
35 update_versions_file(version)
36
37 docs_src_path = Path("docs/_build/html")
38 docs_dest_path = Path(version)
39 docs_dest_path_latest = Path("latest")
40 if docs_dest_path.exists():
41 shutil.rmtree(docs_dest_path)
42
43 docs_src_path.rename(docs_dest_path)
44 if latest:
45 if docs_dest_path_latest.exists():
46 shutil.rmtree(docs_dest_path_latest)
47 shutil.copytree(docs_dest_path, docs_dest_path_latest)
48 subprocess.run(["git", "add", "latest"], check=True)
49
50 subprocess.run(["git", "add", version], check=True)
51 subprocess.run(["git", "add", "versions.json"], check=True)
52 subprocess.run(["git", "commit", "-m", f"automated docs build: {version}"], check=True)
53 if push:
54 subprocess.run(["git", "push"], check=True)
55 subprocess.run(["git", "checkout", "-"], check=True)
56
57
58 def main() -> None:
59 args = parser.parse_args()
60 version = args.version or importlib.metadata.version("starlite").rsplit(".", 1)[0]
61 make_version(version=version, push=args.push, latest=args.latest)
62
63
64 if __name__ == "__main__":
65 main()
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/publish_docs.py b/tools/publish_docs.py
--- a/tools/publish_docs.py
+++ b/tools/publish_docs.py
@@ -12,7 +12,7 @@
parser.add_argument("--latest", action="store_true")
-def update_versions_file(version: str) -> None:
+def add_to_versions_file(version: str, latest: bool) -> None:
versions_file = Path("versions.json")
versions = []
if versions_file.exists():
@@ -24,6 +24,11 @@
else:
versions.insert(0, new_version_spec)
+ if latest:
+ for version in versions:
+ version["aliases"] = []
+ versions[0]["aliases"] = ["latest"]
+
versions_file.write_text(json.dumps(versions))
@@ -32,7 +37,7 @@
subprocess.run(["git", "checkout", "gh-pages"], check=True)
- update_versions_file(version)
+ add_to_versions_file(version, latest)
docs_src_path = Path("docs/_build/html")
docs_dest_path = Path(version)
| {"golden_diff": "diff --git a/tools/publish_docs.py b/tools/publish_docs.py\n--- a/tools/publish_docs.py\n+++ b/tools/publish_docs.py\n@@ -12,7 +12,7 @@\n parser.add_argument(\"--latest\", action=\"store_true\")\n \n \n-def update_versions_file(version: str) -> None:\n+def add_to_versions_file(version: str, latest: bool) -> None:\n versions_file = Path(\"versions.json\")\n versions = []\n if versions_file.exists():\n@@ -24,6 +24,11 @@\n else:\n versions.insert(0, new_version_spec)\n \n+ if latest:\n+ for version in versions:\n+ version[\"aliases\"] = []\n+ versions[0][\"aliases\"] = [\"latest\"]\n+\n versions_file.write_text(json.dumps(versions))\n \n \n@@ -32,7 +37,7 @@\n \n subprocess.run([\"git\", \"checkout\", \"gh-pages\"], check=True)\n \n- update_versions_file(version)\n+ add_to_versions_file(version, latest)\n \n docs_src_path = Path(\"docs/_build/html\")\n docs_dest_path = Path(version)\n", "issue": "Documentation: \"older version\" warning present on latest\nEvery page under https://starlite-api.github.io/starlite/latest/ has the \"You are viewing the documentation for an older version of Starlite. Click here to get to the latest version\" warning, which links back to the welcome page. \r\n\r\nThe message is not present in https://starlite-api.github.io/starlite/1.50/, https://starlite-api.github.io/starlite/1.49/, or https://starlite-api.github.io/starlite/1.47/.\nDocumentation: \"older version\" warning present on latest\nEvery page under https://starlite-api.github.io/starlite/latest/ has the \"You are viewing the documentation for an older version of Starlite. Click here to get to the latest version\" warning, which links back to the welcome page. \r\n\r\nThe message is not present in https://starlite-api.github.io/starlite/1.50/, https://starlite-api.github.io/starlite/1.49/, or https://starlite-api.github.io/starlite/1.47/.\n", "before_files": [{"content": "import importlib.metadata\nimport json\nimport shutil\nimport subprocess\nfrom pathlib import Path\nimport argparse\nimport shutil\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--version\", required=False)\nparser.add_argument(\"--push\", action=\"store_true\")\nparser.add_argument(\"--latest\", action=\"store_true\")\n\n\ndef update_versions_file(version: str) -> None:\n versions_file = Path(\"versions.json\")\n versions = []\n if versions_file.exists():\n versions = json.loads(versions_file.read_text())\n\n new_version_spec = {\"version\": version, \"title\": version, \"aliases\": []}\n if any(v[\"version\"] == version for v in versions):\n versions = [v if v[\"version\"] != version else new_version_spec for v in versions]\n else:\n versions.insert(0, new_version_spec)\n\n versions_file.write_text(json.dumps(versions))\n\n\ndef make_version(version: str, push: bool, latest: bool) -> None:\n subprocess.run([\"make\", \"docs\"], check=True)\n\n subprocess.run([\"git\", \"checkout\", \"gh-pages\"], check=True)\n\n update_versions_file(version)\n\n docs_src_path = Path(\"docs/_build/html\")\n docs_dest_path = Path(version)\n docs_dest_path_latest = Path(\"latest\")\n if docs_dest_path.exists():\n shutil.rmtree(docs_dest_path)\n\n docs_src_path.rename(docs_dest_path)\n if latest:\n if docs_dest_path_latest.exists():\n shutil.rmtree(docs_dest_path_latest)\n shutil.copytree(docs_dest_path, docs_dest_path_latest)\n subprocess.run([\"git\", \"add\", \"latest\"], check=True)\n\n subprocess.run([\"git\", \"add\", version], check=True)\n subprocess.run([\"git\", \"add\", \"versions.json\"], check=True)\n 
subprocess.run([\"git\", \"commit\", \"-m\", f\"automated docs build: {version}\"], check=True)\n if push:\n subprocess.run([\"git\", \"push\"], check=True)\n subprocess.run([\"git\", \"checkout\", \"-\"], check=True)\n\n\ndef main() -> None:\n args = parser.parse_args()\n version = args.version or importlib.metadata.version(\"starlite\").rsplit(\".\", 1)[0]\n make_version(version=version, push=args.push, latest=args.latest)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/publish_docs.py"}], "after_files": [{"content": "import importlib.metadata\nimport json\nimport shutil\nimport subprocess\nfrom pathlib import Path\nimport argparse\nimport shutil\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--version\", required=False)\nparser.add_argument(\"--push\", action=\"store_true\")\nparser.add_argument(\"--latest\", action=\"store_true\")\n\n\ndef add_to_versions_file(version: str, latest: bool) -> None:\n versions_file = Path(\"versions.json\")\n versions = []\n if versions_file.exists():\n versions = json.loads(versions_file.read_text())\n\n new_version_spec = {\"version\": version, \"title\": version, \"aliases\": []}\n if any(v[\"version\"] == version for v in versions):\n versions = [v if v[\"version\"] != version else new_version_spec for v in versions]\n else:\n versions.insert(0, new_version_spec)\n\n if latest:\n for version in versions:\n version[\"aliases\"] = []\n versions[0][\"aliases\"] = [\"latest\"]\n\n versions_file.write_text(json.dumps(versions))\n\n\ndef make_version(version: str, push: bool, latest: bool) -> None:\n subprocess.run([\"make\", \"docs\"], check=True)\n\n subprocess.run([\"git\", \"checkout\", \"gh-pages\"], check=True)\n\n add_to_versions_file(version, latest)\n\n docs_src_path = Path(\"docs/_build/html\")\n docs_dest_path = Path(version)\n docs_dest_path_latest = Path(\"latest\")\n if docs_dest_path.exists():\n shutil.rmtree(docs_dest_path)\n\n docs_src_path.rename(docs_dest_path)\n if latest:\n if docs_dest_path_latest.exists():\n shutil.rmtree(docs_dest_path_latest)\n shutil.copytree(docs_dest_path, docs_dest_path_latest)\n subprocess.run([\"git\", \"add\", \"latest\"], check=True)\n\n subprocess.run([\"git\", \"add\", version], check=True)\n subprocess.run([\"git\", \"add\", \"versions.json\"], check=True)\n subprocess.run([\"git\", \"commit\", \"-m\", f\"automated docs build: {version}\"], check=True)\n if push:\n subprocess.run([\"git\", \"push\"], check=True)\n subprocess.run([\"git\", \"checkout\", \"-\"], check=True)\n\n\ndef main() -> None:\n args = parser.parse_args()\n version = args.version or importlib.metadata.version(\"starlite\").rsplit(\".\", 1)[0]\n make_version(version=version, push=args.push, latest=args.latest)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/publish_docs.py"}]} | 1,104 | 242 |
gh_patches_debug_16074 | rasdani/github-patches | git_diff | Zeroto521__my-data-toolkit-645 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DEP: Deprecated `utm_crs`
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [ ] closes #xxxx
- [x] whatsnew entry
`pyproj.database.query_utm_crs_info` is too slow to query all the data.
One point takes about 200 ms, but 2,000 points take about 200 s.
Even after trying to parallelize `utm_crs`, the speed is still too low.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dtoolkit/geoaccessor/geoseries/utm_crs.py`
Content:
```
1 import geopandas as gpd
2 import pandas as pd
3 from pyproj.aoi import AreaOfInterest
4 from pyproj.database import query_utm_crs_info
5
6 from dtoolkit.geoaccessor.register import register_geoseries_method
7 from dtoolkit.util._decorator import warning
8
9
10 @register_geoseries_method
11 @warning(
12 "The 'utm_crs' is deprecated and will be removed in 0.0.17. "
13 "(Warning added DToolKit 0.0.16)",
14 DeprecationWarning,
15 stacklevel=3,
16 )
17 def utm_crs(s: gpd.GeoSeries, /, datum_name: str = "WGS 84") -> pd.Series:
18 """
19 Returns the estimated UTM CRS based on the bounds of each geometry.
20
21 .. deprecated:: 0.0.17
22 The 'utm_crs' is deprecated and will be removed in 0.0.17.
23 (Warning added DToolKit 0.0.16)
24
25 Parameters
26 ----------
27 datum_name : str, default 'WGS 84'
28 The name of the datum in the CRS name ('NAD27', 'NAD83', 'WGS 84', …).
29
30 Returns
31 -------
32 Series
33 The element type is :class:`~pyproj.database.CRSInfo`.
34
35 See Also
36 --------
37 dtoolkit.geoaccessor.geoseries.utm_crs
38 Returns the estimated UTM CRS based on the bounds of each geometry.
39
40 dtoolkit.geoaccessor.geodataframe.utm_crs
41 Returns the estimated UTM CRS based on the bounds of each geometry.
42
43 geopandas.GeoSeries.estimate_utm_crs
44 Returns the estimated UTM CRS based on the bounds of the dataset.
45
46 geopandas.GeoDataFrame.estimate_utm_crs
47 Returns the estimated UTM CRS based on the bounds of the dataset.
48
49 Examples
50 --------
51 >>> import dtoolkit.accessor
52 >>> import dtoolkit.geoaccessor
53 >>> import geopandas as gpd
54 >>> s = gpd.GeoSeries.from_wkt(["Point (120 50)", "Point (100 1)"], crs="epsg:4326")
55 >>> s.utm_crs()
56 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...
57 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...
58 dtype: object
59
60 Same operate for GeoDataFrame.
61
62 >>> s.to_frame("geometry").utm_crs()
63 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...
64 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...
65 dtype: object
66
67 Get the EPSG code.
68
69 >>> s.utm_crs().getattr("code")
70 0 32650
71 1 32647
72 dtype: object
73 """
74
75 return s.bounds.apply(
76 lambda bound: None
77 if bound.isna().all()
78 else query_utm_crs_info(
79 datum_name=datum_name,
80 area_of_interest=AreaOfInterest(
81 west_lon_degree=bound["minx"],
82 south_lat_degree=bound["miny"],
83 east_lon_degree=bound["maxx"],
84 north_lat_degree=bound["maxy"],
85 ),
86 )[0],
87 axis=1,
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dtoolkit/geoaccessor/geoseries/utm_crs.py b/dtoolkit/geoaccessor/geoseries/utm_crs.py
--- a/dtoolkit/geoaccessor/geoseries/utm_crs.py
+++ b/dtoolkit/geoaccessor/geoseries/utm_crs.py
@@ -9,8 +9,8 @@
@register_geoseries_method
@warning(
- "The 'utm_crs' is deprecated and will be removed in 0.0.17. "
- "(Warning added DToolKit 0.0.16)",
+ "The 'utm_crs' is deprecated and will be removed in 0.0.18. "
+ "(Warning added DToolKit 0.0.17)",
DeprecationWarning,
stacklevel=3,
)
@@ -18,9 +18,9 @@
"""
Returns the estimated UTM CRS based on the bounds of each geometry.
- .. deprecated:: 0.0.17
- The 'utm_crs' is deprecated and will be removed in 0.0.17.
- (Warning added DToolKit 0.0.16)
+ .. deprecated:: 0.0.18
+ The 'utm_crs' is deprecated and will be removed in 0.0.18.
+ (Warning added DToolKit 0.0.17)
Parameters
----------
| {"golden_diff": "diff --git a/dtoolkit/geoaccessor/geoseries/utm_crs.py b/dtoolkit/geoaccessor/geoseries/utm_crs.py\n--- a/dtoolkit/geoaccessor/geoseries/utm_crs.py\n+++ b/dtoolkit/geoaccessor/geoseries/utm_crs.py\n@@ -9,8 +9,8 @@\n \n @register_geoseries_method\n @warning(\n- \"The 'utm_crs' is deprecated and will be removed in 0.0.17. \"\n- \"(Warning added DToolKit 0.0.16)\",\n+ \"The 'utm_crs' is deprecated and will be removed in 0.0.18. \"\n+ \"(Warning added DToolKit 0.0.17)\",\n DeprecationWarning,\n stacklevel=3,\n )\n@@ -18,9 +18,9 @@\n \"\"\"\n Returns the estimated UTM CRS based on the bounds of each geometry.\n \n- .. deprecated:: 0.0.17\n- The 'utm_crs' is deprecated and will be removed in 0.0.17.\n- (Warning added DToolKit 0.0.16)\n+ .. deprecated:: 0.0.18\n+ The 'utm_crs' is deprecated and will be removed in 0.0.18.\n+ (Warning added DToolKit 0.0.17)\n \n Parameters\n ----------\n", "issue": "DEP: Depcrated `utm_crs`\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [ ] closes #xxxx\r\n- [x] whatsnew entry\r\n\r\n`pyproj.database.query_utm_crs_info` too slow to query all data.\r\nFor 1 point will cost 200ms but for 2000 points will cost 200s.\r\nEven try `parallelize` to `utm_crs`, but the speed is still so lower.\n", "before_files": [{"content": "import geopandas as gpd\nimport pandas as pd\nfrom pyproj.aoi import AreaOfInterest\nfrom pyproj.database import query_utm_crs_info\n\nfrom dtoolkit.geoaccessor.register import register_geoseries_method\nfrom dtoolkit.util._decorator import warning\n\n\n@register_geoseries_method\n@warning(\n \"The 'utm_crs' is deprecated and will be removed in 0.0.17. \"\n \"(Warning added DToolKit 0.0.16)\",\n DeprecationWarning,\n stacklevel=3,\n)\ndef utm_crs(s: gpd.GeoSeries, /, datum_name: str = \"WGS 84\") -> pd.Series:\n \"\"\"\n Returns the estimated UTM CRS based on the bounds of each geometry.\n\n .. 
deprecated:: 0.0.17\n The 'utm_crs' is deprecated and will be removed in 0.0.17.\n (Warning added DToolKit 0.0.16)\n\n Parameters\n ----------\n datum_name : str, default 'WGS 84'\n The name of the datum in the CRS name ('NAD27', 'NAD83', 'WGS 84', \u2026).\n\n Returns\n -------\n Series\n The element type is :class:`~pyproj.database.CRSInfo`.\n\n See Also\n --------\n dtoolkit.geoaccessor.geoseries.utm_crs\n Returns the estimated UTM CRS based on the bounds of each geometry.\n\n dtoolkit.geoaccessor.geodataframe.utm_crs\n Returns the estimated UTM CRS based on the bounds of each geometry.\n\n geopandas.GeoSeries.estimate_utm_crs\n Returns the estimated UTM CRS based on the bounds of the dataset.\n\n geopandas.GeoDataFrame.estimate_utm_crs\n Returns the estimated UTM CRS based on the bounds of the dataset.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import dtoolkit.geoaccessor\n >>> import geopandas as gpd\n >>> s = gpd.GeoSeries.from_wkt([\"Point (120 50)\", \"Point (100 1)\"], crs=\"epsg:4326\")\n >>> s.utm_crs()\n 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...\n 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...\n dtype: object\n\n Same operate for GeoDataFrame.\n\n >>> s.to_frame(\"geometry\").utm_crs()\n 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...\n 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...\n dtype: object\n\n Get the EPSG code.\n\n >>> s.utm_crs().getattr(\"code\")\n 0 32650\n 1 32647\n dtype: object\n \"\"\"\n\n return s.bounds.apply(\n lambda bound: None\n if bound.isna().all()\n else query_utm_crs_info(\n datum_name=datum_name,\n area_of_interest=AreaOfInterest(\n west_lon_degree=bound[\"minx\"],\n south_lat_degree=bound[\"miny\"],\n east_lon_degree=bound[\"maxx\"],\n north_lat_degree=bound[\"maxy\"],\n ),\n )[0],\n axis=1,\n )\n", "path": "dtoolkit/geoaccessor/geoseries/utm_crs.py"}], "after_files": [{"content": "import geopandas as gpd\nimport pandas as pd\nfrom pyproj.aoi import AreaOfInterest\nfrom pyproj.database import query_utm_crs_info\n\nfrom dtoolkit.geoaccessor.register import register_geoseries_method\nfrom dtoolkit.util._decorator import warning\n\n\n@register_geoseries_method\n@warning(\n \"The 'utm_crs' is deprecated and will be removed in 0.0.18. \"\n \"(Warning added DToolKit 0.0.17)\",\n DeprecationWarning,\n stacklevel=3,\n)\ndef utm_crs(s: gpd.GeoSeries, /, datum_name: str = \"WGS 84\") -> pd.Series:\n \"\"\"\n Returns the estimated UTM CRS based on the bounds of each geometry.\n\n .. 
deprecated:: 0.0.18\n The 'utm_crs' is deprecated and will be removed in 0.0.18.\n (Warning added DToolKit 0.0.17)\n\n Parameters\n ----------\n datum_name : str, default 'WGS 84'\n The name of the datum in the CRS name ('NAD27', 'NAD83', 'WGS 84', \u2026).\n\n Returns\n -------\n Series\n The element type is :class:`~pyproj.database.CRSInfo`.\n\n See Also\n --------\n dtoolkit.geoaccessor.geoseries.utm_crs\n Returns the estimated UTM CRS based on the bounds of each geometry.\n\n dtoolkit.geoaccessor.geodataframe.utm_crs\n Returns the estimated UTM CRS based on the bounds of each geometry.\n\n geopandas.GeoSeries.estimate_utm_crs\n Returns the estimated UTM CRS based on the bounds of the dataset.\n\n geopandas.GeoDataFrame.estimate_utm_crs\n Returns the estimated UTM CRS based on the bounds of the dataset.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import dtoolkit.geoaccessor\n >>> import geopandas as gpd\n >>> s = gpd.GeoSeries.from_wkt([\"Point (120 50)\", \"Point (100 1)\"], crs=\"epsg:4326\")\n >>> s.utm_crs()\n 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...\n 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...\n dtype: object\n\n Same operate for GeoDataFrame.\n\n >>> s.to_frame(\"geometry\").utm_crs()\n 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...\n 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...\n dtype: object\n\n Get the EPSG code.\n\n >>> s.utm_crs().getattr(\"code\")\n 0 32650\n 1 32647\n dtype: object\n \"\"\"\n\n return s.bounds.apply(\n lambda bound: None\n if bound.isna().all()\n else query_utm_crs_info(\n datum_name=datum_name,\n area_of_interest=AreaOfInterest(\n west_lon_degree=bound[\"minx\"],\n south_lat_degree=bound[\"miny\"],\n east_lon_degree=bound[\"maxx\"],\n north_lat_degree=bound[\"maxy\"],\n ),\n )[0],\n axis=1,\n )\n", "path": "dtoolkit/geoaccessor/geoseries/utm_crs.py"}]} | 1,519 | 328 |
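The patch in this record only pushes the removal target one release forward; the machinery is a decorator that emits a `DeprecationWarning` before delegating to the wrapped function. The following is a guess at what `dtoolkit.util._decorator.warning` looks like, written from its call sites rather than its source:

```python
# Guess at the decorator's shape, reconstructed from its call sites.
import functools
import warnings


def warning(message, category=DeprecationWarning, stacklevel=1):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, category=category, stacklevel=stacklevel)
            return func(*args, **kwargs)

        return wrapper

    return decorator


@warning(
    "The 'utm_crs' is deprecated and will be removed in 0.0.18. "
    "(Warning added DToolKit 0.0.17)",
    DeprecationWarning,
    stacklevel=3,
)
def utm_crs(s, datum_name="WGS 84"):
    ...
```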
gh_patches_debug_5539 | rasdani/github-patches | git_diff | acl-org__acl-anthology-2313 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ingestion request: Generation challenges at INLG 2022
This is to complete the ingestions of all papers from INLG; the Generation Challenges papers still needed to be uploaded. See #1897 for the other papers.
Here are the papers and metadata from the INLG generation challenges, as generated using ACLPUB2: https://drive.google.com/file/d/1518aAVuvtbvHgw_6FREzJl0kip_lkqNg/view?usp=share_link
I think this matches the Anthology format, but I'm not sure as I added everything manually. (Export didn't work.) Could you check whether everything is OK to ingest in the Anthology? Many thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bin/volumes_from_diff.py`
Content:
```
1 #!/usr/bin/env python3
2
3 """
4 Takes a list of XML files on STDIN, and prints all the volumes
5 within each of those files. e.g.,
6
7 git diff --name-only master | ./bin/volumes_from_xml.py https://preview.aclanthology.org/BRANCH
8
9 Used to find the list of volumes to generate previews for.
10 """
11
12 import sys
13 import argparse
14 import lxml.etree as etree
15 import subprocess
16
17
18 parser = argparse.ArgumentParser()
19 parser.add_argument("url_root")
20 args = parser.parse_args()
21
22 volumes = []
23 for filepath in sys.stdin:
24 try:
25 tree = etree.parse(filepath.rstrip())
26 except Exception as e:
27 continue
28 root = tree.getroot()
29 collection_id = root.attrib["id"]
30 for volume in root:
31 volume_name = volume.attrib["id"]
32 volume_id = f"{collection_id}-{volume_name}"
33 volumes.append(f"[{volume_id}]({args.url_root}/{volume_id})")
34
35 if len(volumes) > 50:
36 volumes = volumes[0:50] + [f"(plus {len(volumes)-50} more...)"]
37
38 print(", ".join(volumes))
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bin/volumes_from_diff.py b/bin/volumes_from_diff.py
--- a/bin/volumes_from_diff.py
+++ b/bin/volumes_from_diff.py
@@ -27,7 +27,7 @@
continue
root = tree.getroot()
collection_id = root.attrib["id"]
- for volume in root:
+ for volume in root.findall("./volume"):
volume_name = volume.attrib["id"]
volume_id = f"{collection_id}-{volume_name}"
volumes.append(f"[{volume_id}]({args.url_root}/{volume_id})")
| {"golden_diff": "diff --git a/bin/volumes_from_diff.py b/bin/volumes_from_diff.py\n--- a/bin/volumes_from_diff.py\n+++ b/bin/volumes_from_diff.py\n@@ -27,7 +27,7 @@\n continue\n root = tree.getroot()\n collection_id = root.attrib[\"id\"]\n- for volume in root:\n+ for volume in root.findall(\"./volume\"):\n volume_name = volume.attrib[\"id\"]\n volume_id = f\"{collection_id}-{volume_name}\"\n volumes.append(f\"[{volume_id}]({args.url_root}/{volume_id})\")\n", "issue": "Ingestion request: Generation challenges at INLG 2022\nThis is to complete the ingestions of all papers from INLG; the Generation Challenges papers still needed to be uploaded. See #1897 for the other papers. \r\n\r\nHere are the papers and metadata from the INLG generation challenges, as generated using ACLPUB2: https://drive.google.com/file/d/1518aAVuvtbvHgw_6FREzJl0kip_lkqNg/view?usp=share_link\r\n\r\nI think this matches the Anthology format, but I'm not sure as I added everything manually. (Export didn't work.) Could you check whether everything is OK to ingest in the Anthology? Many thanks!\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"\nTakes a list of XML files on STDIN, and prints all the volumes\nwithin each of those files. e.g.,\n\n git diff --name-only master | ./bin/volumes_from_xml.py https://preview.aclanthology.org/BRANCH\n\nUsed to find the list of volumes to generate previews for.\n\"\"\"\n\nimport sys\nimport argparse\nimport lxml.etree as etree\nimport subprocess\n\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"url_root\")\nargs = parser.parse_args()\n\nvolumes = []\nfor filepath in sys.stdin:\n try:\n tree = etree.parse(filepath.rstrip())\n except Exception as e:\n continue\n root = tree.getroot()\n collection_id = root.attrib[\"id\"]\n for volume in root:\n volume_name = volume.attrib[\"id\"]\n volume_id = f\"{collection_id}-{volume_name}\"\n volumes.append(f\"[{volume_id}]({args.url_root}/{volume_id})\")\n\nif len(volumes) > 50:\n volumes = volumes[0:50] + [f\"(plus {len(volumes)-50} more...)\"]\n\nprint(\", \".join(volumes))\n", "path": "bin/volumes_from_diff.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"\nTakes a list of XML files on STDIN, and prints all the volumes\nwithin each of those files. e.g.,\n\n git diff --name-only master | ./bin/volumes_from_xml.py https://preview.aclanthology.org/BRANCH\n\nUsed to find the list of volumes to generate previews for.\n\"\"\"\n\nimport sys\nimport argparse\nimport lxml.etree as etree\nimport subprocess\n\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"url_root\")\nargs = parser.parse_args()\n\nvolumes = []\nfor filepath in sys.stdin:\n try:\n tree = etree.parse(filepath.rstrip())\n except Exception as e:\n continue\n root = tree.getroot()\n collection_id = root.attrib[\"id\"]\n for volume in root.findall(\"./volume\"):\n volume_name = volume.attrib[\"id\"]\n volume_id = f\"{collection_id}-{volume_name}\"\n volumes.append(f\"[{volume_id}]({args.url_root}/{volume_id})\")\n\nif len(volumes) > 50:\n volumes = volumes[0:50] + [f\"(plus {len(volumes)-50} more...)\"]\n\nprint(\", \".join(volumes))\n", "path": "bin/volumes_from_diff.py"}]} | 734 | 123 |
gh_patches_debug_20906 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-346 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Not using HA supported python version
HA supports the last two minor versions of Python, that is currently 3.10 and 3.9.
[calendar.py](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/calendar.py#L118) makes use of Python 3.10-only type-hinting features for optional arguments via unions:
`def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):`
The union str | None is not supported as a type hint by Python 3.9, so the waste collection schedule fails to load even though HA runs on a supported installation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/calendar.py`
Content:
```
1 """Calendar platform support for Waste Collection Schedule."""
2
3 import logging
4 from datetime import timedelta, timezone, datetime
5
6 from homeassistant.components.calendar import CalendarEntity, CalendarEvent
7 from homeassistant.core import HomeAssistant
8 from homeassistant.util.dt import DEFAULT_TIME_ZONE
9
10 from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (
11 Scraper,
12 )
13
14 _LOGGER = logging.getLogger(__name__)
15
16
17 async def async_setup_platform(hass, config, async_add_entities, discovery_info=None):
18 """Set up calendar platform."""
19 # We only want this platform to be set up via discovery.
20 if discovery_info is None:
21 return
22
23 entities = []
24
25 api = discovery_info["api"]
26
27 for scraper in api.scrapers:
28 dedicated_calendar_types = scraper.get_dedicated_calendar_types()
29 global_calendar_types = scraper.get_global_calendar_types()
30
31 if dedicated_calendar_types is not None:
32 for type in dedicated_calendar_types:
33 unique_id = calc_unique_calendar_id(scraper, type)
34
35 entities.append(
36 WasteCollectionCalendar(
37 api,
38 scraper,
39 scraper.get_calendar_title_for_type(type),
40 [scraper.get_collection_type(type)],
41 unique_id,
42 )
43 )
44
45 if global_calendar_types is not None or dedicated_calendar_types is None:
46 unique_id = calc_unique_calendar_id(scraper)
47 entities.append(
48 WasteCollectionCalendar(
49 api,
50 scraper,
51 scraper.calendar_title,
52 [
53 scraper.get_collection_type(type)
54 for type in global_calendar_types
55 ]
56 if global_calendar_types is not None
57 else None,
58 unique_id,
59 )
60 )
61
62 async_add_entities(entities)
63
64
65 class WasteCollectionCalendar(CalendarEntity):
66 """Calendar entity class."""
67
68 def __init__(self, api, scraper, name, types, unique_id: str):
69 self._api = api
70 self._scraper = scraper
71 self._name = name
72 self._types = types
73 self._unique_id = unique_id
74 self._attr_unique_id = unique_id
75
76 @property
77 def name(self):
78 """Return entity name."""
79 return self._name
80
81 @property
82 def event(self):
83 """Return next collection event."""
84 collections = self._scraper.get_upcoming(
85 count=1, include_today=True, types=self._types
86 )
87
88 if len(collections) == 0:
89 return None
90 else:
91 return self._convert(collections[0])
92
93 async def async_get_events(
94 self, hass: HomeAssistant, start_date: datetime, end_date: datetime
95 ):
96 """Return all events within specified time span."""
97 events = []
98
99 for collection in self._scraper.get_upcoming(
100 include_today=True, types=self._types
101 ):
102 event = self._convert(collection)
103
104 if start_date <= event.start_datetime_local <= end_date:
105 events.append(event)
106
107 return events
108
109 def _convert(self, collection) -> CalendarEvent:
110 """Convert an collection into a Home Assistant calendar event."""
111 return CalendarEvent(
112 summary=collection.type,
113 start=collection.date,
114 end=collection.date + timedelta(days=1),
115 )
116
117
118 def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):
119 return scraper.unique_id + ("_" + type if type is not None else "") + "_calendar"
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/calendar.py b/custom_components/waste_collection_schedule/calendar.py
--- a/custom_components/waste_collection_schedule/calendar.py
+++ b/custom_components/waste_collection_schedule/calendar.py
@@ -1,15 +1,12 @@
"""Calendar platform support for Waste Collection Schedule."""
import logging
-from datetime import timedelta, timezone, datetime
+from datetime import datetime, timedelta
from homeassistant.components.calendar import CalendarEntity, CalendarEvent
from homeassistant.core import HomeAssistant
-from homeassistant.util.dt import DEFAULT_TIME_ZONE
-from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (
- Scraper,
-)
+from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import Scraper
_LOGGER = logging.getLogger(__name__)
@@ -115,5 +112,5 @@
)
-def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):
+def calc_unique_calendar_id(scraper: Scraper, type: str = None):
return scraper.unique_id + ("_" + type if type is not None else "") + "_calendar"
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/calendar.py b/custom_components/waste_collection_schedule/calendar.py\n--- a/custom_components/waste_collection_schedule/calendar.py\n+++ b/custom_components/waste_collection_schedule/calendar.py\n@@ -1,15 +1,12 @@\n \"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n \n import logging\n-from datetime import timedelta, timezone, datetime\n+from datetime import datetime, timedelta\n \n from homeassistant.components.calendar import CalendarEntity, CalendarEvent\n from homeassistant.core import HomeAssistant\n-from homeassistant.util.dt import DEFAULT_TIME_ZONE\n \n-from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (\n- Scraper,\n-)\n+from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import Scraper\n \n _LOGGER = logging.getLogger(__name__)\n \n@@ -115,5 +112,5 @@\n )\n \n \n-def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):\n+def calc_unique_calendar_id(scraper: Scraper, type: str = None):\n return scraper.unique_id + (\"_\" + type if type is not None else \"\") + \"_calendar\"\n", "issue": "Not using HA supported python version\nHA supports the last two minor versions of Python, that is currently 3.10 and 3.9.\r\n[calendar.py](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/calendar.py#L118) makes use of Python 3.10 only type hinting features for optional arguments via unions:\r\n`def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):`\r\nThe union str | None is not supported as type hint by Python 3.9, hence the waste collection schedule fails to load albeit HA runs on a supported installation.\n", "before_files": [{"content": "\"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n\nimport logging\nfrom datetime import timedelta, timezone, datetime\n\nfrom homeassistant.components.calendar import CalendarEntity, CalendarEvent\nfrom homeassistant.core import HomeAssistant\nfrom homeassistant.util.dt import DEFAULT_TIME_ZONE\n\nfrom custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (\n Scraper,\n)\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def async_setup_platform(hass, config, async_add_entities, discovery_info=None):\n \"\"\"Set up calendar platform.\"\"\"\n # We only want this platform to be set up via discovery.\n if discovery_info is None:\n return\n\n entities = []\n\n api = discovery_info[\"api\"]\n\n for scraper in api.scrapers:\n dedicated_calendar_types = scraper.get_dedicated_calendar_types()\n global_calendar_types = scraper.get_global_calendar_types()\n\n if dedicated_calendar_types is not None:\n for type in dedicated_calendar_types:\n unique_id = calc_unique_calendar_id(scraper, type)\n\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.get_calendar_title_for_type(type),\n [scraper.get_collection_type(type)],\n unique_id,\n )\n )\n\n if global_calendar_types is not None or dedicated_calendar_types is None:\n unique_id = calc_unique_calendar_id(scraper)\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.calendar_title,\n [\n scraper.get_collection_type(type)\n for type in global_calendar_types\n ]\n if global_calendar_types is not None\n else None,\n unique_id,\n )\n )\n\n async_add_entities(entities)\n\n\nclass WasteCollectionCalendar(CalendarEntity):\n \"\"\"Calendar entity class.\"\"\"\n\n def 
__init__(self, api, scraper, name, types, unique_id: str):\n self._api = api\n self._scraper = scraper\n self._name = name\n self._types = types\n self._unique_id = unique_id\n self._attr_unique_id = unique_id\n\n @property\n def name(self):\n \"\"\"Return entity name.\"\"\"\n return self._name\n\n @property\n def event(self):\n \"\"\"Return next collection event.\"\"\"\n collections = self._scraper.get_upcoming(\n count=1, include_today=True, types=self._types\n )\n\n if len(collections) == 0:\n return None\n else:\n return self._convert(collections[0])\n\n async def async_get_events(\n self, hass: HomeAssistant, start_date: datetime, end_date: datetime\n ):\n \"\"\"Return all events within specified time span.\"\"\"\n events = []\n\n for collection in self._scraper.get_upcoming(\n include_today=True, types=self._types\n ):\n event = self._convert(collection)\n\n if start_date <= event.start_datetime_local <= end_date:\n events.append(event)\n\n return events\n\n def _convert(self, collection) -> CalendarEvent:\n \"\"\"Convert an collection into a Home Assistant calendar event.\"\"\"\n return CalendarEvent(\n summary=collection.type,\n start=collection.date,\n end=collection.date + timedelta(days=1),\n )\n\n\ndef calc_unique_calendar_id(scraper: Scraper, type: str | None = None):\n return scraper.unique_id + (\"_\" + type if type is not None else \"\") + \"_calendar\"\n", "path": "custom_components/waste_collection_schedule/calendar.py"}], "after_files": [{"content": "\"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n\nimport logging\nfrom datetime import datetime, timedelta\n\nfrom homeassistant.components.calendar import CalendarEntity, CalendarEvent\nfrom homeassistant.core import HomeAssistant\n\nfrom custom_components.waste_collection_schedule.waste_collection_schedule.scraper import Scraper\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def async_setup_platform(hass, config, async_add_entities, discovery_info=None):\n \"\"\"Set up calendar platform.\"\"\"\n # We only want this platform to be set up via discovery.\n if discovery_info is None:\n return\n\n entities = []\n\n api = discovery_info[\"api\"]\n\n for scraper in api.scrapers:\n dedicated_calendar_types = scraper.get_dedicated_calendar_types()\n global_calendar_types = scraper.get_global_calendar_types()\n\n if dedicated_calendar_types is not None:\n for type in dedicated_calendar_types:\n unique_id = calc_unique_calendar_id(scraper, type)\n\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.get_calendar_title_for_type(type),\n [scraper.get_collection_type(type)],\n unique_id,\n )\n )\n\n if global_calendar_types is not None or dedicated_calendar_types is None:\n unique_id = calc_unique_calendar_id(scraper)\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.calendar_title,\n [\n scraper.get_collection_type(type)\n for type in global_calendar_types\n ]\n if global_calendar_types is not None\n else None,\n unique_id,\n )\n )\n\n async_add_entities(entities)\n\n\nclass WasteCollectionCalendar(CalendarEntity):\n \"\"\"Calendar entity class.\"\"\"\n\n def __init__(self, api, scraper, name, types, unique_id: str):\n self._api = api\n self._scraper = scraper\n self._name = name\n self._types = types\n self._unique_id = unique_id\n self._attr_unique_id = unique_id\n\n @property\n def name(self):\n \"\"\"Return entity name.\"\"\"\n return self._name\n\n @property\n def event(self):\n \"\"\"Return next collection event.\"\"\"\n collections = 
self._scraper.get_upcoming(\n count=1, include_today=True, types=self._types\n )\n\n if len(collections) == 0:\n return None\n else:\n return self._convert(collections[0])\n\n async def async_get_events(\n self, hass: HomeAssistant, start_date: datetime, end_date: datetime\n ):\n \"\"\"Return all events within specified time span.\"\"\"\n events = []\n\n for collection in self._scraper.get_upcoming(\n include_today=True, types=self._types\n ):\n event = self._convert(collection)\n\n if start_date <= event.start_datetime_local <= end_date:\n events.append(event)\n\n return events\n\n def _convert(self, collection) -> CalendarEvent:\n \"\"\"Convert an collection into a Home Assistant calendar event.\"\"\"\n return CalendarEvent(\n summary=collection.type,\n start=collection.date,\n end=collection.date + timedelta(days=1),\n )\n\n\ndef calc_unique_calendar_id(scraper: Scraper, type: str = None):\n return scraper.unique_id + (\"_\" + type if type is not None else \"\") + \"_calendar\"\n", "path": "custom_components/waste_collection_schedule/calendar.py"}]} | 1,377 | 240 |
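The golden diff above resolves the incompatibility by dropping the union annotation (`type: str = None`), which runs on 3.9 but relies on implicit-optional typing. Two more explicit 3.9-compatible spellings, shown here as a hedged aside rather than project code:

```python
# Two 3.9-compatible ways to annotate an optional string parameter.
from typing import Optional


def calc_unique_calendar_id(scraper, type: Optional[str] = None) -> str:
    return scraper.unique_id + ("_" + type if type is not None else "") + "_calendar"

# Alternatively, keep the `str | None` spelling by deferring evaluation
# of annotations (valid from Python 3.7 onward):
#
#     from __future__ import annotations
#     def calc_unique_calendar_id(scraper, type: str | None = None) -> str: ...
```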
gh_patches_debug_2209 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1887 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Please allow markdown to the organization description field
Right now markdown is not allowed in that field. I believe that this is preventing me from adding paragraphs and other particular styles to the text in question.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py`
Content:
```
1 '''
2 Created on Nov 3, 2014
3
4 @author: alexandru-m-g
5 '''
6
7 import logging
8
9 import ckan.lib.base as base
10 import ckan.logic as logic
11 import ckan.model as model
12 import ckan.common as common
13 import ckan.lib.helpers as h
14
15 import ckanext.hdx_crisis.dao.data_access as data_access
16 import ckanext.hdx_crisis.formatters.top_line_items_formatter as formatters
17
18 render = base.render
19 get_action = logic.get_action
20 c = common.c
21 request = common.request
22 _ = common._
23
24
25 log = logging.getLogger(__name__)
26
27
28 class CrisisController(base.BaseController):
29
30 def show(self):
31
32 context = {'model': model, 'session': model.Session,
33 'user': c.user or c.author, 'for_view': True,
34 'auth_user_obj': c.userobj}
35
36 crisis_data_access = data_access.EbolaCrisisDataAccess()
37 crisis_data_access.fetch_data(context)
38 c.top_line_items = crisis_data_access.get_top_line_items()
39
40 formatter = formatters.TopLineItemsFormatter(c.top_line_items)
41 formatter.format_results()
42
43 search_term = u'ebola'
44
45 self._generate_dataset_results(context, search_term)
46
47 self._generate_other_links(search_term)
48
49 return render('crisis/crisis.html')
50
51 def _generate_dataset_results(self, context, search_term):
52 limit = 25
53 c.q = search_term
54
55 page = int(request.params.get('page', 1))
56 data_dict = {'sort': u'metadata_modified desc',
57 'fq': '+dataset_type:dataset',
58 'rows': limit,
59 'q': c.q,
60 'start': (page - 1) * limit
61 }
62 query = get_action("package_search")(context, data_dict)
63
64 def pager_url(q=None, page=None):
65 url = h.url_for('show_crisis', page=page) + '#datasets-section'
66 return url
67
68 c.page = h.Page(
69 collection=query['results'],
70 page=page,
71 url=pager_url,
72 item_count=query['count'],
73 items_per_page=limit
74 )
75 c.items = query['results']
76 c.item_count = query['count']
77
78 def _generate_other_links(self, search_term):
79 c.other_links = {}
80 c.other_links['show_more'] = h.url_for(
81 "search", **{'q': search_term, 'sort': u'metadata_modified desc',
82 'ext_indicator': '0'})
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py
--- a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py
+++ b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py
@@ -46,7 +46,7 @@
self._generate_other_links(search_term)
- return render('crisis/crisis.html')
+ return render('crisis/crisis-ebola.html')
def _generate_dataset_results(self, context, search_term):
limit = 25
| {"golden_diff": "diff --git a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py\n--- a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py\n+++ b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py\n@@ -46,7 +46,7 @@\n \n self._generate_other_links(search_term)\n \n- return render('crisis/crisis.html')\n+ return render('crisis/crisis-ebola.html')\n \n def _generate_dataset_results(self, context, search_term):\n limit = 25\n", "issue": "Please allow markdown to the organization description field\nRight now markdown is not allowed in that field. I believe that this is preventing me from adding paragraphs and other particular styles to the text in question. \n\n\n\n", "before_files": [{"content": "'''\nCreated on Nov 3, 2014\n\n@author: alexandru-m-g\n'''\n\nimport logging\n\nimport ckan.lib.base as base\nimport ckan.logic as logic\nimport ckan.model as model\nimport ckan.common as common\nimport ckan.lib.helpers as h\n\nimport ckanext.hdx_crisis.dao.data_access as data_access\nimport ckanext.hdx_crisis.formatters.top_line_items_formatter as formatters\n\nrender = base.render\nget_action = logic.get_action\nc = common.c\nrequest = common.request\n_ = common._\n\n\nlog = logging.getLogger(__name__)\n\n\nclass CrisisController(base.BaseController):\n\n def show(self):\n\n context = {'model': model, 'session': model.Session,\n 'user': c.user or c.author, 'for_view': True,\n 'auth_user_obj': c.userobj}\n\n crisis_data_access = data_access.EbolaCrisisDataAccess()\n crisis_data_access.fetch_data(context)\n c.top_line_items = crisis_data_access.get_top_line_items()\n\n formatter = formatters.TopLineItemsFormatter(c.top_line_items)\n formatter.format_results()\n\n search_term = u'ebola'\n\n self._generate_dataset_results(context, search_term)\n\n self._generate_other_links(search_term)\n\n return render('crisis/crisis.html')\n\n def _generate_dataset_results(self, context, search_term):\n limit = 25\n c.q = search_term\n\n page = int(request.params.get('page', 1))\n data_dict = {'sort': u'metadata_modified desc',\n 'fq': '+dataset_type:dataset',\n 'rows': limit,\n 'q': c.q,\n 'start': (page - 1) * limit\n }\n query = get_action(\"package_search\")(context, data_dict)\n\n def pager_url(q=None, page=None):\n url = h.url_for('show_crisis', page=page) + '#datasets-section'\n return url\n\n c.page = h.Page(\n collection=query['results'],\n page=page,\n url=pager_url,\n item_count=query['count'],\n items_per_page=limit\n )\n c.items = query['results']\n c.item_count = query['count']\n\n def _generate_other_links(self, search_term):\n c.other_links = {}\n c.other_links['show_more'] = h.url_for(\n \"search\", **{'q': search_term, 'sort': u'metadata_modified desc',\n 'ext_indicator': '0'})\n", "path": "ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py"}], "after_files": [{"content": "'''\nCreated on Nov 3, 2014\n\n@author: alexandru-m-g\n'''\n\nimport logging\n\nimport ckan.lib.base as base\nimport ckan.logic as logic\nimport ckan.model as model\nimport ckan.common as common\nimport ckan.lib.helpers as h\n\nimport ckanext.hdx_crisis.dao.data_access as data_access\nimport ckanext.hdx_crisis.formatters.top_line_items_formatter as formatters\n\nrender = base.render\nget_action = logic.get_action\nc = common.c\nrequest = common.request\n_ = common._\n\n\nlog = logging.getLogger(__name__)\n\n\nclass CrisisController(base.BaseController):\n\n def show(self):\n\n 
context = {'model': model, 'session': model.Session,\n 'user': c.user or c.author, 'for_view': True,\n 'auth_user_obj': c.userobj}\n\n crisis_data_access = data_access.EbolaCrisisDataAccess()\n crisis_data_access.fetch_data(context)\n c.top_line_items = crisis_data_access.get_top_line_items()\n\n formatter = formatters.TopLineItemsFormatter(c.top_line_items)\n formatter.format_results()\n\n search_term = u'ebola'\n\n self._generate_dataset_results(context, search_term)\n\n self._generate_other_links(search_term)\n\n return render('crisis/crisis-ebola.html')\n\n def _generate_dataset_results(self, context, search_term):\n limit = 25\n c.q = search_term\n\n page = int(request.params.get('page', 1))\n data_dict = {'sort': u'metadata_modified desc',\n 'fq': '+dataset_type:dataset',\n 'rows': limit,\n 'q': c.q,\n 'start': (page - 1) * limit\n }\n query = get_action(\"package_search\")(context, data_dict)\n\n def pager_url(q=None, page=None):\n url = h.url_for('show_crisis', page=page) + '#datasets-section'\n return url\n\n c.page = h.Page(\n collection=query['results'],\n page=page,\n url=pager_url,\n item_count=query['count'],\n items_per_page=limit\n )\n c.items = query['results']\n c.item_count = query['count']\n\n def _generate_other_links(self, search_term):\n c.other_links = {}\n c.other_links['show_more'] = h.url_for(\n \"search\", **{'q': search_term, 'sort': u'metadata_modified desc',\n 'ext_indicator': '0'})\n", "path": "ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py"}]} | 1,113 | 179 |
gh_patches_debug_39448 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-355 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Instrument Starlette background tasks
Starlette supports [background tasks](https://www.starlette.io/background/). We should instrument these as background transactions.
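For context, here is a minimal sketch of how background tasks are typically attached to a Starlette response (standard Starlette usage; `send_welcome_email` and the route are made up for illustration). Since `BackgroundTask.__call__` is the point where the task actually executes, it is a natural hook for starting and stopping a background transaction:

```py
from starlette.background import BackgroundTask
from starlette.responses import JSONResponse

async def send_welcome_email(email: str):
    ...  # runs after the response body has been sent

async def signup(request):
    # The task executes via BackgroundTask.__call__ once the response is done.
    task = BackgroundTask(send_welcome_email, email="user@example.com")
    return JSONResponse({"status": "ok"}, background=task)
```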
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/async_/starlette.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from starlette.requests import Request
5
6 import scout_apm.core
7 from scout_apm.core.tracked_request import TrackedRequest
8 from scout_apm.core.web_requests import (
9 create_filtered_path,
10 ignore_path,
11 track_amazon_request_queue_time,
12 track_request_queue_time,
13 )
14
15
16 class ScoutMiddleware:
17 def __init__(self, app):
18 self.app = app
19 installed = scout_apm.core.install()
20 self._do_nothing = not installed
21
22 async def __call__(self, scope, receive, send):
23 if self._do_nothing or scope["type"] != "http":
24 await self.app(scope, receive, send)
25 return
26
27 request = Request(scope)
28 tracked_request = TrackedRequest.instance()
29 # Can't name controller until post-routing - see final clause
30 controller_span = tracked_request.start_span(operation="Controller/Unknown")
31
32 tracked_request.tag(
33 "path",
34 create_filtered_path(request.url.path, request.query_params.multi_items()),
35 )
36 if ignore_path(request.url.path):
37 tracked_request.tag("ignore_transaction", True)
38
39 user_ip = (
40 request.headers.get("x-forwarded-for", default="").split(",")[0]
41 or request.headers.get("client-ip", default="").split(",")[0]
42 or request.client.host
43 )
44 tracked_request.tag("user_ip", user_ip)
45
46 queue_time = request.headers.get(
47 "x-queue-start", default=""
48 ) or request.headers.get("x-request-start", default="")
49 tracked_queue_time = track_request_queue_time(queue_time, tracked_request)
50 if not tracked_queue_time:
51 amazon_queue_time = request.headers.get("x-amzn-trace-id", default="")
52 track_amazon_request_queue_time(amazon_queue_time, tracked_request)
53
54 try:
55 await self.app(scope, receive, send)
56 except Exception as exc:
57 tracked_request.tag("error", "true")
58 raise exc
59 finally:
60 if "endpoint" in scope:
61 endpoint = scope["endpoint"]
62 controller_span.operation = "Controller/{}.{}".format(
63 endpoint.__module__, endpoint.__qualname__
64 )
65 tracked_request.is_real_request = True
66 tracked_request.stop_span()
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/scout_apm/async_/starlette.py b/src/scout_apm/async_/starlette.py
--- a/src/scout_apm/async_/starlette.py
+++ b/src/scout_apm/async_/starlette.py
@@ -1,6 +1,8 @@
# coding=utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
+import wrapt
+from starlette.background import BackgroundTask
from starlette.requests import Request
import scout_apm.core
@@ -18,6 +20,8 @@
self.app = app
installed = scout_apm.core.install()
self._do_nothing = not installed
+ if installed:
+ install_background_instrumentation()
async def __call__(self, scope, receive, send):
if self._do_nothing or scope["type"] != "http":
@@ -51,16 +55,57 @@
amazon_queue_time = request.headers.get("x-amzn-trace-id", default="")
track_amazon_request_queue_time(amazon_queue_time, tracked_request)
- try:
- await self.app(scope, receive, send)
- except Exception as exc:
- tracked_request.tag("error", "true")
- raise exc
- finally:
+ def rename_controller_span_from_endpoint():
if "endpoint" in scope:
+ # Rename top span
endpoint = scope["endpoint"]
controller_span.operation = "Controller/{}.{}".format(
endpoint.__module__, endpoint.__qualname__
)
tracked_request.is_real_request = True
+
+ async def wrapped_send(data):
+ # Finish HTTP span when body finishes sending, not later (e.g.
+ # after background tasks)
+ if data.get("type", None) == "http.response.body" and not data.get(
+ "more_body", False
+ ):
+ rename_controller_span_from_endpoint()
+ tracked_request.stop_span()
+ return await send(data)
+
+ try:
+ await self.app(scope, receive, wrapped_send)
+ except Exception as exc:
+ tracked_request.tag("error", "true")
+ raise exc
+ finally:
+ if tracked_request.end_time is None:
+ rename_controller_span_from_endpoint()
+ tracked_request.stop_span()
+
+
+background_instrumentation_installed = False
+
+
+def install_background_instrumentation():
+ global background_instrumentation_installed
+ if background_instrumentation_installed:
+ return
+ background_instrumentation_installed = True
+
+ @wrapt.decorator
+ async def wrapped_background_call(wrapped, instance, args, kwargs):
+ tracked_request = TrackedRequest.instance()
+ tracked_request.is_real_request = True
+ tracked_request.start_span(
+ operation="Job/{}.{}".format(
+ instance.func.__module__, instance.func.__qualname__
+ )
+ )
+ try:
+ return await wrapped(*args, **kwargs)
+ finally:
tracked_request.stop_span()
+
+ BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)
| {"golden_diff": "diff --git a/src/scout_apm/async_/starlette.py b/src/scout_apm/async_/starlette.py\n--- a/src/scout_apm/async_/starlette.py\n+++ b/src/scout_apm/async_/starlette.py\n@@ -1,6 +1,8 @@\n # coding=utf-8\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n+import wrapt\n+from starlette.background import BackgroundTask\n from starlette.requests import Request\n \n import scout_apm.core\n@@ -18,6 +20,8 @@\n self.app = app\n installed = scout_apm.core.install()\n self._do_nothing = not installed\n+ if installed:\n+ install_background_instrumentation()\n \n async def __call__(self, scope, receive, send):\n if self._do_nothing or scope[\"type\"] != \"http\":\n@@ -51,16 +55,57 @@\n amazon_queue_time = request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n \n- try:\n- await self.app(scope, receive, send)\n- except Exception as exc:\n- tracked_request.tag(\"error\", \"true\")\n- raise exc\n- finally:\n+ def rename_controller_span_from_endpoint():\n if \"endpoint\" in scope:\n+ # Rename top span\n endpoint = scope[\"endpoint\"]\n controller_span.operation = \"Controller/{}.{}\".format(\n endpoint.__module__, endpoint.__qualname__\n )\n tracked_request.is_real_request = True\n+\n+ async def wrapped_send(data):\n+ # Finish HTTP span when body finishes sending, not later (e.g.\n+ # after background tasks)\n+ if data.get(\"type\", None) == \"http.response.body\" and not data.get(\n+ \"more_body\", False\n+ ):\n+ rename_controller_span_from_endpoint()\n+ tracked_request.stop_span()\n+ return await send(data)\n+\n+ try:\n+ await self.app(scope, receive, wrapped_send)\n+ except Exception as exc:\n+ tracked_request.tag(\"error\", \"true\")\n+ raise exc\n+ finally:\n+ if tracked_request.end_time is None:\n+ rename_controller_span_from_endpoint()\n+ tracked_request.stop_span()\n+\n+\n+background_instrumentation_installed = False\n+\n+\n+def install_background_instrumentation():\n+ global background_instrumentation_installed\n+ if background_instrumentation_installed:\n+ return\n+ background_instrumentation_installed = True\n+\n+ @wrapt.decorator\n+ async def wrapped_background_call(wrapped, instance, args, kwargs):\n+ tracked_request = TrackedRequest.instance()\n+ tracked_request.is_real_request = True\n+ tracked_request.start_span(\n+ operation=\"Job/{}.{}\".format(\n+ instance.func.__module__, instance.func.__qualname__\n+ )\n+ )\n+ try:\n+ return await wrapped(*args, **kwargs)\n+ finally:\n tracked_request.stop_span()\n+\n+ BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)\n", "issue": "Instrument Starlette background tasks\nStarlette supports [background tasks](https://www.starlette.io/background/). 
We should instrument these as background transactions.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom starlette.requests import Request\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\nfrom scout_apm.core.web_requests import (\n create_filtered_path,\n ignore_path,\n track_amazon_request_queue_time,\n track_request_queue_time,\n)\n\n\nclass ScoutMiddleware:\n def __init__(self, app):\n self.app = app\n installed = scout_apm.core.install()\n self._do_nothing = not installed\n\n async def __call__(self, scope, receive, send):\n if self._do_nothing or scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n request = Request(scope)\n tracked_request = TrackedRequest.instance()\n # Can't name controller until post-routing - see final clause\n controller_span = tracked_request.start_span(operation=\"Controller/Unknown\")\n\n tracked_request.tag(\n \"path\",\n create_filtered_path(request.url.path, request.query_params.multi_items()),\n )\n if ignore_path(request.url.path):\n tracked_request.tag(\"ignore_transaction\", True)\n\n user_ip = (\n request.headers.get(\"x-forwarded-for\", default=\"\").split(\",\")[0]\n or request.headers.get(\"client-ip\", default=\"\").split(\",\")[0]\n or request.client.host\n )\n tracked_request.tag(\"user_ip\", user_ip)\n\n queue_time = request.headers.get(\n \"x-queue-start\", default=\"\"\n ) or request.headers.get(\"x-request-start\", default=\"\")\n tracked_queue_time = track_request_queue_time(queue_time, tracked_request)\n if not tracked_queue_time:\n amazon_queue_time = request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n\n try:\n await self.app(scope, receive, send)\n except Exception as exc:\n tracked_request.tag(\"error\", \"true\")\n raise exc\n finally:\n if \"endpoint\" in scope:\n endpoint = scope[\"endpoint\"]\n controller_span.operation = \"Controller/{}.{}\".format(\n endpoint.__module__, endpoint.__qualname__\n )\n tracked_request.is_real_request = True\n tracked_request.stop_span()\n", "path": "src/scout_apm/async_/starlette.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport wrapt\nfrom starlette.background import BackgroundTask\nfrom starlette.requests import Request\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\nfrom scout_apm.core.web_requests import (\n create_filtered_path,\n ignore_path,\n track_amazon_request_queue_time,\n track_request_queue_time,\n)\n\n\nclass ScoutMiddleware:\n def __init__(self, app):\n self.app = app\n installed = scout_apm.core.install()\n self._do_nothing = not installed\n if installed:\n install_background_instrumentation()\n\n async def __call__(self, scope, receive, send):\n if self._do_nothing or scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n request = Request(scope)\n tracked_request = TrackedRequest.instance()\n # Can't name controller until post-routing - see final clause\n controller_span = tracked_request.start_span(operation=\"Controller/Unknown\")\n\n tracked_request.tag(\n \"path\",\n create_filtered_path(request.url.path, request.query_params.multi_items()),\n )\n if ignore_path(request.url.path):\n tracked_request.tag(\"ignore_transaction\", True)\n\n user_ip = (\n request.headers.get(\"x-forwarded-for\", 
default=\"\").split(\",\")[0]\n or request.headers.get(\"client-ip\", default=\"\").split(\",\")[0]\n or request.client.host\n )\n tracked_request.tag(\"user_ip\", user_ip)\n\n queue_time = request.headers.get(\n \"x-queue-start\", default=\"\"\n ) or request.headers.get(\"x-request-start\", default=\"\")\n tracked_queue_time = track_request_queue_time(queue_time, tracked_request)\n if not tracked_queue_time:\n amazon_queue_time = request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n\n def rename_controller_span_from_endpoint():\n if \"endpoint\" in scope:\n # Rename top span\n endpoint = scope[\"endpoint\"]\n controller_span.operation = \"Controller/{}.{}\".format(\n endpoint.__module__, endpoint.__qualname__\n )\n tracked_request.is_real_request = True\n\n async def wrapped_send(data):\n # Finish HTTP span when body finishes sending, not later (e.g.\n # after background tasks)\n if data.get(\"type\", None) == \"http.response.body\" and not data.get(\n \"more_body\", False\n ):\n rename_controller_span_from_endpoint()\n tracked_request.stop_span()\n return await send(data)\n\n try:\n await self.app(scope, receive, wrapped_send)\n except Exception as exc:\n tracked_request.tag(\"error\", \"true\")\n raise exc\n finally:\n if tracked_request.end_time is None:\n rename_controller_span_from_endpoint()\n tracked_request.stop_span()\n\n\nbackground_instrumentation_installed = False\n\n\ndef install_background_instrumentation():\n global background_instrumentation_installed\n if background_instrumentation_installed:\n return\n background_instrumentation_installed = True\n\n @wrapt.decorator\n async def wrapped_background_call(wrapped, instance, args, kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n tracked_request.start_span(\n operation=\"Job/{}.{}\".format(\n instance.func.__module__, instance.func.__qualname__\n )\n )\n try:\n return await wrapped(*args, **kwargs)\n finally:\n tracked_request.stop_span()\n\n BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)\n", "path": "src/scout_apm/async_/starlette.py"}]} | 912 | 683 |
gh_patches_debug_27631 | rasdani/github-patches | git_diff | OpenMined__PySyft-2308 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TrainConfig parameter "epochs"
**TrainConfig parameter "epochs" doesn't have any effect.**
After changing epochs=1 to epochs=100, the worker still does only 1 epoch.
```
train_config = sy.TrainConfig(
model=traced_model,
loss_fn=loss_fn,
batch_size=batch_size,
shuffle=True,
#max_nr_batches=max_nr_batches,
epochs=100,
lr=lr,
)
```
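For orientation, here is a hedged sketch (not PySyft's actual code) of what honoring the parameter should look like: the batch loop wrapped in an outer loop over `train_config.epochs`. All names are assumed from the snippet above.

```py
# Illustrative only: an epoch-aware fit loop.
def fit_with_epochs(train_config, model, loss_fn, optimizer, data_loader):
    loss = None
    for _ in range(train_config.epochs):  # honor the epochs parameter
        for data, target in data_loader:
            optimizer.zero_grad()
            output = model(data)
            loss = loss_fn(target=target, pred=output)
            loss.backward()
            optimizer.step()
    return loss
```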
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `syft/federated/federated_client.py`
Content:
```
1 import torch as th
2 from torch.utils.data import BatchSampler, RandomSampler, SequentialSampler
3
4 from syft.generic import ObjectStorage
5 from syft.federated.train_config import TrainConfig
6
7
8 class FederatedClient(ObjectStorage):
9 """A Client able to execute federated learning in local datasets."""
10
11 def __init__(self, datasets=None):
12 super().__init__()
13 self.datasets = datasets if datasets is not None else dict()
14 self.optimizer = None
15 self.train_config = None
16
17 def add_dataset(self, dataset, key: str):
18 self.datasets[key] = dataset
19
20 def remove_dataset(self, key: str):
21 if key in self.datasets:
22 del self.datasets[key]
23
24 def set_obj(self, obj: object):
25 """Registers objects checking if which objects it should cache.
26
27 Args:
28 obj: An object to be registered.
29 """
30 if isinstance(obj, TrainConfig):
31 self.train_config = obj
32 self.optimizer = None
33 else:
34 super().set_obj(obj)
35
36 def _build_optimizer(
37 self, optimizer_name: str, model, lr: float, weight_decay: float
38 ) -> th.optim.Optimizer:
39 """Build an optimizer if needed.
40
41 Args:
42 optimizer_name: A string indicating the optimizer name.
43 lr: A float indicating the learning rate.
44 weight_decay: Weight decay parameter of the optimizer
45 Returns:
46 A Torch Optimizer.
47 """
48 if self.optimizer is not None:
49 return self.optimizer
50
51 optimizer_name = optimizer_name.lower()
52 if optimizer_name == "sgd":
53 optim_args = dict()
54 optim_args["lr"] = lr
55 if weight_decay is not None:
56 optim_args["weight_decay"] = weight_decay
57 self.optimizer = th.optim.SGD(model.parameters(), **optim_args)
58 else:
59 raise ValueError("Unknown optimizer: {}".format(optimizer_name))
60 return self.optimizer
61
62 def fit(self, dataset_key: str, **kwargs):
63 """Fits a model on the local dataset as specified in the local TrainConfig object.
64
65 Args:
66 dataset_key: Identifier of the local dataset that shall be used for training.
67 **kwargs: Unused.
68
69 Returns:
70 loss: Training loss on the last batch of training data.
71 """
72 if self.train_config is None:
73 raise ValueError("TrainConfig not defined.")
74
75 model = self.get_obj(self.train_config._model_id).obj
76 loss_fn = self.get_obj(self.train_config._loss_fn_id).obj
77
78 self._build_optimizer(
79 self.train_config.optimizer,
80 model,
81 lr=self.train_config.lr,
82 weight_decay=self.train_config.weight_decay,
83 )
84
85 return self._fit(model=model, dataset_key=dataset_key, loss_fn=loss_fn)
86
87 def _create_data_loader(self, dataset_key: str, shuffle: bool = False):
88 data_range = range(len(self.datasets[dataset_key]))
89 if shuffle:
90 sampler = RandomSampler(data_range)
91 else:
92 sampler = SequentialSampler(data_range)
93 data_loader = th.utils.data.DataLoader(
94 self.datasets[dataset_key],
95 batch_size=self.train_config.batch_size,
96 sampler=sampler,
97 num_workers=0,
98 )
99 return data_loader
100
101 def _fit(self, model, dataset_key, loss_fn):
102 model.train()
103 data_loader = self._create_data_loader(
104 dataset_key=dataset_key, shuffle=self.train_config.shuffle
105 )
106
107 loss = None
108 iteration_count = 0
109 for (data, target) in data_loader:
110 # Set gradients to zero
111 self.optimizer.zero_grad()
112
113 # Update model
114 output = model(data)
115 loss = loss_fn(target=target, pred=output)
116 loss.backward()
117 self.optimizer.step()
118
119 # Update and check interation count
120 iteration_count += 1
121 if iteration_count >= self.train_config.max_nr_batches >= 0:
122 break
123
124 return loss
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/syft/federated/federated_client.py b/syft/federated/federated_client.py
--- a/syft/federated/federated_client.py
+++ b/syft/federated/federated_client.py
@@ -72,6 +72,9 @@
if self.train_config is None:
raise ValueError("TrainConfig not defined.")
+ if dataset_key not in self.datasets:
+ raise ValueError("Dataset {} unknown.".format(dataset_key))
+
model = self.get_obj(self.train_config._model_id).obj
loss_fn = self.get_obj(self.train_config._loss_fn_id).obj
@@ -106,19 +109,21 @@
loss = None
iteration_count = 0
- for (data, target) in data_loader:
- # Set gradients to zero
- self.optimizer.zero_grad()
-
- # Update model
- output = model(data)
- loss = loss_fn(target=target, pred=output)
- loss.backward()
- self.optimizer.step()
-
- # Update and check interation count
- iteration_count += 1
- if iteration_count >= self.train_config.max_nr_batches >= 0:
- break
+
+ for _ in range(self.train_config.epochs):
+ for (data, target) in data_loader:
+ # Set gradients to zero
+ self.optimizer.zero_grad()
+
+ # Update model
+ output = model(data)
+ loss = loss_fn(target=target, pred=output)
+ loss.backward()
+ self.optimizer.step()
+
+ # Update and check interation count
+ iteration_count += 1
+ if iteration_count >= self.train_config.max_nr_batches >= 0:
+ break
return loss
| {"golden_diff": "diff --git a/syft/federated/federated_client.py b/syft/federated/federated_client.py\n--- a/syft/federated/federated_client.py\n+++ b/syft/federated/federated_client.py\n@@ -72,6 +72,9 @@\n if self.train_config is None:\n raise ValueError(\"TrainConfig not defined.\")\n \n+ if dataset_key not in self.datasets:\n+ raise ValueError(\"Dataset {} unknown.\".format(dataset_key))\n+\n model = self.get_obj(self.train_config._model_id).obj\n loss_fn = self.get_obj(self.train_config._loss_fn_id).obj\n \n@@ -106,19 +109,21 @@\n \n loss = None\n iteration_count = 0\n- for (data, target) in data_loader:\n- # Set gradients to zero\n- self.optimizer.zero_grad()\n-\n- # Update model\n- output = model(data)\n- loss = loss_fn(target=target, pred=output)\n- loss.backward()\n- self.optimizer.step()\n-\n- # Update and check interation count\n- iteration_count += 1\n- if iteration_count >= self.train_config.max_nr_batches >= 0:\n- break\n+\n+ for _ in range(self.train_config.epochs):\n+ for (data, target) in data_loader:\n+ # Set gradients to zero\n+ self.optimizer.zero_grad()\n+\n+ # Update model\n+ output = model(data)\n+ loss = loss_fn(target=target, pred=output)\n+ loss.backward()\n+ self.optimizer.step()\n+\n+ # Update and check interation count\n+ iteration_count += 1\n+ if iteration_count >= self.train_config.max_nr_batches >= 0:\n+ break\n \n return loss\n", "issue": "TrainConfig parameter \"epochs\"\n**TrainConfig parameter \"epochs\" doesn't have effect.**\r\nAfter changing the number of epochs=1 to epochs=100. The worker still do only 1 epoch.\r\n\r\n```\r\ntrain_config = sy.TrainConfig(\r\n model=traced_model,\r\n loss_fn=loss_fn,\r\n batch_size=batch_size,\r\n shuffle=True,\r\n #max_nr_batches=max_nr_batches,\r\n epochs=100,\r\n lr=lr,\r\n )\r\n```\n", "before_files": [{"content": "import torch as th\nfrom torch.utils.data import BatchSampler, RandomSampler, SequentialSampler\n\nfrom syft.generic import ObjectStorage\nfrom syft.federated.train_config import TrainConfig\n\n\nclass FederatedClient(ObjectStorage):\n \"\"\"A Client able to execute federated learning in local datasets.\"\"\"\n\n def __init__(self, datasets=None):\n super().__init__()\n self.datasets = datasets if datasets is not None else dict()\n self.optimizer = None\n self.train_config = None\n\n def add_dataset(self, dataset, key: str):\n self.datasets[key] = dataset\n\n def remove_dataset(self, key: str):\n if key in self.datasets:\n del self.datasets[key]\n\n def set_obj(self, obj: object):\n \"\"\"Registers objects checking if which objects it should cache.\n\n Args:\n obj: An object to be registered.\n \"\"\"\n if isinstance(obj, TrainConfig):\n self.train_config = obj\n self.optimizer = None\n else:\n super().set_obj(obj)\n\n def _build_optimizer(\n self, optimizer_name: str, model, lr: float, weight_decay: float\n ) -> th.optim.Optimizer:\n \"\"\"Build an optimizer if needed.\n\n Args:\n optimizer_name: A string indicating the optimizer name.\n lr: A float indicating the learning rate.\n weight_decay: Weight decay parameter of the optimizer\n Returns:\n A Torch Optimizer.\n \"\"\"\n if self.optimizer is not None:\n return self.optimizer\n\n optimizer_name = optimizer_name.lower()\n if optimizer_name == \"sgd\":\n optim_args = dict()\n optim_args[\"lr\"] = lr\n if weight_decay is not None:\n optim_args[\"weight_decay\"] = weight_decay\n self.optimizer = th.optim.SGD(model.parameters(), **optim_args)\n else:\n raise ValueError(\"Unknown optimizer: {}\".format(optimizer_name))\n return self.optimizer\n\n def 
fit(self, dataset_key: str, **kwargs):\n \"\"\"Fits a model on the local dataset as specified in the local TrainConfig object.\n\n Args:\n dataset_key: Identifier of the local dataset that shall be used for training.\n **kwargs: Unused.\n\n Returns:\n loss: Training loss on the last batch of training data.\n \"\"\"\n if self.train_config is None:\n raise ValueError(\"TrainConfig not defined.\")\n\n model = self.get_obj(self.train_config._model_id).obj\n loss_fn = self.get_obj(self.train_config._loss_fn_id).obj\n\n self._build_optimizer(\n self.train_config.optimizer,\n model,\n lr=self.train_config.lr,\n weight_decay=self.train_config.weight_decay,\n )\n\n return self._fit(model=model, dataset_key=dataset_key, loss_fn=loss_fn)\n\n def _create_data_loader(self, dataset_key: str, shuffle: bool = False):\n data_range = range(len(self.datasets[dataset_key]))\n if shuffle:\n sampler = RandomSampler(data_range)\n else:\n sampler = SequentialSampler(data_range)\n data_loader = th.utils.data.DataLoader(\n self.datasets[dataset_key],\n batch_size=self.train_config.batch_size,\n sampler=sampler,\n num_workers=0,\n )\n return data_loader\n\n def _fit(self, model, dataset_key, loss_fn):\n model.train()\n data_loader = self._create_data_loader(\n dataset_key=dataset_key, shuffle=self.train_config.shuffle\n )\n\n loss = None\n iteration_count = 0\n for (data, target) in data_loader:\n # Set gradients to zero\n self.optimizer.zero_grad()\n\n # Update model\n output = model(data)\n loss = loss_fn(target=target, pred=output)\n loss.backward()\n self.optimizer.step()\n\n # Update and check interation count\n iteration_count += 1\n if iteration_count >= self.train_config.max_nr_batches >= 0:\n break\n\n return loss\n", "path": "syft/federated/federated_client.py"}], "after_files": [{"content": "import torch as th\nfrom torch.utils.data import BatchSampler, RandomSampler, SequentialSampler\n\nfrom syft.generic import ObjectStorage\nfrom syft.federated.train_config import TrainConfig\n\n\nclass FederatedClient(ObjectStorage):\n \"\"\"A Client able to execute federated learning in local datasets.\"\"\"\n\n def __init__(self, datasets=None):\n super().__init__()\n self.datasets = datasets if datasets is not None else dict()\n self.optimizer = None\n self.train_config = None\n\n def add_dataset(self, dataset, key: str):\n self.datasets[key] = dataset\n\n def remove_dataset(self, key: str):\n if key in self.datasets:\n del self.datasets[key]\n\n def set_obj(self, obj: object):\n \"\"\"Registers objects checking if which objects it should cache.\n\n Args:\n obj: An object to be registered.\n \"\"\"\n if isinstance(obj, TrainConfig):\n self.train_config = obj\n self.optimizer = None\n else:\n super().set_obj(obj)\n\n def _build_optimizer(\n self, optimizer_name: str, model, lr: float, weight_decay: float\n ) -> th.optim.Optimizer:\n \"\"\"Build an optimizer if needed.\n\n Args:\n optimizer_name: A string indicating the optimizer name.\n lr: A float indicating the learning rate.\n weight_decay: Weight decay parameter of the optimizer\n Returns:\n A Torch Optimizer.\n \"\"\"\n if self.optimizer is not None:\n return self.optimizer\n\n optimizer_name = optimizer_name.lower()\n if optimizer_name == \"sgd\":\n optim_args = dict()\n optim_args[\"lr\"] = lr\n if weight_decay is not None:\n optim_args[\"weight_decay\"] = weight_decay\n self.optimizer = th.optim.SGD(model.parameters(), **optim_args)\n else:\n raise ValueError(\"Unknown optimizer: {}\".format(optimizer_name))\n return self.optimizer\n\n def fit(self, 
dataset_key: str, **kwargs):\n \"\"\"Fits a model on the local dataset as specified in the local TrainConfig object.\n\n Args:\n dataset_key: Identifier of the local dataset that shall be used for training.\n **kwargs: Unused.\n\n Returns:\n loss: Training loss on the last batch of training data.\n \"\"\"\n if self.train_config is None:\n raise ValueError(\"TrainConfig not defined.\")\n\n if dataset_key not in self.datasets:\n raise ValueError(\"Dataset {} unknown.\".format(dataset_key))\n\n model = self.get_obj(self.train_config._model_id).obj\n loss_fn = self.get_obj(self.train_config._loss_fn_id).obj\n\n self._build_optimizer(\n self.train_config.optimizer,\n model,\n lr=self.train_config.lr,\n weight_decay=self.train_config.weight_decay,\n )\n\n return self._fit(model=model, dataset_key=dataset_key, loss_fn=loss_fn)\n\n def _create_data_loader(self, dataset_key: str, shuffle: bool = False):\n data_range = range(len(self.datasets[dataset_key]))\n if shuffle:\n sampler = RandomSampler(data_range)\n else:\n sampler = SequentialSampler(data_range)\n data_loader = th.utils.data.DataLoader(\n self.datasets[dataset_key],\n batch_size=self.train_config.batch_size,\n sampler=sampler,\n num_workers=0,\n )\n return data_loader\n\n def _fit(self, model, dataset_key, loss_fn):\n model.train()\n data_loader = self._create_data_loader(\n dataset_key=dataset_key, shuffle=self.train_config.shuffle\n )\n\n loss = None\n iteration_count = 0\n\n for _ in range(self.train_config.epochs):\n for (data, target) in data_loader:\n # Set gradients to zero\n self.optimizer.zero_grad()\n\n # Update model\n output = model(data)\n loss = loss_fn(target=target, pred=output)\n loss.backward()\n self.optimizer.step()\n\n # Update and check interation count\n iteration_count += 1\n if iteration_count >= self.train_config.max_nr_batches >= 0:\n break\n\n return loss\n", "path": "syft/federated/federated_client.py"}]} | 1,477 | 394 |
gh_patches_debug_15340 | rasdani/github-patches | git_diff | pypa__setuptools-2134 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Removing compatibility for Python 2
In #1458 and the Setuptools 45 release, this project dropped declared support for Python 2, adding a warning when a late version was invoked on Python 2. This warning helped address many of the systemic uses of Setuptools 45+ on Python 2, but there continue to be users (presumably) reporting that they've [encountered the warning](https://github.com/pypa/setuptools/issues?q=is%3Aissue+in%3Atitle+%22incompatible+install%22+).
I say presumably because most of them have submitted a blank template without providing any information.
Since March, these users have been directed to the template via bit.ly, so I have metrics on the number of users encountering and following the link.

It seems there have been 50-100 clicks per day since Apr 11. I'm guessing bit.ly doesn't give me data older than 30 days.
To put that in perspective, Setuptools received over 45M downloads in the last month, so the number of people that followed that link (3.3k) is 0.007% of the downloads.
Still, that's upwards of 100 people per day whose workflow would be broken until they could fix their environment.
Let's also consider that each of these users encountering this issue is following a discouraged if not deprecated workflow and is creating a new or updated environment (new since Setuptools 45 was released in January).
It seems to me we have two options: support Python 2 until the incidence of users encountering this error message reduces to a trickle (what is that threshold?), or bite the bullet and drop support for Python 2.
I'd like to review the outstanding issues relating to this issue, but my inclination is to move forward with dropping support.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pkg_resources/py2_warn.py`
Content:
```
1 import sys
2 import warnings
3 import textwrap
4
5
6 msg = textwrap.dedent("""
7 You are running Setuptools on Python 2, which is no longer
8 supported and
9 >>> SETUPTOOLS WILL STOP WORKING <<<
10 in a subsequent release (no sooner than 2020-04-20).
11 Please ensure you are installing
12 Setuptools using pip 9.x or later or pin to `setuptools<45`
13 in your environment.
14 If you have done those things and are still encountering
15 this message, please follow up at
16 https://bit.ly/setuptools-py2-warning.
17 """)
18
19 pre = "Setuptools will stop working on Python 2\n"
20
21 sys.version_info < (3,) and warnings.warn(pre + "*" * 60 + msg + "*" * 60)
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pkg_resources/py2_warn.py b/pkg_resources/py2_warn.py
--- a/pkg_resources/py2_warn.py
+++ b/pkg_resources/py2_warn.py
@@ -4,18 +4,13 @@
msg = textwrap.dedent("""
- You are running Setuptools on Python 2, which is no longer
- supported and
- >>> SETUPTOOLS WILL STOP WORKING <<<
- in a subsequent release (no sooner than 2020-04-20).
- Please ensure you are installing
- Setuptools using pip 9.x or later or pin to `setuptools<45`
- in your environment.
- If you have done those things and are still encountering
- this message, please follow up at
- https://bit.ly/setuptools-py2-warning.
+ Encountered a version of Setuptools that no longer supports
+ this version of Python. Please head to
+ https://bit.ly/setuptools-py2-warning for support.
""")
-pre = "Setuptools will stop working on Python 2\n"
+pre = "Setuptools no longer works on Python 2\n"
-sys.version_info < (3,) and warnings.warn(pre + "*" * 60 + msg + "*" * 60)
+if sys.version_info < (3,):
+ warnings.warn(pre + "*" * 60 + msg + "*" * 60)
+ raise SystemExit(32)
| {"golden_diff": "diff --git a/pkg_resources/py2_warn.py b/pkg_resources/py2_warn.py\n--- a/pkg_resources/py2_warn.py\n+++ b/pkg_resources/py2_warn.py\n@@ -4,18 +4,13 @@\n \n \n msg = textwrap.dedent(\"\"\"\n- You are running Setuptools on Python 2, which is no longer\n- supported and\n- >>> SETUPTOOLS WILL STOP WORKING <<<\n- in a subsequent release (no sooner than 2020-04-20).\n- Please ensure you are installing\n- Setuptools using pip 9.x or later or pin to `setuptools<45`\n- in your environment.\n- If you have done those things and are still encountering\n- this message, please follow up at\n- https://bit.ly/setuptools-py2-warning.\n+ Encountered a version of Setuptools that no longer supports\n+ this version of Python. Please head to\n+ https://bit.ly/setuptools-py2-warning for support.\n \"\"\")\n \n-pre = \"Setuptools will stop working on Python 2\\n\"\n+pre = \"Setuptools no longer works on Python 2\\n\"\n \n-sys.version_info < (3,) and warnings.warn(pre + \"*\" * 60 + msg + \"*\" * 60)\n+if sys.version_info < (3,):\n+ warnings.warn(pre + \"*\" * 60 + msg + \"*\" * 60)\n+ raise SystemExit(32)\n", "issue": "Removing compatibility for Python 2\nIn #1458 and the Setuptools 45 release, this project dropped declared support for Python 2, adding a warning when a late version was invoked on Python 2. This warning helped address many of the systemic uses of Setuptools 45+ on Python 2, but there continue to be users (presumably) reporting that they've [encountered the warning](https://github.com/pypa/setuptools/issues?q=is%3Aissue+in%3Atitle+%22incompatible+install%22+).\r\n\r\nI say presumably because most of them have submitted a blank template without providing any information.\r\n\r\nSince March, these users have been directed to the template via bit.ly, so I have metrics on the number of users encountering and following the link.\r\n\r\n\r\n\r\nIt seems there have been 50-100 clicks per day since Apr 11. 
I'm guessing bit.ly doesn't give me data older than 30 days.\r\n\r\nTo put that in perspective, Setuptools received over 45M downloads in the last month, so the number of people that followed that link (3.3k) is 0.007% of the downloads.\r\n\r\nStill, that's upwards of 100 people per day whose workflow would be broken until they could fix their environment.\r\n\r\nLet's also consider that each of these users encountering this issue are following discouraged if not deprecated workflows and are creating new or updated environments (new since setuptools 45 was released in January).\r\n\r\nIt seems to me we have two options - support Python 2 until the incidents of users encountering this error message reduces to a trickle (what is that threshold) or bite the bullet and drop support for Python 2.\r\n\r\nI'd like to review the outstanding issues relating to this issue, but my inclination is to move forward with dropping support.\n", "before_files": [{"content": "import sys\nimport warnings\nimport textwrap\n\n\nmsg = textwrap.dedent(\"\"\"\n You are running Setuptools on Python 2, which is no longer\n supported and\n >>> SETUPTOOLS WILL STOP WORKING <<<\n in a subsequent release (no sooner than 2020-04-20).\n Please ensure you are installing\n Setuptools using pip 9.x or later or pin to `setuptools<45`\n in your environment.\n If you have done those things and are still encountering\n this message, please follow up at\n https://bit.ly/setuptools-py2-warning.\n \"\"\")\n\npre = \"Setuptools will stop working on Python 2\\n\"\n\nsys.version_info < (3,) and warnings.warn(pre + \"*\" * 60 + msg + \"*\" * 60)\n", "path": "pkg_resources/py2_warn.py"}], "after_files": [{"content": "import sys\nimport warnings\nimport textwrap\n\n\nmsg = textwrap.dedent(\"\"\"\n Encountered a version of Setuptools that no longer supports\n this version of Python. Please head to\n https://bit.ly/setuptools-py2-warning for support.\n \"\"\")\n\npre = \"Setuptools no longer works on Python 2\\n\"\n\nif sys.version_info < (3,):\n warnings.warn(pre + \"*\" * 60 + msg + \"*\" * 60)\n raise SystemExit(32)\n", "path": "pkg_resources/py2_warn.py"}]} | 929 | 327 |
gh_patches_debug_33249 | rasdani/github-patches | git_diff | vispy__vispy-2226 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect behavior with multiple clipping planes
I was checking up on that nice trick where the clipping planes logic is done in the vertex shader and then interpolated to the fragment shader, with the intention of applying it in pygfx too. However, I found that this trick does not work in the case of multiple clipping planes.
This can be shown with the following example:
```py
import numpy as np
from vispy import app, scene, io
from vispy.visuals.filters.clipping_planes import PlanesClipper
canvas = scene.SceneCanvas(keys='interactive', size=(800, 600), show=True)
view = canvas.central_widget.add_view()
cube = scene.visuals.Box(100, 100, 100, color=(1, 0, 0, 1), parent=view.scene)
view.camera = scene.cameras.TurntableCamera(parent=view.scene, fov=60)
clip_center = (0, 20, 60)
clipping_planes = np.concatenate(
[ np.array([[clip_center, [1, 0, 0]]]), np.array([[clip_center, [0, 1, 0]]])]
)
clipper = PlanesClipper()
clipper.clipping_planes = clipping_planes
cube.attach(clipper)
if __name__ == '__main__':
app.run()
```
If you turn the camera to look from above, you'll see this:

I think this can be explained with the following figure:

The black lines indicate two clipping planes (the shaded side is where they clip). The two blue dots represent two vertices with a line or polygon interpolating between them. Both dots are at equal distance to a plane, one on the + side and one on the - side. Now if the `min_plane_distance` (or whatever we ended up calling it :D ) is interpolated, it will have its zero point (the point where it starts clipping) in the middle.
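To make the failure concrete, here is a minimal numeric sketch (plain Python, independent of vispy's shaders) comparing the two orders of operations at the midpoint between the vertices:

```py
# Signed distances of vertices v1, v2 to two clipping planes A and B.
dA = {"v1": +5.0, "v2": -5.0}
dB = {"v1": -5.0, "v2": +5.0}

# Vertex-shader trick: min per vertex, then interpolate.
min_v1 = min(dA["v1"], dB["v1"])                # -5.0
min_v2 = min(dA["v2"], dB["v2"])                # -5.0
interpolated_min = 0.5 * (min_v1 + min_v2)      # -5.0 -> midpoint clipped

# Correct order: interpolate each distance, then take the min per fragment.
dA_mid = 0.5 * (dA["v1"] + dA["v2"])            # 0.0
dB_mid = 0.5 * (dB["v1"] + dB["v2"])            # 0.0
true_min = min(dA_mid, dB_mid)                  # 0.0 -> on the clip boundary

print(interpolated_min, true_min)               # -5.0 vs 0.0
```

Because `min` is not linear, the interpolated per-vertex minimum underestimates the true distance everywhere between the vertices, so fragments get clipped that no single plane would clip on its own — which is why the patch below moves the per-plane distance computation into the fragment shader.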
cc @brisvag
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vispy/visuals/filters/clipping_planes.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 from functools import lru_cache
6
7 import numpy as np
8
9 from ..shaders import Function, Varying
10 from .base_filter import Filter
11
12
13 class PlanesClipper(Filter):
14 """Clips visual output based on arbitrary clipping planes.
15
16 Parameters
17 ----------
18 cliping_planes : ArrayLike
19 Each plane is defined by a position and a normal vector (magnitude is irrelevant). Shape: (n_planes, 2, 3)
20 coord_system : str
21 Coordinate system used by the clipping planes (see visuals.transforms.transform_system.py)
22
23 """
24
25 VERT_CODE = """
26 void clip() {
27 // Transform back to visual coordinates and clip based on that
28 $v_distance_from_clip = $clip_with_planes($itransform(gl_Position).xyz);
29 }
30 """
31
32 FRAG_CODE = """
33 void clip() {
34 if ($v_distance_from_clip < 0.)
35 discard;
36 }
37 """
38
39 def __init__(self, clipping_planes=None, coord_system='scene'):
40 tr = ['visual', 'scene', 'document', 'canvas', 'framebuffer', 'render']
41 if coord_system not in tr:
42 raise ValueError(f'Invalid coordinate system {coord_system}. Must be one of {tr}.')
43 self._coord_system = coord_system
44
45 super().__init__(
46 vcode=Function(self.VERT_CODE), vhook='post', vpos=1,
47 fcode=Function(self.FRAG_CODE), fhook='pre', fpos=1,
48 )
49
50 v_distance_from_clip = Varying('v_distance_from_clip', 'float')
51 self.vshader['v_distance_from_clip'] = v_distance_from_clip
52 self.fshader['v_distance_from_clip'] = v_distance_from_clip
53
54 self.clipping_planes = clipping_planes
55
56 @property
57 def coord_system(self):
58 """
59 Coordinate system used by the clipping planes (see visuals.transforms.transform_system.py)
60 """
61 # unsettable cause we can't update the transform after being attached
62 return self._coord_system
63
64 def _attach(self, visual):
65 super()._attach(visual)
66 self.vshader['itransform'] = visual.get_transform('render', self._coord_system)
67
68 @staticmethod
69 @lru_cache(maxsize=10)
70 def _build_clipping_planes_func(n_planes):
71 """Build the code snippet used to clip the volume based on self.clipping_planes."""
72 func_template = '''
73 float clip_planes(vec3 loc) {{
74 float distance_from_clip = 3.4e38; // max float
75 {clips};
76 return distance_from_clip;
77 }}
78 '''
79 # the vertex is considered clipped if on the "negative" side of the plane
80 clip_template = '''
81 vec3 relative_vec{idx} = loc - $clipping_plane_pos{idx};
82 float distance_from_clip{idx} = dot(relative_vec{idx}, $clipping_plane_norm{idx});
83 distance_from_clip = min(distance_from_clip{idx}, distance_from_clip);
84 '''
85 all_clips = []
86 for idx in range(n_planes):
87 all_clips.append(clip_template.format(idx=idx))
88 formatted_code = func_template.format(clips=''.join(all_clips))
89 return Function(formatted_code)
90
91 @property
92 def clipping_planes(self):
93 """Get the set of planes used to clip the mesh.
94 Each plane is defined by a position and a normal vector (magnitude is irrelevant). Shape: (n_planes, 2, 3)
95 """
96 return self._clipping_planes
97
98 @clipping_planes.setter
99 def clipping_planes(self, value):
100 if value is None:
101 value = np.empty([0, 2, 3])
102 self._clipping_planes = value
103
104 clip_func = self._build_clipping_planes_func(len(value))
105 self.vshader['clip_with_planes'] = clip_func
106
107 for idx, plane in enumerate(value):
108 clip_func[f'clipping_plane_pos{idx}'] = tuple(plane[0])
109 clip_func[f'clipping_plane_norm{idx}'] = tuple(plane[1])
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vispy/visuals/filters/clipping_planes.py b/vispy/visuals/filters/clipping_planes.py
--- a/vispy/visuals/filters/clipping_planes.py
+++ b/vispy/visuals/filters/clipping_planes.py
@@ -24,14 +24,15 @@
VERT_CODE = """
void clip() {
- // Transform back to visual coordinates and clip based on that
- $v_distance_from_clip = $clip_with_planes($itransform(gl_Position).xyz);
+ // pass position as varying for interpolation
+ $v_position = gl_Position;
}
"""
FRAG_CODE = """
void clip() {
- if ($v_distance_from_clip < 0.)
+ float distance_from_clip = $clip_with_planes($itransform($v_position).xyz);
+ if (distance_from_clip < 0.)
discard;
}
"""
@@ -47,9 +48,9 @@
fcode=Function(self.FRAG_CODE), fhook='pre', fpos=1,
)
- v_distance_from_clip = Varying('v_distance_from_clip', 'float')
- self.vshader['v_distance_from_clip'] = v_distance_from_clip
- self.fshader['v_distance_from_clip'] = v_distance_from_clip
+ v_position = Varying('v_position', 'vec4')
+ self.vshader['v_position'] = v_position
+ self.fshader['v_position'] = v_position
self.clipping_planes = clipping_planes
@@ -63,7 +64,7 @@
def _attach(self, visual):
super()._attach(visual)
- self.vshader['itransform'] = visual.get_transform('render', self._coord_system)
+ self.fshader['itransform'] = visual.get_transform('render', self._coord_system)
@staticmethod
@lru_cache(maxsize=10)
@@ -102,7 +103,7 @@
self._clipping_planes = value
clip_func = self._build_clipping_planes_func(len(value))
- self.vshader['clip_with_planes'] = clip_func
+ self.fshader['clip_with_planes'] = clip_func
for idx, plane in enumerate(value):
clip_func[f'clipping_plane_pos{idx}'] = tuple(plane[0])
| {"golden_diff": "diff --git a/vispy/visuals/filters/clipping_planes.py b/vispy/visuals/filters/clipping_planes.py\n--- a/vispy/visuals/filters/clipping_planes.py\n+++ b/vispy/visuals/filters/clipping_planes.py\n@@ -24,14 +24,15 @@\n \n VERT_CODE = \"\"\"\n void clip() {\n- // Transform back to visual coordinates and clip based on that\n- $v_distance_from_clip = $clip_with_planes($itransform(gl_Position).xyz);\n+ // pass position as varying for interpolation\n+ $v_position = gl_Position;\n }\n \"\"\"\n \n FRAG_CODE = \"\"\"\n void clip() {\n- if ($v_distance_from_clip < 0.)\n+ float distance_from_clip = $clip_with_planes($itransform($v_position).xyz);\n+ if (distance_from_clip < 0.)\n discard;\n }\n \"\"\"\n@@ -47,9 +48,9 @@\n fcode=Function(self.FRAG_CODE), fhook='pre', fpos=1,\n )\n \n- v_distance_from_clip = Varying('v_distance_from_clip', 'float')\n- self.vshader['v_distance_from_clip'] = v_distance_from_clip\n- self.fshader['v_distance_from_clip'] = v_distance_from_clip\n+ v_position = Varying('v_position', 'vec4')\n+ self.vshader['v_position'] = v_position\n+ self.fshader['v_position'] = v_position\n \n self.clipping_planes = clipping_planes\n \n@@ -63,7 +64,7 @@\n \n def _attach(self, visual):\n super()._attach(visual)\n- self.vshader['itransform'] = visual.get_transform('render', self._coord_system)\n+ self.fshader['itransform'] = visual.get_transform('render', self._coord_system)\n \n @staticmethod\n @lru_cache(maxsize=10)\n@@ -102,7 +103,7 @@\n self._clipping_planes = value\n \n clip_func = self._build_clipping_planes_func(len(value))\n- self.vshader['clip_with_planes'] = clip_func\n+ self.fshader['clip_with_planes'] = clip_func\n \n for idx, plane in enumerate(value):\n clip_func[f'clipping_plane_pos{idx}'] = tuple(plane[0])\n", "issue": "Incorrect behavior with multipe clipping planes \nI was checking up on that nice trick where the clipping planes logic is done in the vertex shader and then interpolated to the fragment shader, with the intention of applying it in pygfx too. However, I found that this trick does not work in the case of multiple clipping planes.\r\n\r\nThis can be shown with the following example:\r\n```py\r\nimport numpy as np\r\nfrom vispy import app, scene, io\r\nfrom vispy.visuals.filters.clipping_planes import PlanesClipper\r\n\r\ncanvas = scene.SceneCanvas(keys='interactive', size=(800, 600), show=True)\r\nview = canvas.central_widget.add_view()\r\n\r\ncube = scene.visuals.Box(100, 100, 100, color=(1, 0, 0, 1), parent=view.scene)\r\n\r\nview.camera = scene.cameras.TurntableCamera(parent=view.scene, fov=60)\r\n\r\nclip_center = (0, 20, 60)\r\nclipping_planes = np.concatenate(\r\n [ np.array([[clip_center, [1, 0, 0]]]), np.array([[clip_center, [0, 1, 0]]])]\r\n)\r\n\r\nclipper = PlanesClipper()\r\nclipper.clipping_planes = clipping_planes\r\ncube.attach(clipper)\r\n\r\nif __name__ == '__main__':\r\n app.run()\r\n```\r\n\r\nIf you turn the camera to look from above, you'll see this:\r\n\r\n\r\n\r\n\r\nI think this can be explained with the following figure:\r\n\r\n\r\n\r\nThe black lines indicate two clipping planes (the shaded side is where they clip). The two blue dots represent two vertices with a line or polygon interpolating between them. Both dots are of equal distance two a plane, one on the + side and one on the - side. 
Now if the `min_plane_distance` (or whatever we ended up calling it :D ) is interpolated, it will have its zero point (the point where it starts clipping) in the middle.\r\n\r\ncc @brisvag \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\nfrom functools import lru_cache\n\nimport numpy as np\n\nfrom ..shaders import Function, Varying\nfrom .base_filter import Filter\n\n\nclass PlanesClipper(Filter):\n \"\"\"Clips visual output based on arbitrary clipping planes.\n\n Parameters\n ----------\n cliping_planes : ArrayLike\n Each plane is defined by a position and a normal vector (magnitude is irrelevant). Shape: (n_planes, 2, 3)\n coord_system : str\n Coordinate system used by the clipping planes (see visuals.transforms.transform_system.py)\n\n \"\"\"\n\n VERT_CODE = \"\"\"\n void clip() {\n // Transform back to visual coordinates and clip based on that\n $v_distance_from_clip = $clip_with_planes($itransform(gl_Position).xyz);\n }\n \"\"\"\n\n FRAG_CODE = \"\"\"\n void clip() {\n if ($v_distance_from_clip < 0.)\n discard;\n }\n \"\"\"\n\n def __init__(self, clipping_planes=None, coord_system='scene'):\n tr = ['visual', 'scene', 'document', 'canvas', 'framebuffer', 'render']\n if coord_system not in tr:\n raise ValueError(f'Invalid coordinate system {coord_system}. Must be one of {tr}.')\n self._coord_system = coord_system\n\n super().__init__(\n vcode=Function(self.VERT_CODE), vhook='post', vpos=1,\n fcode=Function(self.FRAG_CODE), fhook='pre', fpos=1,\n )\n\n v_distance_from_clip = Varying('v_distance_from_clip', 'float')\n self.vshader['v_distance_from_clip'] = v_distance_from_clip\n self.fshader['v_distance_from_clip'] = v_distance_from_clip\n\n self.clipping_planes = clipping_planes\n\n @property\n def coord_system(self):\n \"\"\"\n Coordinate system used by the clipping planes (see visuals.transforms.transform_system.py)\n \"\"\"\n # unsettable cause we can't update the transform after being attached\n return self._coord_system\n\n def _attach(self, visual):\n super()._attach(visual)\n self.vshader['itransform'] = visual.get_transform('render', self._coord_system)\n\n @staticmethod\n @lru_cache(maxsize=10)\n def _build_clipping_planes_func(n_planes):\n \"\"\"Build the code snippet used to clip the volume based on self.clipping_planes.\"\"\"\n func_template = '''\n float clip_planes(vec3 loc) {{\n float distance_from_clip = 3.4e38; // max float\n {clips};\n return distance_from_clip;\n }}\n '''\n # the vertex is considered clipped if on the \"negative\" side of the plane\n clip_template = '''\n vec3 relative_vec{idx} = loc - $clipping_plane_pos{idx};\n float distance_from_clip{idx} = dot(relative_vec{idx}, $clipping_plane_norm{idx});\n distance_from_clip = min(distance_from_clip{idx}, distance_from_clip);\n '''\n all_clips = []\n for idx in range(n_planes):\n all_clips.append(clip_template.format(idx=idx))\n formatted_code = func_template.format(clips=''.join(all_clips))\n return Function(formatted_code)\n\n @property\n def clipping_planes(self):\n \"\"\"Get the set of planes used to clip the mesh.\n Each plane is defined by a position and a normal vector (magnitude is irrelevant). 
Shape: (n_planes, 2, 3)\n \"\"\"\n return self._clipping_planes\n\n @clipping_planes.setter\n def clipping_planes(self, value):\n if value is None:\n value = np.empty([0, 2, 3])\n self._clipping_planes = value\n\n clip_func = self._build_clipping_planes_func(len(value))\n self.vshader['clip_with_planes'] = clip_func\n\n for idx, plane in enumerate(value):\n clip_func[f'clipping_plane_pos{idx}'] = tuple(plane[0])\n clip_func[f'clipping_plane_norm{idx}'] = tuple(plane[1])\n", "path": "vispy/visuals/filters/clipping_planes.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\nfrom functools import lru_cache\n\nimport numpy as np\n\nfrom ..shaders import Function, Varying\nfrom .base_filter import Filter\n\n\nclass PlanesClipper(Filter):\n \"\"\"Clips visual output based on arbitrary clipping planes.\n\n Parameters\n ----------\n cliping_planes : ArrayLike\n Each plane is defined by a position and a normal vector (magnitude is irrelevant). Shape: (n_planes, 2, 3)\n coord_system : str\n Coordinate system used by the clipping planes (see visuals.transforms.transform_system.py)\n\n \"\"\"\n\n VERT_CODE = \"\"\"\n void clip() {\n // pass position as varying for interpolation\n $v_position = gl_Position;\n }\n \"\"\"\n\n FRAG_CODE = \"\"\"\n void clip() {\n float distance_from_clip = $clip_with_planes($itransform($v_position).xyz);\n if (distance_from_clip < 0.)\n discard;\n }\n \"\"\"\n\n def __init__(self, clipping_planes=None, coord_system='scene'):\n tr = ['visual', 'scene', 'document', 'canvas', 'framebuffer', 'render']\n if coord_system not in tr:\n raise ValueError(f'Invalid coordinate system {coord_system}. 
Must be one of {tr}.')\n self._coord_system = coord_system\n\n super().__init__(\n vcode=Function(self.VERT_CODE), vhook='post', vpos=1,\n fcode=Function(self.FRAG_CODE), fhook='pre', fpos=1,\n )\n\n v_position = Varying('v_position', 'vec4')\n self.vshader['v_position'] = v_position\n self.fshader['v_position'] = v_position\n\n self.clipping_planes = clipping_planes\n\n @property\n def coord_system(self):\n \"\"\"\n Coordinate system used by the clipping planes (see visuals.transforms.transform_system.py)\n \"\"\"\n # unsettable cause we can't update the transform after being attached\n return self._coord_system\n\n def _attach(self, visual):\n super()._attach(visual)\n self.fshader['itransform'] = visual.get_transform('render', self._coord_system)\n\n @staticmethod\n @lru_cache(maxsize=10)\n def _build_clipping_planes_func(n_planes):\n \"\"\"Build the code snippet used to clip the volume based on self.clipping_planes.\"\"\"\n func_template = '''\n float clip_planes(vec3 loc) {{\n float distance_from_clip = 3.4e38; // max float\n {clips};\n return distance_from_clip;\n }}\n '''\n # the vertex is considered clipped if on the \"negative\" side of the plane\n clip_template = '''\n vec3 relative_vec{idx} = loc - $clipping_plane_pos{idx};\n float distance_from_clip{idx} = dot(relative_vec{idx}, $clipping_plane_norm{idx});\n distance_from_clip = min(distance_from_clip{idx}, distance_from_clip);\n '''\n all_clips = []\n for idx in range(n_planes):\n all_clips.append(clip_template.format(idx=idx))\n formatted_code = func_template.format(clips=''.join(all_clips))\n return Function(formatted_code)\n\n @property\n def clipping_planes(self):\n \"\"\"Get the set of planes used to clip the mesh.\n Each plane is defined by a position and a normal vector (magnitude is irrelevant). Shape: (n_planes, 2, 3)\n \"\"\"\n return self._clipping_planes\n\n @clipping_planes.setter\n def clipping_planes(self, value):\n if value is None:\n value = np.empty([0, 2, 3])\n self._clipping_planes = value\n\n clip_func = self._build_clipping_planes_func(len(value))\n self.fshader['clip_with_planes'] = clip_func\n\n for idx, plane in enumerate(value):\n clip_func[f'clipping_plane_pos{idx}'] = tuple(plane[0])\n clip_func[f'clipping_plane_norm{idx}'] = tuple(plane[1])\n", "path": "vispy/visuals/filters/clipping_planes.py"}]} | 1,953 | 527 |
gh_patches_debug_10597 | rasdani/github-patches | git_diff | e2nIEE__pandapower-849 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Estimation results
Hello,
I think that in the Python file pandapower -> estimation -> results.py, a baseMVA factor is missing from the calculation.
I think line 22 should be adjusted to this, or similar:
`Sbus = np.multiply(V, np.conj(Ybus * V)) * baseMVA`
Thanks
--- END ISSUE ---
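
For readers unfamiliar with per-unit conventions, the sketch below illustrates the scaling the reporter is asking for. It is a minimal stand-alone example with made-up values (a two-bus system and an assumed 100 MVA base), not the pandapower code itself; note that pandapower's `Ybus * V` is a sparse matrix-vector product, which in dense NumPy is written with `@`:

```python
import numpy as np

baseMVA = 100.0                      # system base power, assumed for the example
V = np.array([1.0 + 0.0j, 0.98 - 0.02j])
Ybus = np.array([[ 5.0 - 15.0j, -5.0 + 15.0j],
                 [-5.0 + 15.0j,  5.0 - 15.0j]])

Sbus_pu = np.multiply(V, np.conj(Ybus @ V))   # bus injections in per unit
Sbus_mva = Sbus_pu * baseMVA                  # same injections in MVA
print(Sbus_pu)
print(Sbus_mva)
```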
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandapower/estimation/results.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics
4 # and Energy System Technology (IEE), Kassel. All rights reserved.
5
6 import numpy as np
7
8 from pandapower.pypower.idx_bus import PD, QD
9 from pandapower.pf.ppci_variables import _get_pf_variables_from_ppci
10 from pandapower.pf.pfsoln_numba import pfsoln
11 from pandapower.results import _copy_results_ppci_to_ppc, _extract_results_se, init_results
12 from pandapower.auxiliary import _add_pf_options, get_values, _clean_up
13
14 def _calc_power_flow(ppci, V):
15 # store results for all elements
16 # calculate branch results (in ppc_i)
17 baseMVA, bus, gen, branch, ref, pv, pq, _, _, _, ref_gens = _get_pf_variables_from_ppci(ppci)
18 Ybus, Yf, Yt = ppci['internal']['Ybus'], ppci['internal']['Yf'], ppci['internal']['Yt']
19 ppci['bus'], ppci['gen'], ppci['branch'] = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, ref_gens)
20
21 # calculate bus power injections
22 Sbus = np.multiply(V, np.conj(Ybus * V))
23 ppci["bus"][:, PD] = -Sbus.real # saved in per unit, injection -> demand
24 ppci["bus"][:, QD] = -Sbus.imag # saved in per unit, injection -> demand
25 return ppci
26
27
28 def _extract_result_ppci_to_pp(net, ppc, ppci):
29 # convert to pandapower indices
30 ppc = _copy_results_ppci_to_ppc(ppci, ppc, mode="se")
31
32 # extract results from ppc
33 try:
34 _add_pf_options(net, tolerance_mva=1e-8, trafo_loading="current",
35 numba=True, ac=True, algorithm='nr', max_iteration="auto")
36 except:
37 pass
38 # writes res_bus.vm_pu / va_degree and res_line
39 _extract_results_se(net, ppc)
40
41 # restore backup of previous results
42 _rename_results(net)
43
44 # additionally, write bus power demand results (these are not written in _extract_results)
45 mapping_table = net["_pd2ppc_lookups"]["bus"]
46 net.res_bus_est.index = net.bus.index
47 net.res_bus_est.p_mw = get_values(ppc["bus"][:, 2], net.bus.index.values,
48 mapping_table)
49 net.res_bus_est.q_mvar = get_values(ppc["bus"][:, 3], net.bus.index.values,
50 mapping_table)
51
52 _clean_up(net)
53 # delete results which are not correctly calculated
54 for k in list(net.keys()):
55 if k.startswith("res_") and k.endswith("_est") and \
56 k not in ("res_bus_est", "res_line_est", "res_trafo_est", "res_trafo3w_est"):
57 del net[k]
58 return net
59
60
61 def _copy_power_flow_results(net):
62 """
63 copy old power flow results (if they exist) into res_*_power_flow tables for backup
64 :param net: pandapower grid
65 :return:
66 """
67 elements_to_init = ["bus", "ext_grid", "line", "load", "load_3ph" "sgen", "sgen_3ph", "trafo", "trafo3w",
68 "shunt", "impedance", "gen", "ward", "xward", "dcline"]
69 for element in elements_to_init:
70 res_name = "res_" + element
71 res_name_pf = res_name + "_power_flow"
72 if res_name in net:
73 net[res_name_pf] = (net[res_name]).copy()
74 init_results(net)
75
76
77 def _rename_results(net):
78 """
79 write result tables to result tables for estimation (e.g., res_bus -> res_bus_est)
80 reset backed up result tables (e.g., res_bus_power_flow -> res_bus)
81 :param net: pandapower grid
82 :return:
83 """
84 elements_to_init = ["bus", "ext_grid", "line", "load", "sgen", "trafo", "trafo3w",
85 "shunt", "impedance", "gen", "ward", "xward", "dcline"]
86 # rename res_* tables to res_*_est and then res_*_power_flow to res_*
87 for element in elements_to_init:
88 res_name = "res_" + element
89 res_name_pf = res_name + "_power_flow"
90 res_name_est = res_name + "_est"
91 net[res_name_est] = net[res_name]
92 if res_name_pf in net:
93 net[res_name] = net[res_name_pf]
94 else:
95 del net[res_name]
96
97 def eppci2pp(net, ppc, eppci):
98 # calculate the branch power flow and bus power injection based on the estimated voltage vector
99 eppci = _calc_power_flow(eppci, eppci.V)
100
101 # extract the result from ppci to ppc and pandpower network
102 net = _extract_result_ppci_to_pp(net, ppc, eppci)
103 return net
104
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pandapower/estimation/results.py b/pandapower/estimation/results.py
--- a/pandapower/estimation/results.py
+++ b/pandapower/estimation/results.py
@@ -19,7 +19,7 @@
ppci['bus'], ppci['gen'], ppci['branch'] = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, ref_gens)
# calculate bus power injections
- Sbus = np.multiply(V, np.conj(Ybus * V))
+ Sbus = np.multiply(V, np.conj(Ybus * V)) * baseMVA
ppci["bus"][:, PD] = -Sbus.real # saved in per unit, injection -> demand
ppci["bus"][:, QD] = -Sbus.imag # saved in per unit, injection -> demand
return ppci
| {"golden_diff": "diff --git a/pandapower/estimation/results.py b/pandapower/estimation/results.py\n--- a/pandapower/estimation/results.py\n+++ b/pandapower/estimation/results.py\n@@ -19,7 +19,7 @@\n ppci['bus'], ppci['gen'], ppci['branch'] = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, ref_gens)\n \n # calculate bus power injections\n- Sbus = np.multiply(V, np.conj(Ybus * V))\n+ Sbus = np.multiply(V, np.conj(Ybus * V)) * baseMVA\n ppci[\"bus\"][:, PD] = -Sbus.real # saved in per unit, injection -> demand\n ppci[\"bus\"][:, QD] = -Sbus.imag # saved in per unit, injection -> demand\n return ppci\n", "issue": "Estimation results\nHello,\r\n\r\nI think in python file pandapower -> estimation -> results.py there is a baseMVA missing in the calculation.\r\n\r\nI think line 22 should be adjusted to this, or similar:\r\n\r\n`Sbus = np.multiply(V, np.conj(Ybus * V)) * baseMVA`\r\n\r\nThanks\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. All rights reserved.\n\nimport numpy as np\n\nfrom pandapower.pypower.idx_bus import PD, QD\nfrom pandapower.pf.ppci_variables import _get_pf_variables_from_ppci\nfrom pandapower.pf.pfsoln_numba import pfsoln\nfrom pandapower.results import _copy_results_ppci_to_ppc, _extract_results_se, init_results\nfrom pandapower.auxiliary import _add_pf_options, get_values, _clean_up\n\ndef _calc_power_flow(ppci, V):\n # store results for all elements\n # calculate branch results (in ppc_i)\n baseMVA, bus, gen, branch, ref, pv, pq, _, _, _, ref_gens = _get_pf_variables_from_ppci(ppci)\n Ybus, Yf, Yt = ppci['internal']['Ybus'], ppci['internal']['Yf'], ppci['internal']['Yt']\n ppci['bus'], ppci['gen'], ppci['branch'] = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, ref_gens)\n\n # calculate bus power injections\n Sbus = np.multiply(V, np.conj(Ybus * V))\n ppci[\"bus\"][:, PD] = -Sbus.real # saved in per unit, injection -> demand\n ppci[\"bus\"][:, QD] = -Sbus.imag # saved in per unit, injection -> demand\n return ppci\n\n\ndef _extract_result_ppci_to_pp(net, ppc, ppci):\n # convert to pandapower indices\n ppc = _copy_results_ppci_to_ppc(ppci, ppc, mode=\"se\")\n\n # extract results from ppc\n try:\n _add_pf_options(net, tolerance_mva=1e-8, trafo_loading=\"current\",\n numba=True, ac=True, algorithm='nr', max_iteration=\"auto\")\n except:\n pass\n # writes res_bus.vm_pu / va_degree and res_line\n _extract_results_se(net, ppc)\n\n # restore backup of previous results\n _rename_results(net)\n\n # additionally, write bus power demand results (these are not written in _extract_results)\n mapping_table = net[\"_pd2ppc_lookups\"][\"bus\"]\n net.res_bus_est.index = net.bus.index\n net.res_bus_est.p_mw = get_values(ppc[\"bus\"][:, 2], net.bus.index.values,\n mapping_table)\n net.res_bus_est.q_mvar = get_values(ppc[\"bus\"][:, 3], net.bus.index.values,\n mapping_table)\n\n _clean_up(net)\n # delete results which are not correctly calculated\n for k in list(net.keys()):\n if k.startswith(\"res_\") and k.endswith(\"_est\") and \\\n k not in (\"res_bus_est\", \"res_line_est\", \"res_trafo_est\", \"res_trafo3w_est\"):\n del net[k]\n return net\n\n\ndef _copy_power_flow_results(net):\n \"\"\"\n copy old power flow results (if they exist) into res_*_power_flow tables for backup\n :param net: pandapower grid\n :return:\n \"\"\"\n elements_to_init = [\"bus\", \"ext_grid\", \"line\", \"load\", \"load_3ph\" \"sgen\", 
\"sgen_3ph\", \"trafo\", \"trafo3w\",\n \"shunt\", \"impedance\", \"gen\", \"ward\", \"xward\", \"dcline\"]\n for element in elements_to_init:\n res_name = \"res_\" + element\n res_name_pf = res_name + \"_power_flow\"\n if res_name in net:\n net[res_name_pf] = (net[res_name]).copy()\n init_results(net)\n\n\ndef _rename_results(net):\n \"\"\"\n write result tables to result tables for estimation (e.g., res_bus -> res_bus_est)\n reset backed up result tables (e.g., res_bus_power_flow -> res_bus)\n :param net: pandapower grid\n :return:\n \"\"\"\n elements_to_init = [\"bus\", \"ext_grid\", \"line\", \"load\", \"sgen\", \"trafo\", \"trafo3w\",\n \"shunt\", \"impedance\", \"gen\", \"ward\", \"xward\", \"dcline\"]\n # rename res_* tables to res_*_est and then res_*_power_flow to res_*\n for element in elements_to_init:\n res_name = \"res_\" + element\n res_name_pf = res_name + \"_power_flow\"\n res_name_est = res_name + \"_est\"\n net[res_name_est] = net[res_name]\n if res_name_pf in net:\n net[res_name] = net[res_name_pf]\n else:\n del net[res_name]\n\ndef eppci2pp(net, ppc, eppci):\n # calculate the branch power flow and bus power injection based on the estimated voltage vector\n eppci = _calc_power_flow(eppci, eppci.V)\n\n # extract the result from ppci to ppc and pandpower network\n net = _extract_result_ppci_to_pp(net, ppc, eppci)\n return net\n\n", "path": "pandapower/estimation/results.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. All rights reserved.\n\nimport numpy as np\n\nfrom pandapower.pypower.idx_bus import PD, QD\nfrom pandapower.pf.ppci_variables import _get_pf_variables_from_ppci\nfrom pandapower.pf.pfsoln_numba import pfsoln\nfrom pandapower.results import _copy_results_ppci_to_ppc, _extract_results_se, init_results\nfrom pandapower.auxiliary import _add_pf_options, get_values, _clean_up\n\ndef _calc_power_flow(ppci, V):\n # store results for all elements\n # calculate branch results (in ppc_i)\n baseMVA, bus, gen, branch, ref, pv, pq, _, _, _, ref_gens = _get_pf_variables_from_ppci(ppci)\n Ybus, Yf, Yt = ppci['internal']['Ybus'], ppci['internal']['Yf'], ppci['internal']['Yt']\n ppci['bus'], ppci['gen'], ppci['branch'] = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, ref_gens)\n\n # calculate bus power injections\n Sbus = np.multiply(V, np.conj(Ybus * V)) * baseMVA\n ppci[\"bus\"][:, PD] = -Sbus.real # saved in per unit, injection -> demand\n ppci[\"bus\"][:, QD] = -Sbus.imag # saved in per unit, injection -> demand\n return ppci\n\n\ndef _extract_result_ppci_to_pp(net, ppc, ppci):\n # convert to pandapower indices\n ppc = _copy_results_ppci_to_ppc(ppci, ppc, mode=\"se\")\n\n # extract results from ppc\n try:\n _add_pf_options(net, tolerance_mva=1e-8, trafo_loading=\"current\",\n numba=True, ac=True, algorithm='nr', max_iteration=\"auto\")\n except:\n pass\n # writes res_bus.vm_pu / va_degree and res_line\n _extract_results_se(net, ppc)\n\n # restore backup of previous results\n _rename_results(net)\n\n # additionally, write bus power demand results (these are not written in _extract_results)\n mapping_table = net[\"_pd2ppc_lookups\"][\"bus\"]\n net.res_bus_est.index = net.bus.index\n net.res_bus_est.p_mw = get_values(ppc[\"bus\"][:, 2], net.bus.index.values,\n mapping_table)\n net.res_bus_est.q_mvar = get_values(ppc[\"bus\"][:, 3], net.bus.index.values,\n mapping_table)\n\n _clean_up(net)\n # delete results 
which are not correctly calculated\n for k in list(net.keys()):\n if k.startswith(\"res_\") and k.endswith(\"_est\") and \\\n k not in (\"res_bus_est\", \"res_line_est\", \"res_trafo_est\", \"res_trafo3w_est\"):\n del net[k]\n return net\n\n\ndef _copy_power_flow_results(net):\n \"\"\"\n copy old power flow results (if they exist) into res_*_power_flow tables for backup\n :param net: pandapower grid\n :return:\n \"\"\"\n elements_to_init = [\"bus\", \"ext_grid\", \"line\", \"load\", \"load_3ph\" \"sgen\", \"sgen_3ph\", \"trafo\", \"trafo3w\",\n \"shunt\", \"impedance\", \"gen\", \"ward\", \"xward\", \"dcline\"]\n for element in elements_to_init:\n res_name = \"res_\" + element\n res_name_pf = res_name + \"_power_flow\"\n if res_name in net:\n net[res_name_pf] = (net[res_name]).copy()\n init_results(net)\n\n\ndef _rename_results(net):\n \"\"\"\n write result tables to result tables for estimation (e.g., res_bus -> res_bus_est)\n reset backed up result tables (e.g., res_bus_power_flow -> res_bus)\n :param net: pandapower grid\n :return:\n \"\"\"\n elements_to_init = [\"bus\", \"ext_grid\", \"line\", \"load\", \"sgen\", \"trafo\", \"trafo3w\",\n \"shunt\", \"impedance\", \"gen\", \"ward\", \"xward\", \"dcline\"]\n # rename res_* tables to res_*_est and then res_*_power_flow to res_*\n for element in elements_to_init:\n res_name = \"res_\" + element\n res_name_pf = res_name + \"_power_flow\"\n res_name_est = res_name + \"_est\"\n net[res_name_est] = net[res_name]\n if res_name_pf in net:\n net[res_name] = net[res_name_pf]\n else:\n del net[res_name]\n\ndef eppci2pp(net, ppc, eppci):\n # calculate the branch power flow and bus power injection based on the estimated voltage vector\n eppci = _calc_power_flow(eppci, eppci.V)\n\n # extract the result from ppci to ppc and pandpower network\n net = _extract_result_ppci_to_pp(net, ppc, eppci)\n return net\n\n", "path": "pandapower/estimation/results.py"}]} | 1,738 | 211 |
gh_patches_debug_3313 | rasdani/github-patches | git_diff | ansible-collections__community.general-6941 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
machinectl become plugin does not specify it requires a tty
### Summary
see https://github.com/ansible/ansible/issues/81254
if the plugin sets the class attribute:
```
require_tty = True
```
It would automatically disable pipelining and avoid such errors.
### Issue Type
Bug Report
### Component Name
become/machinectl
### Ansible Version
```console (paste below)
$ ansible --version
```
all
### Community.general Version
```console (paste below)
$ ansible-galaxy collection list community.general
```
all
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
N/A
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
Use machinectl become plugin + pipelining
### Expected Results
it works TM
### Actual Results
```console (paste below)
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
--- END ISSUE ---
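
For context, the fix the reporter proposes is a one-line class attribute. A minimal sketch of the idea follows; it assumes ansible-core is installed and that it checks `require_tty` on become plugins to disable pipelining, as described in the linked issue:

```python
from ansible.plugins.become import BecomeBase


class BecomeModule(BecomeBase):
    name = 'community.general.machinectl'
    # machinectl allocates a pseudo-terminal for the spawned shell, which
    # breaks pipelined module transfer; declaring the requirement lets
    # ansible-core fall back to non-pipelined execution automatically.
    require_tty = True
```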
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/become/machinectl.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2018, Ansible Project
3 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
4 # SPDX-License-Identifier: GPL-3.0-or-later
5 from __future__ import (absolute_import, division, print_function)
6 __metaclass__ = type
7
8 DOCUMENTATION = '''
9 name: machinectl
10 short_description: Systemd's machinectl privilege escalation
11 description:
12 - This become plugins allows your remote/login user to execute commands as another user via the machinectl utility.
13 author: Ansible Core Team
14 options:
15 become_user:
16 description: User you 'become' to execute the task
17 default: ''
18 ini:
19 - section: privilege_escalation
20 key: become_user
21 - section: machinectl_become_plugin
22 key: user
23 vars:
24 - name: ansible_become_user
25 - name: ansible_machinectl_user
26 env:
27 - name: ANSIBLE_BECOME_USER
28 - name: ANSIBLE_MACHINECTL_USER
29 become_exe:
30 description: Machinectl executable
31 default: machinectl
32 ini:
33 - section: privilege_escalation
34 key: become_exe
35 - section: machinectl_become_plugin
36 key: executable
37 vars:
38 - name: ansible_become_exe
39 - name: ansible_machinectl_exe
40 env:
41 - name: ANSIBLE_BECOME_EXE
42 - name: ANSIBLE_MACHINECTL_EXE
43 become_flags:
44 description: Options to pass to machinectl
45 default: ''
46 ini:
47 - section: privilege_escalation
48 key: become_flags
49 - section: machinectl_become_plugin
50 key: flags
51 vars:
52 - name: ansible_become_flags
53 - name: ansible_machinectl_flags
54 env:
55 - name: ANSIBLE_BECOME_FLAGS
56 - name: ANSIBLE_MACHINECTL_FLAGS
57 become_pass:
58 description: Password for machinectl
59 required: false
60 vars:
61 - name: ansible_become_password
62 - name: ansible_become_pass
63 - name: ansible_machinectl_pass
64 env:
65 - name: ANSIBLE_BECOME_PASS
66 - name: ANSIBLE_MACHINECTL_PASS
67 ini:
68 - section: machinectl_become_plugin
69 key: password
70 notes:
71 - When not using this plugin with user C(root), it only works correctly with a polkit rule which will alter
72 the behaviour of machinectl. This rule must alter the prompt behaviour to ask directly for the user credentials,
73 if the user is allowed to perform the action (take a look at the examples section).
74 If such a rule is not present the plugin only work if it is used in context with the root user,
75 because then no further prompt will be shown by machinectl.
76 '''
77
78 EXAMPLES = r'''
79 # A polkit rule needed to use the module with a non-root user.
80 # See the Notes section for details.
81 60-machinectl-fast-user-auth.rules: |
82 polkit.addRule(function(action, subject) {
83 if(action.id == "org.freedesktop.machine1.host-shell" && subject.isInGroup("wheel")) {
84 return polkit.Result.AUTH_SELF_KEEP;
85 }
86 });
87 '''
88
89 from re import compile as re_compile
90
91 from ansible.plugins.become import BecomeBase
92 from ansible.module_utils._text import to_bytes
93
94
95 ansi_color_codes = re_compile(to_bytes(r'\x1B\[[0-9;]+m'))
96
97
98 class BecomeModule(BecomeBase):
99
100 name = 'community.general.machinectl'
101
102 prompt = 'Password: '
103 fail = ('==== AUTHENTICATION FAILED ====',)
104 success = ('==== AUTHENTICATION COMPLETE ====',)
105
106 @staticmethod
107 def remove_ansi_codes(line):
108 return ansi_color_codes.sub(b"", line)
109
110 def build_become_command(self, cmd, shell):
111 super(BecomeModule, self).build_become_command(cmd, shell)
112
113 if not cmd:
114 return cmd
115
116 become = self.get_option('become_exe')
117
118 flags = self.get_option('become_flags')
119 user = self.get_option('become_user')
120 return '%s -q shell %s %s@ %s' % (become, flags, user, self._build_success_command(cmd, shell))
121
122 def check_success(self, b_output):
123 b_output = self.remove_ansi_codes(b_output)
124 return super().check_success(b_output)
125
126 def check_incorrect_password(self, b_output):
127 b_output = self.remove_ansi_codes(b_output)
128 return super().check_incorrect_password(b_output)
129
130 def check_missing_password(self, b_output):
131 b_output = self.remove_ansi_codes(b_output)
132 return super().check_missing_password(b_output)
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/become/machinectl.py b/plugins/become/machinectl.py
--- a/plugins/become/machinectl.py
+++ b/plugins/become/machinectl.py
@@ -102,6 +102,7 @@
prompt = 'Password: '
fail = ('==== AUTHENTICATION FAILED ====',)
success = ('==== AUTHENTICATION COMPLETE ====',)
+ require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932
@staticmethod
def remove_ansi_codes(line):
| {"golden_diff": "diff --git a/plugins/become/machinectl.py b/plugins/become/machinectl.py\n--- a/plugins/become/machinectl.py\n+++ b/plugins/become/machinectl.py\n@@ -102,6 +102,7 @@\n prompt = 'Password: '\n fail = ('==== AUTHENTICATION FAILED ====',)\n success = ('==== AUTHENTICATION COMPLETE ====',)\n+ require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932\n \n @staticmethod\n def remove_ansi_codes(line):\n", "issue": "machinectl become plugin does not specify it requires a tty\n### Summary\n\nsee https://github.com/ansible/ansible/issues/81254\r\n\r\nif the plugin sets the class attribute:\r\n\r\n```\r\n require_tty = True\r\n```\r\n\r\nIt would automatically disable pipelining and avoid such errors\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nbecome/machinectl\n\n### Ansible Version\n\n```console (paste below)\r\n$ ansible --version\r\n\r\n```\r\nall\n\n### Community.general Version\n\n```console (paste below)\r\n$ ansible-galaxy collection list community.general\r\n\r\n```\r\nall\n\n### Configuration\n\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\nN/A\n\n### OS / Environment\n\nN/A\n\n### Steps to Reproduce\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n\r\n```\r\nUse machinectl become plugin + pipelining\n\n### Expected Results\n\nit works TM\n\n### Actual Results\n\n```console (paste below)\r\n \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for the exact error\",\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2018, Ansible Project\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = '''\n name: machinectl\n short_description: Systemd's machinectl privilege escalation\n description:\n - This become plugins allows your remote/login user to execute commands as another user via the machinectl utility.\n author: Ansible Core Team\n options:\n become_user:\n description: User you 'become' to execute the task\n default: ''\n ini:\n - section: privilege_escalation\n key: become_user\n - section: machinectl_become_plugin\n key: user\n vars:\n - name: ansible_become_user\n - name: ansible_machinectl_user\n env:\n - name: ANSIBLE_BECOME_USER\n - name: ANSIBLE_MACHINECTL_USER\n become_exe:\n description: Machinectl executable\n default: machinectl\n ini:\n - section: privilege_escalation\n key: become_exe\n - section: machinectl_become_plugin\n key: executable\n vars:\n - name: ansible_become_exe\n - name: ansible_machinectl_exe\n env:\n - name: ANSIBLE_BECOME_EXE\n - name: ANSIBLE_MACHINECTL_EXE\n become_flags:\n description: Options to pass to machinectl\n default: ''\n ini:\n - section: privilege_escalation\n key: become_flags\n - section: machinectl_become_plugin\n key: flags\n vars:\n - name: ansible_become_flags\n - name: ansible_machinectl_flags\n env:\n - name: ANSIBLE_BECOME_FLAGS\n - name: ANSIBLE_MACHINECTL_FLAGS\n become_pass:\n description: Password for machinectl\n required: false\n vars:\n - name: ansible_become_password\n - name: ansible_become_pass\n - name: ansible_machinectl_pass\n env:\n - name: ANSIBLE_BECOME_PASS\n - name: ANSIBLE_MACHINECTL_PASS\n ini:\n - section: machinectl_become_plugin\n key: password\n 
notes:\n - When not using this plugin with user C(root), it only works correctly with a polkit rule which will alter\n the behaviour of machinectl. This rule must alter the prompt behaviour to ask directly for the user credentials,\n if the user is allowed to perform the action (take a look at the examples section).\n If such a rule is not present the plugin only work if it is used in context with the root user,\n because then no further prompt will be shown by machinectl.\n'''\n\nEXAMPLES = r'''\n# A polkit rule needed to use the module with a non-root user.\n# See the Notes section for details.\n60-machinectl-fast-user-auth.rules: |\n polkit.addRule(function(action, subject) {\n if(action.id == \"org.freedesktop.machine1.host-shell\" && subject.isInGroup(\"wheel\")) {\n return polkit.Result.AUTH_SELF_KEEP;\n }\n });\n'''\n\nfrom re import compile as re_compile\n\nfrom ansible.plugins.become import BecomeBase\nfrom ansible.module_utils._text import to_bytes\n\n\nansi_color_codes = re_compile(to_bytes(r'\\x1B\\[[0-9;]+m'))\n\n\nclass BecomeModule(BecomeBase):\n\n name = 'community.general.machinectl'\n\n prompt = 'Password: '\n fail = ('==== AUTHENTICATION FAILED ====',)\n success = ('==== AUTHENTICATION COMPLETE ====',)\n\n @staticmethod\n def remove_ansi_codes(line):\n return ansi_color_codes.sub(b\"\", line)\n\n def build_become_command(self, cmd, shell):\n super(BecomeModule, self).build_become_command(cmd, shell)\n\n if not cmd:\n return cmd\n\n become = self.get_option('become_exe')\n\n flags = self.get_option('become_flags')\n user = self.get_option('become_user')\n return '%s -q shell %s %s@ %s' % (become, flags, user, self._build_success_command(cmd, shell))\n\n def check_success(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_success(b_output)\n\n def check_incorrect_password(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_incorrect_password(b_output)\n\n def check_missing_password(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_missing_password(b_output)\n", "path": "plugins/become/machinectl.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2018, Ansible Project\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = '''\n name: machinectl\n short_description: Systemd's machinectl privilege escalation\n description:\n - This become plugins allows your remote/login user to execute commands as another user via the machinectl utility.\n author: Ansible Core Team\n options:\n become_user:\n description: User you 'become' to execute the task\n default: ''\n ini:\n - section: privilege_escalation\n key: become_user\n - section: machinectl_become_plugin\n key: user\n vars:\n - name: ansible_become_user\n - name: ansible_machinectl_user\n env:\n - name: ANSIBLE_BECOME_USER\n - name: ANSIBLE_MACHINECTL_USER\n become_exe:\n description: Machinectl executable\n default: machinectl\n ini:\n - section: privilege_escalation\n key: become_exe\n - section: machinectl_become_plugin\n key: executable\n vars:\n - name: ansible_become_exe\n - name: ansible_machinectl_exe\n env:\n - name: ANSIBLE_BECOME_EXE\n - name: ANSIBLE_MACHINECTL_EXE\n become_flags:\n description: Options to pass to machinectl\n default: ''\n ini:\n - section: 
privilege_escalation\n key: become_flags\n - section: machinectl_become_plugin\n key: flags\n vars:\n - name: ansible_become_flags\n - name: ansible_machinectl_flags\n env:\n - name: ANSIBLE_BECOME_FLAGS\n - name: ANSIBLE_MACHINECTL_FLAGS\n become_pass:\n description: Password for machinectl\n required: false\n vars:\n - name: ansible_become_password\n - name: ansible_become_pass\n - name: ansible_machinectl_pass\n env:\n - name: ANSIBLE_BECOME_PASS\n - name: ANSIBLE_MACHINECTL_PASS\n ini:\n - section: machinectl_become_plugin\n key: password\n notes:\n - When not using this plugin with user C(root), it only works correctly with a polkit rule which will alter\n the behaviour of machinectl. This rule must alter the prompt behaviour to ask directly for the user credentials,\n if the user is allowed to perform the action (take a look at the examples section).\n If such a rule is not present the plugin only work if it is used in context with the root user,\n because then no further prompt will be shown by machinectl.\n'''\n\nEXAMPLES = r'''\n# A polkit rule needed to use the module with a non-root user.\n# See the Notes section for details.\n60-machinectl-fast-user-auth.rules: |\n polkit.addRule(function(action, subject) {\n if(action.id == \"org.freedesktop.machine1.host-shell\" && subject.isInGroup(\"wheel\")) {\n return polkit.Result.AUTH_SELF_KEEP;\n }\n });\n'''\n\nfrom re import compile as re_compile\n\nfrom ansible.plugins.become import BecomeBase\nfrom ansible.module_utils._text import to_bytes\n\n\nansi_color_codes = re_compile(to_bytes(r'\\x1B\\[[0-9;]+m'))\n\n\nclass BecomeModule(BecomeBase):\n\n name = 'community.general.machinectl'\n\n prompt = 'Password: '\n fail = ('==== AUTHENTICATION FAILED ====',)\n success = ('==== AUTHENTICATION COMPLETE ====',)\n require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932\n\n @staticmethod\n def remove_ansi_codes(line):\n return ansi_color_codes.sub(b\"\", line)\n\n def build_become_command(self, cmd, shell):\n super(BecomeModule, self).build_become_command(cmd, shell)\n\n if not cmd:\n return cmd\n\n become = self.get_option('become_exe')\n\n flags = self.get_option('become_flags')\n user = self.get_option('become_user')\n return '%s -q shell %s %s@ %s' % (become, flags, user, self._build_success_command(cmd, shell))\n\n def check_success(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_success(b_output)\n\n def check_incorrect_password(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_incorrect_password(b_output)\n\n def check_missing_password(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_missing_password(b_output)\n", "path": "plugins/become/machinectl.py"}]} | 1,903 | 126 |
gh_patches_debug_25376 | rasdani/github-patches | git_diff | team-ocean__veros-49 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Job resubmission with job scheduler doesn't work
I was not able to find out the reason behind the resubmission issue with the job scheduler, for example:
`veros-resubmit -i acc.lowres -n 50 -l 62208000 -c "python acc.py -b bohrium -v debug" --callback "/usr/bin/sbatch /groups/ocean/nutrik/veros_cases/paper/acc/veros_batch.sh"`
Although jobs with a run length of up to 29 days are resubmitted fine, those with a longer run length are not resubmitted, and no errors or messages are reported.
In fact, jobs are successfully resubmitted without the scheduler (`--callback "./veros_batch.sh"`) for any run length.
--- END ISSUE ---
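
One plausible explanation, sketched below, is that the scheduler callback is spawned with `subprocess.Popen` and never checked: if `sbatch` exits non-zero (for example because the requested wall time for a long run is rejected), the failure is silent. A hedged mitigation is to poll the child briefly and surface early crashes; the timeout and delay values here are arbitrary:

```python
import subprocess
import time


def spawn_callback(callback, timeout=10.0, poll_delay=0.1):
    """Start the resubmission callback (a list of args) and raise if it
    crashes right away instead of failing silently."""
    proc = subprocess.Popen(callback)
    while timeout > 0:
        retcode = proc.poll()
        if retcode is not None:
            if retcode > 0:
                raise RuntimeError("Callback exited with {}".format(retcode))
            break  # finished cleanly before the timeout
        time.sleep(poll_delay)
        timeout -= poll_delay
```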
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `veros/cli/veros_resubmit.py`
Content:
```
1 #!/usr/bin/env python
2
3 import functools
4 import subprocess
5 import shlex
6 import sys
7 import os
8
9 import click
10
11 LAST_N_FILENAME = "{identifier}.current_run"
12
13
14 class ShellCommand(click.ParamType):
15 name = "command"
16
17 def convert(self, value, param, ctx):
18 return shlex.split(value)
19
20
21 def get_current_n(filename):
22 if not os.path.isfile(filename):
23 return 0
24
25 with open(filename, "r") as f:
26 return int(f.read())
27
28
29 def write_next_n(n, filename):
30 with open(filename, "w") as f:
31 f.write(str(n))
32
33
34 def call_veros(cmd, name, n, runlen):
35 identifier = "{name}.{n:0>4}".format(name=name, n=n)
36 prev_id = "{name}.{n:0>4}".format(name=name, n=n - 1)
37 args = ["-s", "identifier", identifier, "-s", "restart_output_filename",
38 "{identifier}.restart.h5", "-s", "runlen", "{}".format(runlen)]
39 if n:
40 args += ["-s", "restart_input_filename", "{prev_id}.restart.h5".format(prev_id=prev_id)]
41 sys.stdout.write("\n >>> {}\n\n".format(" ".join(cmd + args)))
42 sys.stdout.flush()
43 try:
44 subprocess.check_call(cmd + args)
45 except subprocess.CalledProcessError:
46 raise RuntimeError("Run {} failed, exiting".format(n))
47
48
49 def resubmit(identifier, num_runs, length_per_run, veros_cmd, callback):
50 """Performs several runs of Veros back to back, using the previous run as restart input.
51
52 Intended to be used with scheduling systems (e.g. SLURM or PBS).
53
54 """
55 last_n_filename = LAST_N_FILENAME.format(identifier=identifier)
56
57 current_n = get_current_n(last_n_filename)
58 if current_n >= num_runs:
59 return
60
61 call_veros(veros_cmd, identifier, current_n, length_per_run)
62 write_next_n(current_n + 1, last_n_filename)
63 subprocess.Popen(callback)
64
65
66 @click.command("veros-resubmit", short_help="Re-run a Veros setup several times")
67 @click.option("-i", "--identifier", required=True,
68 help="Base identifier of the simulation")
69 @click.option("-n", "--num-runs", type=click.INT, required=True,
70 help="Total number of runs to execute")
71 @click.option("-l", "--length-per-run", type=click.FLOAT, required=True,
72 help="Length (in seconds) of each run")
73 @click.option("-c", "--veros-cmd", type=ShellCommand(), required=True,
74 help="The command that is used to call veros (quoted)")
75 @click.option("--callback", metavar="CMD", type=ShellCommand(), default=None,
76 help="Command to call after each run has finished (quoted, default: call self)")
77 @functools.wraps(resubmit)
78 def cli(*args, **kwargs):
79 if kwargs["callback"] is None:
80 kwargs["callback"] = sys.argv
81 resubmit(*args, **kwargs)
82
83
84 if __name__ == "__main__":
85 cli()
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/veros/cli/veros_resubmit.py b/veros/cli/veros_resubmit.py
--- a/veros/cli/veros_resubmit.py
+++ b/veros/cli/veros_resubmit.py
@@ -5,10 +5,13 @@
import shlex
import sys
import os
+import time
import click
LAST_N_FILENAME = "{identifier}.current_run"
+CHILD_TIMEOUT = 10
+POLL_DELAY = 0.1
class ShellCommand(click.ParamType):
@@ -60,7 +63,21 @@
call_veros(veros_cmd, identifier, current_n, length_per_run)
write_next_n(current_n + 1, last_n_filename)
- subprocess.Popen(callback)
+ next_proc = subprocess.Popen(callback)
+
+ # catch immediately crashing processes
+ timeout = CHILD_TIMEOUT
+
+ while timeout > 0:
+ retcode = next_proc.poll()
+ if retcode is not None:
+ if retcode > 0:
+ # process crashed
+ raise RuntimeError("Callback exited with {}".format(retcode))
+ else:
+ break
+ time.sleep(POLL_DELAY)
+ timeout -= POLL_DELAY
@click.command("veros-resubmit", short_help="Re-run a Veros setup several times")
@@ -78,6 +95,7 @@
def cli(*args, **kwargs):
if kwargs["callback"] is None:
kwargs["callback"] = sys.argv
+
resubmit(*args, **kwargs)
| {"golden_diff": "diff --git a/veros/cli/veros_resubmit.py b/veros/cli/veros_resubmit.py\n--- a/veros/cli/veros_resubmit.py\n+++ b/veros/cli/veros_resubmit.py\n@@ -5,10 +5,13 @@\n import shlex\n import sys\n import os\n+import time\n \n import click\n \n LAST_N_FILENAME = \"{identifier}.current_run\"\n+CHILD_TIMEOUT = 10\n+POLL_DELAY = 0.1\n \n \n class ShellCommand(click.ParamType):\n@@ -60,7 +63,21 @@\n \n call_veros(veros_cmd, identifier, current_n, length_per_run)\n write_next_n(current_n + 1, last_n_filename)\n- subprocess.Popen(callback)\n+ next_proc = subprocess.Popen(callback)\n+\n+ # catch immediately crashing processes\n+ timeout = CHILD_TIMEOUT\n+\n+ while timeout > 0:\n+ retcode = next_proc.poll()\n+ if retcode is not None:\n+ if retcode > 0:\n+ # process crashed\n+ raise RuntimeError(\"Callback exited with {}\".format(retcode))\n+ else:\n+ break\n+ time.sleep(POLL_DELAY)\n+ timeout -= POLL_DELAY\n \n \n @click.command(\"veros-resubmit\", short_help=\"Re-run a Veros setup several times\")\n@@ -78,6 +95,7 @@\n def cli(*args, **kwargs):\n if kwargs[\"callback\"] is None:\n kwargs[\"callback\"] = sys.argv\n+\n resubmit(*args, **kwargs)\n", "issue": "Job resubmission with job scheduler doesn't work \nI was not able to find out the reason behind resubmission issue with job scheduler, such as:\r\n`veros-resubmit -i acc.lowres -n 50 -l 62208000 -c \"python acc.py -b bohrium -v debug\" --callback \"/usr/bin/sbatch /groups/ocean/nutrik/veros_cases/paper/acc/veros_batch.sh\"`\r\nAlthough jobs with run length of up to 29 days are resubmitted fine, those with longer run length are not resubmitted and no errors or messages are reported.\r\n\r\nIn fact, jobs are successfully resubmitted without scheduler (`--callback \"./veros_batch.sh\"`) for any run length.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport functools\nimport subprocess\nimport shlex\nimport sys\nimport os\n\nimport click\n\nLAST_N_FILENAME = \"{identifier}.current_run\"\n\n\nclass ShellCommand(click.ParamType):\n name = \"command\"\n\n def convert(self, value, param, ctx):\n return shlex.split(value)\n\n\ndef get_current_n(filename):\n if not os.path.isfile(filename):\n return 0\n\n with open(filename, \"r\") as f:\n return int(f.read())\n\n\ndef write_next_n(n, filename):\n with open(filename, \"w\") as f:\n f.write(str(n))\n\n\ndef call_veros(cmd, name, n, runlen):\n identifier = \"{name}.{n:0>4}\".format(name=name, n=n)\n prev_id = \"{name}.{n:0>4}\".format(name=name, n=n - 1)\n args = [\"-s\", \"identifier\", identifier, \"-s\", \"restart_output_filename\",\n \"{identifier}.restart.h5\", \"-s\", \"runlen\", \"{}\".format(runlen)]\n if n:\n args += [\"-s\", \"restart_input_filename\", \"{prev_id}.restart.h5\".format(prev_id=prev_id)]\n sys.stdout.write(\"\\n >>> {}\\n\\n\".format(\" \".join(cmd + args)))\n sys.stdout.flush()\n try:\n subprocess.check_call(cmd + args)\n except subprocess.CalledProcessError:\n raise RuntimeError(\"Run {} failed, exiting\".format(n))\n\n\ndef resubmit(identifier, num_runs, length_per_run, veros_cmd, callback):\n \"\"\"Performs several runs of Veros back to back, using the previous run as restart input.\n\n Intended to be used with scheduling systems (e.g. 
SLURM or PBS).\n\n \"\"\"\n last_n_filename = LAST_N_FILENAME.format(identifier=identifier)\n\n current_n = get_current_n(last_n_filename)\n if current_n >= num_runs:\n return\n\n call_veros(veros_cmd, identifier, current_n, length_per_run)\n write_next_n(current_n + 1, last_n_filename)\n subprocess.Popen(callback)\n\n\[email protected](\"veros-resubmit\", short_help=\"Re-run a Veros setup several times\")\[email protected](\"-i\", \"--identifier\", required=True,\n help=\"Base identifier of the simulation\")\[email protected](\"-n\", \"--num-runs\", type=click.INT, required=True,\n help=\"Total number of runs to execute\")\[email protected](\"-l\", \"--length-per-run\", type=click.FLOAT, required=True,\n help=\"Length (in seconds) of each run\")\[email protected](\"-c\", \"--veros-cmd\", type=ShellCommand(), required=True,\n help=\"The command that is used to call veros (quoted)\")\[email protected](\"--callback\", metavar=\"CMD\", type=ShellCommand(), default=None,\n help=\"Command to call after each run has finished (quoted, default: call self)\")\[email protected](resubmit)\ndef cli(*args, **kwargs):\n if kwargs[\"callback\"] is None:\n kwargs[\"callback\"] = sys.argv\n resubmit(*args, **kwargs)\n\n\nif __name__ == \"__main__\":\n cli()\n", "path": "veros/cli/veros_resubmit.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport functools\nimport subprocess\nimport shlex\nimport sys\nimport os\nimport time\n\nimport click\n\nLAST_N_FILENAME = \"{identifier}.current_run\"\nCHILD_TIMEOUT = 10\nPOLL_DELAY = 0.1\n\n\nclass ShellCommand(click.ParamType):\n name = \"command\"\n\n def convert(self, value, param, ctx):\n return shlex.split(value)\n\n\ndef get_current_n(filename):\n if not os.path.isfile(filename):\n return 0\n\n with open(filename, \"r\") as f:\n return int(f.read())\n\n\ndef write_next_n(n, filename):\n with open(filename, \"w\") as f:\n f.write(str(n))\n\n\ndef call_veros(cmd, name, n, runlen):\n identifier = \"{name}.{n:0>4}\".format(name=name, n=n)\n prev_id = \"{name}.{n:0>4}\".format(name=name, n=n - 1)\n args = [\"-s\", \"identifier\", identifier, \"-s\", \"restart_output_filename\",\n \"{identifier}.restart.h5\", \"-s\", \"runlen\", \"{}\".format(runlen)]\n if n:\n args += [\"-s\", \"restart_input_filename\", \"{prev_id}.restart.h5\".format(prev_id=prev_id)]\n sys.stdout.write(\"\\n >>> {}\\n\\n\".format(\" \".join(cmd + args)))\n sys.stdout.flush()\n try:\n subprocess.check_call(cmd + args)\n except subprocess.CalledProcessError:\n raise RuntimeError(\"Run {} failed, exiting\".format(n))\n\n\ndef resubmit(identifier, num_runs, length_per_run, veros_cmd, callback):\n \"\"\"Performs several runs of Veros back to back, using the previous run as restart input.\n\n Intended to be used with scheduling systems (e.g. 
SLURM or PBS).\n\n \"\"\"\n last_n_filename = LAST_N_FILENAME.format(identifier=identifier)\n\n current_n = get_current_n(last_n_filename)\n if current_n >= num_runs:\n return\n\n call_veros(veros_cmd, identifier, current_n, length_per_run)\n write_next_n(current_n + 1, last_n_filename)\n next_proc = subprocess.Popen(callback)\n\n # catch immediately crashing processes\n timeout = CHILD_TIMEOUT\n\n while timeout > 0:\n retcode = next_proc.poll()\n if retcode is not None:\n if retcode > 0:\n # process crashed\n raise RuntimeError(\"Callback exited with {}\".format(retcode))\n else:\n break\n time.sleep(POLL_DELAY)\n timeout -= POLL_DELAY\n\n\[email protected](\"veros-resubmit\", short_help=\"Re-run a Veros setup several times\")\[email protected](\"-i\", \"--identifier\", required=True,\n help=\"Base identifier of the simulation\")\[email protected](\"-n\", \"--num-runs\", type=click.INT, required=True,\n help=\"Total number of runs to execute\")\[email protected](\"-l\", \"--length-per-run\", type=click.FLOAT, required=True,\n help=\"Length (in seconds) of each run\")\[email protected](\"-c\", \"--veros-cmd\", type=ShellCommand(), required=True,\n help=\"The command that is used to call veros (quoted)\")\[email protected](\"--callback\", metavar=\"CMD\", type=ShellCommand(), default=None,\n help=\"Command to call after each run has finished (quoted, default: call self)\")\[email protected](resubmit)\ndef cli(*args, **kwargs):\n if kwargs[\"callback\"] is None:\n kwargs[\"callback\"] = sys.argv\n\n resubmit(*args, **kwargs)\n\n\nif __name__ == \"__main__\":\n cli()\n", "path": "veros/cli/veros_resubmit.py"}]} | 1,290 | 347 |
gh_patches_debug_30694 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1599 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: was_wolfsburg_de stopped fetching data
### I Have A Problem With:
A specific source
### What's Your Problem
The source was_wolfsburg_de stopped fetching data for 2024. I suspect this is because the request link is no longer accurate.
I have experimented a bit, and with the following address I receive current data:
https://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php
It only concerns "Restmüll, Bioabfall und Papierabfall". "Gelber Sack" is still functioning.
### Source (if relevant)
was_wolfsburg_de
### Logs
_No response_
### Relevant Configuration
_No response_
### Checklist Source Error
- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [ ] Checked that the website of your service provider is still working
- [ ] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
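
The endpoint change can be verified independently of the integration. The probe below is a hypothetical sketch: the URL comes from the report, while the query-parameter name `k` and the street value are assumptions and may need adjusting after inspecting the site's request form:

```python
import requests

# Hypothetical probe of the endpoint reported as working; "k" (street key)
# is an assumed parameter name taken from the proposed fix.
r = requests.get(
    "https://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php",
    params={"k": "Bahnhofspassage"},
    timeout=10,
)
print(r.status_code, len(r.text))
```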
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py`
Content:
```
1 import datetime
2 import re
3
4 import requests
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6 from waste_collection_schedule.service.ICS import ICS
7
8 TITLE = "Wolfsburger Abfallwirtschaft und Straßenreinigung"
9 DESCRIPTION = "Source for waste collections for WAS-Wolfsburg, Germany."
10 URL = "https://was-wolfsburg.de"
11 TEST_CASES = {
12 "Barnstorf": {"city": "Barnstorf", "street": "Bahnhofspassage"},
13 "Sülfeld": {"city": "Sülfeld", "street": "Bärheide"},
14 }
15 CHARACTER_MAP = {
16 ord("ü"): "u",
17 ord("ö"): "o", # doesn't appear to be needed
18 ord("ä"): "a", # doesn't appear to be needed
19 }
20
21
22 class Source:
23 def __init__(self, city: str, street: str):
24 self._city = city.translate(CHARACTER_MAP)
25 self._street = street.translate(CHARACTER_MAP)
26 self._ics = ICS()
27
28 def fetch(self):
29 # fetch "Gelber Sack"
30 args = {"g": self._city}
31 r = requests.get(
32 "https://was-wolfsburg.de/subgelberweihgarten/php/abfuhrgelber.php",
33 params=args,
34 )
35
36 entries = []
37 match = re.findall(r"(\d{2})\.(\d{2})\.(\d{4})", r.text)
38 for m in match:
39 date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))
40 entries.append(Collection(date, "Gelber Sack"))
41
42 # fetch remaining collections
43 args = {"ortabf": self._street}
44 r = requests.post(
45 "https://was-wolfsburg.de/subabfuhrtermine/ics_abfuhrtermine3.php",
46 data=args,
47 )
48 dates = self._ics.convert(r.text)
49 for d in dates:
50 entries.append(Collection(d[0], d[1]))
51
52 return entries
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py
@@ -12,6 +12,14 @@
"Barnstorf": {"city": "Barnstorf", "street": "Bahnhofspassage"},
"Sülfeld": {"city": "Sülfeld", "street": "Bärheide"},
}
+
+ICON_MAP = {
+ "Gelber Sack": "mdi:recycle",
+ "Bioabfall": "mdi:leaf",
+ "Restabfall": "mdi:trash-can",
+ "Altpapier": "mdi:file-document-outline",
+}
+
CHARACTER_MAP = {
ord("ü"): "u",
ord("ö"): "o", # doesn't appear to be needed
@@ -37,16 +45,21 @@
match = re.findall(r"(\d{2})\.(\d{2})\.(\d{4})", r.text)
for m in match:
date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))
- entries.append(Collection(date, "Gelber Sack"))
+ entries.append(
+ Collection(date, "Gelber Sack", icon=ICON_MAP["Gelber Sack"])
+ )
# fetch remaining collections
- args = {"ortabf": self._street}
- r = requests.post(
- "https://was-wolfsburg.de/subabfuhrtermine/ics_abfuhrtermine3.php",
- data=args,
+ args = {"k": self._street}
+ r = requests.get(
+ "https://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php",
+ params=args,
+ )
+ match = re.findall(
+ r"(\d{2})\.(\d{2})\.(\d{4}).*?<em>\s*([A-Za-z- ]+)\s*</em>", r.text
)
- dates = self._ics.convert(r.text)
- for d in dates:
- entries.append(Collection(d[0], d[1]))
+ for m in match:
+ date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))
+ entries.append(Collection(date, m[3], icon=ICON_MAP[m[3]]))
return entries
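As an aside, the patch above leans entirely on a single regular expression to pull both the collection date and the waste-type name out of the page. A standalone sketch of that regex against a made-up HTML fragment (the markup below is an assumption for illustration, not copied from the live site):
```python
import datetime
import re

# Hypothetical fragment shaped like the markup the patched regex expects.
html = "01.02.2024 ... <em>Restabfall</em> 15.02.2024 ... <em>Bioabfall</em>"

for day, month, year, waste_type in re.findall(
    r"(\d{2})\.(\d{2})\.(\d{4}).*?<em>\s*([A-Za-z- ]+)\s*</em>", html
):
    # Each match yields one collection date plus the type name inside <em> tags.
    print(datetime.date(int(year), int(month), int(day)), waste_type.strip())
```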
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py\n@@ -12,6 +12,14 @@\n \"Barnstorf\": {\"city\": \"Barnstorf\", \"street\": \"Bahnhofspassage\"},\n \"S\u00fclfeld\": {\"city\": \"S\u00fclfeld\", \"street\": \"B\u00e4rheide\"},\n }\n+\n+ICON_MAP = {\n+ \"Gelber Sack\": \"mdi:recycle\",\n+ \"Bioabfall\": \"mdi:leaf\",\n+ \"Restabfall\": \"mdi:trash-can\",\n+ \"Altpapier\": \"mdi:file-document-outline\",\n+}\n+\n CHARACTER_MAP = {\n ord(\"\u00fc\"): \"u\",\n ord(\"\u00f6\"): \"o\", # doesn't appear to be needed\n@@ -37,16 +45,21 @@\n match = re.findall(r\"(\\d{2})\\.(\\d{2})\\.(\\d{4})\", r.text)\n for m in match:\n date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))\n- entries.append(Collection(date, \"Gelber Sack\"))\n+ entries.append(\n+ Collection(date, \"Gelber Sack\", icon=ICON_MAP[\"Gelber Sack\"])\n+ )\n \n # fetch remaining collections\n- args = {\"ortabf\": self._street}\n- r = requests.post(\n- \"https://was-wolfsburg.de/subabfuhrtermine/ics_abfuhrtermine3.php\",\n- data=args,\n+ args = {\"k\": self._street}\n+ r = requests.get(\n+ \"https://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php\",\n+ params=args,\n+ )\n+ match = re.findall(\n+ r\"(\\d{2})\\.(\\d{2})\\.(\\d{4}).*?<em>\\s*([A-Za-z- ]+)\\s*</em>\", r.text\n )\n- dates = self._ics.convert(r.text)\n- for d in dates:\n- entries.append(Collection(d[0], d[1]))\n+ for m in match:\n+ date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))\n+ entries.append(Collection(date, m[3], icon=ICON_MAP[m[3]]))\n \n return entries\n", "issue": "[Bug]: was_wolfsburg_de stopped fetching data\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nThe Source was_wolfsburg_de stopped fetching data for 2024. I suspect because the request link is no longer accurate.\r\nI have experimented a bit, and with the following address I receive current data: \r\n\r\nhttps://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php\r\n\r\nIt only concerns \"Restm\u00fcll, Bioabfall und Papierabfall\". 
\"Gelber Sack\" is still functioning.\n\n### Source (if relevant)\n\nwas_wolfsburg_de\n\n### Logs\n\n_No response_\n\n### Relevant Configuration\n\n_No response_\n\n### Checklist Source Error\n\n- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [ ] Checked that the website of your service provider is still working\n- [ ] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "import datetime\nimport re\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"Wolfsburger Abfallwirtschaft und Stra\u00dfenreinigung\"\nDESCRIPTION = \"Source for waste collections for WAS-Wolfsburg, Germany.\"\nURL = \"https://was-wolfsburg.de\"\nTEST_CASES = {\n \"Barnstorf\": {\"city\": \"Barnstorf\", \"street\": \"Bahnhofspassage\"},\n \"S\u00fclfeld\": {\"city\": \"S\u00fclfeld\", \"street\": \"B\u00e4rheide\"},\n}\nCHARACTER_MAP = {\n ord(\"\u00fc\"): \"u\",\n ord(\"\u00f6\"): \"o\", # doesn't appear to be needed\n ord(\"\u00e4\"): \"a\", # doesn't appear to be needed\n}\n\n\nclass Source:\n def __init__(self, city: str, street: str):\n self._city = city.translate(CHARACTER_MAP)\n self._street = street.translate(CHARACTER_MAP)\n self._ics = ICS()\n\n def fetch(self):\n # fetch \"Gelber Sack\"\n args = {\"g\": self._city}\n r = requests.get(\n \"https://was-wolfsburg.de/subgelberweihgarten/php/abfuhrgelber.php\",\n params=args,\n )\n\n entries = []\n match = re.findall(r\"(\\d{2})\\.(\\d{2})\\.(\\d{4})\", r.text)\n for m in match:\n date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))\n entries.append(Collection(date, \"Gelber Sack\"))\n\n # fetch remaining collections\n args = {\"ortabf\": self._street}\n r = requests.post(\n \"https://was-wolfsburg.de/subabfuhrtermine/ics_abfuhrtermine3.php\",\n data=args,\n )\n dates = self._ics.convert(r.text)\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py"}], "after_files": [{"content": "import datetime\nimport re\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"Wolfsburger Abfallwirtschaft und Stra\u00dfenreinigung\"\nDESCRIPTION = \"Source for waste collections for WAS-Wolfsburg, Germany.\"\nURL = \"https://was-wolfsburg.de\"\nTEST_CASES = {\n \"Barnstorf\": {\"city\": \"Barnstorf\", \"street\": \"Bahnhofspassage\"},\n \"S\u00fclfeld\": {\"city\": \"S\u00fclfeld\", \"street\": \"B\u00e4rheide\"},\n}\n\nICON_MAP = {\n \"Gelber Sack\": \"mdi:recycle\",\n \"Bioabfall\": \"mdi:leaf\",\n \"Restabfall\": \"mdi:trash-can\",\n 
\"Altpapier\": \"mdi:file-document-outline\",\n}\n\nCHARACTER_MAP = {\n ord(\"\u00fc\"): \"u\",\n ord(\"\u00f6\"): \"o\", # doesn't appear to be needed\n ord(\"\u00e4\"): \"a\", # doesn't appear to be needed\n}\n\n\nclass Source:\n def __init__(self, city: str, street: str):\n self._city = city.translate(CHARACTER_MAP)\n self._street = street.translate(CHARACTER_MAP)\n self._ics = ICS()\n\n def fetch(self):\n # fetch \"Gelber Sack\"\n args = {\"g\": self._city}\n r = requests.get(\n \"https://was-wolfsburg.de/subgelberweihgarten/php/abfuhrgelber.php\",\n params=args,\n )\n\n entries = []\n match = re.findall(r\"(\\d{2})\\.(\\d{2})\\.(\\d{4})\", r.text)\n for m in match:\n date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))\n entries.append(\n Collection(date, \"Gelber Sack\", icon=ICON_MAP[\"Gelber Sack\"])\n )\n\n # fetch remaining collections\n args = {\"k\": self._street}\n r = requests.get(\n \"https://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php\",\n params=args,\n )\n match = re.findall(\n r\"(\\d{2})\\.(\\d{2})\\.(\\d{4}).*?<em>\\s*([A-Za-z- ]+)\\s*</em>\", r.text\n )\n for m in match:\n date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))\n entries.append(Collection(date, m[3], icon=ICON_MAP[m[3]]))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py"}]} | 1,193 | 609 |
gh_patches_debug_18547 | rasdani/github-patches | git_diff | searx__searx-1501 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Asksteem is gone
The API has been discontinued so it should probably be removed as an option entirely.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/asksteem.py`
Content:
```
1 """
2 Asksteem (general)
3
4 @website https://asksteem.com/
5 @provide-api yes
6
7 @using-api yes
8 @results JSON (https://github.com/Hoxly/asksteem-docs/wiki)
9 @stable yes
10 @parse url, title, content
11 """
12
13 from json import loads
14 from searx.url_utils import urlencode
15
16 # engine dependent config
17 categories = ['general']
18 paging = True
19 language_support = False
20 disabled = True
21
22 # search-url
23 search_url = 'https://api.asksteem.com/search?{params}'
24 result_url = 'https://steemit.com/@{author}/{title}'
25
26
27 # do search-request
28 def request(query, params):
29 url = search_url.format(params=urlencode({'q': query, 'pg': params['pageno']}))
30 params['url'] = url
31 return params
32
33
34 # get response from search-request
35 def response(resp):
36 json = loads(resp.text)
37
38 results = []
39
40 for result in json.get('results', []):
41 results.append({'url': result_url.format(author=result['author'], title=result['permlink']),
42 'title': result['title'],
43 'content': result['summary']})
44 return results
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/asksteem.py b/searx/engines/asksteem.py
deleted file mode 100644
--- a/searx/engines/asksteem.py
+++ /dev/null
@@ -1,44 +0,0 @@
-"""
- Asksteem (general)
-
- @website https://asksteem.com/
- @provide-api yes
-
- @using-api yes
- @results JSON (https://github.com/Hoxly/asksteem-docs/wiki)
- @stable yes
- @parse url, title, content
-"""
-
-from json import loads
-from searx.url_utils import urlencode
-
-# engine dependent config
-categories = ['general']
-paging = True
-language_support = False
-disabled = True
-
-# search-url
-search_url = 'https://api.asksteem.com/search?{params}'
-result_url = 'https://steemit.com/@{author}/{title}'
-
-
-# do search-request
-def request(query, params):
- url = search_url.format(params=urlencode({'q': query, 'pg': params['pageno']}))
- params['url'] = url
- return params
-
-
-# get response from search-request
-def response(resp):
- json = loads(resp.text)
-
- results = []
-
- for result in json.get('results', []):
- results.append({'url': result_url.format(author=result['author'], title=result['permlink']),
- 'title': result['title'],
- 'content': result['summary']})
- return results
| {"golden_diff": "diff --git a/searx/engines/asksteem.py b/searx/engines/asksteem.py\ndeleted file mode 100644\n--- a/searx/engines/asksteem.py\n+++ /dev/null\n@@ -1,44 +0,0 @@\n-\"\"\"\n- Asksteem (general)\n-\n- @website https://asksteem.com/\n- @provide-api yes\n-\n- @using-api yes\n- @results JSON (https://github.com/Hoxly/asksteem-docs/wiki)\n- @stable yes\n- @parse url, title, content\n-\"\"\"\n-\n-from json import loads\n-from searx.url_utils import urlencode\n-\n-# engine dependent config\n-categories = ['general']\n-paging = True\n-language_support = False\n-disabled = True\n-\n-# search-url\n-search_url = 'https://api.asksteem.com/search?{params}'\n-result_url = 'https://steemit.com/@{author}/{title}'\n-\n-\n-# do search-request\n-def request(query, params):\n- url = search_url.format(params=urlencode({'q': query, 'pg': params['pageno']}))\n- params['url'] = url\n- return params\n-\n-\n-# get response from search-request\n-def response(resp):\n- json = loads(resp.text)\n-\n- results = []\n-\n- for result in json.get('results', []):\n- results.append({'url': result_url.format(author=result['author'], title=result['permlink']),\n- 'title': result['title'],\n- 'content': result['summary']})\n- return results\n", "issue": "Asksteem is gone\nThe API has been discontinued so it should probably be removed as an option entirely.\n", "before_files": [{"content": "\"\"\"\n Asksteem (general)\n\n @website https://asksteem.com/\n @provide-api yes\n\n @using-api yes\n @results JSON (https://github.com/Hoxly/asksteem-docs/wiki)\n @stable yes\n @parse url, title, content\n\"\"\"\n\nfrom json import loads\nfrom searx.url_utils import urlencode\n\n# engine dependent config\ncategories = ['general']\npaging = True\nlanguage_support = False\ndisabled = True\n\n# search-url\nsearch_url = 'https://api.asksteem.com/search?{params}'\nresult_url = 'https://steemit.com/@{author}/{title}'\n\n\n# do search-request\ndef request(query, params):\n url = search_url.format(params=urlencode({'q': query, 'pg': params['pageno']}))\n params['url'] = url\n return params\n\n\n# get response from search-request\ndef response(resp):\n json = loads(resp.text)\n\n results = []\n\n for result in json.get('results', []):\n results.append({'url': result_url.format(author=result['author'], title=result['permlink']),\n 'title': result['title'],\n 'content': result['summary']})\n return results\n", "path": "searx/engines/asksteem.py"}], "after_files": [{"content": null, "path": "searx/engines/asksteem.py"}]} | 641 | 359 |
gh_patches_debug_18029 | rasdani/github-patches | git_diff | python-poetry__poetry-1796 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installing directory poetry package with dependencies in secondary source fails
<!--
Hi there! Thank you for discovering and submitting an issue.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
- **MacOS 10.14**: <!-- Replace with version + name -->
- **1.0.0b8**: <!-- Replace with version -->
## Issue
Due to https://github.com/pypa/pip/issues/7444 installing a directory which is managed by poetry or has a pyproject.toml file present will cause the `--no-deps` argument to be ignored.
This can go unnoticed as long as you are only working with pypi dependencies but when your package depends on a private pypi repository this causes installs to fail.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/packages/file_dependency.py`
Content:
```
1 import hashlib
2 import io
3
4 from pkginfo.distribution import HEADER_ATTRS
5 from pkginfo.distribution import HEADER_ATTRS_2_0
6
7 from poetry.utils._compat import Path
8
9 from .dependency import Dependency
10
11
12 # Patching pkginfo to support Metadata version 2.1 (PEP 566)
13 HEADER_ATTRS.update(
14 {"2.1": HEADER_ATTRS_2_0 + (("Provides-Extra", "provides_extra", True),)}
15 )
16
17
18 class FileDependency(Dependency):
19 def __init__(
20 self,
21 name,
22 path, # type: Path
23 category="main", # type: str
24 optional=False, # type: bool
25 base=None, # type: Path
26 ):
27 self._path = path
28 self._base = base
29 self._full_path = path
30
31 if self._base and not self._path.is_absolute():
32 self._full_path = self._base / self._path
33
34 if not self._full_path.exists():
35 raise ValueError("File {} does not exist".format(self._path))
36
37 if self._full_path.is_dir():
38 raise ValueError("{} is a directory, expected a file".format(self._path))
39
40 super(FileDependency, self).__init__(
41 name, "*", category=category, optional=optional, allows_prereleases=True
42 )
43
44 @property
45 def path(self):
46 return self._path
47
48 @property
49 def full_path(self):
50 return self._full_path.resolve()
51
52 def is_file(self):
53 return True
54
55 def hash(self):
56 h = hashlib.sha256()
57 with self._full_path.open("rb") as fp:
58 for content in iter(lambda: fp.read(io.DEFAULT_BUFFER_SIZE), b""):
59 h.update(content)
60
61 return h.hexdigest()
62
```
Path: `poetry/packages/directory_dependency.py`
Content:
```
1 from pkginfo.distribution import HEADER_ATTRS
2 from pkginfo.distribution import HEADER_ATTRS_2_0
3
4 from poetry.utils._compat import Path
5 from poetry.utils.toml_file import TomlFile
6
7 from .dependency import Dependency
8
9
10 # Patching pkginfo to support Metadata version 2.1 (PEP 566)
11 HEADER_ATTRS.update(
12 {"2.1": HEADER_ATTRS_2_0 + (("Provides-Extra", "provides_extra", True),)}
13 )
14
15
16 class DirectoryDependency(Dependency):
17 def __init__(
18 self,
19 name,
20 path, # type: Path
21 category="main", # type: str
22 optional=False, # type: bool
23 base=None, # type: Path
24 develop=True, # type: bool
25 ):
26 self._path = path
27 self._base = base
28 self._full_path = path
29 self._develop = develop
30 self._supports_poetry = False
31
32 if self._base and not self._path.is_absolute():
33 self._full_path = self._base / self._path
34
35 if not self._full_path.exists():
36 raise ValueError("Directory {} does not exist".format(self._path))
37
38 if self._full_path.is_file():
39 raise ValueError("{} is a file, expected a directory".format(self._path))
40
41 # Checking content to determine actions
42 setup = self._full_path / "setup.py"
43 pyproject = TomlFile(self._full_path / "pyproject.toml")
44 if pyproject.exists():
45 pyproject_content = pyproject.read()
46 self._supports_poetry = (
47 "tool" in pyproject_content and "poetry" in pyproject_content["tool"]
48 )
49
50 if not setup.exists() and not self._supports_poetry:
51 raise ValueError(
52 "Directory {} does not seem to be a Python package".format(
53 self._full_path
54 )
55 )
56
57 super(DirectoryDependency, self).__init__(
58 name, "*", category=category, optional=optional, allows_prereleases=True
59 )
60
61 @property
62 def path(self):
63 return self._path
64
65 @property
66 def full_path(self):
67 return self._full_path.resolve()
68
69 @property
70 def base(self):
71 return self._base
72
73 @property
74 def develop(self):
75 return self._develop
76
77 def supports_poetry(self):
78 return self._supports_poetry
79
80 def is_directory(self):
81 return True
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/packages/directory_dependency.py b/poetry/packages/directory_dependency.py
--- a/poetry/packages/directory_dependency.py
+++ b/poetry/packages/directory_dependency.py
@@ -74,6 +74,17 @@
def develop(self):
return self._develop
+ @property
+ def base_pep_508_name(self): # type: () -> str
+ requirement = self.pretty_name
+
+ if self.extras:
+ requirement += "[{}]".format(",".join(self.extras))
+
+ requirement += " @ {}".format(self._path)
+
+ return requirement
+
def supports_poetry(self):
return self._supports_poetry
diff --git a/poetry/packages/file_dependency.py b/poetry/packages/file_dependency.py
--- a/poetry/packages/file_dependency.py
+++ b/poetry/packages/file_dependency.py
@@ -49,6 +49,17 @@
def full_path(self):
return self._full_path.resolve()
+ @property
+ def base_pep_508_name(self): # type: () -> str
+ requirement = self.pretty_name
+
+ if self.extras:
+ requirement += "[{}]".format(",".join(self.extras))
+
+ requirement += " @ {}".format(self._path)
+
+ return requirement
+
def is_file(self):
return True
| {"golden_diff": "diff --git a/poetry/packages/directory_dependency.py b/poetry/packages/directory_dependency.py\n--- a/poetry/packages/directory_dependency.py\n+++ b/poetry/packages/directory_dependency.py\n@@ -74,6 +74,17 @@\n def develop(self):\n return self._develop\n \n+ @property\n+ def base_pep_508_name(self): # type: () -> str\n+ requirement = self.pretty_name\n+\n+ if self.extras:\n+ requirement += \"[{}]\".format(\",\".join(self.extras))\n+\n+ requirement += \" @ {}\".format(self._path)\n+\n+ return requirement\n+\n def supports_poetry(self):\n return self._supports_poetry\n \ndiff --git a/poetry/packages/file_dependency.py b/poetry/packages/file_dependency.py\n--- a/poetry/packages/file_dependency.py\n+++ b/poetry/packages/file_dependency.py\n@@ -49,6 +49,17 @@\n def full_path(self):\n return self._full_path.resolve()\n \n+ @property\n+ def base_pep_508_name(self): # type: () -> str\n+ requirement = self.pretty_name\n+\n+ if self.extras:\n+ requirement += \"[{}]\".format(\",\".join(self.extras))\n+\n+ requirement += \" @ {}\".format(self._path)\n+\n+ return requirement\n+\n def is_file(self):\n return True\n", "issue": "Installing directory poetry package with dependencies in secondary source fails\n<!--\r\n Hi there! Thank you for discovering and submitting an issue.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n- **MacOS 10.14**: <!-- Replace with version + name -->\r\n- **1.0.0b8**: <!-- Replace with version -->\r\n\r\n## Issue\r\nDue to https://github.com/pypa/pip/issues/7444 installing a directory which is managed by poetry or has a pyproject.toml file present will cause the `--no-deps` argument to be ignored. \r\n\r\nThis can go unnoticed as long as you are only working with pypi dependencies but when your package depends on a private pypi repository this causes installs to fail. 
\r\n\n", "before_files": [{"content": "import hashlib\nimport io\n\nfrom pkginfo.distribution import HEADER_ATTRS\nfrom pkginfo.distribution import HEADER_ATTRS_2_0\n\nfrom poetry.utils._compat import Path\n\nfrom .dependency import Dependency\n\n\n# Patching pkginfo to support Metadata version 2.1 (PEP 566)\nHEADER_ATTRS.update(\n {\"2.1\": HEADER_ATTRS_2_0 + ((\"Provides-Extra\", \"provides_extra\", True),)}\n)\n\n\nclass FileDependency(Dependency):\n def __init__(\n self,\n name,\n path, # type: Path\n category=\"main\", # type: str\n optional=False, # type: bool\n base=None, # type: Path\n ):\n self._path = path\n self._base = base\n self._full_path = path\n\n if self._base and not self._path.is_absolute():\n self._full_path = self._base / self._path\n\n if not self._full_path.exists():\n raise ValueError(\"File {} does not exist\".format(self._path))\n\n if self._full_path.is_dir():\n raise ValueError(\"{} is a directory, expected a file\".format(self._path))\n\n super(FileDependency, self).__init__(\n name, \"*\", category=category, optional=optional, allows_prereleases=True\n )\n\n @property\n def path(self):\n return self._path\n\n @property\n def full_path(self):\n return self._full_path.resolve()\n\n def is_file(self):\n return True\n\n def hash(self):\n h = hashlib.sha256()\n with self._full_path.open(\"rb\") as fp:\n for content in iter(lambda: fp.read(io.DEFAULT_BUFFER_SIZE), b\"\"):\n h.update(content)\n\n return h.hexdigest()\n", "path": "poetry/packages/file_dependency.py"}, {"content": "from pkginfo.distribution import HEADER_ATTRS\nfrom pkginfo.distribution import HEADER_ATTRS_2_0\n\nfrom poetry.utils._compat import Path\nfrom poetry.utils.toml_file import TomlFile\n\nfrom .dependency import Dependency\n\n\n# Patching pkginfo to support Metadata version 2.1 (PEP 566)\nHEADER_ATTRS.update(\n {\"2.1\": HEADER_ATTRS_2_0 + ((\"Provides-Extra\", \"provides_extra\", True),)}\n)\n\n\nclass DirectoryDependency(Dependency):\n def __init__(\n self,\n name,\n path, # type: Path\n category=\"main\", # type: str\n optional=False, # type: bool\n base=None, # type: Path\n develop=True, # type: bool\n ):\n self._path = path\n self._base = base\n self._full_path = path\n self._develop = develop\n self._supports_poetry = False\n\n if self._base and not self._path.is_absolute():\n self._full_path = self._base / self._path\n\n if not self._full_path.exists():\n raise ValueError(\"Directory {} does not exist\".format(self._path))\n\n if self._full_path.is_file():\n raise ValueError(\"{} is a file, expected a directory\".format(self._path))\n\n # Checking content to determine actions\n setup = self._full_path / \"setup.py\"\n pyproject = TomlFile(self._full_path / \"pyproject.toml\")\n if pyproject.exists():\n pyproject_content = pyproject.read()\n self._supports_poetry = (\n \"tool\" in pyproject_content and \"poetry\" in pyproject_content[\"tool\"]\n )\n\n if not setup.exists() and not self._supports_poetry:\n raise ValueError(\n \"Directory {} does not seem to be a Python package\".format(\n self._full_path\n )\n )\n\n super(DirectoryDependency, self).__init__(\n name, \"*\", category=category, optional=optional, allows_prereleases=True\n )\n\n @property\n def path(self):\n return self._path\n\n @property\n def full_path(self):\n return self._full_path.resolve()\n\n @property\n def base(self):\n return self._base\n\n @property\n def develop(self):\n return self._develop\n\n def supports_poetry(self):\n return self._supports_poetry\n\n def is_directory(self):\n return True\n", "path": 
"poetry/packages/directory_dependency.py"}], "after_files": [{"content": "import hashlib\nimport io\n\nfrom pkginfo.distribution import HEADER_ATTRS\nfrom pkginfo.distribution import HEADER_ATTRS_2_0\n\nfrom poetry.utils._compat import Path\n\nfrom .dependency import Dependency\n\n\n# Patching pkginfo to support Metadata version 2.1 (PEP 566)\nHEADER_ATTRS.update(\n {\"2.1\": HEADER_ATTRS_2_0 + ((\"Provides-Extra\", \"provides_extra\", True),)}\n)\n\n\nclass FileDependency(Dependency):\n def __init__(\n self,\n name,\n path, # type: Path\n category=\"main\", # type: str\n optional=False, # type: bool\n base=None, # type: Path\n ):\n self._path = path\n self._base = base\n self._full_path = path\n\n if self._base and not self._path.is_absolute():\n self._full_path = self._base / self._path\n\n if not self._full_path.exists():\n raise ValueError(\"File {} does not exist\".format(self._path))\n\n if self._full_path.is_dir():\n raise ValueError(\"{} is a directory, expected a file\".format(self._path))\n\n super(FileDependency, self).__init__(\n name, \"*\", category=category, optional=optional, allows_prereleases=True\n )\n\n @property\n def path(self):\n return self._path\n\n @property\n def full_path(self):\n return self._full_path.resolve()\n\n @property\n def base_pep_508_name(self): # type: () -> str\n requirement = self.pretty_name\n\n if self.extras:\n requirement += \"[{}]\".format(\",\".join(self.extras))\n\n requirement += \" @ {}\".format(self._path)\n\n return requirement\n\n def is_file(self):\n return True\n\n def hash(self):\n h = hashlib.sha256()\n with self._full_path.open(\"rb\") as fp:\n for content in iter(lambda: fp.read(io.DEFAULT_BUFFER_SIZE), b\"\"):\n h.update(content)\n\n return h.hexdigest()\n", "path": "poetry/packages/file_dependency.py"}, {"content": "from pkginfo.distribution import HEADER_ATTRS\nfrom pkginfo.distribution import HEADER_ATTRS_2_0\n\nfrom poetry.utils._compat import Path\nfrom poetry.utils.toml_file import TomlFile\n\nfrom .dependency import Dependency\n\n\n# Patching pkginfo to support Metadata version 2.1 (PEP 566)\nHEADER_ATTRS.update(\n {\"2.1\": HEADER_ATTRS_2_0 + ((\"Provides-Extra\", \"provides_extra\", True),)}\n)\n\n\nclass DirectoryDependency(Dependency):\n def __init__(\n self,\n name,\n path, # type: Path\n category=\"main\", # type: str\n optional=False, # type: bool\n base=None, # type: Path\n develop=True, # type: bool\n ):\n self._path = path\n self._base = base\n self._full_path = path\n self._develop = develop\n self._supports_poetry = False\n\n if self._base and not self._path.is_absolute():\n self._full_path = self._base / self._path\n\n if not self._full_path.exists():\n raise ValueError(\"Directory {} does not exist\".format(self._path))\n\n if self._full_path.is_file():\n raise ValueError(\"{} is a file, expected a directory\".format(self._path))\n\n # Checking content to determine actions\n setup = self._full_path / \"setup.py\"\n pyproject = TomlFile(self._full_path / \"pyproject.toml\")\n if pyproject.exists():\n pyproject_content = pyproject.read()\n self._supports_poetry = (\n \"tool\" in pyproject_content and \"poetry\" in pyproject_content[\"tool\"]\n )\n\n if not setup.exists() and not self._supports_poetry:\n raise ValueError(\n \"Directory {} does not seem to be a Python package\".format(\n self._full_path\n )\n )\n\n super(DirectoryDependency, self).__init__(\n name, \"*\", category=category, optional=optional, allows_prereleases=True\n )\n\n @property\n def path(self):\n return self._path\n\n @property\n def 
full_path(self):\n return self._full_path.resolve()\n\n @property\n def base(self):\n return self._base\n\n @property\n def develop(self):\n return self._develop\n\n @property\n def base_pep_508_name(self): # type: () -> str\n requirement = self.pretty_name\n\n if self.extras:\n requirement += \"[{}]\".format(\",\".join(self.extras))\n\n requirement += \" @ {}\".format(self._path)\n\n return requirement\n\n def supports_poetry(self):\n return self._supports_poetry\n\n def is_directory(self):\n return True\n", "path": "poetry/packages/directory_dependency.py"}]} | 1,823 | 315 |
gh_patches_debug_9 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1038 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update the version number on the logo and footer.
For sprint 25, we will increment to 0.3.2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version='v0.3.1'
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version='v0.3.1'
\ No newline at end of file
+hdx_version='v0.3.2'
\ No newline at end of file
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version='v0.3.1'\n\\ No newline at end of file\n+hdx_version='v0.3.2'\n\\ No newline at end of file\n", "issue": "Update the version number on the logo and footer.\nFor sprint 25, we will increment to 0.3.2\n\n", "before_files": [{"content": "hdx_version='v0.3.1'", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version='v0.3.2'", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} | 307 | 120 |
gh_patches_debug_693 | rasdani/github-patches | git_diff | Azure__azure-cli-extensions-4911 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`az webpubsub client start` errors with `TypeError: As of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary`
- If the issue is to do with Azure CLI 2.0 in-particular, create an issue here at [Azure/azure-cli](https://github.com/Azure/azure-cli/issues)
### Related command
```console
$ az webpubsub client start --name twitch-pubsub --resource-group twitchRG --user user1 --hub-name hub1
The command failed with an unexpected error. Here is the traceback:
As of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary
Traceback (most recent call last):
File "/opt/az/lib/python3.10/site-packages/knack/cli.py", line 231, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 663, in execute
raise ex
File "/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 697, in _run_job
result = cmd_copy(params)
File "/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 333, in __call__
return self.handler(*args, **kwargs)
File "/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 121, in handler
return op(**command_args)
File "/home/anthony/.azure/cliextensions/webpubsub/azext_webpubsub/client.py", line 58, in start_client
asyncio.get_event_loop().run_until_complete(connect(token['url']))
File "/opt/az/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/home/anthony/.azure/cliextensions/webpubsub/azext_webpubsub/client.py", line 43, in connect
async with websockets.connect(url, subprotocols=['json.webpubsub.azure.v1']) as ws:
File "/home/anthony/.azure/cliextensions/webpubsub/websockets/client.py", line 517, in __aenter__
return await self
File "/home/anthony/.azure/cliextensions/webpubsub/websockets/client.py", line 535, in __await_impl__
transport, protocol = await self._create_connection()
File "/opt/az/lib/python3.10/asyncio/base_events.py", line 1089, in create_connection
transport, protocol = await self._create_connection_transport(
File "/opt/az/lib/python3.10/asyncio/base_events.py", line 1107, in _create_connection_transport
protocol = protocol_factory()
File "/home/anthony/.azure/cliextensions/webpubsub/websockets/client.py", line 69, in __init__
super().__init__(**kwargs)
File "/home/anthony/.azure/cliextensions/webpubsub/websockets/protocol.py", line 235, in __init__
self._drain_lock = asyncio.Lock(
File "/opt/az/lib/python3.10/asyncio/locks.py", line 78, in __init__
super().__init__(loop=loop)
File "/opt/az/lib/python3.10/asyncio/mixins.py", line 17, in __init__
raise TypeError(
TypeError: As of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary
```
### Extension name (the extension in question)
webpubsub
### Description of issue (in as much detail as possible)
appears this just needs an upgrade
I was able to work around by running (I'm in azure cloud shell):
```bash
/opt/az/bin/python3.10 -m pip install websockets --upgrade --target ~/.azure/cliextensions/webpubsub/
```
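The root cause is independent of the CLI itself: Python 3.10 removed the long-deprecated `loop` keyword from `asyncio` synchronization primitives, and `websockets` 8.x (which this extension pins) still passes it internally, as the traceback above shows. A minimal sketch of the behavior change, assuming Python 3.10+:
```python
import asyncio


async def main() -> None:
    asyncio.Lock()  # fine: the running loop is picked up implicitly on 3.10+

    loop = asyncio.get_running_loop()
    try:
        # Effectively what websockets 8.x does in Protocol.__init__.
        asyncio.Lock(loop=loop)
    except TypeError as exc:
        print(exc)  # "As of 3.10, the *loop* parameter was removed from Lock() ..."


asyncio.run(main())
```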
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/webpubsub/setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # --------------------------------------------------------------------------------------------
4 # Copyright (c) Microsoft Corporation. All rights reserved.
5 # Licensed under the MIT License. See License.txt in the project root for license information.
6 # --------------------------------------------------------------------------------------------
7
8
9 from codecs import open
10 from setuptools import setup, find_packages
11 try:
12 from azure_bdist_wheel import cmdclass
13 except ImportError:
14 from distutils import log as logger
15 logger.warn("Wheel is not available, disabling bdist_wheel hook")
16
17 # TODO: Confirm this is the right version number you want and it matches your
18 # HISTORY.rst entry.
19 VERSION = '1.1.0'
20
21 # The full list of classifiers is available at
22 # https://pypi.python.org/pypi?%3Aaction=list_classifiers
23 CLASSIFIERS = [
24 'Development Status :: 4 - Beta',
25 'Intended Audience :: Developers',
26 'Intended Audience :: System Administrators',
27 'Programming Language :: Python',
28 'Programming Language :: Python :: 3',
29 'Programming Language :: Python :: 3.6',
30 'Programming Language :: Python :: 3.7',
31 'Programming Language :: Python :: 3.8',
32 'License :: OSI Approved :: MIT License',
33 ]
34
35 # TODO: Add any additional SDK dependencies here
36 DEPENDENCIES = [
37 'websockets~=8.1'
38 ]
39
40 with open('README.rst', 'r', encoding='utf-8') as f:
41 README = f.read()
42 with open('HISTORY.rst', 'r', encoding='utf-8') as f:
43 HISTORY = f.read()
44
45 setup(
46 name='webpubsub',
47 version=VERSION,
48 description='Microsoft Azure Command-Line Tools Webpubsub Extension',
49 # TODO: Update author and email, if applicable
50 author='Microsoft Corporation',
51 author_email='[email protected]',
52 # TODO: change to your extension source code repo if the code will not be put in azure-cli-extensions repo
53 url='https://github.com/Azure/azure-cli-extensions/tree/main/src/webpubsub',
54 long_description=README + '\n\n' + HISTORY,
55 license='MIT',
56 classifiers=CLASSIFIERS,
57 packages=find_packages(),
58 install_requires=DEPENDENCIES,
59 package_data={'azext_webpubsub': ['azext_metadata.json']},
60 )
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/webpubsub/setup.py b/src/webpubsub/setup.py
--- a/src/webpubsub/setup.py
+++ b/src/webpubsub/setup.py
@@ -34,7 +34,7 @@
# TODO: Add any additional SDK dependencies here
DEPENDENCIES = [
- 'websockets~=8.1'
+ 'websockets>=8.1'
]
with open('README.rst', 'r', encoding='utf-8') as f:
| {"golden_diff": "diff --git a/src/webpubsub/setup.py b/src/webpubsub/setup.py\n--- a/src/webpubsub/setup.py\n+++ b/src/webpubsub/setup.py\n@@ -34,7 +34,7 @@\n \n # TODO: Add any additional SDK dependencies here\n DEPENDENCIES = [\n- 'websockets~=8.1'\n+ 'websockets>=8.1'\n ]\n \n with open('README.rst', 'r', encoding='utf-8') as f:\n", "issue": "`az webpubsub client start` errors with `TypeError: As of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary`\n- If the issue is to do with Azure CLI 2.0 in-particular, create an issue here at [Azure/azure-cli](https://github.com/Azure/azure-cli/issues)\r\n\r\n### Related command\r\n\r\n```console\r\n$ az webpubsub client start --name twitch-pubsub --resource-group twitchRG --user user1 --hub-name hub1\r\nThe command failed with an unexpected error. Here is the traceback:\r\nAs of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary\r\nTraceback (most recent call last):\r\n File \"/opt/az/lib/python3.10/site-packages/knack/cli.py\", line 231, in invoke\r\n cmd_result = self.invocation.execute(args)\r\n File \"/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py\", line 663, in execute\r\n raise ex\r\n File \"/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py\", line 726, in _run_jobs_serially\r\n results.append(self._run_job(expanded_arg, cmd_copy))\r\n File \"/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py\", line 697, in _run_job\r\n result = cmd_copy(params)\r\n File \"/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py\", line 333, in __call__\r\n return self.handler(*args, **kwargs)\r\n File \"/opt/az/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py\", line 121, in handler\r\n return op(**command_args)\r\n File \"/home/anthony/.azure/cliextensions/webpubsub/azext_webpubsub/client.py\", line 58, in start_client\r\n asyncio.get_event_loop().run_until_complete(connect(token['url']))\r\n File \"/opt/az/lib/python3.10/asyncio/base_events.py\", line 646, in run_until_complete\r\n return future.result()\r\n File \"/home/anthony/.azure/cliextensions/webpubsub/azext_webpubsub/client.py\", line 43, in connect\r\n async with websockets.connect(url, subprotocols=['json.webpubsub.azure.v1']) as ws:\r\n File \"/home/anthony/.azure/cliextensions/webpubsub/websockets/client.py\", line 517, in __aenter__\r\n return await self\r\n File \"/home/anthony/.azure/cliextensions/webpubsub/websockets/client.py\", line 535, in __await_impl__\r\n transport, protocol = await self._create_connection()\r\n File \"/opt/az/lib/python3.10/asyncio/base_events.py\", line 1089, in create_connection\r\n transport, protocol = await self._create_connection_transport(\r\n File \"/opt/az/lib/python3.10/asyncio/base_events.py\", line 1107, in _create_connection_transport\r\n protocol = protocol_factory()\r\n File \"/home/anthony/.azure/cliextensions/webpubsub/websockets/client.py\", line 69, in __init__\r\n super().__init__(**kwargs)\r\n File \"/home/anthony/.azure/cliextensions/webpubsub/websockets/protocol.py\", line 235, in __init__\r\n self._drain_lock = asyncio.Lock(\r\n File \"/opt/az/lib/python3.10/asyncio/locks.py\", line 78, in __init__\r\n super().__init__(loop=loop)\r\n File \"/opt/az/lib/python3.10/asyncio/mixins.py\", line 17, in __init__\r\n raise TypeError(\r\nTypeError: As of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary\r\n```\r\n\r\n### Extension name (the 
extension in question)\r\n\r\nwebpubsub\r\n\r\n### Description of issue (in as much detail as possible)\r\n\r\nappears this just needs an upgrade\r\n\r\nI was able to work around by running (I'm in azure cloud shell):\r\n\r\n```bash\r\n/opt/az/bin/python3.10 -m pip install websockets --upgrade --target ~/.azure/cliextensions/webpubsub/\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\ntry:\n from azure_bdist_wheel import cmdclass\nexcept ImportError:\n from distutils import log as logger\n logger.warn(\"Wheel is not available, disabling bdist_wheel hook\")\n\n# TODO: Confirm this is the right version number you want and it matches your\n# HISTORY.rst entry.\nVERSION = '1.1.0'\n\n# The full list of classifiers is available at\n# https://pypi.python.org/pypi?%3Aaction=list_classifiers\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'License :: OSI Approved :: MIT License',\n]\n\n# TODO: Add any additional SDK dependencies here\nDEPENDENCIES = [\n 'websockets~=8.1'\n]\n\nwith open('README.rst', 'r', encoding='utf-8') as f:\n README = f.read()\nwith open('HISTORY.rst', 'r', encoding='utf-8') as f:\n HISTORY = f.read()\n\nsetup(\n name='webpubsub',\n version=VERSION,\n description='Microsoft Azure Command-Line Tools Webpubsub Extension',\n # TODO: Update author and email, if applicable\n author='Microsoft Corporation',\n author_email='[email protected]',\n # TODO: change to your extension source code repo if the code will not be put in azure-cli-extensions repo\n url='https://github.com/Azure/azure-cli-extensions/tree/main/src/webpubsub',\n long_description=README + '\\n\\n' + HISTORY,\n license='MIT',\n classifiers=CLASSIFIERS,\n packages=find_packages(),\n install_requires=DEPENDENCIES,\n package_data={'azext_webpubsub': ['azext_metadata.json']},\n)\n", "path": "src/webpubsub/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\ntry:\n from azure_bdist_wheel import cmdclass\nexcept ImportError:\n from distutils import log as logger\n logger.warn(\"Wheel is not available, disabling bdist_wheel hook\")\n\n# TODO: Confirm this is the right version number you want and it matches your\n# HISTORY.rst entry.\nVERSION = '1.1.0'\n\n# The full list of classifiers is available at\n# https://pypi.python.org/pypi?%3Aaction=list_classifiers\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'License :: OSI Approved :: MIT License',\n]\n\n# TODO: Add any additional SDK dependencies here\nDEPENDENCIES = [\n 'websockets>=8.1'\n]\n\nwith open('README.rst', 'r', encoding='utf-8') as f:\n README = f.read()\nwith open('HISTORY.rst', 'r', encoding='utf-8') as f:\n HISTORY = f.read()\n\nsetup(\n name='webpubsub',\n version=VERSION,\n description='Microsoft Azure Command-Line Tools Webpubsub Extension',\n # TODO: Update author and email, if applicable\n author='Microsoft Corporation',\n author_email='[email protected]',\n # TODO: change to your extension source code repo if the code will not be put in azure-cli-extensions repo\n url='https://github.com/Azure/azure-cli-extensions/tree/main/src/webpubsub',\n long_description=README + '\\n\\n' + HISTORY,\n license='MIT',\n classifiers=CLASSIFIERS,\n packages=find_packages(),\n install_requires=DEPENDENCIES,\n package_data={'azext_webpubsub': ['azext_metadata.json']},\n)\n", "path": "src/webpubsub/setup.py"}]} | 1,829 | 105 |
gh_patches_debug_42493 | rasdani/github-patches | git_diff | PrefectHQ__prefect-3725 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow exporter arguments in Jupyter ExecuteNotebook task
## Current behavior
When running the `jupyter.jupyter.ExecuteNotebook` task with `output_format='html'` the default settings for the HTMLExporter are used. There is no way to pass arguments to this exporter.
## Proposed behavior
Allow passing arguments to the HTMLExporter.
## Implementation suggestion
Something like `html_exporter = nbconvert.HTMLExporter(**exporter_kwargs)` on the following line:
https://github.com/PrefectHQ/prefect/blob/master/src/prefect/tasks/jupyter/jupyter.py#L65
## Example usecase
This allows you to exclude code cells, only showing their output, in the exported html document by passing the `exclude_input=True` argument to the exporter.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/tasks/jupyter/jupyter.py`
Content:
```
1 import nbconvert
2 import nbformat
3 import papermill as pm
4
5 from prefect import Task
6 from prefect.utilities.tasks import defaults_from_attrs
7
8
9 class ExecuteNotebook(Task):
10 """
11 Task for running Jupyter Notebooks.
12 In order to parametrize the notebook, you need to mark the parameters cell as described in
13 the papermill documentation: https://papermill.readthedocs.io/en/latest/usage-parameterize.html
14
15 Args:
16 - path (string, optional): path to fetch the notebook from.
17 Can be a cloud storage path.
18 Can also be provided post-initialization by calling this task instance
19 - parameters (dict, optional): dictionary of parameters to use for the notebook
20 Can also be provided at runtime
21 - output_format (str, optional): Notebook output format.
22 Currently supported: json, html (default: json)
23 - kernel_name (string, optional): kernel name to run the notebook with.
24 If not provided, the default kernel will be used.
25 - **kwargs: additional keyword arguments to pass to the Task constructor
26 """
27
28 def __init__(
29 self,
30 path: str = None,
31 parameters: dict = None,
32 output_format: str = "json",
33 kernel_name: str = None,
34 **kwargs
35 ):
36 self.path = path
37 self.parameters = parameters
38 self.output_format = output_format
39 self.kernel_name = kernel_name
40 super().__init__(**kwargs)
41
42 @defaults_from_attrs("path", "parameters", "output_format")
43 def run(
44 self,
45 path: str = None,
46 parameters: dict = None,
47 output_format: str = None,
48 ) -> str:
49 """
50 Run a Jupyter notebook and output as HTML or JSON
51
52 Args:
53 - path (string, optional): path to fetch the notebook from; can also be
54 a cloud storage path
55 - parameters (dict, optional): dictionary of parameters to use for the notebook
56 - output_format (str, optional): Notebook output format.
57 Currently supported: json, html (default: json)
58 """
59 nb: nbformat.NotebookNode = pm.execute_notebook(
60 path, "-", parameters=parameters, kernel_name=self.kernel_name
61 )
62 if output_format == "json":
63 return nbformat.writes(nb)
64 if output_format == "html":
65 html_exporter = nbconvert.HTMLExporter()
66 (body, resources) = html_exporter.from_notebook_node(nb)
67 return body
68
69 raise NotImplementedError("Notebook output %s not supported", output_format)
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/tasks/jupyter/jupyter.py b/src/prefect/tasks/jupyter/jupyter.py
--- a/src/prefect/tasks/jupyter/jupyter.py
+++ b/src/prefect/tasks/jupyter/jupyter.py
@@ -18,8 +18,12 @@
Can also be provided post-initialization by calling this task instance
- parameters (dict, optional): dictionary of parameters to use for the notebook
Can also be provided at runtime
- - output_format (str, optional): Notebook output format.
- Currently supported: json, html (default: json)
+ - output_format (str, optional): Notebook output format, should be a valid
+ nbconvert Exporter name. 'json' is treated as 'notebook'.
+ Valid exporter names: asciidoc, custom, html, latex, markdown,
+ notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)
+ - exporter_kwargs (dict, optional): The arguments used for initializing
+ the exporter.
- kernel_name (string, optional): kernel name to run the notebook with.
If not provided, the default kernel will be used.
- **kwargs: additional keyword arguments to pass to the Task constructor
@@ -29,7 +33,8 @@
self,
path: str = None,
parameters: dict = None,
- output_format: str = "json",
+ output_format: str = "notebook",
+ exporter_kwargs: dict = None,
kernel_name: str = None,
**kwargs
):
@@ -37,33 +42,40 @@
self.parameters = parameters
self.output_format = output_format
self.kernel_name = kernel_name
+ self.exporter_kwargs = exporter_kwargs
super().__init__(**kwargs)
- @defaults_from_attrs("path", "parameters", "output_format")
+ @defaults_from_attrs("path", "parameters", "output_format", "exporter_kwargs")
def run(
self,
path: str = None,
parameters: dict = None,
output_format: str = None,
+ exporter_kwargs: dict = None,
) -> str:
"""
- Run a Jupyter notebook and output as HTML or JSON
+ Run a Jupyter notebook and output as HTML, notebook, or other formats.
Args:
- path (string, optional): path to fetch the notebook from; can also be
a cloud storage path
- parameters (dict, optional): dictionary of parameters to use for the notebook
- - output_format (str, optional): Notebook output format.
- Currently supported: json, html (default: json)
+ - output_format (str, optional): Notebook output format, should be a valid
+ nbconvert Exporter name. 'json' is treated as 'notebook'.
+ Valid exporter names: asciidoc, custom, html, latex, markdown,
+ notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)
+ - exporter_kwargs (dict, optional): The arguments used for initializing
+ the exporter.
"""
nb: nbformat.NotebookNode = pm.execute_notebook(
path, "-", parameters=parameters, kernel_name=self.kernel_name
)
if output_format == "json":
- return nbformat.writes(nb)
- if output_format == "html":
- html_exporter = nbconvert.HTMLExporter()
- (body, resources) = html_exporter.from_notebook_node(nb)
- return body
+ output_format = "notebook"
- raise NotImplementedError("Notebook output %s not supported", output_format)
+ if exporter_kwargs is None:
+ exporter_kwargs = {}
+
+ exporter = nbconvert.get_exporter(output_format)
+ body, resources = nbconvert.export(exporter, nb, **exporter_kwargs)
+ return body
| {"golden_diff": "diff --git a/src/prefect/tasks/jupyter/jupyter.py b/src/prefect/tasks/jupyter/jupyter.py\n--- a/src/prefect/tasks/jupyter/jupyter.py\n+++ b/src/prefect/tasks/jupyter/jupyter.py\n@@ -18,8 +18,12 @@\n Can also be provided post-initialization by calling this task instance\n - parameters (dict, optional): dictionary of parameters to use for the notebook\n Can also be provided at runtime\n- - output_format (str, optional): Notebook output format.\n- Currently supported: json, html (default: json)\n+ - output_format (str, optional): Notebook output format, should be a valid\n+ nbconvert Exporter name. 'json' is treated as 'notebook'.\n+ Valid exporter names: asciidoc, custom, html, latex, markdown,\n+ notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)\n+ - exporter_kwargs (dict, optional): The arguments used for initializing\n+ the exporter.\n - kernel_name (string, optional): kernel name to run the notebook with.\n If not provided, the default kernel will be used.\n - **kwargs: additional keyword arguments to pass to the Task constructor\n@@ -29,7 +33,8 @@\n self,\n path: str = None,\n parameters: dict = None,\n- output_format: str = \"json\",\n+ output_format: str = \"notebook\",\n+ exporter_kwargs: dict = None,\n kernel_name: str = None,\n **kwargs\n ):\n@@ -37,33 +42,40 @@\n self.parameters = parameters\n self.output_format = output_format\n self.kernel_name = kernel_name\n+ self.exporter_kwargs = exporter_kwargs\n super().__init__(**kwargs)\n \n- @defaults_from_attrs(\"path\", \"parameters\", \"output_format\")\n+ @defaults_from_attrs(\"path\", \"parameters\", \"output_format\", \"exporter_kwargs\")\n def run(\n self,\n path: str = None,\n parameters: dict = None,\n output_format: str = None,\n+ exporter_kwargs: dict = None,\n ) -> str:\n \"\"\"\n- Run a Jupyter notebook and output as HTML or JSON\n+ Run a Jupyter notebook and output as HTML, notebook, or other formats.\n \n Args:\n - path (string, optional): path to fetch the notebook from; can also be\n a cloud storage path\n - parameters (dict, optional): dictionary of parameters to use for the notebook\n- - output_format (str, optional): Notebook output format.\n- Currently supported: json, html (default: json)\n+ - output_format (str, optional): Notebook output format, should be a valid\n+ nbconvert Exporter name. 'json' is treated as 'notebook'.\n+ Valid exporter names: asciidoc, custom, html, latex, markdown,\n+ notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)\n+ - exporter_kwargs (dict, optional): The arguments used for initializing\n+ the exporter.\n \"\"\"\n nb: nbformat.NotebookNode = pm.execute_notebook(\n path, \"-\", parameters=parameters, kernel_name=self.kernel_name\n )\n if output_format == \"json\":\n- return nbformat.writes(nb)\n- if output_format == \"html\":\n- html_exporter = nbconvert.HTMLExporter()\n- (body, resources) = html_exporter.from_notebook_node(nb)\n- return body\n+ output_format = \"notebook\"\n \n- raise NotImplementedError(\"Notebook output %s not supported\", output_format)\n+ if exporter_kwargs is None:\n+ exporter_kwargs = {}\n+\n+ exporter = nbconvert.get_exporter(output_format)\n+ body, resources = nbconvert.export(exporter, nb, **exporter_kwargs)\n+ return body\n", "issue": "Allow exporter arguments in Jupyter ExecuteNotebook task\n## Current behavior\r\n\r\nWhen running the `jupyter.jupyter.ExecuteNotebook` task with `output_format='html'` the default settings for the HTMLExporter are used. 
There is no way to pass arguments to this exporter.\r\n\r\n## Proposed behavior\r\n\r\nAllow passing arguments to the HTMLExporter.\r\n\r\n## Implementation suggestion\r\n\r\nSomething like `html_exporter = nbconvert.HTMLExporter(**exporter_kwargs)` on the following line:\r\nhttps://github.com/PrefectHQ/prefect/blob/master/src/prefect/tasks/jupyter/jupyter.py#L65\r\n\r\n## Example usecase\r\n\r\nThis allows you to exclude code cells, only showing their output, in the exported html document by passing the `exclude_input=True` argument to the exporter.\n", "before_files": [{"content": "import nbconvert\nimport nbformat\nimport papermill as pm\n\nfrom prefect import Task\nfrom prefect.utilities.tasks import defaults_from_attrs\n\n\nclass ExecuteNotebook(Task):\n \"\"\"\n Task for running Jupyter Notebooks.\n In order to parametrize the notebook, you need to mark the parameters cell as described in\n the papermill documentation: https://papermill.readthedocs.io/en/latest/usage-parameterize.html\n\n Args:\n - path (string, optional): path to fetch the notebook from.\n Can be a cloud storage path.\n Can also be provided post-initialization by calling this task instance\n - parameters (dict, optional): dictionary of parameters to use for the notebook\n Can also be provided at runtime\n - output_format (str, optional): Notebook output format.\n Currently supported: json, html (default: json)\n - kernel_name (string, optional): kernel name to run the notebook with.\n If not provided, the default kernel will be used.\n - **kwargs: additional keyword arguments to pass to the Task constructor\n \"\"\"\n\n def __init__(\n self,\n path: str = None,\n parameters: dict = None,\n output_format: str = \"json\",\n kernel_name: str = None,\n **kwargs\n ):\n self.path = path\n self.parameters = parameters\n self.output_format = output_format\n self.kernel_name = kernel_name\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"path\", \"parameters\", \"output_format\")\n def run(\n self,\n path: str = None,\n parameters: dict = None,\n output_format: str = None,\n ) -> str:\n \"\"\"\n Run a Jupyter notebook and output as HTML or JSON\n\n Args:\n - path (string, optional): path to fetch the notebook from; can also be\n a cloud storage path\n - parameters (dict, optional): dictionary of parameters to use for the notebook\n - output_format (str, optional): Notebook output format.\n Currently supported: json, html (default: json)\n \"\"\"\n nb: nbformat.NotebookNode = pm.execute_notebook(\n path, \"-\", parameters=parameters, kernel_name=self.kernel_name\n )\n if output_format == \"json\":\n return nbformat.writes(nb)\n if output_format == \"html\":\n html_exporter = nbconvert.HTMLExporter()\n (body, resources) = html_exporter.from_notebook_node(nb)\n return body\n\n raise NotImplementedError(\"Notebook output %s not supported\", output_format)\n", "path": "src/prefect/tasks/jupyter/jupyter.py"}], "after_files": [{"content": "import nbconvert\nimport nbformat\nimport papermill as pm\n\nfrom prefect import Task\nfrom prefect.utilities.tasks import defaults_from_attrs\n\n\nclass ExecuteNotebook(Task):\n \"\"\"\n Task for running Jupyter Notebooks.\n In order to parametrize the notebook, you need to mark the parameters cell as described in\n the papermill documentation: https://papermill.readthedocs.io/en/latest/usage-parameterize.html\n\n Args:\n - path (string, optional): path to fetch the notebook from.\n Can be a cloud storage path.\n Can also be provided post-initialization by calling this task instance\n - 
parameters (dict, optional): dictionary of parameters to use for the notebook\n Can also be provided at runtime\n - output_format (str, optional): Notebook output format, should be a valid\n nbconvert Exporter name. 'json' is treated as 'notebook'.\n Valid exporter names: asciidoc, custom, html, latex, markdown,\n notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)\n - exporter_kwargs (dict, optional): The arguments used for initializing\n the exporter.\n - kernel_name (string, optional): kernel name to run the notebook with.\n If not provided, the default kernel will be used.\n - **kwargs: additional keyword arguments to pass to the Task constructor\n \"\"\"\n\n def __init__(\n self,\n path: str = None,\n parameters: dict = None,\n output_format: str = \"notebook\",\n exporter_kwargs: dict = None,\n kernel_name: str = None,\n **kwargs\n ):\n self.path = path\n self.parameters = parameters\n self.output_format = output_format\n self.kernel_name = kernel_name\n self.exporter_kwargs = exporter_kwargs\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"path\", \"parameters\", \"output_format\", \"exporter_kwargs\")\n def run(\n self,\n path: str = None,\n parameters: dict = None,\n output_format: str = None,\n exporter_kwargs: dict = None,\n ) -> str:\n \"\"\"\n Run a Jupyter notebook and output as HTML, notebook, or other formats.\n\n Args:\n - path (string, optional): path to fetch the notebook from; can also be\n a cloud storage path\n - parameters (dict, optional): dictionary of parameters to use for the notebook\n - output_format (str, optional): Notebook output format, should be a valid\n nbconvert Exporter name. 'json' is treated as 'notebook'.\n Valid exporter names: asciidoc, custom, html, latex, markdown,\n notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)\n - exporter_kwargs (dict, optional): The arguments used for initializing\n the exporter.\n \"\"\"\n nb: nbformat.NotebookNode = pm.execute_notebook(\n path, \"-\", parameters=parameters, kernel_name=self.kernel_name\n )\n if output_format == \"json\":\n output_format = \"notebook\"\n\n if exporter_kwargs is None:\n exporter_kwargs = {}\n\n exporter = nbconvert.get_exporter(output_format)\n body, resources = nbconvert.export(exporter, nb, **exporter_kwargs)\n return body\n", "path": "src/prefect/tasks/jupyter/jupyter.py"}]} | 1,100 | 857 |
gh_patches_debug_19122 | rasdani/github-patches | git_diff | aimhubio__aim-1917 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pytorch track_gradients_dists errors out if some parameters don't have gradients
## 🐛 Bug
When collecting gradients for each layer weight of a model, the function `get_model_layers` errors out if some model parameters don't have gradients.
### Expected behavior
Ignore weights if grad is None.
### Environment
- Aim Version (e.g., 3.11.1)
- Python version 3.10
- pip version 22.0
- Any OS
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aim/sdk/adapters/pytorch.py`
Content:
```
1 def track_params_dists(model, run):
2 from aim import Distribution
3 data_hist = get_model_layers(model, 'data')
4
5 for name, params in data_hist.items():
6 if 'weight' in params:
7 run.track(
8 Distribution(params['weight']),
9 name=name,
10 context={
11 'type': 'data',
12 'params': 'weights',
13 }
14 )
15 if 'bias' in params:
16 run.track(
17 Distribution(params['bias']),
18 name=name,
19 context={
20 'type': 'data',
21 'params': 'biases',
22 }
23 )
24
25
26 def track_gradients_dists(model, run):
27 from aim import Distribution
28 grad_hist = get_model_layers(model, 'grad')
29
30 for name, params in grad_hist.items():
31 if 'weight' in params:
32 run.track(
33 Distribution(params['weight']),
34 name=name,
35 context={
36 'type': 'gradients',
37 'params': 'weights',
38 }
39 )
40 if 'bias' in params:
41 run.track(
42 Distribution(params['bias']),
43 name=name,
44 context={
45 'type': 'gradients',
46 'params': 'biases',
47 }
48 )
49
50
51 def get_model_layers(model, dt, parent_name=None):
52 layers = {}
53 for name, m in model.named_children():
54 layer_name = '{}__{}'.format(parent_name, name) \
55 if parent_name \
56 else name
57 layer_name += '.{}'.format(type(m).__name__)
58
59 if len(list(m.named_children())):
60 layers.update(get_model_layers(m, dt, layer_name))
61 else:
62 layers[layer_name] = {}
63 if hasattr(m, 'weight') \
64 and m.weight is not None \
65 and hasattr(m.weight, dt):
66 layers[layer_name]['weight'] = get_pt_tensor(getattr(m.weight, dt)).numpy()
67
68 if hasattr(m, 'bias') \
69 and m.bias is not None \
70 and hasattr(m.bias, dt):
71 layers[layer_name]['bias'] = get_pt_tensor(getattr(m.bias, dt)).numpy()
72
73 return layers
74
75
76 # Move tensor from GPU to CPU
77 def get_pt_tensor(t):
78 return t.cpu() if hasattr(t, 'is_cuda') and t.is_cuda else t
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aim/sdk/adapters/pytorch.py b/aim/sdk/adapters/pytorch.py
--- a/aim/sdk/adapters/pytorch.py
+++ b/aim/sdk/adapters/pytorch.py
@@ -60,15 +60,17 @@
layers.update(get_model_layers(m, dt, layer_name))
else:
layers[layer_name] = {}
- if hasattr(m, 'weight') \
- and m.weight is not None \
- and hasattr(m.weight, dt):
- layers[layer_name]['weight'] = get_pt_tensor(getattr(m.weight, dt)).numpy()
+ weight = None
+ if hasattr(m, 'weight') and m.weight is not None:
+ weight = getattr(m.weight, dt, None)
+ if weight is not None:
+ layers[layer_name]['weight'] = get_pt_tensor(weight).numpy()
- if hasattr(m, 'bias') \
- and m.bias is not None \
- and hasattr(m.bias, dt):
- layers[layer_name]['bias'] = get_pt_tensor(getattr(m.bias, dt)).numpy()
+ bias = None
+ if hasattr(m, 'bias') and m.bias is not None:
+ bias = getattr(m.bias, dt, None)
+ if bias is not None:
+ layers[layer_name]['bias'] = get_pt_tensor(bias).numpy()
return layers
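
To see the guard in action, here is a small self-contained sketch (hypothetical model, assuming PyTorch is installed): a frozen layer's weight never receives a `.grad` tensor, so the old unconditional `.numpy()` call would raise, while the patched `getattr(..., dt, None)`-style check simply skips it.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
model[0].weight.requires_grad_(False)  # freeze the first layer's weight

model(torch.randn(1, 4)).sum().backward()

for name, m in model.named_children():
    grad = getattr(m.weight, "grad", None)  # may legitimately be None
    if grad is not None:
        print(name, grad.numpy().shape)
    else:
        print(name, "no gradient -> skipped instead of crashing")
```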
| {"golden_diff": "diff --git a/aim/sdk/adapters/pytorch.py b/aim/sdk/adapters/pytorch.py\n--- a/aim/sdk/adapters/pytorch.py\n+++ b/aim/sdk/adapters/pytorch.py\n@@ -60,15 +60,17 @@\n layers.update(get_model_layers(m, dt, layer_name))\n else:\n layers[layer_name] = {}\n- if hasattr(m, 'weight') \\\n- and m.weight is not None \\\n- and hasattr(m.weight, dt):\n- layers[layer_name]['weight'] = get_pt_tensor(getattr(m.weight, dt)).numpy()\n+ weight = None\n+ if hasattr(m, 'weight') and m.weight is not None:\n+ weight = getattr(m.weight, dt, None)\n+ if weight is not None:\n+ layers[layer_name]['weight'] = get_pt_tensor(weight).numpy()\n \n- if hasattr(m, 'bias') \\\n- and m.bias is not None \\\n- and hasattr(m.bias, dt):\n- layers[layer_name]['bias'] = get_pt_tensor(getattr(m.bias, dt)).numpy()\n+ bias = None\n+ if hasattr(m, 'bias') and m.bias is not None:\n+ bias = getattr(m.bias, dt, None)\n+ if bias is not None:\n+ layers[layer_name]['bias'] = get_pt_tensor(bias).numpy()\n \n return layers\n", "issue": "Pytorch track_gradients_dists errors out if some parameters don't have gradients\n## \ud83d\udc1b Bug\r\n\r\nWhen collecting gradients for each layer weight of a model, the function `get_model_layers` errors out if some model parameters don't have gradients.\r\n\r\n### Expected behavior\r\n\r\nIgnore weights if grad is None.\r\n\r\n### Environment\r\n\r\n- Aim Version (e.g., 3.11.1)\r\n- Python version 3.10\r\n- pip version 22.0\r\n- Any OS\r\n\r\n\n", "before_files": [{"content": "def track_params_dists(model, run):\n from aim import Distribution\n data_hist = get_model_layers(model, 'data')\n\n for name, params in data_hist.items():\n if 'weight' in params:\n run.track(\n Distribution(params['weight']),\n name=name,\n context={\n 'type': 'data',\n 'params': 'weights',\n }\n )\n if 'bias' in params:\n run.track(\n Distribution(params['bias']),\n name=name,\n context={\n 'type': 'data',\n 'params': 'biases',\n }\n )\n\n\ndef track_gradients_dists(model, run):\n from aim import Distribution\n grad_hist = get_model_layers(model, 'grad')\n\n for name, params in grad_hist.items():\n if 'weight' in params:\n run.track(\n Distribution(params['weight']),\n name=name,\n context={\n 'type': 'gradients',\n 'params': 'weights',\n }\n )\n if 'bias' in params:\n run.track(\n Distribution(params['bias']),\n name=name,\n context={\n 'type': 'gradients',\n 'params': 'biases',\n }\n )\n\n\ndef get_model_layers(model, dt, parent_name=None):\n layers = {}\n for name, m in model.named_children():\n layer_name = '{}__{}'.format(parent_name, name) \\\n if parent_name \\\n else name\n layer_name += '.{}'.format(type(m).__name__)\n\n if len(list(m.named_children())):\n layers.update(get_model_layers(m, dt, layer_name))\n else:\n layers[layer_name] = {}\n if hasattr(m, 'weight') \\\n and m.weight is not None \\\n and hasattr(m.weight, dt):\n layers[layer_name]['weight'] = get_pt_tensor(getattr(m.weight, dt)).numpy()\n\n if hasattr(m, 'bias') \\\n and m.bias is not None \\\n and hasattr(m.bias, dt):\n layers[layer_name]['bias'] = get_pt_tensor(getattr(m.bias, dt)).numpy()\n\n return layers\n\n\n# Move tensor from GPU to CPU\ndef get_pt_tensor(t):\n return t.cpu() if hasattr(t, 'is_cuda') and t.is_cuda else t\n", "path": "aim/sdk/adapters/pytorch.py"}], "after_files": [{"content": "def track_params_dists(model, run):\n from aim import Distribution\n data_hist = get_model_layers(model, 'data')\n\n for name, params in data_hist.items():\n if 'weight' in params:\n run.track(\n Distribution(params['weight']),\n 
name=name,\n context={\n 'type': 'data',\n 'params': 'weights',\n }\n )\n if 'bias' in params:\n run.track(\n Distribution(params['bias']),\n name=name,\n context={\n 'type': 'data',\n 'params': 'biases',\n }\n )\n\n\ndef track_gradients_dists(model, run):\n from aim import Distribution\n grad_hist = get_model_layers(model, 'grad')\n\n for name, params in grad_hist.items():\n if 'weight' in params:\n run.track(\n Distribution(params['weight']),\n name=name,\n context={\n 'type': 'gradients',\n 'params': 'weights',\n }\n )\n if 'bias' in params:\n run.track(\n Distribution(params['bias']),\n name=name,\n context={\n 'type': 'gradients',\n 'params': 'biases',\n }\n )\n\n\ndef get_model_layers(model, dt, parent_name=None):\n layers = {}\n for name, m in model.named_children():\n layer_name = '{}__{}'.format(parent_name, name) \\\n if parent_name \\\n else name\n layer_name += '.{}'.format(type(m).__name__)\n\n if len(list(m.named_children())):\n layers.update(get_model_layers(m, dt, layer_name))\n else:\n layers[layer_name] = {}\n weight = None\n if hasattr(m, 'weight') and m.weight is not None:\n weight = getattr(m.weight, dt, None)\n if weight is not None:\n layers[layer_name]['weight'] = get_pt_tensor(weight).numpy()\n\n bias = None\n if hasattr(m, 'bias') and m.bias is not None:\n bias = getattr(m.bias, dt, None)\n if bias is not None:\n layers[layer_name]['bias'] = get_pt_tensor(bias).numpy()\n\n return layers\n\n\n# Move tensor from GPU to CPU\ndef get_pt_tensor(t):\n return t.cpu() if hasattr(t, 'is_cuda') and t.is_cuda else t\n", "path": "aim/sdk/adapters/pytorch.py"}]} | 996 | 302 |
gh_patches_debug_9161 | rasdani/github-patches | git_diff | ciudadanointeligente__votainteligente-portal-electoral-765 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sort Proposals
By:
- [x] most recently created
- [x] created by an organization
- [x] with the most hearts.
And by *default* it can be:
- Random
- By hearts, local meeting, is an organization.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `popular_proposal/filters.py`
Content:
```
1 # coding=utf-8
2 from django_filters import (FilterSet,
3 ChoiceFilter,
4 ModelChoiceFilter,
5 )
6 from popular_proposal.models import PopularProposal
7 from popular_proposal.forms.form_texts import TOPIC_CHOICES
8 from elections.models import Area
9 from django.conf import settings
10 from constance import config
11 from django.forms import CharField, Form, ChoiceField
12 from haystack.query import SearchQuerySet
13
14
15 def filterable_areas(request):
16 if settings.FILTERABLE_AREAS_TYPE:
17 return Area.public.filter(classification__in=settings.FILTERABLE_AREAS_TYPE)
18 return Area.public.all()
19
20
21 class TextSearchForm(Form):
22 text = CharField(label=u'Qué buscas?', required=False)
23 order_by = ChoiceField(required=False,
24 label=u"Ordenar por",
25 choices=[('', u'Por apoyos'),
26 ('-created', u'Últimas primero'),
27 ])
28
29 def full_clean(self):
30 super(TextSearchForm, self).full_clean()
31 cleaned_data = {}
32 for k in self.cleaned_data:
33 v = self.cleaned_data.get(k, '')
34
35 if (isinstance(v, unicode) or isinstance(v, str)) and not v.strip():
36 cleaned_data[k] = None
37 self.cleaned_data.update(cleaned_data)
38
39
40 class ProposalWithoutAreaFilter(FilterSet):
41 clasification = ChoiceFilter(choices=TOPIC_CHOICES,
42 empty_label=u"Selecciona",
43 label=u"Clasificación")
44
45 def __init__(self,
46 data=None,
47 queryset=None,
48 prefix=None,
49 strict=None,
50 **kwargs):
51 self.area = kwargs.pop('area', None)
52 if self.area is None and data is not None:
53 self.area = data.get('area', None)
54 if self.area:
55 self.area = Area.objects.get(id=self.area)
56 if queryset is None:
57 queryset = PopularProposal.ordered.all()
58 if self.area is not None:
59 queryset = queryset.filter(area=self.area)
60 super(ProposalWithoutAreaFilter, self).__init__(data=data,
61 queryset=queryset,
62 prefix=prefix,
63 strict=strict)
64
65 @property
66 def form(self):
67 super(ProposalWithoutAreaFilter, self).form
68 is_filled_search = False
69 for k in self.data:
70 i = self.data[k]
71 is_filled_search = True
72 self._form.fields[k].initial = i
73 self._form.is_filled_search = is_filled_search
74 return self._form
75
76 @property
77 def qs(self):
78
79 super(ProposalWithoutAreaFilter, self).qs
80 self._qs = self._qs.exclude(area__id=config.HIDDEN_AREAS)
81 if not self.form.is_valid():
82 return self._qs
83 order_by = self.form.cleaned_data.get('order_by', None)
84 if order_by:
85 self._qs = self._qs.order_by(order_by)
86 else:
87 self._qs = self._qs.by_likers()
88 text = self.form.cleaned_data.get('text', '')
89
90 if text:
91 pks = []
92 text_search = SearchQuerySet().models(self._meta.model).auto_query(text)
93 for r in text_search:
94 pks.append(r.pk)
95 return self._qs.filter(id__in=pks)
96 return self._qs
97
98 class Meta:
99 model = PopularProposal
100 fields = ['clasification', ]
101 form = TextSearchForm
102
103
104 def possible_areas(request):
105 as_ = Area.public.all()
106 return as_
107
108
109 class ProposalWithAreaFilter(ProposalWithoutAreaFilter):
110 area = ModelChoiceFilter(queryset=possible_areas, label="Comuna donde fue generada")
111
112
113 class ProposalGeneratedAtFilter(ProposalWithoutAreaFilter):
114 generated_at = ModelChoiceFilter(queryset=filterable_areas,
115 empty_label=u"Selecciona",
116 label="Comuna donde fue generada")
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/popular_proposal/filters.py b/popular_proposal/filters.py
--- a/popular_proposal/filters.py
+++ b/popular_proposal/filters.py
@@ -24,6 +24,8 @@
label=u"Ordenar por",
choices=[('', u'Por apoyos'),
('-created', u'Últimas primero'),
+ ('-proposer__profile__is_organization', u'De organizaciones primero'),
+ ('-is_local_meeting', u'Encuentros locales primero'),
])
def full_clean(self):
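
Worth noting about the two new choices: ordering descending on a boolean column puts `True` rows first on common backends such as PostgreSQL, which is what floats organizations and local meetings to the top. A tiny pure-Python stand-in for the same semantics (the rows are made up):

```python
rows = [
    {"title": "A", "is_local_meeting": False},
    {"title": "B", "is_local_meeting": True},
]
# Equivalent of order_by('-is_local_meeting'): True sorts before False
# when descending, so local meetings (or organizations) come first.
rows.sort(key=lambda r: r["is_local_meeting"], reverse=True)
print([r["title"] for r in rows])  # ['B', 'A']
```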
| {"golden_diff": "diff --git a/popular_proposal/filters.py b/popular_proposal/filters.py\n--- a/popular_proposal/filters.py\n+++ b/popular_proposal/filters.py\n@@ -24,6 +24,8 @@\n label=u\"Ordenar por\",\n choices=[('', u'Por apoyos'),\n ('-created', u'\u00daltimas primero'),\n+ ('-proposer__profile__is_organization', u'De organizaciones primero'),\n+ ('-is_local_meeting', u'Encuentros locales primero'),\n ])\n \n def full_clean(self):\n", "issue": "Ordernar Propuestas\nPor:\r\n- [x] \u00faltimas creadas\r\n- [x] Creadas por organizaci\u00f3n\r\n- [x] Con m\u00e1s orazones.\r\n\r\nY por *defecto* puede ser:\r\n- Random\r\n- Por corazones, encuentro local, es organizaci\u00f3n.\n", "before_files": [{"content": "# coding=utf-8\nfrom django_filters import (FilterSet,\n ChoiceFilter,\n ModelChoiceFilter,\n )\nfrom popular_proposal.models import PopularProposal\nfrom popular_proposal.forms.form_texts import TOPIC_CHOICES\nfrom elections.models import Area\nfrom django.conf import settings\nfrom constance import config\nfrom django.forms import CharField, Form, ChoiceField\nfrom haystack.query import SearchQuerySet\n\n\ndef filterable_areas(request):\n if settings.FILTERABLE_AREAS_TYPE:\n return Area.public.filter(classification__in=settings.FILTERABLE_AREAS_TYPE)\n return Area.public.all()\n\n\nclass TextSearchForm(Form):\n text = CharField(label=u'Qu\u00e9 buscas?', required=False)\n order_by = ChoiceField(required=False,\n label=u\"Ordenar por\",\n choices=[('', u'Por apoyos'),\n ('-created', u'\u00daltimas primero'),\n ])\n\n def full_clean(self):\n super(TextSearchForm, self).full_clean()\n cleaned_data = {}\n for k in self.cleaned_data:\n v = self.cleaned_data.get(k, '')\n\n if (isinstance(v, unicode) or isinstance(v, str)) and not v.strip():\n cleaned_data[k] = None\n self.cleaned_data.update(cleaned_data)\n\n\nclass ProposalWithoutAreaFilter(FilterSet):\n clasification = ChoiceFilter(choices=TOPIC_CHOICES,\n empty_label=u\"Selecciona\",\n label=u\"Clasificaci\u00f3n\")\n\n def __init__(self,\n data=None,\n queryset=None,\n prefix=None,\n strict=None,\n **kwargs):\n self.area = kwargs.pop('area', None)\n if self.area is None and data is not None:\n self.area = data.get('area', None)\n if self.area:\n self.area = Area.objects.get(id=self.area)\n if queryset is None:\n queryset = PopularProposal.ordered.all()\n if self.area is not None:\n queryset = queryset.filter(area=self.area)\n super(ProposalWithoutAreaFilter, self).__init__(data=data,\n queryset=queryset,\n prefix=prefix,\n strict=strict)\n\n @property\n def form(self):\n super(ProposalWithoutAreaFilter, self).form\n is_filled_search = False\n for k in self.data:\n i = self.data[k]\n is_filled_search = True\n self._form.fields[k].initial = i\n self._form.is_filled_search = is_filled_search\n return self._form\n\n @property\n def qs(self):\n\n super(ProposalWithoutAreaFilter, self).qs\n self._qs = self._qs.exclude(area__id=config.HIDDEN_AREAS)\n if not self.form.is_valid():\n return self._qs\n order_by = self.form.cleaned_data.get('order_by', None)\n if order_by:\n self._qs = self._qs.order_by(order_by)\n else:\n self._qs = self._qs.by_likers()\n text = self.form.cleaned_data.get('text', '')\n\n if text:\n pks = []\n text_search = SearchQuerySet().models(self._meta.model).auto_query(text)\n for r in text_search:\n pks.append(r.pk)\n return self._qs.filter(id__in=pks)\n return self._qs\n\n class Meta:\n model = PopularProposal\n fields = ['clasification', ]\n form = TextSearchForm\n\n\ndef possible_areas(request):\n as_ = 
Area.public.all()\n return as_\n\n\nclass ProposalWithAreaFilter(ProposalWithoutAreaFilter):\n area = ModelChoiceFilter(queryset=possible_areas, label=\"Comuna donde fue generada\")\n\n\nclass ProposalGeneratedAtFilter(ProposalWithoutAreaFilter):\n generated_at = ModelChoiceFilter(queryset=filterable_areas,\n empty_label=u\"Selecciona\",\n label=\"Comuna donde fue generada\")\n", "path": "popular_proposal/filters.py"}], "after_files": [{"content": "# coding=utf-8\nfrom django_filters import (FilterSet,\n ChoiceFilter,\n ModelChoiceFilter,\n )\nfrom popular_proposal.models import PopularProposal\nfrom popular_proposal.forms.form_texts import TOPIC_CHOICES\nfrom elections.models import Area\nfrom django.conf import settings\nfrom constance import config\nfrom django.forms import CharField, Form, ChoiceField\nfrom haystack.query import SearchQuerySet\n\n\ndef filterable_areas(request):\n if settings.FILTERABLE_AREAS_TYPE:\n return Area.public.filter(classification__in=settings.FILTERABLE_AREAS_TYPE)\n return Area.public.all()\n\n\nclass TextSearchForm(Form):\n text = CharField(label=u'Qu\u00e9 buscas?', required=False)\n order_by = ChoiceField(required=False,\n label=u\"Ordenar por\",\n choices=[('', u'Por apoyos'),\n ('-created', u'\u00daltimas primero'),\n ('-proposer__profile__is_organization', u'De organizaciones primero'),\n ('-is_local_meeting', u'Encuentros locales primero'),\n ])\n\n def full_clean(self):\n super(TextSearchForm, self).full_clean()\n cleaned_data = {}\n for k in self.cleaned_data:\n v = self.cleaned_data.get(k, '')\n\n if (isinstance(v, unicode) or isinstance(v, str)) and not v.strip():\n cleaned_data[k] = None\n self.cleaned_data.update(cleaned_data)\n\n\nclass ProposalWithoutAreaFilter(FilterSet):\n clasification = ChoiceFilter(choices=TOPIC_CHOICES,\n empty_label=u\"Selecciona\",\n label=u\"Clasificaci\u00f3n\")\n\n def __init__(self,\n data=None,\n queryset=None,\n prefix=None,\n strict=None,\n **kwargs):\n self.area = kwargs.pop('area', None)\n if self.area is None and data is not None:\n self.area = data.get('area', None)\n if self.area:\n self.area = Area.objects.get(id=self.area)\n if queryset is None:\n queryset = PopularProposal.ordered.all()\n if self.area is not None:\n queryset = queryset.filter(area=self.area)\n super(ProposalWithoutAreaFilter, self).__init__(data=data,\n queryset=queryset,\n prefix=prefix,\n strict=strict)\n\n @property\n def form(self):\n super(ProposalWithoutAreaFilter, self).form\n is_filled_search = False\n for k in self.data:\n i = self.data[k]\n is_filled_search = True\n self._form.fields[k].initial = i\n self._form.is_filled_search = is_filled_search\n return self._form\n\n @property\n def qs(self):\n\n super(ProposalWithoutAreaFilter, self).qs\n self._qs = self._qs.exclude(area__id=config.HIDDEN_AREAS)\n if not self.form.is_valid():\n return self._qs\n order_by = self.form.cleaned_data.get('order_by', None)\n if order_by:\n self._qs = self._qs.order_by(order_by)\n else:\n self._qs = self._qs.by_likers()\n text = self.form.cleaned_data.get('text', '')\n\n if text:\n pks = []\n text_search = SearchQuerySet().models(self._meta.model).auto_query(text)\n for r in text_search:\n pks.append(r.pk)\n return self._qs.filter(id__in=pks)\n return self._qs\n\n class Meta:\n model = PopularProposal\n fields = ['clasification', ]\n form = TextSearchForm\n\n\ndef possible_areas(request):\n as_ = Area.public.all()\n return as_\n\n\nclass ProposalWithAreaFilter(ProposalWithoutAreaFilter):\n area = ModelChoiceFilter(queryset=possible_areas, 
label=\"Comuna donde fue generada\")\n\n\nclass ProposalGeneratedAtFilter(ProposalWithoutAreaFilter):\n generated_at = ModelChoiceFilter(queryset=filterable_areas,\n empty_label=u\"Selecciona\",\n label=\"Comuna donde fue generada\")\n", "path": "popular_proposal/filters.py"}]} | 1,393 | 128 |
gh_patches_debug_5999 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-4515 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Homebase spider webpage regex is too restrictive
The `homebase_gb_ie.py` spider contains a regex in `sitemap_rules` that restricts crawling to store pages:
`sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-\w]+\/[-.\w]+$", "parse_sd")]`
This regex is slightly too strict, as there's a store with a "." in the place level: https://store.homebase.co.uk/st.-albans/the-courtyard-alban-park , which is currently not returned.
To include this store, the regex should presumably be changed to
`sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-.\w]+\/[-.\w]+$", "parse_sd")]`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/homebase_gb_ie.py`
Content:
```
1 from scrapy.spiders import SitemapSpider
2
3 from locations.structured_data_spider import StructuredDataSpider
4
5
6 class HomebaseGBIESpider(SitemapSpider, StructuredDataSpider):
7 name = "homebase_gb_ie"
8 item_attributes = {"brand": "Homebase", "brand_wikidata": "Q9293447"}
9 sitemap_urls = ["https://store.homebase.co.uk/robots.txt"]
10 sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-\w]+\/[-.\w]+$", "parse_sd")]
11 skip_auto_cc = True
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/homebase_gb_ie.py b/locations/spiders/homebase_gb_ie.py
--- a/locations/spiders/homebase_gb_ie.py
+++ b/locations/spiders/homebase_gb_ie.py
@@ -7,5 +7,5 @@
name = "homebase_gb_ie"
item_attributes = {"brand": "Homebase", "brand_wikidata": "Q9293447"}
sitemap_urls = ["https://store.homebase.co.uk/robots.txt"]
- sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-\w]+\/[-.\w]+$", "parse_sd")]
+ sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-.\w]+\/[-.\w]+$", "parse_sd")]
skip_auto_cc = True
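
A quick self-contained check of the widened pattern against the store URL from the issue (illustration only; in the spider the pattern is applied by Scrapy's sitemap machinery):

```python
import re

old = r"https:\/\/store\.homebase\.co\.uk\/[-\w]+\/[-.\w]+$"
new = r"https:\/\/store\.homebase\.co\.uk\/[-.\w]+\/[-.\w]+$"
url = "https://store.homebase.co.uk/st.-albans/the-courtyard-alban-park"

print(bool(re.match(old, url)))  # False: '.' in 'st.-albans' is rejected
print(bool(re.match(new, url)))  # True: the place segment now allows '.'
```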
| {"golden_diff": "diff --git a/locations/spiders/homebase_gb_ie.py b/locations/spiders/homebase_gb_ie.py\n--- a/locations/spiders/homebase_gb_ie.py\n+++ b/locations/spiders/homebase_gb_ie.py\n@@ -7,5 +7,5 @@\n name = \"homebase_gb_ie\"\n item_attributes = {\"brand\": \"Homebase\", \"brand_wikidata\": \"Q9293447\"}\n sitemap_urls = [\"https://store.homebase.co.uk/robots.txt\"]\n- sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-\\w]+\\/[-.\\w]+$\", \"parse_sd\")]\n+ sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-.\\w]+\\/[-.\\w]+$\", \"parse_sd\")]\n skip_auto_cc = True\n", "issue": "Homebase spider webpage regex is too restrictive\nThe homebase_gb_ie.py spider contains a regex in sitemap_rules to restrict things to store pages:\r\n`sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-\\w]+\\/[-.\\w]+$\", \"parse_sd\")]`\r\n\r\nThis regex is slightly too strict, as there's a store with a \".\" in the place level: https://store.homebase.co.uk/st.-albans/the-courtyard-alban-park , which is currently not returned.\r\n\r\nTo include this store, the regex should presumably be changed to\r\n`sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-.\\w]+\\/[-.\\w]+$\", \"parse_sd\")]`\n", "before_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass HomebaseGBIESpider(SitemapSpider, StructuredDataSpider):\n name = \"homebase_gb_ie\"\n item_attributes = {\"brand\": \"Homebase\", \"brand_wikidata\": \"Q9293447\"}\n sitemap_urls = [\"https://store.homebase.co.uk/robots.txt\"]\n sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-\\w]+\\/[-.\\w]+$\", \"parse_sd\")]\n skip_auto_cc = True\n", "path": "locations/spiders/homebase_gb_ie.py"}], "after_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass HomebaseGBIESpider(SitemapSpider, StructuredDataSpider):\n name = \"homebase_gb_ie\"\n item_attributes = {\"brand\": \"Homebase\", \"brand_wikidata\": \"Q9293447\"}\n sitemap_urls = [\"https://store.homebase.co.uk/robots.txt\"]\n sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-.\\w]+\\/[-.\\w]+$\", \"parse_sd\")]\n skip_auto_cc = True\n", "path": "locations/spiders/homebase_gb_ie.py"}]} | 565 | 184 |
gh_patches_debug_1557 | rasdani/github-patches | git_diff | WordPress__openverse-api-637 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Return secure URLs for the fields thumbnail, detail_url and related_url.
_(Framed the verbiage of the title as a feature request)_ 🙏
## Problem
The response for search and detail requests includes insecure URLs (`http` instead of `https`) in the fields `thumbnail`, `detail_url` and `related_url`.

e.g.:
**Search**
https://api.openverse.engineering/v1/images/?q=flower
**Detail:**
https://api.openverse.engineering/v1/images/6c1769b0-a3e5-4dae-8a36-8531a6e1430f/
## Description
When trying to integrate Openverse with some code in the browser, I ended up having to replace the scheme part of the URL to avoid notices like: `xxxx was loaded over HTTPS, but requested an insecure resource 'http://api.openverse.engineering/v1/images/6c1769b0-a3e5-4dae-8a36-8531a6e1430f/'. This request has been blocked; the content must be served over HTTPS.`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api/catalog/api/serializers/base.py`
Content:
```
1 import re
2
3 from django.conf import settings
4 from rest_framework import serializers
5
6
7 class SchemableHyperlinkedIdentityField(serializers.HyperlinkedIdentityField):
8 """
9 This field returns the link but allows the option to replace the URL scheme.
10 """
11
12 def __init__(self, scheme=settings.API_LINK_SCHEME, *args, **kwargs):
13 super().__init__(*args, **kwargs)
14
15 self.scheme = scheme
16
17 def get_url(self, *args, **kwargs):
18 url = super().get_url(*args, **kwargs)
19
20 # Only rewrite URLs if a fixed scheme is provided
21 if self.scheme is not None:
22 re.sub(r"^\w+://", f"{self.scheme}://", url, 1)
23
24 return url
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api/catalog/api/serializers/base.py b/api/catalog/api/serializers/base.py
--- a/api/catalog/api/serializers/base.py
+++ b/api/catalog/api/serializers/base.py
@@ -19,6 +19,6 @@
# Only rewrite URLs if a fixed scheme is provided
if self.scheme is not None:
- re.sub(r"^\w+://", f"{self.scheme}://", url, 1)
+ url = re.sub(r"^\w+://", f"{self.scheme}://", url, 1)
return url
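
The underlying cause is that Python strings are immutable: `re.sub` returns a new string and never modifies its argument, so the unassigned call was a no-op. A minimal demonstration:

```python
import re

url = "http://api.openverse.engineering/v1/images/?q=flower"

re.sub(r"^\w+://", "https://", url, 1)        # return value discarded: no-op
print(url)                                     # still http://...

url = re.sub(r"^\w+://", "https://", url, 1)  # the fix: keep the result
print(url)                                     # now https://...
```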
| {"golden_diff": "diff --git a/api/catalog/api/serializers/base.py b/api/catalog/api/serializers/base.py\n--- a/api/catalog/api/serializers/base.py\n+++ b/api/catalog/api/serializers/base.py\n@@ -19,6 +19,6 @@\n \n # Only rewrite URLs if a fixed scheme is provided\n if self.scheme is not None:\n- re.sub(r\"^\\w+://\", f\"{self.scheme}://\", url, 1)\n+ url = re.sub(r\"^\\w+://\", f\"{self.scheme}://\", url, 1)\n \n return url\n", "issue": "Return secure URLs for the fields thumbnail, detail_url and related_url.\n_(Framed the verbiage of the title as a feature request)_ \ud83d\ude4f \r\n\r\n## Problem\r\n\r\nThe response for search and detail requests includes insecure URLs (`http` instead of `https`) in the fields `thumbnail`, `detail_url` and `related_url`.\r\n\r\n\r\n\r\n\r\ne.g.:\r\n\r\n**Search**\r\n\r\nhttps://api.openverse.engineering/v1/images/?q=flower\r\n\r\n**Detail:**\r\n\r\nhttps://api.openverse.engineering/v1/images/6c1769b0-a3e5-4dae-8a36-8531a6e1430f/\r\n\r\n## Description\r\n\r\nWhen trying to integrate Openverse with some code on the browser I ended up having to replace the scheme part of the URL for avoiding notices like ```xxxx was loaded over HTTPS, but requested an insecure resource 'http://api.openverse.engineering/v1/images/6c1769b0-a3e5-4dae-8a36-8531a6e1430f/'. This request has been blocked; the content must be served over HTTPS.`\r\n \r\n\n", "before_files": [{"content": "import re\n\nfrom django.conf import settings\nfrom rest_framework import serializers\n\n\nclass SchemableHyperlinkedIdentityField(serializers.HyperlinkedIdentityField):\n \"\"\"\n This field returns the link but allows the option to replace the URL scheme.\n \"\"\"\n\n def __init__(self, scheme=settings.API_LINK_SCHEME, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n self.scheme = scheme\n\n def get_url(self, *args, **kwargs):\n url = super().get_url(*args, **kwargs)\n\n # Only rewrite URLs if a fixed scheme is provided\n if self.scheme is not None:\n re.sub(r\"^\\w+://\", f\"{self.scheme}://\", url, 1)\n\n return url\n", "path": "api/catalog/api/serializers/base.py"}], "after_files": [{"content": "import re\n\nfrom django.conf import settings\nfrom rest_framework import serializers\n\n\nclass SchemableHyperlinkedIdentityField(serializers.HyperlinkedIdentityField):\n \"\"\"\n This field returns the link but allows the option to replace the URL scheme.\n \"\"\"\n\n def __init__(self, scheme=settings.API_LINK_SCHEME, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n self.scheme = scheme\n\n def get_url(self, *args, **kwargs):\n url = super().get_url(*args, **kwargs)\n\n # Only rewrite URLs if a fixed scheme is provided\n if self.scheme is not None:\n url = re.sub(r\"^\\w+://\", f\"{self.scheme}://\", url, 1)\n\n return url\n", "path": "api/catalog/api/serializers/base.py"}]} | 780 | 130 |
gh_patches_debug_57595 | rasdani/github-patches | git_diff | joke2k__faker-704 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: module 'faker.providers' has no attribute '__file__'
I converted my Python code to an .exe using cx_Freeze. When I open the .exe file, I get this error:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\site-packages\cx_Freeze\initscripts\__startup__.py", line 14, in run
module.run()
File "C:\Program Files\Python36\lib\site-packages\cx_Freeze\initscripts\Console.py", line 26, in run
exec(code, m.__dict__)
File "DataGenerator.py", line 7, in <module>
File "C:\Program Files\Python36\lib\site-packages\faker\__init__.py", line 4, in <module>
from faker.factory import Factory
File "C:\Program Files\Python36\lib\site-packages\faker\factory.py", line 10, in <module>
from faker.config import DEFAULT_LOCALE, PROVIDERS, AVAILABLE_LOCALES
File "C:\Program Files\Python36\lib\site-packages\faker\config.py", line 11, in <module>
PROVIDERS = find_available_providers([import_module(path) for path in META_PROVIDERS_MODULES])
File "C:\Program Files\Python36\lib\site-packages\faker\utils\loading.py", line 29, in find_available_providers
providers = ['.'.join([providers_mod.__package__, mod]) for mod in list_module(providers_mod)]
File "C:\Program Files\Python36\lib\site-packages\faker\utils\loading.py", line 7, in list_module
path = os.path.dirname(module.__file__)
AttributeError: module 'faker.providers' has no attribute '__file__'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/utils/loading.py`
Content:
```
1 import os
2 import sys
3 from importlib import import_module
4 import pkgutil
5
6
7 def get_path(module):
8 if getattr(sys, 'frozen', False):
9 # frozen
10 path = os.path.dirname(sys.executable)
11 else:
12 # unfrozen
13 path = os.path.dirname(os.path.realpath(module.__file__))
14 return path
15
16
17 def list_module(module):
18 path = get_path(module)
19 modules = [name for finder, name,
20 is_pkg in pkgutil.iter_modules([path]) if is_pkg]
21 return modules
22
23
24 def find_available_locales(providers):
25 available_locales = set()
26
27 for provider_path in providers:
28
29 provider_module = import_module(provider_path)
30 if getattr(provider_module, 'localized', False):
31 langs = list_module(provider_module)
32 available_locales.update(langs)
33 return available_locales
34
35
36 def find_available_providers(modules):
37 available_providers = set()
38 for providers_mod in modules:
39 providers = [
40 '.'.join([providers_mod.__package__, mod])
41 for mod in list_module(providers_mod) if mod != '__pycache__'
42 ]
43 available_providers.update(providers)
44 return sorted(available_providers)
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/utils/loading.py b/faker/utils/loading.py
--- a/faker/utils/loading.py
+++ b/faker/utils/loading.py
@@ -7,7 +7,10 @@
def get_path(module):
if getattr(sys, 'frozen', False):
# frozen
- path = os.path.dirname(sys.executable)
+ base_dir = os.path.dirname(sys.executable)
+ lib_dir = os.path.join(base_dir, "lib")
+ module_to_rel_path = os.path.join(*module.__package__.split("."))
+ path = os.path.join(lib_dir, module_to_rel_path)
else:
# unfrozen
path = os.path.dirname(os.path.realpath(module.__file__))
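
To make the frozen branch concrete, a hedged sketch of the path arithmetic (the paths and the stand-in module are illustrative; cx_Freeze places packages under a `lib/` directory next to the executable):

```python
import os

class FakeModule:                     # stand-in so the sketch runs unfrozen
    __package__ = "faker.providers"

frozen_executable = "/opt/app/DataGenerator"   # what sys.executable would be

base_dir = os.path.dirname(frozen_executable)            # /opt/app
lib_dir = os.path.join(base_dir, "lib")                  # /opt/app/lib
rel = os.path.join(*FakeModule.__package__.split("."))   # faker/providers
print(os.path.join(lib_dir, rel))                        # /opt/app/lib/faker/providers
```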
| {"golden_diff": "diff --git a/faker/utils/loading.py b/faker/utils/loading.py\n--- a/faker/utils/loading.py\n+++ b/faker/utils/loading.py\n@@ -7,7 +7,10 @@\n def get_path(module):\n if getattr(sys, 'frozen', False):\n # frozen\n- path = os.path.dirname(sys.executable)\n+ base_dir = os.path.dirname(sys.executable)\n+ lib_dir = os.path.join(base_dir, \"lib\")\n+ module_to_rel_path = os.path.join(*module.__package__.split(\".\"))\n+ path = os.path.join(lib_dir, module_to_rel_path)\n else:\n # unfrozen\n path = os.path.dirname(os.path.realpath(module.__file__))\n", "issue": "AttributeError: module 'faker.providers' has no attribute '__file__'\nI converted my python code to .exe using cx_Freeze. While opening my .exe file I am getting this error.\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\cx_Freeze\\initscripts\\__startup__.py\", line 14, in run\r\n module.run()\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\cx_Freeze\\initscripts\\Console.py\", line 26, in run\r\n exec(code, m.__dict__)\r\n File \"DataGenerator.py\", line 7, in <module>\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\__init__.py\", line 4, in <module>\r\n from faker.factory import Factory\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\factory.py\", line 10, in <module>\r\n from faker.config import DEFAULT_LOCALE, PROVIDERS, AVAILABLE_LOCALES\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\config.py\", line 11, in <module>\r\n PROVIDERS = find_available_providers([import_module(path) for path in META_PROVIDERS_MODULES])\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\utils\\loading.py\", line 29, in find_available_providers\r\n providers = ['.'.join([providers_mod.__package__, mod]) for mod in list_module(providers_mod)]\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\utils\\loading.py\", line 7, in list_module\r\n path = os.path.dirname(module.__file__)\r\nAttributeError: module 'faker.providers' has no attribute '__file__'\n", "before_files": [{"content": "import os\nimport sys\nfrom importlib import import_module\nimport pkgutil\n\n\ndef get_path(module):\n if getattr(sys, 'frozen', False):\n # frozen\n path = os.path.dirname(sys.executable)\n else:\n # unfrozen\n path = os.path.dirname(os.path.realpath(module.__file__))\n return path\n\n\ndef list_module(module):\n path = get_path(module)\n modules = [name for finder, name,\n is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n return modules\n\n\ndef find_available_locales(providers):\n available_locales = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n return available_locales\n\n\ndef find_available_providers(modules):\n available_providers = set()\n for providers_mod in modules:\n providers = [\n '.'.join([providers_mod.__package__, mod])\n for mod in list_module(providers_mod) if mod != '__pycache__'\n ]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}], "after_files": [{"content": "import os\nimport sys\nfrom importlib import import_module\nimport pkgutil\n\n\ndef get_path(module):\n if getattr(sys, 'frozen', False):\n # frozen\n base_dir = os.path.dirname(sys.executable)\n lib_dir = os.path.join(base_dir, \"lib\")\n module_to_rel_path = 
os.path.join(*module.__package__.split(\".\"))\n path = os.path.join(lib_dir, module_to_rel_path)\n else:\n # unfrozen\n path = os.path.dirname(os.path.realpath(module.__file__))\n return path\n\n\ndef list_module(module):\n path = get_path(module)\n modules = [name for finder, name,\n is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n return modules\n\n\ndef find_available_locales(providers):\n available_locales = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n return available_locales\n\n\ndef find_available_providers(modules):\n available_providers = set()\n for providers_mod in modules:\n providers = [\n '.'.join([providers_mod.__package__, mod])\n for mod in list_module(providers_mod) if mod != '__pycache__'\n ]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}]} | 1,004 | 154 |
gh_patches_debug_8619 | rasdani/github-patches | git_diff | open-mmlab__mmdetection3d-647 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in indoor_converter
If `pkl_prefix == 'sunrgbd'`, we fall through to this [else](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/data_converter/indoor_converter.py#L89) branch, which is intended for `s3dis`, and get a `FileNotFoundError`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/data_converter/indoor_converter.py`
Content:
```
1 import mmcv
2 import numpy as np
3 import os
4
5 from tools.data_converter.s3dis_data_utils import S3DISData, S3DISSegData
6 from tools.data_converter.scannet_data_utils import ScanNetData, ScanNetSegData
7 from tools.data_converter.sunrgbd_data_utils import SUNRGBDData
8
9
10 def create_indoor_info_file(data_path,
11 pkl_prefix='sunrgbd',
12 save_path=None,
13 use_v1=False,
14 workers=4):
15 """Create indoor information file.
16
17 Get information of the raw data and save it to the pkl file.
18
19 Args:
20 data_path (str): Path of the data.
21 pkl_prefix (str): Prefix of the pkl to be saved. Default: 'sunrgbd'.
22 save_path (str): Path of the pkl to be saved. Default: None.
23 use_v1 (bool): Whether to use v1. Default: False.
24 workers (int): Number of threads to be used. Default: 4.
25 """
26 assert os.path.exists(data_path)
27 assert pkl_prefix in ['sunrgbd', 'scannet', 's3dis'], \
28 f'unsupported indoor dataset {pkl_prefix}'
29 save_path = data_path if save_path is None else save_path
30 assert os.path.exists(save_path)
31
32 # generate infos for both detection and segmentation task
33 if pkl_prefix in ['sunrgbd', 'scannet']:
34 train_filename = os.path.join(save_path,
35 f'{pkl_prefix}_infos_train.pkl')
36 val_filename = os.path.join(save_path, f'{pkl_prefix}_infos_val.pkl')
37 if pkl_prefix == 'sunrgbd':
38 # SUN RGB-D has a train-val split
39 train_dataset = SUNRGBDData(
40 root_path=data_path, split='train', use_v1=use_v1)
41 val_dataset = SUNRGBDData(
42 root_path=data_path, split='val', use_v1=use_v1)
43 else:
44 # ScanNet has a train-val-test split
45 train_dataset = ScanNetData(root_path=data_path, split='train')
46 val_dataset = ScanNetData(root_path=data_path, split='val')
47 test_dataset = ScanNetData(root_path=data_path, split='test')
48 test_filename = os.path.join(save_path,
49 f'{pkl_prefix}_infos_test.pkl')
50
51 infos_train = train_dataset.get_infos(
52 num_workers=workers, has_label=True)
53 mmcv.dump(infos_train, train_filename, 'pkl')
54 print(f'{pkl_prefix} info train file is saved to {train_filename}')
55
56 infos_val = val_dataset.get_infos(num_workers=workers, has_label=True)
57 mmcv.dump(infos_val, val_filename, 'pkl')
58 print(f'{pkl_prefix} info val file is saved to {val_filename}')
59
60 if pkl_prefix == 'scannet':
61 infos_test = test_dataset.get_infos(
62 num_workers=workers, has_label=False)
63 mmcv.dump(infos_test, test_filename, 'pkl')
64 print(f'{pkl_prefix} info test file is saved to {test_filename}')
65
66 # generate infos for the semantic segmentation task
67 # e.g. re-sampled scene indexes and label weights
68 # scene indexes are used to re-sample rooms with different number of points
69 # label weights are used to balance classes with different number of points
70 if pkl_prefix == 'scannet':
71 # label weight computation function is adopted from
72 # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24
73 train_dataset = ScanNetSegData(
74 data_root=data_path,
75 ann_file=train_filename,
76 split='train',
77 num_points=8192,
78 label_weight_func=lambda x: 1.0 / np.log(1.2 + x))
79 # TODO: do we need to generate on val set?
80 val_dataset = ScanNetSegData(
81 data_root=data_path,
82 ann_file=val_filename,
83 split='val',
84 num_points=8192,
85 label_weight_func=lambda x: 1.0 / np.log(1.2 + x))
86 # no need to generate for test set
87 train_dataset.get_seg_infos()
88 val_dataset.get_seg_infos()
89 else:
90 # S3DIS doesn't have a fixed train-val split
91 # it has 6 areas instead, so we generate info file for each of them
92 # in training, we will use dataset to wrap different areas
93 splits = [f'Area_{i}' for i in [1, 2, 3, 4, 5, 6]]
94 for split in splits:
95 dataset = S3DISData(root_path=data_path, split=split)
96 info = dataset.get_infos(num_workers=workers, has_label=True)
97 filename = os.path.join(save_path,
98 f'{pkl_prefix}_infos_{split}.pkl')
99 mmcv.dump(info, filename, 'pkl')
100 print(f'{pkl_prefix} info {split} file is saved to {filename}')
101 seg_dataset = S3DISSegData(
102 data_root=data_path,
103 ann_file=filename,
104 split=split,
105 num_points=4096,
106 label_weight_func=lambda x: 1.0 / np.log(1.2 + x))
107 seg_dataset.get_seg_infos()
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/data_converter/indoor_converter.py b/tools/data_converter/indoor_converter.py
--- a/tools/data_converter/indoor_converter.py
+++ b/tools/data_converter/indoor_converter.py
@@ -86,7 +86,7 @@
# no need to generate for test set
train_dataset.get_seg_infos()
val_dataset.get_seg_infos()
- else:
+ elif pkl_prefix == 's3dis':
# S3DIS doesn't have a fixed train-val split
# it has 6 areas instead, so we generate info file for each of them
# in training, we will use dataset to wrap different areas
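
For clarity, a reduced sketch of the control flow before and after the patch (dataset classes elided), showing why `sunrgbd` used to fall into the S3DIS branch and fail looking for the `Area_1..6` data:

```python
def dispatch_seg_infos(pkl_prefix):
    if pkl_prefix == "scannet":
        return "scannet seg infos"
    elif pkl_prefix == "s3dis":        # was a bare `else:` before the patch,
        return "s3dis per-area infos"  # so 'sunrgbd' landed here and raised
    return None                        # FileNotFoundError on the Area_* dirs

for prefix in ("sunrgbd", "scannet", "s3dis"):
    print(prefix, "->", dispatch_seg_infos(prefix))
```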
| {"golden_diff": "diff --git a/tools/data_converter/indoor_converter.py b/tools/data_converter/indoor_converter.py\n--- a/tools/data_converter/indoor_converter.py\n+++ b/tools/data_converter/indoor_converter.py\n@@ -86,7 +86,7 @@\n # no need to generate for test set\n train_dataset.get_seg_infos()\n val_dataset.get_seg_infos()\n- else:\n+ elif pkl_prefix == 's3dis':\n # S3DIS doesn't have a fixed train-val split\n # it has 6 areas instead, so we generate info file for each of them\n # in training, we will use dataset to wrap different areas\n", "issue": "Bug in indoor_converter\nIf `pkl_prefix=='sunrgbd'` we go to this [else](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/data_converter/indoor_converter.py#L89) for `s3dis` and get `FileNotFoundError`.\n", "before_files": [{"content": "import mmcv\nimport numpy as np\nimport os\n\nfrom tools.data_converter.s3dis_data_utils import S3DISData, S3DISSegData\nfrom tools.data_converter.scannet_data_utils import ScanNetData, ScanNetSegData\nfrom tools.data_converter.sunrgbd_data_utils import SUNRGBDData\n\n\ndef create_indoor_info_file(data_path,\n pkl_prefix='sunrgbd',\n save_path=None,\n use_v1=False,\n workers=4):\n \"\"\"Create indoor information file.\n\n Get information of the raw data and save it to the pkl file.\n\n Args:\n data_path (str): Path of the data.\n pkl_prefix (str): Prefix of the pkl to be saved. Default: 'sunrgbd'.\n save_path (str): Path of the pkl to be saved. Default: None.\n use_v1 (bool): Whether to use v1. Default: False.\n workers (int): Number of threads to be used. Default: 4.\n \"\"\"\n assert os.path.exists(data_path)\n assert pkl_prefix in ['sunrgbd', 'scannet', 's3dis'], \\\n f'unsupported indoor dataset {pkl_prefix}'\n save_path = data_path if save_path is None else save_path\n assert os.path.exists(save_path)\n\n # generate infos for both detection and segmentation task\n if pkl_prefix in ['sunrgbd', 'scannet']:\n train_filename = os.path.join(save_path,\n f'{pkl_prefix}_infos_train.pkl')\n val_filename = os.path.join(save_path, f'{pkl_prefix}_infos_val.pkl')\n if pkl_prefix == 'sunrgbd':\n # SUN RGB-D has a train-val split\n train_dataset = SUNRGBDData(\n root_path=data_path, split='train', use_v1=use_v1)\n val_dataset = SUNRGBDData(\n root_path=data_path, split='val', use_v1=use_v1)\n else:\n # ScanNet has a train-val-test split\n train_dataset = ScanNetData(root_path=data_path, split='train')\n val_dataset = ScanNetData(root_path=data_path, split='val')\n test_dataset = ScanNetData(root_path=data_path, split='test')\n test_filename = os.path.join(save_path,\n f'{pkl_prefix}_infos_test.pkl')\n\n infos_train = train_dataset.get_infos(\n num_workers=workers, has_label=True)\n mmcv.dump(infos_train, train_filename, 'pkl')\n print(f'{pkl_prefix} info train file is saved to {train_filename}')\n\n infos_val = val_dataset.get_infos(num_workers=workers, has_label=True)\n mmcv.dump(infos_val, val_filename, 'pkl')\n print(f'{pkl_prefix} info val file is saved to {val_filename}')\n\n if pkl_prefix == 'scannet':\n infos_test = test_dataset.get_infos(\n num_workers=workers, has_label=False)\n mmcv.dump(infos_test, test_filename, 'pkl')\n print(f'{pkl_prefix} info test file is saved to {test_filename}')\n\n # generate infos for the semantic segmentation task\n # e.g. 
re-sampled scene indexes and label weights\n # scene indexes are used to re-sample rooms with different number of points\n # label weights are used to balance classes with different number of points\n if pkl_prefix == 'scannet':\n # label weight computation function is adopted from\n # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24\n train_dataset = ScanNetSegData(\n data_root=data_path,\n ann_file=train_filename,\n split='train',\n num_points=8192,\n label_weight_func=lambda x: 1.0 / np.log(1.2 + x))\n # TODO: do we need to generate on val set?\n val_dataset = ScanNetSegData(\n data_root=data_path,\n ann_file=val_filename,\n split='val',\n num_points=8192,\n label_weight_func=lambda x: 1.0 / np.log(1.2 + x))\n # no need to generate for test set\n train_dataset.get_seg_infos()\n val_dataset.get_seg_infos()\n else:\n # S3DIS doesn't have a fixed train-val split\n # it has 6 areas instead, so we generate info file for each of them\n # in training, we will use dataset to wrap different areas\n splits = [f'Area_{i}' for i in [1, 2, 3, 4, 5, 6]]\n for split in splits:\n dataset = S3DISData(root_path=data_path, split=split)\n info = dataset.get_infos(num_workers=workers, has_label=True)\n filename = os.path.join(save_path,\n f'{pkl_prefix}_infos_{split}.pkl')\n mmcv.dump(info, filename, 'pkl')\n print(f'{pkl_prefix} info {split} file is saved to {filename}')\n seg_dataset = S3DISSegData(\n data_root=data_path,\n ann_file=filename,\n split=split,\n num_points=4096,\n label_weight_func=lambda x: 1.0 / np.log(1.2 + x))\n seg_dataset.get_seg_infos()\n", "path": "tools/data_converter/indoor_converter.py"}], "after_files": [{"content": "import mmcv\nimport numpy as np\nimport os\n\nfrom tools.data_converter.s3dis_data_utils import S3DISData, S3DISSegData\nfrom tools.data_converter.scannet_data_utils import ScanNetData, ScanNetSegData\nfrom tools.data_converter.sunrgbd_data_utils import SUNRGBDData\n\n\ndef create_indoor_info_file(data_path,\n pkl_prefix='sunrgbd',\n save_path=None,\n use_v1=False,\n workers=4):\n \"\"\"Create indoor information file.\n\n Get information of the raw data and save it to the pkl file.\n\n Args:\n data_path (str): Path of the data.\n pkl_prefix (str): Prefix of the pkl to be saved. Default: 'sunrgbd'.\n save_path (str): Path of the pkl to be saved. Default: None.\n use_v1 (bool): Whether to use v1. Default: False.\n workers (int): Number of threads to be used. 
Default: 4.\n \"\"\"\n assert os.path.exists(data_path)\n assert pkl_prefix in ['sunrgbd', 'scannet', 's3dis'], \\\n f'unsupported indoor dataset {pkl_prefix}'\n save_path = data_path if save_path is None else save_path\n assert os.path.exists(save_path)\n\n # generate infos for both detection and segmentation task\n if pkl_prefix in ['sunrgbd', 'scannet']:\n train_filename = os.path.join(save_path,\n f'{pkl_prefix}_infos_train.pkl')\n val_filename = os.path.join(save_path, f'{pkl_prefix}_infos_val.pkl')\n if pkl_prefix == 'sunrgbd':\n # SUN RGB-D has a train-val split\n train_dataset = SUNRGBDData(\n root_path=data_path, split='train', use_v1=use_v1)\n val_dataset = SUNRGBDData(\n root_path=data_path, split='val', use_v1=use_v1)\n else:\n # ScanNet has a train-val-test split\n train_dataset = ScanNetData(root_path=data_path, split='train')\n val_dataset = ScanNetData(root_path=data_path, split='val')\n test_dataset = ScanNetData(root_path=data_path, split='test')\n test_filename = os.path.join(save_path,\n f'{pkl_prefix}_infos_test.pkl')\n\n infos_train = train_dataset.get_infos(\n num_workers=workers, has_label=True)\n mmcv.dump(infos_train, train_filename, 'pkl')\n print(f'{pkl_prefix} info train file is saved to {train_filename}')\n\n infos_val = val_dataset.get_infos(num_workers=workers, has_label=True)\n mmcv.dump(infos_val, val_filename, 'pkl')\n print(f'{pkl_prefix} info val file is saved to {val_filename}')\n\n if pkl_prefix == 'scannet':\n infos_test = test_dataset.get_infos(\n num_workers=workers, has_label=False)\n mmcv.dump(infos_test, test_filename, 'pkl')\n print(f'{pkl_prefix} info test file is saved to {test_filename}')\n\n # generate infos for the semantic segmentation task\n # e.g. re-sampled scene indexes and label weights\n # scene indexes are used to re-sample rooms with different number of points\n # label weights are used to balance classes with different number of points\n if pkl_prefix == 'scannet':\n # label weight computation function is adopted from\n # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24\n train_dataset = ScanNetSegData(\n data_root=data_path,\n ann_file=train_filename,\n split='train',\n num_points=8192,\n label_weight_func=lambda x: 1.0 / np.log(1.2 + x))\n # TODO: do we need to generate on val set?\n val_dataset = ScanNetSegData(\n data_root=data_path,\n ann_file=val_filename,\n split='val',\n num_points=8192,\n label_weight_func=lambda x: 1.0 / np.log(1.2 + x))\n # no need to generate for test set\n train_dataset.get_seg_infos()\n val_dataset.get_seg_infos()\n elif pkl_prefix == 's3dis':\n # S3DIS doesn't have a fixed train-val split\n # it has 6 areas instead, so we generate info file for each of them\n # in training, we will use dataset to wrap different areas\n splits = [f'Area_{i}' for i in [1, 2, 3, 4, 5, 6]]\n for split in splits:\n dataset = S3DISData(root_path=data_path, split=split)\n info = dataset.get_infos(num_workers=workers, has_label=True)\n filename = os.path.join(save_path,\n f'{pkl_prefix}_infos_{split}.pkl')\n mmcv.dump(info, filename, 'pkl')\n print(f'{pkl_prefix} info {split} file is saved to {filename}')\n seg_dataset = S3DISSegData(\n data_root=data_path,\n ann_file=filename,\n split=split,\n num_points=4096,\n label_weight_func=lambda x: 1.0 / np.log(1.2 + x))\n seg_dataset.get_seg_infos()\n", "path": "tools/data_converter/indoor_converter.py"}]} | 1,717 | 143 |
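The patch above replaces a catch-all `else` with an explicit `elif pkl_prefix == 's3dis'`, so `sunrgbd` can no longer fall into the S3DIS-only branch and fail on missing `Area_*` files. A minimal standalone sketch of the two-stage structure and the guard (function and stage names are illustrative, not mmdetection3d's real API):

```python
def generate_infos(pkl_prefix: str) -> list:
    """Mirror indoor_converter's sequential stages, dispatched by prefix."""
    assert pkl_prefix in ('sunrgbd', 'scannet', 's3dis')
    stages = []
    # Stage 1: detection infos for datasets with a fixed train/val split.
    if pkl_prefix in ('sunrgbd', 'scannet'):
        stages.append(f'{pkl_prefix} detection infos')
    # Stage 2: before the fix this was `if scannet: ... else: <s3dis>`,
    # so 'sunrgbd' fell through into the S3DIS branch. The explicit
    # elif closes that hole.
    if pkl_prefix == 'scannet':
        stages.append('scannet seg infos')
    elif pkl_prefix == 's3dis':
        stages.append('s3dis per-area infos')
    return stages

assert generate_infos('sunrgbd') == ['sunrgbd detection infos']
assert generate_infos('s3dis') == ['s3dis per-area infos']
```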
gh_patches_debug_36598 | rasdani/github-patches | git_diff | getredash__redash-1944 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Redash Permissions not working for some use cases
### Issue Summary
Currently, when the query owner grants another user permission for a query, that user is still unable to perform the following tasks:
* change data source
* schedule the query
* add and save new visualisation
I believe the user should have the ability to do all the things that the owner could do once permission has been granted.
### Technical details:
* Redash Version: 1.0.3
* Browser/OS: Chrome
* How did you install Redash: AWS using the AMI
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/handlers/visualizations.py`
Content:
```
1 import json
2 from flask import request
3
4 from redash import models
5 from redash.permissions import require_permission, require_admin_or_owner
6 from redash.handlers.base import BaseResource, get_object_or_404
7
8
9 class VisualizationListResource(BaseResource):
10 @require_permission('edit_query')
11 def post(self):
12 kwargs = request.get_json(force=True)
13
14 query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop('query_id'), self.current_org)
15 require_admin_or_owner(query.user_id)
16
17 kwargs['options'] = json.dumps(kwargs['options'])
18 kwargs['query_rel'] = query
19
20 vis = models.Visualization(**kwargs)
21 models.db.session.add(vis)
22 models.db.session.commit()
23 d = vis.to_dict(with_query=False)
24 return d
25
26
27 class VisualizationResource(BaseResource):
28 @require_permission('edit_query')
29 def post(self, visualization_id):
30 vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)
31 require_admin_or_owner(vis.query_rel.user_id)
32
33 kwargs = request.get_json(force=True)
34 if 'options' in kwargs:
35 kwargs['options'] = json.dumps(kwargs['options'])
36
37 kwargs.pop('id', None)
38 kwargs.pop('query_id', None)
39
40 self.update_model(vis, kwargs)
41 d = vis.to_dict(with_query=False)
42 models.db.session.commit()
43 return d
44
45 @require_permission('edit_query')
46 def delete(self, visualization_id):
47 vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)
48 require_admin_or_owner(vis.query_rel.user_id)
49 models.db.session.delete(vis)
50 models.db.session.commit()
51
```
Path: `redash/permissions.py`
Content:
```
1 from flask_login import current_user
2 from flask_restful import abort
3 import functools
4 from funcy import flatten
5
6 view_only = True
7 not_view_only = False
8
9 ACCESS_TYPE_VIEW = 'view'
10 ACCESS_TYPE_MODIFY = 'modify'
11 ACCESS_TYPE_DELETE = 'delete'
12
13 ACCESS_TYPES = (ACCESS_TYPE_VIEW, ACCESS_TYPE_MODIFY, ACCESS_TYPE_DELETE)
14
15
16 def has_access(object_groups, user, need_view_only):
17 if 'admin' in user.permissions:
18 return True
19
20 matching_groups = set(object_groups.keys()).intersection(user.group_ids)
21
22 if not matching_groups:
23 return False
24
25 required_level = 1 if need_view_only else 2
26
27 group_level = 1 if all(flatten([object_groups[group] for group in matching_groups])) else 2
28
29 return required_level <= group_level
30
31
32 def require_access(object_groups, user, need_view_only):
33 if not has_access(object_groups, user, need_view_only):
34 abort(403)
35
36
37 class require_permissions(object):
38 def __init__(self, permissions):
39 self.permissions = permissions
40
41 def __call__(self, fn):
42 @functools.wraps(fn)
43 def decorated(*args, **kwargs):
44 has_permissions = current_user.has_permissions(self.permissions)
45
46 if has_permissions:
47 return fn(*args, **kwargs)
48 else:
49 abort(403)
50
51 return decorated
52
53
54 def require_permission(permission):
55 return require_permissions((permission,))
56
57
58 def require_admin(fn):
59 return require_permission('admin')(fn)
60
61
62 def require_super_admin(fn):
63 return require_permission('super_admin')(fn)
64
65
66 def has_permission_or_owner(permission, object_owner_id):
67 return int(object_owner_id) == current_user.id or current_user.has_permission(permission)
68
69
70 def is_admin_or_owner(object_owner_id):
71 return has_permission_or_owner('admin', object_owner_id)
72
73
74 def require_permission_or_owner(permission, object_owner_id):
75 if not has_permission_or_owner(permission, object_owner_id):
76 abort(403)
77
78
79 def require_admin_or_owner(object_owner_id):
80 if not is_admin_or_owner(object_owner_id):
81 abort(403, message="You don't have permission to edit this resource.")
82
83
84 def can_modify(obj, user):
85 return is_admin_or_owner(obj.user_id) or user.has_access(obj, ACCESS_TYPE_MODIFY)
86
87
88 def require_object_modify_permission(obj, user):
89 if not can_modify(obj, user):
90 abort(403)
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/handlers/visualizations.py b/redash/handlers/visualizations.py
--- a/redash/handlers/visualizations.py
+++ b/redash/handlers/visualizations.py
@@ -1,9 +1,12 @@
import json
+
from flask import request
from redash import models
-from redash.permissions import require_permission, require_admin_or_owner
from redash.handlers.base import BaseResource, get_object_or_404
+from redash.permissions import (require_admin_or_owner,
+ require_object_modify_permission,
+ require_permission)
class VisualizationListResource(BaseResource):
@@ -12,7 +15,7 @@
kwargs = request.get_json(force=True)
query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop('query_id'), self.current_org)
- require_admin_or_owner(query.user_id)
+ require_object_modify_permission(query, self.current_user)
kwargs['options'] = json.dumps(kwargs['options'])
kwargs['query_rel'] = query
@@ -28,7 +31,7 @@
@require_permission('edit_query')
def post(self, visualization_id):
vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)
- require_admin_or_owner(vis.query_rel.user_id)
+ require_object_modify_permission(vis.query_rel, self.current_user)
kwargs = request.get_json(force=True)
if 'options' in kwargs:
@@ -45,6 +48,6 @@
@require_permission('edit_query')
def delete(self, visualization_id):
vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)
- require_admin_or_owner(vis.query_rel.user_id)
+ require_object_modify_permission(vis.query_rel, self.current_user)
models.db.session.delete(vis)
models.db.session.commit()
diff --git a/redash/permissions.py b/redash/permissions.py
--- a/redash/permissions.py
+++ b/redash/permissions.py
@@ -1,6 +1,7 @@
+import functools
+
from flask_login import current_user
from flask_restful import abort
-import functools
from funcy import flatten
view_only = True
| {"golden_diff": "diff --git a/redash/handlers/visualizations.py b/redash/handlers/visualizations.py\n--- a/redash/handlers/visualizations.py\n+++ b/redash/handlers/visualizations.py\n@@ -1,9 +1,12 @@\n import json\n+\n from flask import request\n \n from redash import models\n-from redash.permissions import require_permission, require_admin_or_owner\n from redash.handlers.base import BaseResource, get_object_or_404\n+from redash.permissions import (require_admin_or_owner,\n+ require_object_modify_permission,\n+ require_permission)\n \n \n class VisualizationListResource(BaseResource):\n@@ -12,7 +15,7 @@\n kwargs = request.get_json(force=True)\n \n query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop('query_id'), self.current_org)\n- require_admin_or_owner(query.user_id)\n+ require_object_modify_permission(query, self.current_user)\n \n kwargs['options'] = json.dumps(kwargs['options'])\n kwargs['query_rel'] = query\n@@ -28,7 +31,7 @@\n @require_permission('edit_query')\n def post(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)\n- require_admin_or_owner(vis.query_rel.user_id)\n+ require_object_modify_permission(vis.query_rel, self.current_user)\n \n kwargs = request.get_json(force=True)\n if 'options' in kwargs:\n@@ -45,6 +48,6 @@\n @require_permission('edit_query')\n def delete(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)\n- require_admin_or_owner(vis.query_rel.user_id)\n+ require_object_modify_permission(vis.query_rel, self.current_user)\n models.db.session.delete(vis)\n models.db.session.commit()\ndiff --git a/redash/permissions.py b/redash/permissions.py\n--- a/redash/permissions.py\n+++ b/redash/permissions.py\n@@ -1,6 +1,7 @@\n+import functools\n+\n from flask_login import current_user\n from flask_restful import abort\n-import functools\n from funcy import flatten\n \n view_only = True\n", "issue": "Redash Permissions not working for some use cases\n### Issue Summary\r\n\r\nCurrently, when query owner grants permission to another user for a query, the user is still unable to perform the following tasks:\r\n\r\n* change data source\r\n* schedule the query\r\n* add and save new visualisation\r\n\r\nI believe the user should have the ability to do all the things that the owner could do once permission has been granted.\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 1.0.3\r\n* Browser/OS: Chrome\r\n* How did you install Redash: AWS using the AMI\r\n\n", "before_files": [{"content": "import json\nfrom flask import request\n\nfrom redash import models\nfrom redash.permissions import require_permission, require_admin_or_owner\nfrom redash.handlers.base import BaseResource, get_object_or_404\n\n\nclass VisualizationListResource(BaseResource):\n @require_permission('edit_query')\n def post(self):\n kwargs = request.get_json(force=True)\n\n query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop('query_id'), self.current_org)\n require_admin_or_owner(query.user_id)\n\n kwargs['options'] = json.dumps(kwargs['options'])\n kwargs['query_rel'] = query\n\n vis = models.Visualization(**kwargs)\n models.db.session.add(vis)\n models.db.session.commit()\n d = vis.to_dict(with_query=False)\n return d\n\n\nclass VisualizationResource(BaseResource):\n @require_permission('edit_query')\n def post(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, 
self.current_org)\n require_admin_or_owner(vis.query_rel.user_id)\n\n kwargs = request.get_json(force=True)\n if 'options' in kwargs:\n kwargs['options'] = json.dumps(kwargs['options'])\n\n kwargs.pop('id', None)\n kwargs.pop('query_id', None)\n\n self.update_model(vis, kwargs)\n d = vis.to_dict(with_query=False)\n models.db.session.commit()\n return d\n\n @require_permission('edit_query')\n def delete(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)\n require_admin_or_owner(vis.query_rel.user_id)\n models.db.session.delete(vis)\n models.db.session.commit()\n", "path": "redash/handlers/visualizations.py"}, {"content": "from flask_login import current_user\nfrom flask_restful import abort\nimport functools\nfrom funcy import flatten\n\nview_only = True\nnot_view_only = False\n\nACCESS_TYPE_VIEW = 'view'\nACCESS_TYPE_MODIFY = 'modify'\nACCESS_TYPE_DELETE = 'delete'\n\nACCESS_TYPES = (ACCESS_TYPE_VIEW, ACCESS_TYPE_MODIFY, ACCESS_TYPE_DELETE)\n\n\ndef has_access(object_groups, user, need_view_only):\n if 'admin' in user.permissions:\n return True\n\n matching_groups = set(object_groups.keys()).intersection(user.group_ids)\n\n if not matching_groups:\n return False\n\n required_level = 1 if need_view_only else 2\n\n group_level = 1 if all(flatten([object_groups[group] for group in matching_groups])) else 2\n\n return required_level <= group_level\n\n\ndef require_access(object_groups, user, need_view_only):\n if not has_access(object_groups, user, need_view_only):\n abort(403)\n\n\nclass require_permissions(object):\n def __init__(self, permissions):\n self.permissions = permissions\n\n def __call__(self, fn):\n @functools.wraps(fn)\n def decorated(*args, **kwargs):\n has_permissions = current_user.has_permissions(self.permissions)\n\n if has_permissions:\n return fn(*args, **kwargs)\n else:\n abort(403)\n\n return decorated\n\n\ndef require_permission(permission):\n return require_permissions((permission,))\n\n\ndef require_admin(fn):\n return require_permission('admin')(fn)\n\n\ndef require_super_admin(fn):\n return require_permission('super_admin')(fn)\n\n\ndef has_permission_or_owner(permission, object_owner_id):\n return int(object_owner_id) == current_user.id or current_user.has_permission(permission)\n\n\ndef is_admin_or_owner(object_owner_id):\n return has_permission_or_owner('admin', object_owner_id)\n\n\ndef require_permission_or_owner(permission, object_owner_id):\n if not has_permission_or_owner(permission, object_owner_id):\n abort(403)\n\n\ndef require_admin_or_owner(object_owner_id):\n if not is_admin_or_owner(object_owner_id):\n abort(403, message=\"You don't have permission to edit this resource.\")\n\n\ndef can_modify(obj, user):\n return is_admin_or_owner(obj.user_id) or user.has_access(obj, ACCESS_TYPE_MODIFY)\n\n\ndef require_object_modify_permission(obj, user):\n if not can_modify(obj, user):\n abort(403)\n", "path": "redash/permissions.py"}], "after_files": [{"content": "import json\n\nfrom flask import request\n\nfrom redash import models\nfrom redash.handlers.base import BaseResource, get_object_or_404\nfrom redash.permissions import (require_admin_or_owner,\n require_object_modify_permission,\n require_permission)\n\n\nclass VisualizationListResource(BaseResource):\n @require_permission('edit_query')\n def post(self):\n kwargs = request.get_json(force=True)\n\n query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop('query_id'), self.current_org)\n 
require_object_modify_permission(query, self.current_user)\n\n kwargs['options'] = json.dumps(kwargs['options'])\n kwargs['query_rel'] = query\n\n vis = models.Visualization(**kwargs)\n models.db.session.add(vis)\n models.db.session.commit()\n d = vis.to_dict(with_query=False)\n return d\n\n\nclass VisualizationResource(BaseResource):\n @require_permission('edit_query')\n def post(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)\n require_object_modify_permission(vis.query_rel, self.current_user)\n\n kwargs = request.get_json(force=True)\n if 'options' in kwargs:\n kwargs['options'] = json.dumps(kwargs['options'])\n\n kwargs.pop('id', None)\n kwargs.pop('query_id', None)\n\n self.update_model(vis, kwargs)\n d = vis.to_dict(with_query=False)\n models.db.session.commit()\n return d\n\n @require_permission('edit_query')\n def delete(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)\n require_object_modify_permission(vis.query_rel, self.current_user)\n models.db.session.delete(vis)\n models.db.session.commit()\n", "path": "redash/handlers/visualizations.py"}, {"content": "import functools\n\nfrom flask_login import current_user\nfrom flask_restful import abort\nfrom funcy import flatten\n\nview_only = True\nnot_view_only = False\n\nACCESS_TYPE_VIEW = 'view'\nACCESS_TYPE_MODIFY = 'modify'\nACCESS_TYPE_DELETE = 'delete'\n\nACCESS_TYPES = (ACCESS_TYPE_VIEW, ACCESS_TYPE_MODIFY, ACCESS_TYPE_DELETE)\n\n\ndef has_access(object_groups, user, need_view_only):\n if 'admin' in user.permissions:\n return True\n\n matching_groups = set(object_groups.keys()).intersection(user.group_ids)\n\n if not matching_groups:\n return False\n\n required_level = 1 if need_view_only else 2\n\n group_level = 1 if all(flatten([object_groups[group] for group in matching_groups])) else 2\n\n return required_level <= group_level\n\n\ndef require_access(object_groups, user, need_view_only):\n if not has_access(object_groups, user, need_view_only):\n abort(403)\n\n\nclass require_permissions(object):\n def __init__(self, permissions):\n self.permissions = permissions\n\n def __call__(self, fn):\n @functools.wraps(fn)\n def decorated(*args, **kwargs):\n has_permissions = current_user.has_permissions(self.permissions)\n\n if has_permissions:\n return fn(*args, **kwargs)\n else:\n abort(403)\n\n return decorated\n\n\ndef require_permission(permission):\n return require_permissions((permission,))\n\n\ndef require_admin(fn):\n return require_permission('admin')(fn)\n\n\ndef require_super_admin(fn):\n return require_permission('super_admin')(fn)\n\n\ndef has_permission_or_owner(permission, object_owner_id):\n return int(object_owner_id) == current_user.id or current_user.has_permission(permission)\n\n\ndef is_admin_or_owner(object_owner_id):\n return has_permission_or_owner('admin', object_owner_id)\n\n\ndef require_permission_or_owner(permission, object_owner_id):\n if not has_permission_or_owner(permission, object_owner_id):\n abort(403)\n\n\ndef require_admin_or_owner(object_owner_id):\n if not is_admin_or_owner(object_owner_id):\n abort(403, message=\"You don't have permission to edit this resource.\")\n\n\ndef can_modify(obj, user):\n return is_admin_or_owner(obj.user_id) or user.has_access(obj, ACCESS_TYPE_MODIFY)\n\n\ndef require_object_modify_permission(obj, user):\n if not can_modify(obj, user):\n abort(403)\n", "path": "redash/permissions.py"}]} | 1,581 | 496 |
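The patch above swaps the owner-only guard (`require_admin_or_owner`) for the group-aware `require_object_modify_permission` on the visualization endpoints. A simplified, self-contained sketch of the distinction; the `User` and `Query` classes here are toy stand-ins, not Redash's models:

```python
class Query:
    def __init__(self, owner_id):
        self.user_id = owner_id

class User:
    def __init__(self, uid, is_admin=False, granted=()):
        self.id = uid
        self.is_admin = is_admin
        self.granted = set(granted)  # objects shared with this user

    def has_access(self, obj, access_type):
        # Stand-in for Redash's AccessPermission lookup.
        return access_type == 'modify' and obj in self.granted

def is_admin_or_owner(user, owner_id):
    return user.is_admin or user.id == owner_id

def can_modify(obj, user):
    # Owner/admin OR any user the object was explicitly shared with.
    return is_admin_or_owner(user, obj.user_id) or user.has_access(obj, 'modify')

q = Query(owner_id=1)
alice = User(2, granted=[q])                     # owner granted permission
assert not is_admin_or_owner(alice, q.user_id)   # old check: still blocked
assert can_modify(q, alice)                      # new check: allowed
```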
gh_patches_debug_3019 | rasdani/github-patches | git_diff | rucio__rucio-4790 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix setup_webui script
Motivation
----------
The script has an incorrect import and needs to be fixed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup_webui.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015-2021 CERN
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16 # Authors:
17 # - Vincent Garonne <[email protected]>, 2015-2017
18 # - Martin Barisits <[email protected]>, 2016-2021
19 # - Benedikt Ziemons <[email protected]>, 2021
20
21 import os
22 import sys
23
24 from setuptools import setup
25
26
27 if sys.version_info < (3, 6):
28 print('ERROR: Rucio WebUI requires at least Python 3.6 to run.')
29 sys.exit(1)
30
31 try:
32 from setuputil import get_rucio_version
33 except ImportError:
34 sys.path.append(os.path.abspath(os.path.dirname(__file__)))
35 from setuputil import get_rucio_version
36
37 name = 'rucio-webui'
38 packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']
39 data_files = []
40 description = "Rucio WebUI Package"
41
42 setup(
43 name=name,
44 version=get_rucio_version(),
45 packages=packages,
46 package_dir={'': 'lib'},
47 data_files=None,
48 include_package_data=True,
49 scripts=None,
50 author="Rucio",
51 author_email="[email protected]",
52 description=description,
53 license="Apache License, Version 2.0",
54 url="https://rucio.cern.ch/",
55 python_requires=">=3.6, <4",
56 classifiers=[
57 'Development Status :: 5 - Production/Stable',
58 'License :: OSI Approved :: Apache Software License',
59 'Intended Audience :: Information Technology',
60 'Intended Audience :: System Administrators',
61 'Operating System :: POSIX :: Linux',
62 'Natural Language :: English',
63 'Programming Language :: Python',
64 'Programming Language :: Python :: 3',
65 'Programming Language :: Python :: 3.6',
66 'Programming Language :: Python :: 3.7',
67 'Programming Language :: Python :: 3.8',
68 'Programming Language :: Python :: 3.9',
69 'Environment :: No Input/Output (Daemon)', ],
70 install_requires=['rucio>=1.2.5', ],
71 )
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup_webui.py b/setup_webui.py
--- a/setup_webui.py
+++ b/setup_webui.py
@@ -35,7 +35,7 @@
from setuputil import get_rucio_version
name = 'rucio-webui'
-packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']
+packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.ui.flask.common']
data_files = []
description = "Rucio WebUI Package"
| {"golden_diff": "diff --git a/setup_webui.py b/setup_webui.py\n--- a/setup_webui.py\n+++ b/setup_webui.py\n@@ -35,7 +35,7 @@\n from setuputil import get_rucio_version\n \n name = 'rucio-webui'\n-packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']\n+packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.ui.flask.common']\n data_files = []\n description = \"Rucio WebUI Package\"\n", "issue": "Fix setup_webui script\nMotivation\r\n----------\r\nScript has a wrong import, needs to be fixed.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2021 CERN\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Vincent Garonne <[email protected]>, 2015-2017\n# - Martin Barisits <[email protected]>, 2016-2021\n# - Benedikt Ziemons <[email protected]>, 2021\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nif sys.version_info < (3, 6):\n print('ERROR: Rucio WebUI requires at least Python 3.6 to run.')\n sys.exit(1)\n\ntry:\n from setuputil import get_rucio_version\nexcept ImportError:\n sys.path.append(os.path.abspath(os.path.dirname(__file__)))\n from setuputil import get_rucio_version\n\nname = 'rucio-webui'\npackages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']\ndata_files = []\ndescription = \"Rucio WebUI Package\"\n\nsetup(\n name=name,\n version=get_rucio_version(),\n packages=packages,\n package_dir={'': 'lib'},\n data_files=None,\n include_package_data=True,\n scripts=None,\n author=\"Rucio\",\n author_email=\"[email protected]\",\n description=description,\n license=\"Apache License, Version 2.0\",\n url=\"https://rucio.cern.ch/\",\n python_requires=\">=3.6, <4\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: System Administrators',\n 'Operating System :: POSIX :: Linux',\n 'Natural Language :: English',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Environment :: No Input/Output (Daemon)', ],\n install_requires=['rucio>=1.2.5', ],\n)\n", "path": "setup_webui.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2021 CERN\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# 
limitations under the License.\n#\n# Authors:\n# - Vincent Garonne <[email protected]>, 2015-2017\n# - Martin Barisits <[email protected]>, 2016-2021\n# - Benedikt Ziemons <[email protected]>, 2021\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nif sys.version_info < (3, 6):\n print('ERROR: Rucio WebUI requires at least Python 3.6 to run.')\n sys.exit(1)\n\ntry:\n from setuputil import get_rucio_version\nexcept ImportError:\n sys.path.append(os.path.abspath(os.path.dirname(__file__)))\n from setuputil import get_rucio_version\n\nname = 'rucio-webui'\npackages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.ui.flask.common']\ndata_files = []\ndescription = \"Rucio WebUI Package\"\n\nsetup(\n name=name,\n version=get_rucio_version(),\n packages=packages,\n package_dir={'': 'lib'},\n data_files=None,\n include_package_data=True,\n scripts=None,\n author=\"Rucio\",\n author_email=\"[email protected]\",\n description=description,\n license=\"Apache License, Version 2.0\",\n url=\"https://rucio.cern.ch/\",\n python_requires=\">=3.6, <4\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: System Administrators',\n 'Operating System :: POSIX :: Linux',\n 'Natural Language :: English',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Environment :: No Input/Output (Daemon)', ],\n install_requires=['rucio>=1.2.5', ],\n)\n", "path": "setup_webui.py"}]} | 1,045 | 141 |
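The fix above corrects a typo'd package path (`rucio.web.flask.common` → `rucio.web.ui.flask.common`) in the `packages` list. A small hypothetical helper — not part of Rucio — that would catch this class of mistake before release by checking each declared package against the `lib/` tree:

```python
import os

def check_packages(packages, package_dir='lib'):
    """Return declared packages whose directory is missing on disk --
    the kind of typo that setup() itself won't surface until install time."""
    missing = []
    for pkg in packages:
        path = os.path.join(package_dir, *pkg.split('.'), '__init__.py')
        if not os.path.exists(path):
            missing.append(pkg)
    return missing

# On a real checkout, only the pre-fix entry would be reported:
# check_packages(['rucio.web.ui.flask.common', 'rucio.web.flask.common'])
# -> ['rucio.web.flask.common']
```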
gh_patches_debug_26431 | rasdani/github-patches | git_diff | tensorflow__tfx-91 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Import errors when trying to run Chicago Taxi on Dataflow
As in issue [#47](https://github.com/tensorflow/tfx/issues/47), I still have a problem running CTE on Dataflow. When I use the code with no modifications, the error from the previous issue persists; it seems the `try-except` around the imports doesn't do its job.
When I changed the code to include only the relative import in my fork [here](https://github.com/mwalenia/tfx/tree/import-fix), the problem disappeared, but another one manifested.
This time, there's a problem with importing `estimator` from tensorflow somewhere in the dependencies. Stacktrace:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 773, in run
self._load_main_session(self.local_staging_directory)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 489, in _load_main_session
pickler.load_session(session_file)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 269, in load_session
return dill.load_session(file_path)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 410, in load_session
module = unpickler.load()
File "/usr/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1139, in load_reduce
value = func(*args)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 828, in _import_module
return getattr(__import__(module, None, None, [obj]), obj)
File "/usr/local/lib/python2.7/dist-packages/trainer/taxi.py", line 19, in <module>
from tensorflow_transform import coders as tft_coders
File "/usr/local/lib/python2.7/dist-packages/tensorflow_transform/__init__.py", line 19, in <module>
from tensorflow_transform.analyzers import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow_transform/analyzers.py", line 39, in <module>
from tensorflow_transform import tf_utils
File "/usr/local/lib/python2.7/dist-packages/tensorflow_transform/tf_utils.py", line 24, in <module>
from tensorflow.contrib.proto.python.ops import encode_proto_op
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/__init__.py", line 48, in <module>
from tensorflow.contrib import distribute
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/distribute/__init__.py", line 34, in <module>
from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/distribute/python/tpu_strategy.py", line 27, in <module>
from tensorflow.contrib.tpu.python.ops import tpu_ops
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/__init__.py", line 73, in <module>
from tensorflow.contrib.tpu.python.tpu.keras_support import tpu_model as keras_to_tpu_model
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py", line 71, in <module>
from tensorflow.python.estimator import model_fn as model_fn_lib
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/__init__.py", line 25, in <module>
import tensorflow.python.estimator.estimator_lib
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator_lib.py", line 22, in <module>
from tensorflow.python.estimator.canned.baseline import BaselineClassifier
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/canned/baseline.py", line 50, in <module>
from tensorflow.python.estimator import estimator
ImportError: cannot import name estimator
```
Is there anything I can do to fix this?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tfx/examples/chicago_taxi/setup.py`
Content:
```
1 # Copyright 2019 Google LLC. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Setup dependencies for local and cloud deployment."""
15 import setuptools
16
17 # LINT.IfChange
18 TF_VERSION = '1.12.0'
19 # LINT.ThenChange(train_mlengine.sh, start_model_server_mlengine.sh)
20
21 # LINT.IfChange
22 BEAM_VERSION = '2.11.0'
23 # LINT.ThenChange(setup_beam_on_flink.sh)
24
25 if __name__ == '__main__':
26 setuptools.setup(
27 name='tfx_chicago_taxi',
28 version='0.12.0',
29 packages=setuptools.find_packages(),
30 install_requires=[
31 'apache-beam[gcp]==' + BEAM_VERSION,
32 'jupyter==1.0',
33 'numpy==1.14.5',
34 'protobuf==3.6.1',
35 'tensorflow==' + TF_VERSION,
36 'tensorflow-data-validation==0.12.0',
37 'tensorflow-metadata==0.12.1',
38 'tensorflow-model-analysis==0.12.1',
39 'tensorflow-serving-api==1.12.0',
40 'tensorflow-transform==0.12.0',
41 ],
42 python_requires='>=2.7,<3')
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tfx/examples/chicago_taxi/setup.py b/tfx/examples/chicago_taxi/setup.py
--- a/tfx/examples/chicago_taxi/setup.py
+++ b/tfx/examples/chicago_taxi/setup.py
@@ -15,28 +15,29 @@
import setuptools
# LINT.IfChange
-TF_VERSION = '1.12.0'
+TF_VERSION = '1.13.1'
# LINT.ThenChange(train_mlengine.sh, start_model_server_mlengine.sh)
# LINT.IfChange
-BEAM_VERSION = '2.11.0'
+BEAM_VERSION = '2.12.0'
# LINT.ThenChange(setup_beam_on_flink.sh)
if __name__ == '__main__':
setuptools.setup(
name='tfx_chicago_taxi',
- version='0.12.0',
+ version='0.13.0',
packages=setuptools.find_packages(),
install_requires=[
- 'apache-beam[gcp]==' + BEAM_VERSION,
- 'jupyter==1.0',
- 'numpy==1.14.5',
- 'protobuf==3.6.1',
- 'tensorflow==' + TF_VERSION,
- 'tensorflow-data-validation==0.12.0',
- 'tensorflow-metadata==0.12.1',
- 'tensorflow-model-analysis==0.12.1',
- 'tensorflow-serving-api==1.12.0',
- 'tensorflow-transform==0.12.0',
+ 'apache-beam[gcp]>=' + BEAM_VERSION,
+ 'jupyter>=1.0,<2',
+ 'notebook>=5.7.8,<5.8',
+ 'numpy>=1.14.5,<2',
+ 'protobuf>=3.7.0,<3.8.0',
+ 'tensorflow>=' + TF_VERSION,
+ 'tensorflow-data-validation>=0.13.1,<0.14',
+ 'tensorflow-metadata>=0.13.1,<0.14',
+ 'tensorflow-model-analysis>=0.13.2,<0.14',
+ 'tensorflow-serving-api>=1.13.0,<1.14',
+ 'tensorflow-transform>=0.13.0,<0.14',
],
- python_requires='>=2.7,<3')
+ python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,<4',)
| {"golden_diff": "diff --git a/tfx/examples/chicago_taxi/setup.py b/tfx/examples/chicago_taxi/setup.py\n--- a/tfx/examples/chicago_taxi/setup.py\n+++ b/tfx/examples/chicago_taxi/setup.py\n@@ -15,28 +15,29 @@\n import setuptools\n \n # LINT.IfChange\n-TF_VERSION = '1.12.0'\n+TF_VERSION = '1.13.1'\n # LINT.ThenChange(train_mlengine.sh, start_model_server_mlengine.sh)\n \n # LINT.IfChange\n-BEAM_VERSION = '2.11.0'\n+BEAM_VERSION = '2.12.0'\n # LINT.ThenChange(setup_beam_on_flink.sh)\n \n if __name__ == '__main__':\n setuptools.setup(\n name='tfx_chicago_taxi',\n- version='0.12.0',\n+ version='0.13.0',\n packages=setuptools.find_packages(),\n install_requires=[\n- 'apache-beam[gcp]==' + BEAM_VERSION,\n- 'jupyter==1.0',\n- 'numpy==1.14.5',\n- 'protobuf==3.6.1',\n- 'tensorflow==' + TF_VERSION,\n- 'tensorflow-data-validation==0.12.0',\n- 'tensorflow-metadata==0.12.1',\n- 'tensorflow-model-analysis==0.12.1',\n- 'tensorflow-serving-api==1.12.0',\n- 'tensorflow-transform==0.12.0',\n+ 'apache-beam[gcp]>=' + BEAM_VERSION,\n+ 'jupyter>=1.0,<2',\n+ 'notebook>=5.7.8,<5.8',\n+ 'numpy>=1.14.5,<2',\n+ 'protobuf>=3.7.0,<3.8.0',\n+ 'tensorflow>=' + TF_VERSION,\n+ 'tensorflow-data-validation>=0.13.1,<0.14',\n+ 'tensorflow-metadata>=0.13.1,<0.14',\n+ 'tensorflow-model-analysis>=0.13.2,<0.14',\n+ 'tensorflow-serving-api>=1.13.0,<1.14',\n+ 'tensorflow-transform>=0.13.0,<0.14',\n ],\n- python_requires='>=2.7,<3')\n+ python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,<4',)\n", "issue": "Import errors when trying to run Chicago Taxi on Dataflow\nSimilarly as in issue [#47](https://github.com/tensorflow/tfx/issues/47), I still have a problem with running CTE on Dataflow. When I use the code with no modifications, the error from previous issue persists - it seems that somehow the `try-except` around the imports doesn't do its job.\r\n\r\nWhen I changed the code to include only the relative import in my fork [here](https://github.com/mwalenia/tfx/tree/import-fix), the problem disappeared, but another one manifested.\r\n\r\nThis time, there's a problem with importing `estimator` from tensorflow somewhere in the dependencies. 
Stacktrace:\r\n\r\n```Traceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py\", line 773, in run\r\n self._load_main_session(self.local_staging_directory)\r\n File \"/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py\", line 489, in _load_main_session\r\n pickler.load_session(session_file)\r\n File \"/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py\", line 269, in load_session\r\n return dill.load_session(file_path)\r\n File \"/usr/local/lib/python2.7/dist-packages/dill/_dill.py\", line 410, in load_session\r\n module = unpickler.load()\r\n File \"/usr/lib/python2.7/pickle.py\", line 864, in load\r\n dispatch[key](self)\r\n File \"/usr/lib/python2.7/pickle.py\", line 1139, in load_reduce\r\n value = func(*args)\r\n File \"/usr/local/lib/python2.7/dist-packages/dill/_dill.py\", line 828, in _import_module\r\n return getattr(__import__(module, None, None, [obj]), obj)\r\n File \"/usr/local/lib/python2.7/dist-packages/trainer/taxi.py\", line 19, in <module>\r\n from tensorflow_transform import coders as tft_coders\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow_transform/__init__.py\", line 19, in <module>\r\n from tensorflow_transform.analyzers import *\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow_transform/analyzers.py\", line 39, in <module>\r\n from tensorflow_transform import tf_utils\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow_transform/tf_utils.py\", line 24, in <module>\r\n from tensorflow.contrib.proto.python.ops import encode_proto_op\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/__init__.py\", line 48, in <module>\r\n from tensorflow.contrib import distribute\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/distribute/__init__.py\", line 34, in <module>\r\n from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/distribute/python/tpu_strategy.py\", line 27, in <module>\r\n from tensorflow.contrib.tpu.python.ops import tpu_ops\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/__init__.py\", line 73, in <module>\r\n from tensorflow.contrib.tpu.python.tpu.keras_support import tpu_model as keras_to_tpu_model\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py\", line 71, in <module>\r\n from tensorflow.python.estimator import model_fn as model_fn_lib\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/__init__.py\", line 25, in <module>\r\n import tensorflow.python.estimator.estimator_lib\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator_lib.py\", line 22, in <module>\r\n from tensorflow.python.estimator.canned.baseline import BaselineClassifier\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/canned/baseline.py\", line 50, in <module>\r\n from tensorflow.python.estimator import estimator\r\nImportError: cannot import name estimator\r\n```\r\n\r\nIs there anything I can do to fix this? \n", "before_files": [{"content": "# Copyright 2019 Google LLC. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Setup dependencies for local and cloud deployment.\"\"\"\nimport setuptools\n\n# LINT.IfChange\nTF_VERSION = '1.12.0'\n# LINT.ThenChange(train_mlengine.sh, start_model_server_mlengine.sh)\n\n# LINT.IfChange\nBEAM_VERSION = '2.11.0'\n# LINT.ThenChange(setup_beam_on_flink.sh)\n\nif __name__ == '__main__':\n setuptools.setup(\n name='tfx_chicago_taxi',\n version='0.12.0',\n packages=setuptools.find_packages(),\n install_requires=[\n 'apache-beam[gcp]==' + BEAM_VERSION,\n 'jupyter==1.0',\n 'numpy==1.14.5',\n 'protobuf==3.6.1',\n 'tensorflow==' + TF_VERSION,\n 'tensorflow-data-validation==0.12.0',\n 'tensorflow-metadata==0.12.1',\n 'tensorflow-model-analysis==0.12.1',\n 'tensorflow-serving-api==1.12.0',\n 'tensorflow-transform==0.12.0',\n ],\n python_requires='>=2.7,<3')\n", "path": "tfx/examples/chicago_taxi/setup.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Setup dependencies for local and cloud deployment.\"\"\"\nimport setuptools\n\n# LINT.IfChange\nTF_VERSION = '1.13.1'\n# LINT.ThenChange(train_mlengine.sh, start_model_server_mlengine.sh)\n\n# LINT.IfChange\nBEAM_VERSION = '2.12.0'\n# LINT.ThenChange(setup_beam_on_flink.sh)\n\nif __name__ == '__main__':\n setuptools.setup(\n name='tfx_chicago_taxi',\n version='0.13.0',\n packages=setuptools.find_packages(),\n install_requires=[\n 'apache-beam[gcp]>=' + BEAM_VERSION,\n 'jupyter>=1.0,<2',\n 'notebook>=5.7.8,<5.8',\n 'numpy>=1.14.5,<2',\n 'protobuf>=3.7.0,<3.8.0',\n 'tensorflow>=' + TF_VERSION,\n 'tensorflow-data-validation>=0.13.1,<0.14',\n 'tensorflow-metadata>=0.13.1,<0.14',\n 'tensorflow-model-analysis>=0.13.2,<0.14',\n 'tensorflow-serving-api>=1.13.0,<1.14',\n 'tensorflow-transform>=0.13.0,<0.14',\n ],\n python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,<4',)\n", "path": "tfx/examples/chicago_taxi/setup.py"}]} | 1,700 | 567 |
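Beyond bumping versions, the patch above relaxes exact `==` pins to bounded ranges (`>=`, `<`), which gives pip room to co-resolve the TensorFlow ecosystem packages that were clashing in the stack trace. A tiny illustration of the difference, assuming the `packaging` library:

```python
from packaging.specifiers import SpecifierSet

pinned = SpecifierSet('==1.12.0')
ranged = SpecifierSet('>=0.13.1,<0.14')

assert '1.12.1' not in pinned   # an exact pin rejects every patch release
assert '0.13.2' in ranged       # a bounded range lets pip pick a compatible set
```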
gh_patches_debug_9250 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-1852 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[RPM M1] Add a new block to call the generation code for RPM
Tasks | Estimate | Status | Notes | Dependencies
-- | -- | -- | -- | --
The generation code should pull the artifacts from the build workflow to a temporary location | 1 | Complete | | Build workflow must provide usable artifacts
The code will call existing install function to install plugins on min artifacts | 1 | Complete | |
After installation, the code will execute a tool or utility to wrap all the content into an RPM package | 5 | Complete | Requires writing a script that uses FPM to start with, to be reimplemented later in pure Python. <br><br>20220204: We might change to rpmbuild directly without using FPM. See comments. | FPM usages
The code will also add dependencies to the RPM package so that things like the JDK and additional libs for plugins can be installed and pulled separately | 5 | Complete | Need to study RPM dependency setup | RPM Build Dependencies and the locations of each dependent artifact
The code will move the RPM package from the temp location to dist folder | 2 | Complete | |
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/assemble_workflow/bundle_rpm.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import logging
8 import os
9 import shutil
10 import subprocess
11
12 from manifests.build_manifest import BuildManifest
13 from system.os import rpm_architecture
14
15
16 class BundleRpm:
17
18 def __init__(self, filename: str, package_path: str, min_path: str) -> None:
19 self.filename = filename
20 self.package_path = package_path
21 self.min_path = min_path
22
23 def extract(self, dest: str) -> None:
24 cpio_basename = os.path.splitext(os.path.basename(self.package_path))[0]
25 cpio_path = os.path.join(dest, f"{cpio_basename}.cpio")
26 min_source_path = os.path.join(dest, 'usr', 'share', self.filename)
27 min_dest_path = os.path.join(dest, self.min_path)
28 min_config_path = os.path.join(dest, 'etc', self.filename)
29 min_bin_env_path = os.path.join(min_dest_path, 'bin', f"{self.filename}-env")
30
31 # Convert rpm to cpio so we can extract the content
32 logging.info(f"Convert rpm to cpio for extraction: {self.package_path} to {cpio_path}")
33 with open(cpio_path, 'wb') as fp:
34 subprocess.check_call(
35 [
36 'rpm2cpio',
37 self.package_path,
38 ],
39 stdout=fp,
40 cwd=dest,
41 )
42
43 # Extract cpio archive based on the rpm package
44 logging.info(f"Extract cpio {cpio_path} content to {dest}")
45 with open(cpio_path, 'rb') as fp:
46 subprocess.check_call(
47 [
48 'cpio',
49 '-imdv',
50 ],
51 stdin=fp,
52 stdout=subprocess.DEVNULL,
53 stderr=subprocess.STDOUT,
54 cwd=dest,
55 )
56
57 # Move core folder destination so plugin install can proceed
58 logging.info(f"Move {min_source_path} to {min_dest_path} for plugin installation")
59 shutil.move(min_source_path, min_dest_path)
60
61 # Multiple modifications and env vars setups before install plugins
62 # As bin/opensearch-env is different between archive and package
63 # https://github.com/opensearch-project/OpenSearch/issues/2092
64 os.environ[f"{self.filename.upper()}_PATH_CONF"] = min_config_path
65
66 if os.path.exists(min_bin_env_path):
67 # Backup original file
68 shutil.copy2(min_bin_env_path, f"{min_bin_env_path}.backup")
69 # Prevent sourcing as file is only in place after rpm installation
70 # So that min can install plugin zips
71 # Have to decode then encode back to ascii due to mypy complains TextIO not equals to BinaryIO
72 with open(min_bin_env_path, 'rb') as fp:
73 min_bin_env_lines = fp.read().decode('ascii')
74
75 with open(min_bin_env_path, 'wb') as fp:
76 fp.write(min_bin_env_lines.replace('source', '#source').encode('ascii'))
77
78 def build(self, name: str, dest: str, archive_path: str, build_cls: BuildManifest.Build) -> None:
79 # extract dest and build dest are not the same, this is restoring the extract dest
80 # mainly due to rpm requires several different setups compares to tarball and zip
81 ext_dest = os.path.dirname(archive_path)
82 min_source_path = os.path.join(ext_dest, 'usr', 'share', self.filename)
83 min_dest_path = os.path.join(ext_dest, self.min_path)
84 min_bin_env_path = os.path.join(min_dest_path, 'bin', f"{self.filename}-env")
85 bundle_artifact_path: str = None
86
87 # Remove env var
88 logging.info('Organize folder structure before generating rpm')
89 os.environ.pop('OPENSEARCH_PATH_CONF', None)
90
91 # Restore config file and core folder to original location
92 shutil.move(f"{min_bin_env_path}.backup", min_bin_env_path)
93 shutil.move(min_dest_path, min_source_path)
94
95 # Run bundle rpmbuild
96 bundle_cmd = " ".join(
97 [
98 'rpmbuild',
99 '-bb',
100 f"--define '_topdir {ext_dest}'",
101 f"--define '_version {build_cls.version}'",
102 f"--define '_architecture {rpm_architecture(build_cls.architecture)}'",
103 f"{self.filename}.rpm.spec",
104 ]
105 )
106
107 logging.info(f"Execute {bundle_cmd} in {ext_dest}")
108 subprocess.check_call(bundle_cmd, cwd=ext_dest, shell=True)
109
110 # Move artifact to repo root before being published to {dest}
111 for dirpath, dirnames, filenames in os.walk(os.path.join(ext_dest, 'RPMS')):
112 for filename in [file for file in filenames if file.endswith('.rpm')]:
113 bundle_artifact_path = os.path.join(dirpath, filename)
114 break
115
116 shutil.move(bundle_artifact_path, name)
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/assemble_workflow/bundle_rpm.py b/src/assemble_workflow/bundle_rpm.py
--- a/src/assemble_workflow/bundle_rpm.py
+++ b/src/assemble_workflow/bundle_rpm.py
@@ -89,7 +89,10 @@
os.environ.pop('OPENSEARCH_PATH_CONF', None)
# Restore config file and core folder to original location
- shutil.move(f"{min_bin_env_path}.backup", min_bin_env_path)
+ if os.path.exists(f"{min_bin_env_path}.backup"):
+ logging.info(f"Restore {min_bin_env_path}.backup to {min_bin_env_path}")
+ shutil.move(f"{min_bin_env_path}.backup", min_bin_env_path)
+
shutil.move(min_dest_path, min_source_path)
# Run bundle rpmbuild
| {"golden_diff": "diff --git a/src/assemble_workflow/bundle_rpm.py b/src/assemble_workflow/bundle_rpm.py\n--- a/src/assemble_workflow/bundle_rpm.py\n+++ b/src/assemble_workflow/bundle_rpm.py\n@@ -89,7 +89,10 @@\n os.environ.pop('OPENSEARCH_PATH_CONF', None)\n \n # Restore config file and core folder to original location\n- shutil.move(f\"{min_bin_env_path}.backup\", min_bin_env_path)\n+ if os.path.exists(f\"{min_bin_env_path}.backup\"):\n+ logging.info(f\"Restore {min_bin_env_path}.backup to {min_bin_env_path}\")\n+ shutil.move(f\"{min_bin_env_path}.backup\", min_bin_env_path)\n+\n shutil.move(min_dest_path, min_source_path)\n \n # Run bundle rpmbuild\n", "issue": "[RPM M1] Add a new block to call the generation code for RPM\nTasks | Estimate | Status | Notes | Dependencies\r\n-- | -- | -- | -- | --\r\nThe generation code should pull the artifacts from the build workflow to a temporary location | 1 | Complete | \u00a0 | Build workflow must provide usable artifacts\r\nThe code will call existing install function to install plugins on min artifacts | 1 | Complete | \u00a0 | \u00a0\r\nAfter installation, the code will execute a tool or utility to wrap all the content into a RPM package | 5 | Complete | Require writing a script to utilize FPM to start with and later implement in pure python code. <br><br>20220204: We might change to rpmbuild directly without using FPM. See comments. | FPM usages\r\nThe code will also add dependencies to the RPM package so that things like JDK and additional libs for plugins can be installed and pulled separately | 5 | Complete | Need to study on RPM dependency setups | RPM Build Dependencies and the locations of each dependent artifact\r\nThe code will move the RPM package from the temp location to dist folder | 2 | Complete | \u00a0 | \u00a0\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\nimport os\nimport shutil\nimport subprocess\n\nfrom manifests.build_manifest import BuildManifest\nfrom system.os import rpm_architecture\n\n\nclass BundleRpm:\n\n def __init__(self, filename: str, package_path: str, min_path: str) -> None:\n self.filename = filename\n self.package_path = package_path\n self.min_path = min_path\n\n def extract(self, dest: str) -> None:\n cpio_basename = os.path.splitext(os.path.basename(self.package_path))[0]\n cpio_path = os.path.join(dest, f\"{cpio_basename}.cpio\")\n min_source_path = os.path.join(dest, 'usr', 'share', self.filename)\n min_dest_path = os.path.join(dest, self.min_path)\n min_config_path = os.path.join(dest, 'etc', self.filename)\n min_bin_env_path = os.path.join(min_dest_path, 'bin', f\"{self.filename}-env\")\n\n # Convert rpm to cpio so we can extract the content\n logging.info(f\"Convert rpm to cpio for extraction: {self.package_path} to {cpio_path}\")\n with open(cpio_path, 'wb') as fp:\n subprocess.check_call(\n [\n 'rpm2cpio',\n self.package_path,\n ],\n stdout=fp,\n cwd=dest,\n )\n\n # Extract cpio archive based on the rpm package\n logging.info(f\"Extract cpio {cpio_path} content to {dest}\")\n with open(cpio_path, 'rb') as fp:\n subprocess.check_call(\n [\n 'cpio',\n '-imdv',\n ],\n stdin=fp,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.STDOUT,\n cwd=dest,\n )\n\n # Move core folder destination so plugin install can proceed\n logging.info(f\"Move {min_source_path} to {min_dest_path} for plugin 
installation\")\n shutil.move(min_source_path, min_dest_path)\n\n # Multiple modifications and env vars setups before install plugins\n # As bin/opensearch-env is different between archive and package\n # https://github.com/opensearch-project/OpenSearch/issues/2092\n os.environ[f\"{self.filename.upper()}_PATH_CONF\"] = min_config_path\n\n if os.path.exists(min_bin_env_path):\n # Backup original file\n shutil.copy2(min_bin_env_path, f\"{min_bin_env_path}.backup\")\n # Prevent sourcing as file is only in place after rpm installation\n # So that min can install plugin zips\n # Have to decode then encode back to ascii due to mypy complains TextIO not equals to BinaryIO\n with open(min_bin_env_path, 'rb') as fp:\n min_bin_env_lines = fp.read().decode('ascii')\n\n with open(min_bin_env_path, 'wb') as fp:\n fp.write(min_bin_env_lines.replace('source', '#source').encode('ascii'))\n\n def build(self, name: str, dest: str, archive_path: str, build_cls: BuildManifest.Build) -> None:\n # extract dest and build dest are not the same, this is restoring the extract dest\n # mainly due to rpm requires several different setups compares to tarball and zip\n ext_dest = os.path.dirname(archive_path)\n min_source_path = os.path.join(ext_dest, 'usr', 'share', self.filename)\n min_dest_path = os.path.join(ext_dest, self.min_path)\n min_bin_env_path = os.path.join(min_dest_path, 'bin', f\"{self.filename}-env\")\n bundle_artifact_path: str = None\n\n # Remove env var\n logging.info('Organize folder structure before generating rpm')\n os.environ.pop('OPENSEARCH_PATH_CONF', None)\n\n # Restore config file and core folder to original location\n shutil.move(f\"{min_bin_env_path}.backup\", min_bin_env_path)\n shutil.move(min_dest_path, min_source_path)\n\n # Run bundle rpmbuild\n bundle_cmd = \" \".join(\n [\n 'rpmbuild',\n '-bb',\n f\"--define '_topdir {ext_dest}'\",\n f\"--define '_version {build_cls.version}'\",\n f\"--define '_architecture {rpm_architecture(build_cls.architecture)}'\",\n f\"{self.filename}.rpm.spec\",\n ]\n )\n\n logging.info(f\"Execute {bundle_cmd} in {ext_dest}\")\n subprocess.check_call(bundle_cmd, cwd=ext_dest, shell=True)\n\n # Move artifact to repo root before being published to {dest}\n for dirpath, dirnames, filenames in os.walk(os.path.join(ext_dest, 'RPMS')):\n for filename in [file for file in filenames if file.endswith('.rpm')]:\n bundle_artifact_path = os.path.join(dirpath, filename)\n break\n\n shutil.move(bundle_artifact_path, name)\n", "path": "src/assemble_workflow/bundle_rpm.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\nimport os\nimport shutil\nimport subprocess\n\nfrom manifests.build_manifest import BuildManifest\nfrom system.os import rpm_architecture\n\n\nclass BundleRpm:\n\n def __init__(self, filename: str, package_path: str, min_path: str) -> None:\n self.filename = filename\n self.package_path = package_path\n self.min_path = min_path\n\n def extract(self, dest: str) -> None:\n cpio_basename = os.path.splitext(os.path.basename(self.package_path))[0]\n cpio_path = os.path.join(dest, f\"{cpio_basename}.cpio\")\n min_source_path = os.path.join(dest, 'usr', 'share', self.filename)\n min_dest_path = os.path.join(dest, self.min_path)\n min_config_path = os.path.join(dest, 'etc', self.filename)\n min_bin_env_path = os.path.join(min_dest_path, 'bin', 
f\"{self.filename}-env\")\n\n # Convert rpm to cpio so we can extract the content\n logging.info(f\"Convert rpm to cpio for extraction: {self.package_path} to {cpio_path}\")\n with open(cpio_path, 'wb') as fp:\n subprocess.check_call(\n [\n 'rpm2cpio',\n self.package_path,\n ],\n stdout=fp,\n cwd=dest,\n )\n\n # Extract cpio archive based on the rpm package\n logging.info(f\"Extract cpio {cpio_path} content to {dest}\")\n with open(cpio_path, 'rb') as fp:\n subprocess.check_call(\n [\n 'cpio',\n '-imdv',\n ],\n stdin=fp,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.STDOUT,\n cwd=dest,\n )\n\n # Move core folder destination so plugin install can proceed\n logging.info(f\"Move {min_source_path} to {min_dest_path} for plugin installation\")\n shutil.move(min_source_path, min_dest_path)\n\n # Multiple modifications and env vars setups before install plugins\n # As bin/opensearch-env is different between archive and package\n # https://github.com/opensearch-project/OpenSearch/issues/2092\n os.environ[f\"{self.filename.upper()}_PATH_CONF\"] = min_config_path\n\n if os.path.exists(min_bin_env_path):\n # Backup original file\n shutil.copy2(min_bin_env_path, f\"{min_bin_env_path}.backup\")\n # Prevent sourcing as file is only in place after rpm installation\n # So that min can install plugin zips\n # Have to decode then encode back to ascii due to mypy complains TextIO not equals to BinaryIO\n with open(min_bin_env_path, 'rb') as fp:\n min_bin_env_lines = fp.read().decode('ascii')\n\n with open(min_bin_env_path, 'wb') as fp:\n fp.write(min_bin_env_lines.replace('source', '#source').encode('ascii'))\n\n def build(self, name: str, dest: str, archive_path: str, build_cls: BuildManifest.Build) -> None:\n # extract dest and build dest are not the same, this is restoring the extract dest\n # mainly due to rpm requires several different setups compares to tarball and zip\n ext_dest = os.path.dirname(archive_path)\n min_source_path = os.path.join(ext_dest, 'usr', 'share', self.filename)\n min_dest_path = os.path.join(ext_dest, self.min_path)\n min_bin_env_path = os.path.join(min_dest_path, 'bin', f\"{self.filename}-env\")\n bundle_artifact_path: str = None\n\n # Remove env var\n logging.info('Organize folder structure before generating rpm')\n os.environ.pop('OPENSEARCH_PATH_CONF', None)\n\n # Restore config file and core folder to original location\n if os.path.exists(f\"{min_bin_env_path}.backup\"):\n logging.info(f\"Restore {min_bin_env_path}.backup to {min_bin_env_path}\")\n shutil.move(f\"{min_bin_env_path}.backup\", min_bin_env_path)\n\n shutil.move(min_dest_path, min_source_path)\n\n # Run bundle rpmbuild\n bundle_cmd = \" \".join(\n [\n 'rpmbuild',\n '-bb',\n f\"--define '_topdir {ext_dest}'\",\n f\"--define '_version {build_cls.version}'\",\n f\"--define '_architecture {rpm_architecture(build_cls.architecture)}'\",\n f\"{self.filename}.rpm.spec\",\n ]\n )\n\n logging.info(f\"Execute {bundle_cmd} in {ext_dest}\")\n subprocess.check_call(bundle_cmd, cwd=ext_dest, shell=True)\n\n # Move artifact to repo root before being published to {dest}\n for dirpath, dirnames, filenames in os.walk(os.path.join(ext_dest, 'RPMS')):\n for filename in [file for file in filenames if file.endswith('.rpm')]:\n bundle_artifact_path = os.path.join(dirpath, filename)\n break\n\n shutil.move(bundle_artifact_path, name)\n", "path": "src/assemble_workflow/bundle_rpm.py"}]} | 1,842 | 180 |
gh_patches_debug_18748 | rasdani/github-patches | git_diff | microsoft__PubSec-Info-Assistant-356 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Text Enrichment function not quoting blob paths correctly
We have some files with percentage (%) symbols in them, which appear to cause an issue when getting to the Text Enrichment stage of the Function App due to the way the `get_blob_and_sas` function works. Example file name: `Unemployment rate back up to 3.7% in October _ Australian Bureau of Statistics.pdf`
I would suggest replacing the code that manually substitutes spaces (below) with a proper URL quoting function like `blob_path = urllib.parse.quote(blob_path)`
https://github.com/microsoft/PubSec-Info-Assistant/blob/7fa4561652211b023965d4522b2bfd7168af4060/functions/shared_code/utilities_helper.py#L52
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `functions/shared_code/utilities_helper.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT license.
3
4 import os
5 import logging
6 from datetime import datetime, timedelta
7 from azure.storage.blob import generate_blob_sas, BlobSasPermissions
8
9 class UtilitiesHelper:
10 """ Helper class for utility functions"""
11 def __init__(self,
12 azure_blob_storage_account,
13 azure_blob_storage_endpoint,
14 azure_blob_storage_key
15 ):
16 self.azure_blob_storage_account = azure_blob_storage_account
17 self.azure_blob_storage_endpoint = azure_blob_storage_endpoint
18 self.azure_blob_storage_key = azure_blob_storage_key
19
20 def get_filename_and_extension(self, path):
21 """ Function to return the file name & type"""
22 # Split the path into base and extension
23 base_name = os.path.basename(path)
24 segments = path.split("/")
25 directory = "/".join(segments[1:-1]) + "/"
26 if directory == "/":
27 directory = ""
28 file_name, file_extension = os.path.splitext(base_name)
29 return file_name, file_extension, directory
30
31 def get_blob_and_sas(self, blob_path):
32 """ Function to retrieve the uri and sas token for a given blob in azure storage"""
33
34 # Get path and file name minus the root container
35 separator = "/"
36 file_path_w_name_no_cont = separator.join(
37 blob_path.split(separator)[1:])
38
39 container_name = separator.join(
40 blob_path.split(separator)[0:1])
41
42 # Gen SAS token
43 sas_token = generate_blob_sas(
44 account_name=self.azure_blob_storage_account,
45 container_name=container_name,
46 blob_name=file_path_w_name_no_cont,
47 account_key=self.azure_blob_storage_key,
48 permission=BlobSasPermissions(read=True),
49 expiry=datetime.utcnow() + timedelta(hours=1)
50 )
51 source_blob_path = f'{self.azure_blob_storage_endpoint}{blob_path}?{sas_token}'
52 source_blob_path = source_blob_path.replace(" ", "%20")
53 logging.info("Path and SAS token for file in azure storage are now generated \n")
54 return source_blob_path
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/functions/shared_code/utilities_helper.py b/functions/shared_code/utilities_helper.py
--- a/functions/shared_code/utilities_helper.py
+++ b/functions/shared_code/utilities_helper.py
@@ -3,6 +3,7 @@
import os
import logging
+import urllib.parse
from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions
@@ -48,7 +49,7 @@
permission=BlobSasPermissions(read=True),
expiry=datetime.utcnow() + timedelta(hours=1)
)
+ blob_path = urllib.parse.quote(blob_path)
source_blob_path = f'{self.azure_blob_storage_endpoint}{blob_path}?{sas_token}'
- source_blob_path = source_blob_path.replace(" ", "%20")
logging.info("Path and SAS token for file in azure storage are now generated \n")
return source_blob_path
\ No newline at end of file
| {"golden_diff": "diff --git a/functions/shared_code/utilities_helper.py b/functions/shared_code/utilities_helper.py\n--- a/functions/shared_code/utilities_helper.py\n+++ b/functions/shared_code/utilities_helper.py\n@@ -3,6 +3,7 @@\n \n import os\n import logging\n+import urllib.parse\n from datetime import datetime, timedelta\n from azure.storage.blob import generate_blob_sas, BlobSasPermissions\n \n@@ -48,7 +49,7 @@\n permission=BlobSasPermissions(read=True),\n expiry=datetime.utcnow() + timedelta(hours=1)\n )\n+ blob_path = urllib.parse.quote(blob_path)\n source_blob_path = f'{self.azure_blob_storage_endpoint}{blob_path}?{sas_token}'\n- source_blob_path = source_blob_path.replace(\" \", \"%20\")\n logging.info(\"Path and SAS token for file in azure storage are now generated \\n\")\n return source_blob_path\n\\ No newline at end of file\n", "issue": "Text Enrichment function not quoting blob paths correctly\nWe have some files with percentage (%) symbols in them, which appear to cause an issue when getting to the Text Enrichment stage of the Function App due to the way the `get_blob_and_sas` function works. Example file name: `Unemployment rate back up to 3.7% in October _ Australian Bureau of Statistics.pdf`\r\n\r\nI would suggest replacing the code that manually substitutes spaces (below) with a proper URL quoting function like `blob_path = urllib.parse.quote(blob_path)`\r\n\r\nhttps://github.com/microsoft/PubSec-Info-Assistant/blob/7fa4561652211b023965d4522b2bfd7168af4060/functions/shared_code/utilities_helper.py#L52\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\nimport os\nimport logging\nfrom datetime import datetime, timedelta\nfrom azure.storage.blob import generate_blob_sas, BlobSasPermissions\n\nclass UtilitiesHelper:\n \"\"\" Helper class for utility functions\"\"\"\n def __init__(self,\n azure_blob_storage_account,\n azure_blob_storage_endpoint,\n azure_blob_storage_key\n ):\n self.azure_blob_storage_account = azure_blob_storage_account\n self.azure_blob_storage_endpoint = azure_blob_storage_endpoint\n self.azure_blob_storage_key = azure_blob_storage_key\n \n def get_filename_and_extension(self, path):\n \"\"\" Function to return the file name & type\"\"\"\n # Split the path into base and extension\n base_name = os.path.basename(path)\n segments = path.split(\"/\")\n directory = \"/\".join(segments[1:-1]) + \"/\"\n if directory == \"/\":\n directory = \"\"\n file_name, file_extension = os.path.splitext(base_name)\n return file_name, file_extension, directory\n \n def get_blob_and_sas(self, blob_path):\n \"\"\" Function to retrieve the uri and sas token for a given blob in azure storage\"\"\"\n\n # Get path and file name minus the root container\n separator = \"/\"\n file_path_w_name_no_cont = separator.join(\n blob_path.split(separator)[1:])\n \n container_name = separator.join(\n blob_path.split(separator)[0:1])\n\n # Gen SAS token\n sas_token = generate_blob_sas(\n account_name=self.azure_blob_storage_account,\n container_name=container_name,\n blob_name=file_path_w_name_no_cont,\n account_key=self.azure_blob_storage_key,\n permission=BlobSasPermissions(read=True),\n expiry=datetime.utcnow() + timedelta(hours=1)\n )\n source_blob_path = f'{self.azure_blob_storage_endpoint}{blob_path}?{sas_token}'\n source_blob_path = source_blob_path.replace(\" \", \"%20\")\n logging.info(\"Path and SAS token for file in azure storage are now generated \\n\")\n return source_blob_path", "path": 
"functions/shared_code/utilities_helper.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\nimport os\nimport logging\nimport urllib.parse\nfrom datetime import datetime, timedelta\nfrom azure.storage.blob import generate_blob_sas, BlobSasPermissions\n\nclass UtilitiesHelper:\n \"\"\" Helper class for utility functions\"\"\"\n def __init__(self,\n azure_blob_storage_account,\n azure_blob_storage_endpoint,\n azure_blob_storage_key\n ):\n self.azure_blob_storage_account = azure_blob_storage_account\n self.azure_blob_storage_endpoint = azure_blob_storage_endpoint\n self.azure_blob_storage_key = azure_blob_storage_key\n \n def get_filename_and_extension(self, path):\n \"\"\" Function to return the file name & type\"\"\"\n # Split the path into base and extension\n base_name = os.path.basename(path)\n segments = path.split(\"/\")\n directory = \"/\".join(segments[1:-1]) + \"/\"\n if directory == \"/\":\n directory = \"\"\n file_name, file_extension = os.path.splitext(base_name)\n return file_name, file_extension, directory\n \n def get_blob_and_sas(self, blob_path):\n \"\"\" Function to retrieve the uri and sas token for a given blob in azure storage\"\"\"\n\n # Get path and file name minus the root container\n separator = \"/\"\n file_path_w_name_no_cont = separator.join(\n blob_path.split(separator)[1:])\n \n container_name = separator.join(\n blob_path.split(separator)[0:1])\n\n # Gen SAS token\n sas_token = generate_blob_sas(\n account_name=self.azure_blob_storage_account,\n container_name=container_name,\n blob_name=file_path_w_name_no_cont,\n account_key=self.azure_blob_storage_key,\n permission=BlobSasPermissions(read=True),\n expiry=datetime.utcnow() + timedelta(hours=1)\n )\n blob_path = urllib.parse.quote(blob_path)\n source_blob_path = f'{self.azure_blob_storage_endpoint}{blob_path}?{sas_token}'\n logging.info(\"Path and SAS token for file in azure storage are now generated \\n\")\n return source_blob_path", "path": "functions/shared_code/utilities_helper.py"}]} | 975 | 200 |
gh_patches_debug_27874 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-7673 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Any way to filter on tags for Cognito identity-pool or user-pool?
### Discussed in https://github.com/orgs/cloud-custodian/discussions/7616
<div type='discussions-op-text'>
<sup>Originally posted by **stepkirk** August 5, 2022</sup>
We normally enforce tags on AWS resources by using Custodian to look for certain required tags on a resource and then, if the tags don't exist or aren't in the correct format, we mark the resource for deletion after a certain grace period. With the Cognito identity-pool and user-pool resources, it doesn't look like we can check for tags the normal way and it doesn't look like we can mark a resource for later deletion. Is that true or am I missing something?
Any plans to add tagging/marking support in the future for these Cognito resources?</div>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/resources/cognito.py`
Content:
```
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3 from botocore.exceptions import ClientError
4
5 from c7n.actions import BaseAction
6 from c7n.manager import resources
7 from c7n.query import QueryResourceManager, TypeInfo
8 from c7n.utils import local_session, type_schema
9
10
11 @resources.register('identity-pool')
12 class CognitoIdentityPool(QueryResourceManager):
13
14 class resource_type(TypeInfo):
15 service = 'cognito-identity'
16 enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})
17 detail_spec = (
18 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)
19 id = 'IdentityPoolId'
20 name = 'IdentityPoolName'
21 arn_type = "identitypool"
22 cfn_type = 'AWS::Cognito::IdentityPool'
23
24
25 @CognitoIdentityPool.action_registry.register('delete')
26 class DeleteIdentityPool(BaseAction):
27 """Action to delete cognito identity pool
28
29 It is recommended to use a filter to avoid unwanted deletion of pools
30
31 :example:
32
33 .. code-block:: yaml
34
35 policies:
36 - name: identity-pool-delete
37 resource: identity-pool
38 actions:
39 - delete
40 """
41
42 schema = type_schema('delete')
43 permissions = ("cognito-identity:DeleteIdentityPool",)
44
45 def process(self, pools):
46 with self.executor_factory(max_workers=2) as w:
47 list(w.map(self.process_pool, pools))
48
49 def process_pool(self, pool):
50 client = local_session(
51 self.manager.session_factory).client('cognito-identity')
52 try:
53 client.delete_identity_pool(IdentityPoolId=pool['IdentityPoolId'])
54 except ClientError as e:
55 self.log.exception(
56 "Exception deleting identity pool:\n %s" % e)
57
58
59 @resources.register('user-pool')
60 class CognitoUserPool(QueryResourceManager):
61
62 class resource_type(TypeInfo):
63 service = "cognito-idp"
64 enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})
65 detail_spec = (
66 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')
67 id = 'Id'
68 name = 'Name'
69 arn_type = "userpool"
70 cfn_type = 'AWS::Cognito::UserPool'
71
72
73 @CognitoUserPool.action_registry.register('delete')
74 class DeleteUserPool(BaseAction):
75 """Action to delete cognito user pool
76
77 It is recommended to use a filter to avoid unwanted deletion of pools
78
79 :example:
80
81 .. code-block:: yaml
82
83 policies:
84 - name: user-pool-delete
85 resource: user-pool
86 actions:
87 - delete
88 """
89
90 schema = type_schema('delete')
91 permissions = ("cognito-idp:DeleteUserPool",)
92
93 def process(self, pools):
94 with self.executor_factory(max_workers=2) as w:
95 list(w.map(self.process_pool, pools))
96
97 def process_pool(self, pool):
98 client = local_session(
99 self.manager.session_factory).client('cognito-idp')
100 try:
101 client.delete_user_pool(UserPoolId=pool['Id'])
102 except ClientError as e:
103 self.log.exception(
104 "Exception deleting user pool:\n %s" % e)
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/c7n/resources/cognito.py b/c7n/resources/cognito.py
--- a/c7n/resources/cognito.py
+++ b/c7n/resources/cognito.py
@@ -4,10 +4,21 @@
from c7n.actions import BaseAction
from c7n.manager import resources
-from c7n.query import QueryResourceManager, TypeInfo
+from c7n.query import QueryResourceManager, TypeInfo, DescribeSource
+from c7n.tags import universal_augment
from c7n.utils import local_session, type_schema
+class DescribeIdentityPool(DescribeSource):
+ def augment(self, resources):
+ return universal_augment(self.manager, resources)
+
+
+class DescribeUserPool(DescribeSource):
+ def augment(self, resources):
+ return universal_augment(self.manager, resources)
+
+
@resources.register('identity-pool')
class CognitoIdentityPool(QueryResourceManager):
@@ -20,6 +31,11 @@
name = 'IdentityPoolName'
arn_type = "identitypool"
cfn_type = 'AWS::Cognito::IdentityPool'
+ universal_taggable = object()
+
+ source_mapping = {
+ 'describe': DescribeIdentityPool,
+ }
@CognitoIdentityPool.action_registry.register('delete')
@@ -69,6 +85,10 @@
arn_type = "userpool"
cfn_type = 'AWS::Cognito::UserPool'
+ source_mapping = {
+ 'describe': DescribeUserPool,
+ }
+
@CognitoUserPool.action_registry.register('delete')
class DeleteUserPool(BaseAction):
| {"golden_diff": "diff --git a/c7n/resources/cognito.py b/c7n/resources/cognito.py\n--- a/c7n/resources/cognito.py\n+++ b/c7n/resources/cognito.py\n@@ -4,10 +4,21 @@\n \n from c7n.actions import BaseAction\n from c7n.manager import resources\n-from c7n.query import QueryResourceManager, TypeInfo\n+from c7n.query import QueryResourceManager, TypeInfo, DescribeSource\n+from c7n.tags import universal_augment\n from c7n.utils import local_session, type_schema\n \n \n+class DescribeIdentityPool(DescribeSource):\n+ def augment(self, resources):\n+ return universal_augment(self.manager, resources)\n+\n+\n+class DescribeUserPool(DescribeSource):\n+ def augment(self, resources):\n+ return universal_augment(self.manager, resources)\n+\n+\n @resources.register('identity-pool')\n class CognitoIdentityPool(QueryResourceManager):\n \n@@ -20,6 +31,11 @@\n name = 'IdentityPoolName'\n arn_type = \"identitypool\"\n cfn_type = 'AWS::Cognito::IdentityPool'\n+ universal_taggable = object()\n+\n+ source_mapping = {\n+ 'describe': DescribeIdentityPool,\n+ }\n \n \n @CognitoIdentityPool.action_registry.register('delete')\n@@ -69,6 +85,10 @@\n arn_type = \"userpool\"\n cfn_type = 'AWS::Cognito::UserPool'\n \n+ source_mapping = {\n+ 'describe': DescribeUserPool,\n+ }\n+\n \n @CognitoUserPool.action_registry.register('delete')\n class DeleteUserPool(BaseAction):\n", "issue": "Any way to filter on tags for Cognito identity-pool or user-pool?\n### Discussed in https://github.com/orgs/cloud-custodian/discussions/7616\r\n\r\n<div type='discussions-op-text'>\r\n\r\n<sup>Originally posted by **stepkirk** August 5, 2022</sup>\r\nWe normally enforce tags on AWS resources by using Custodian to look for certain required tags on a resource and then, if the tags don't exist or aren't in the correct format, we mark the resource for deletion after a certain grace period. With the Cognito identity-pool and user-pool resources, it doesn't look like we can check for tags the normal way and it doesn't look like we can mark a resource for later deletion. Is that true or am I missing something?\r\n\r\nAny plans to add tagging/marking support in the future for these Cognito resources?</div>\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nfrom botocore.exceptions import ClientError\n\nfrom c7n.actions import BaseAction\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager, TypeInfo\nfrom c7n.utils import local_session, type_schema\n\n\[email protected]('identity-pool')\nclass CognitoIdentityPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'cognito-identity'\n enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)\n id = 'IdentityPoolId'\n name = 'IdentityPoolName'\n arn_type = \"identitypool\"\n cfn_type = 'AWS::Cognito::IdentityPool'\n\n\[email protected]_registry.register('delete')\nclass DeleteIdentityPool(BaseAction):\n \"\"\"Action to delete cognito identity pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: identity-pool-delete\n resource: identity-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-identity:DeleteIdentityPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-identity')\n try:\n client.delete_identity_pool(IdentityPoolId=pool['IdentityPoolId'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting identity pool:\\n %s\" % e)\n\n\[email protected]('user-pool')\nclass CognitoUserPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"cognito-idp\"\n enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')\n id = 'Id'\n name = 'Name'\n arn_type = \"userpool\"\n cfn_type = 'AWS::Cognito::UserPool'\n\n\[email protected]_registry.register('delete')\nclass DeleteUserPool(BaseAction):\n \"\"\"Action to delete cognito user pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: user-pool-delete\n resource: user-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-idp:DeleteUserPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-idp')\n try:\n client.delete_user_pool(UserPoolId=pool['Id'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting user pool:\\n %s\" % e)\n", "path": "c7n/resources/cognito.py"}], "after_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nfrom botocore.exceptions import ClientError\n\nfrom c7n.actions import BaseAction\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager, TypeInfo, DescribeSource\nfrom c7n.tags import universal_augment\nfrom c7n.utils import local_session, type_schema\n\n\nclass DescribeIdentityPool(DescribeSource):\n def augment(self, resources):\n return universal_augment(self.manager, resources)\n\n\nclass DescribeUserPool(DescribeSource):\n def augment(self, resources):\n return universal_augment(self.manager, resources)\n\n\[email protected]('identity-pool')\nclass CognitoIdentityPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'cognito-identity'\n enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)\n id = 'IdentityPoolId'\n name = 'IdentityPoolName'\n arn_type = \"identitypool\"\n cfn_type = 'AWS::Cognito::IdentityPool'\n universal_taggable = object()\n\n source_mapping = {\n 'describe': DescribeIdentityPool,\n }\n\n\[email protected]_registry.register('delete')\nclass DeleteIdentityPool(BaseAction):\n \"\"\"Action to delete cognito identity pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: identity-pool-delete\n resource: identity-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-identity:DeleteIdentityPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-identity')\n try:\n client.delete_identity_pool(IdentityPoolId=pool['IdentityPoolId'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting identity pool:\\n %s\" % e)\n\n\[email protected]('user-pool')\nclass CognitoUserPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"cognito-idp\"\n enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')\n id = 'Id'\n name = 'Name'\n arn_type = \"userpool\"\n cfn_type = 'AWS::Cognito::UserPool'\n\n source_mapping = {\n 'describe': DescribeUserPool,\n }\n\n\[email protected]_registry.register('delete')\nclass DeleteUserPool(BaseAction):\n \"\"\"Action to delete cognito user pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: user-pool-delete\n resource: user-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-idp:DeleteUserPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-idp')\n try:\n client.delete_user_pool(UserPoolId=pool['Id'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting user pool:\\n %s\" % e)\n", "path": "c7n/resources/cognito.py"}]} | 1,391 | 355 |
gh_patches_debug_200 | rasdani/github-patches | git_diff | scrapy__scrapy-1566 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
signals docs are confusing
It seems it is not explained how to connect a callback to a singnal anywhere in Scrapy docs.
http://doc.scrapy.org/en/latest/topics/signals.html tells:
> You can connect to signals (or send your own) through the [Signals API](http://doc.scrapy.org/en/latest/topics/api.html#topics-api-signals).
But if you follow this link you get docs for scrapy.signalmanager.SignalManager - that's fine, but it is not explained where to get a SignalManager instance from.
There is an example in Extension docs (http://doc.scrapy.org/en/latest/topics/extensions.html#sample-extension), but
a) this is just an example;
b) it is not explained that crawler.signals is a SignalManager instance;
c) this example is neither in Signals docs nor in SignalManager docs.
There is also a bit of information here: http://doc.scrapy.org/en/latest/topics/api.html#scrapy.crawler.Crawler.signals, but
a) it is not linked to neither from Signal docs nor from SignalManager, so you can't find it if you don't know about it already;
b) it is not explained that crawler.signals is the only way to access signals.
So in the end users may get some luck connecting signals if they start from Crawler docs, but almost no luck if they start from Signals docs.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/misc.py`
Content:
```
1 """Helper functions which doesn't fit anywhere else"""
2 import re
3 import hashlib
4 from importlib import import_module
5 from pkgutil import iter_modules
6
7 import six
8 from w3lib.html import replace_entities
9
10 from scrapy.utils.python import flatten, to_unicode
11 from scrapy.item import BaseItem
12
13
14 _ITERABLE_SINGLE_VALUES = dict, BaseItem, six.text_type, bytes
15
16
17 def arg_to_iter(arg):
18 """Convert an argument to an iterable. The argument can be a None, single
19 value, or an iterable.
20
21 Exception: if arg is a dict, [arg] will be returned
22 """
23 if arg is None:
24 return []
25 elif not isinstance(arg, _ITERABLE_SINGLE_VALUES) and hasattr(arg, '__iter__'):
26 return arg
27 else:
28 return [arg]
29
30
31 def load_object(path):
32 """Load an object given its absolute object path, and return it.
33
34 object can be a class, function, variable or an instance.
35 path ie: 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware'
36 """
37
38 try:
39 dot = path.rindex('.')
40 except ValueError:
41 raise ValueError("Error loading object '%s': not a full path" % path)
42
43 module, name = path[:dot], path[dot+1:]
44 mod = import_module(module)
45
46 try:
47 obj = getattr(mod, name)
48 except AttributeError:
49 raise NameError("Module '%s' doesn't define any object named '%s'" % (module, name))
50
51 return obj
52
53
54 def walk_modules(path):
55 """Loads a module and all its submodules from the given module path and
56 returns them. If *any* module throws an exception while importing, that
57 exception is thrown back.
58
59 For example: walk_modules('scrapy.utils')
60 """
61
62 mods = []
63 mod = import_module(path)
64 mods.append(mod)
65 if hasattr(mod, '__path__'):
66 for _, subpath, ispkg in iter_modules(mod.__path__):
67 fullpath = path + '.' + subpath
68 if ispkg:
69 mods += walk_modules(fullpath)
70 else:
71 submod = import_module(fullpath)
72 mods.append(submod)
73 return mods
74
75
76 def extract_regex(regex, text, encoding='utf-8'):
77 """Extract a list of unicode strings from the given text/encoding using the following policies:
78
79 * if the regex contains a named group called "extract" that will be returned
80 * if the regex contains multiple numbered groups, all those will be returned (flattened)
81 * if the regex doesn't contain any group the entire regex matching is returned
82 """
83
84 if isinstance(regex, six.string_types):
85 regex = re.compile(regex, re.UNICODE)
86
87 try:
88 strings = [regex.search(text).group('extract')] # named group
89 except:
90 strings = regex.findall(text) # full regex or numbered groups
91 strings = flatten(strings)
92
93 if isinstance(text, six.text_type):
94 return [replace_entities(s, keep=['lt', 'amp']) for s in strings]
95 else:
96 return [replace_entities(to_unicode(s, encoding), keep=['lt', 'amp'])
97 for s in strings]
98
99
100 def md5sum(file):
101 """Calculate the md5 checksum of a file-like object without reading its
102 whole content in memory.
103
104 >>> from io import BytesIO
105 >>> md5sum(BytesIO(b'file content to hash'))
106 '784406af91dd5a54fbb9c84c2236595a'
107 """
108 m = hashlib.md5()
109 while True:
110 d = file.read(8096)
111 if not d:
112 break
113 m.update(d)
114 return m.hexdigest()
115
116 def rel_has_nofollow(rel):
117 """Return True if link rel attribute has nofollow type"""
118 return True if rel is not None and 'nofollow' in rel.split() else False
119
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/utils/misc.py b/scrapy/utils/misc.py
--- a/scrapy/utils/misc.py
+++ b/scrapy/utils/misc.py
@@ -1,4 +1,4 @@
-"""Helper functions which doesn't fit anywhere else"""
+"""Helper functions which don't fit anywhere else"""
import re
import hashlib
from importlib import import_module
| {"golden_diff": "diff --git a/scrapy/utils/misc.py b/scrapy/utils/misc.py\n--- a/scrapy/utils/misc.py\n+++ b/scrapy/utils/misc.py\n@@ -1,4 +1,4 @@\n-\"\"\"Helper functions which doesn't fit anywhere else\"\"\"\n+\"\"\"Helper functions which don't fit anywhere else\"\"\"\n import re\n import hashlib\n from importlib import import_module\n", "issue": "signals docs are confusing\nIt seems it is not explained how to connect a callback to a singnal anywhere in Scrapy docs.\n\nhttp://doc.scrapy.org/en/latest/topics/signals.html tells:\n\n> You can connect to signals (or send your own) through the [Signals API](http://doc.scrapy.org/en/latest/topics/api.html#topics-api-signals).\n\nBut if you follow this link you get docs for scrapy.signalmanager.SignalManager - that's fine, but it is not explained where to get a SignalManager instance from.\n\nThere is an example in Extension docs (http://doc.scrapy.org/en/latest/topics/extensions.html#sample-extension), but\n\na) this is just an example;\nb) it is not explained that crawler.signals is a SignalManager instance;\nc) this example is neither in Signals docs nor in SignalManager docs.\n\nThere is also a bit of information here: http://doc.scrapy.org/en/latest/topics/api.html#scrapy.crawler.Crawler.signals, but\n\na) it is not linked to neither from Signal docs nor from SignalManager, so you can't find it if you don't know about it already;\nb) it is not explained that crawler.signals is the only way to access signals.\n\nSo in the end users may get some luck connecting signals if they start from Crawler docs, but almost no luck if they start from Signals docs.\n\n", "before_files": [{"content": "\"\"\"Helper functions which doesn't fit anywhere else\"\"\"\nimport re\nimport hashlib\nfrom importlib import import_module\nfrom pkgutil import iter_modules\n\nimport six\nfrom w3lib.html import replace_entities\n\nfrom scrapy.utils.python import flatten, to_unicode\nfrom scrapy.item import BaseItem\n\n\n_ITERABLE_SINGLE_VALUES = dict, BaseItem, six.text_type, bytes\n\n\ndef arg_to_iter(arg):\n \"\"\"Convert an argument to an iterable. The argument can be a None, single\n value, or an iterable.\n\n Exception: if arg is a dict, [arg] will be returned\n \"\"\"\n if arg is None:\n return []\n elif not isinstance(arg, _ITERABLE_SINGLE_VALUES) and hasattr(arg, '__iter__'):\n return arg\n else:\n return [arg]\n\n\ndef load_object(path):\n \"\"\"Load an object given its absolute object path, and return it.\n\n object can be a class, function, variable or an instance.\n path ie: 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware'\n \"\"\"\n\n try:\n dot = path.rindex('.')\n except ValueError:\n raise ValueError(\"Error loading object '%s': not a full path\" % path)\n\n module, name = path[:dot], path[dot+1:]\n mod = import_module(module)\n\n try:\n obj = getattr(mod, name)\n except AttributeError:\n raise NameError(\"Module '%s' doesn't define any object named '%s'\" % (module, name))\n\n return obj\n\n\ndef walk_modules(path):\n \"\"\"Loads a module and all its submodules from the given module path and\n returns them. If *any* module throws an exception while importing, that\n exception is thrown back.\n\n For example: walk_modules('scrapy.utils')\n \"\"\"\n\n mods = []\n mod = import_module(path)\n mods.append(mod)\n if hasattr(mod, '__path__'):\n for _, subpath, ispkg in iter_modules(mod.__path__):\n fullpath = path + '.' 
+ subpath\n if ispkg:\n mods += walk_modules(fullpath)\n else:\n submod = import_module(fullpath)\n mods.append(submod)\n return mods\n\n\ndef extract_regex(regex, text, encoding='utf-8'):\n \"\"\"Extract a list of unicode strings from the given text/encoding using the following policies:\n\n * if the regex contains a named group called \"extract\" that will be returned\n * if the regex contains multiple numbered groups, all those will be returned (flattened)\n * if the regex doesn't contain any group the entire regex matching is returned\n \"\"\"\n\n if isinstance(regex, six.string_types):\n regex = re.compile(regex, re.UNICODE)\n\n try:\n strings = [regex.search(text).group('extract')] # named group\n except:\n strings = regex.findall(text) # full regex or numbered groups\n strings = flatten(strings)\n\n if isinstance(text, six.text_type):\n return [replace_entities(s, keep=['lt', 'amp']) for s in strings]\n else:\n return [replace_entities(to_unicode(s, encoding), keep=['lt', 'amp'])\n for s in strings]\n\n\ndef md5sum(file):\n \"\"\"Calculate the md5 checksum of a file-like object without reading its\n whole content in memory.\n\n >>> from io import BytesIO\n >>> md5sum(BytesIO(b'file content to hash'))\n '784406af91dd5a54fbb9c84c2236595a'\n \"\"\"\n m = hashlib.md5()\n while True:\n d = file.read(8096)\n if not d:\n break\n m.update(d)\n return m.hexdigest()\n\ndef rel_has_nofollow(rel):\n \"\"\"Return True if link rel attribute has nofollow type\"\"\"\n return True if rel is not None and 'nofollow' in rel.split() else False\n \n", "path": "scrapy/utils/misc.py"}], "after_files": [{"content": "\"\"\"Helper functions which don't fit anywhere else\"\"\"\nimport re\nimport hashlib\nfrom importlib import import_module\nfrom pkgutil import iter_modules\n\nimport six\nfrom w3lib.html import replace_entities\n\nfrom scrapy.utils.python import flatten, to_unicode\nfrom scrapy.item import BaseItem\n\n\n_ITERABLE_SINGLE_VALUES = dict, BaseItem, six.text_type, bytes\n\n\ndef arg_to_iter(arg):\n \"\"\"Convert an argument to an iterable. The argument can be a None, single\n value, or an iterable.\n\n Exception: if arg is a dict, [arg] will be returned\n \"\"\"\n if arg is None:\n return []\n elif not isinstance(arg, _ITERABLE_SINGLE_VALUES) and hasattr(arg, '__iter__'):\n return arg\n else:\n return [arg]\n\n\ndef load_object(path):\n \"\"\"Load an object given its absolute object path, and return it.\n\n object can be a class, function, variable o instance.\n path ie: 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware'\n \"\"\"\n\n try:\n dot = path.rindex('.')\n except ValueError:\n raise ValueError(\"Error loading object '%s': not a full path\" % path)\n\n module, name = path[:dot], path[dot+1:]\n mod = import_module(module)\n\n try:\n obj = getattr(mod, name)\n except AttributeError:\n raise NameError(\"Module '%s' doesn't define any object named '%s'\" % (module, name))\n\n return obj\n\n\ndef walk_modules(path):\n \"\"\"Loads a module and all its submodules from a the given module path and\n returns them. If *any* module throws an exception while importing, that\n exception is thrown back.\n\n For example: walk_modules('scrapy.utils')\n \"\"\"\n\n mods = []\n mod = import_module(path)\n mods.append(mod)\n if hasattr(mod, '__path__'):\n for _, subpath, ispkg in iter_modules(mod.__path__):\n fullpath = path + '.' 
+ subpath\n if ispkg:\n mods += walk_modules(fullpath)\n else:\n submod = import_module(fullpath)\n mods.append(submod)\n return mods\n\n\ndef extract_regex(regex, text, encoding='utf-8'):\n \"\"\"Extract a list of unicode strings from the given text/encoding using the following policies:\n\n * if the regex contains a named group called \"extract\" that will be returned\n * if the regex contains multiple numbered groups, all those will be returned (flattened)\n * if the regex doesn't contain any group the entire regex matching is returned\n \"\"\"\n\n if isinstance(regex, six.string_types):\n regex = re.compile(regex, re.UNICODE)\n\n try:\n strings = [regex.search(text).group('extract')] # named group\n except:\n strings = regex.findall(text) # full regex or numbered groups\n strings = flatten(strings)\n\n if isinstance(text, six.text_type):\n return [replace_entities(s, keep=['lt', 'amp']) for s in strings]\n else:\n return [replace_entities(to_unicode(s, encoding), keep=['lt', 'amp'])\n for s in strings]\n\n\ndef md5sum(file):\n \"\"\"Calculate the md5 checksum of a file-like object without reading its\n whole content in memory.\n\n >>> from io import BytesIO\n >>> md5sum(BytesIO(b'file content to hash'))\n '784406af91dd5a54fbb9c84c2236595a'\n \"\"\"\n m = hashlib.md5()\n while True:\n d = file.read(8096)\n if not d:\n break\n m.update(d)\n return m.hexdigest()\n\ndef rel_has_nofollow(rel):\n \"\"\"Return True if link rel attribute has nofollow type\"\"\"\n return True if rel is not None and 'nofollow' in rel.split() else False\n \n", "path": "scrapy/utils/misc.py"}]} | 1,652 | 77 |
gh_patches_debug_29045 | rasdani/github-patches | git_diff | litestar-org__litestar-2864 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: OpenAPI schema generation fails due to same operation IDs
### Description
If two routes with the same path, but different methods are defined then the OpenAPI generation fails due to both of them having the same value for operation ID. After running `git bisect`, #2805 seems to have introduced this.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import get, post
from litestar.app import Litestar
from litestar.testing import create_test_client
@post("/")
async def post_handler() -> None:
...
@get("/")
async def get_handler() -> None:
...
with create_test_client([post_handler, get_handler]) as client:
response = client.get("/schema/openapi.json")
assert response.status_code == 200
```
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
HEAD
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
---
> [!NOTE]
> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and
> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.
>
> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)
> * If you would like to see an issue prioritized, make a pledge towards it!
> * We receive the pledge once the issue is completed & verified
> * This, along with engagement in the community, helps us know which features are a priority to our users.
<a href="https://polar.sh/litestar-org/litestar/issues/2863">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/2863/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/2863/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/_openapi/plugin.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from litestar._openapi.datastructures import OpenAPIContext
6 from litestar._openapi.path_item import create_path_item_for_route
7 from litestar.exceptions import ImproperlyConfiguredException
8 from litestar.plugins import InitPluginProtocol
9 from litestar.plugins.base import ReceiveRoutePlugin
10 from litestar.routes import HTTPRoute
11
12 if TYPE_CHECKING:
13 from litestar.app import Litestar
14 from litestar.config.app import AppConfig
15 from litestar.openapi.config import OpenAPIConfig
16 from litestar.openapi.spec import OpenAPI
17 from litestar.routes import BaseRoute
18
19
20 class OpenAPIPlugin(InitPluginProtocol, ReceiveRoutePlugin):
21 __slots__ = (
22 "app",
23 "included_routes",
24 "_openapi_config",
25 "_openapi_schema",
26 )
27
28 def __init__(self, app: Litestar) -> None:
29 self.app = app
30 self.included_routes: list[HTTPRoute] = []
31 self._openapi_config: OpenAPIConfig | None = None
32 self._openapi_schema: OpenAPI | None = None
33
34 def _build_openapi_schema(self) -> OpenAPI:
35 openapi = self.openapi_config.to_openapi_schema()
36 context = OpenAPIContext(openapi_config=self.openapi_config, plugins=self.app.plugins.openapi)
37 openapi.paths = {
38 route.path_format or "/": create_path_item_for_route(context, route) for route in self.included_routes
39 }
40 openapi.components.schemas = context.schema_registry.generate_components_schemas()
41 return openapi
42
43 def provide_openapi(self) -> OpenAPI:
44 if not self._openapi_schema:
45 self._openapi_schema = self._build_openapi_schema()
46 return self._openapi_schema
47
48 def on_app_init(self, app_config: AppConfig) -> AppConfig:
49 if app_config.openapi_config:
50 self._openapi_config = app_config.openapi_config
51 app_config.route_handlers.append(self.openapi_config.openapi_controller)
52 return app_config
53
54 @property
55 def openapi_config(self) -> OpenAPIConfig:
56 if not self._openapi_config:
57 raise ImproperlyConfiguredException("OpenAPIConfig not initialized")
58 return self._openapi_config
59
60 def receive_route(self, route: BaseRoute) -> None:
61 if not isinstance(route, HTTPRoute):
62 return
63
64 if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):
65 # Force recompute the schema if a new route is added
66 self._openapi_schema = None
67 self.included_routes.append(route)
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/_openapi/plugin.py b/litestar/_openapi/plugin.py
--- a/litestar/_openapi/plugin.py
+++ b/litestar/_openapi/plugin.py
@@ -27,7 +27,7 @@
def __init__(self, app: Litestar) -> None:
self.app = app
- self.included_routes: list[HTTPRoute] = []
+ self.included_routes: dict[str, HTTPRoute] = {}
self._openapi_config: OpenAPIConfig | None = None
self._openapi_schema: OpenAPI | None = None
@@ -35,7 +35,8 @@
openapi = self.openapi_config.to_openapi_schema()
context = OpenAPIContext(openapi_config=self.openapi_config, plugins=self.app.plugins.openapi)
openapi.paths = {
- route.path_format or "/": create_path_item_for_route(context, route) for route in self.included_routes
+ route.path_format or "/": create_path_item_for_route(context, route)
+ for route in self.included_routes.values()
}
openapi.components.schemas = context.schema_registry.generate_components_schemas()
return openapi
@@ -64,4 +65,4 @@
if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):
# Force recompute the schema if a new route is added
self._openapi_schema = None
- self.included_routes.append(route)
+ self.included_routes[route.path] = route
| {"golden_diff": "diff --git a/litestar/_openapi/plugin.py b/litestar/_openapi/plugin.py\n--- a/litestar/_openapi/plugin.py\n+++ b/litestar/_openapi/plugin.py\n@@ -27,7 +27,7 @@\n \n def __init__(self, app: Litestar) -> None:\n self.app = app\n- self.included_routes: list[HTTPRoute] = []\n+ self.included_routes: dict[str, HTTPRoute] = {}\n self._openapi_config: OpenAPIConfig | None = None\n self._openapi_schema: OpenAPI | None = None\n \n@@ -35,7 +35,8 @@\n openapi = self.openapi_config.to_openapi_schema()\n context = OpenAPIContext(openapi_config=self.openapi_config, plugins=self.app.plugins.openapi)\n openapi.paths = {\n- route.path_format or \"/\": create_path_item_for_route(context, route) for route in self.included_routes\n+ route.path_format or \"/\": create_path_item_for_route(context, route)\n+ for route in self.included_routes.values()\n }\n openapi.components.schemas = context.schema_registry.generate_components_schemas()\n return openapi\n@@ -64,4 +65,4 @@\n if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):\n # Force recompute the schema if a new route is added\n self._openapi_schema = None\n- self.included_routes.append(route)\n+ self.included_routes[route.path] = route\n", "issue": "Bug: OpenAPI schema generation fails due to same operation IDs\n### Description\n\nIf two routes with the same path, but different methods are defined then the OpenAPI generation fails due to both of them having the same value for operation ID. After running `git bisect`, #2805 seems to have introduced this.\n\n### URL to code causing the issue\n\n_No response_\n\n### MCVE\n\n```python\nfrom litestar import get, post\r\nfrom litestar.app import Litestar\r\nfrom litestar.testing import create_test_client\r\n\r\n\r\n@post(\"/\")\r\nasync def post_handler() -> None:\r\n ...\r\n\r\n\r\n@get(\"/\")\r\nasync def get_handler() -> None:\r\n ...\r\n\r\n\r\nwith create_test_client([post_handler, get_handler]) as client:\r\n response = client.get(\"/schema/openapi.json\")\r\n\r\n assert response.status_code == 200\n```\n\n\n### Steps to reproduce\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Logs\n\n_No response_\n\n### Litestar Version\n\nHEAD\n\n### Platform\n\n- [ ] Linux\n- [ ] Mac\n- [ ] Windows\n- [ ] Other (Please specify in the description above)\n\n<!-- POLAR PLEDGE BADGE START -->\n---\n> [!NOTE] \n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\n>\n> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)\n> * If you would like to see an issue prioritized, make a pledge towards it!\n> * We receive the pledge once the issue is completed & verified\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\n\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2863\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2863/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2863/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom 
litestar._openapi.datastructures import OpenAPIContext\nfrom litestar._openapi.path_item import create_path_item_for_route\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.plugins import InitPluginProtocol\nfrom litestar.plugins.base import ReceiveRoutePlugin\nfrom litestar.routes import HTTPRoute\n\nif TYPE_CHECKING:\n from litestar.app import Litestar\n from litestar.config.app import AppConfig\n from litestar.openapi.config import OpenAPIConfig\n from litestar.openapi.spec import OpenAPI\n from litestar.routes import BaseRoute\n\n\nclass OpenAPIPlugin(InitPluginProtocol, ReceiveRoutePlugin):\n __slots__ = (\n \"app\",\n \"included_routes\",\n \"_openapi_config\",\n \"_openapi_schema\",\n )\n\n def __init__(self, app: Litestar) -> None:\n self.app = app\n self.included_routes: list[HTTPRoute] = []\n self._openapi_config: OpenAPIConfig | None = None\n self._openapi_schema: OpenAPI | None = None\n\n def _build_openapi_schema(self) -> OpenAPI:\n openapi = self.openapi_config.to_openapi_schema()\n context = OpenAPIContext(openapi_config=self.openapi_config, plugins=self.app.plugins.openapi)\n openapi.paths = {\n route.path_format or \"/\": create_path_item_for_route(context, route) for route in self.included_routes\n }\n openapi.components.schemas = context.schema_registry.generate_components_schemas()\n return openapi\n\n def provide_openapi(self) -> OpenAPI:\n if not self._openapi_schema:\n self._openapi_schema = self._build_openapi_schema()\n return self._openapi_schema\n\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n if app_config.openapi_config:\n self._openapi_config = app_config.openapi_config\n app_config.route_handlers.append(self.openapi_config.openapi_controller)\n return app_config\n\n @property\n def openapi_config(self) -> OpenAPIConfig:\n if not self._openapi_config:\n raise ImproperlyConfiguredException(\"OpenAPIConfig not initialized\")\n return self._openapi_config\n\n def receive_route(self, route: BaseRoute) -> None:\n if not isinstance(route, HTTPRoute):\n return\n\n if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):\n # Force recompute the schema if a new route is added\n self._openapi_schema = None\n self.included_routes.append(route)\n", "path": "litestar/_openapi/plugin.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom litestar._openapi.datastructures import OpenAPIContext\nfrom litestar._openapi.path_item import create_path_item_for_route\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.plugins import InitPluginProtocol\nfrom litestar.plugins.base import ReceiveRoutePlugin\nfrom litestar.routes import HTTPRoute\n\nif TYPE_CHECKING:\n from litestar.app import Litestar\n from litestar.config.app import AppConfig\n from litestar.openapi.config import OpenAPIConfig\n from litestar.openapi.spec import OpenAPI\n from litestar.routes import BaseRoute\n\n\nclass OpenAPIPlugin(InitPluginProtocol, ReceiveRoutePlugin):\n __slots__ = (\n \"app\",\n \"included_routes\",\n \"_openapi_config\",\n \"_openapi_schema\",\n )\n\n def __init__(self, app: Litestar) -> None:\n self.app = app\n self.included_routes: dict[str, HTTPRoute] = {}\n self._openapi_config: OpenAPIConfig | None = None\n self._openapi_schema: OpenAPI | None = None\n\n def _build_openapi_schema(self) -> OpenAPI:\n openapi = self.openapi_config.to_openapi_schema()\n context = OpenAPIContext(openapi_config=self.openapi_config, 
plugins=self.app.plugins.openapi)\n openapi.paths = {\n route.path_format or \"/\": create_path_item_for_route(context, route)\n for route in self.included_routes.values()\n }\n openapi.components.schemas = context.schema_registry.generate_components_schemas()\n return openapi\n\n def provide_openapi(self) -> OpenAPI:\n if not self._openapi_schema:\n self._openapi_schema = self._build_openapi_schema()\n return self._openapi_schema\n\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n if app_config.openapi_config:\n self._openapi_config = app_config.openapi_config\n app_config.route_handlers.append(self.openapi_config.openapi_controller)\n return app_config\n\n @property\n def openapi_config(self) -> OpenAPIConfig:\n if not self._openapi_config:\n raise ImproperlyConfiguredException(\"OpenAPIConfig not initialized\")\n return self._openapi_config\n\n def receive_route(self, route: BaseRoute) -> None:\n if not isinstance(route, HTTPRoute):\n return\n\n if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):\n # Force recompute the schema if a new route is added\n self._openapi_schema = None\n self.included_routes[route.path] = route\n", "path": "litestar/_openapi/plugin.py"}]} | 1,478 | 339 |
gh_patches_debug_3371 | rasdani/github-patches | git_diff | e2nIEE__pandapower-1661 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pandapower.networks: nets have wrong order of columns
Example for net = nw.case24_ieee_rts():
```python
net.bus.head()
Out[43]:
in_service max_vm_pu min_vm_pu name type vn_kv zone
0 True 1.1 0.9 a b 138.0 1.0
1 True 1.1 0.9 b b 138.0 1.0
2 True 1.1 0.9 c b 138.0 1.0
3 True 1.1 0.9 d b 138.0 1.0
4 True 1.1 0.9 e b 138.0 1.0
```
--- END ISSUE ---
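For context, the report above can be reproduced with a short session (a minimal sketch, assuming pandapower is installed and imported as `nw`, as in the issue):

```python
import pandapower.networks as nw

net = nw.case24_ieee_rts()
# The bus table comes back with alphabetically sorted columns
# (in_service, max_vm_pu, ...) instead of the order in which
# pandapower creates them.
print(net.bus.head())
```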
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright (c) 2016-2022 by University of Kassel and Fraunhofer Institute for Energy Economics
4 # and Energy System Technology (IEE), Kassel. All rights reserved.
5
6 from setuptools import setup, find_packages
7 import re
8
9 with open('README.rst', 'rb') as f:
10 install = f.read().decode('utf-8')
11
12 with open('CHANGELOG.rst', 'rb') as f:
13 changelog = f.read().decode('utf-8')
14
15 classifiers = [
16 'Development Status :: 5 - Production/Stable',
17 'Environment :: Console',
18 'Intended Audience :: Developers',
19 'Intended Audience :: Education',
20 'Intended Audience :: Science/Research',
21 'License :: OSI Approved :: BSD License',
22 'Natural Language :: English',
23 'Operating System :: OS Independent',
24 'Programming Language :: Python',
25 'Programming Language :: Python :: 3']
26
27 with open('.github/workflows/github_test_action.yml', 'rb') as f:
28 lines = f.read().decode('utf-8')
29 versions = set(re.findall('3.[7-9]', lines)) | set(re.findall('3.1[0-9]', lines))
30 for version in sorted(versions):
31 classifiers.append('Programming Language :: Python :: %s' % version)
32
33 long_description = '\n\n'.join((install, changelog))
34
35 setup(
36 name='pandapower',
37 version='2.10.1',
38 author='Leon Thurner, Alexander Scheidler',
39 author_email='[email protected], [email protected]',
40 description='An easy to use open source tool for power system modeling, analysis and optimization with a high degree of automation.',
41 long_description=long_description,
42 long_description_content_type='text/x-rst',
43 url='http://www.pandapower.org',
44 license='BSD',
45 install_requires=["pandas>=1.0",
46 "networkx>=2.5",
47 "scipy",
48 "numpy>=0.11",
49 "packaging",
50 "tqdm",
51 "deepdiff"],
52 extras_require={
53 "docs": ["numpydoc", "sphinx", "sphinx_rtd_theme"],
54 "plotting": ["plotly", "matplotlib", "python-igraph", "geopandas"],
55 # "shapely", "pyproj" are depedencies of geopandas and so already available;
56 # "base64", "hashlib", "zlib" produce installing problems, so they are not included
57 "test": ["pytest", "pytest-xdist"],
58 "performance": ["ortools"], # , "lightsim2grid"],
59 "fileio": ["xlsxwriter", "openpyxl", "cryptography", "geopandas"],
60 # "fiona" is a depedency of geopandas and so already available
61 "converter": ["matpowercaseframes"],
62 "all": ["numpydoc", "sphinx", "sphinx_rtd_theme",
63 "plotly", "matplotlib", "python-igraph", "geopandas",
64 "pytest", "pytest-xdist",
65 "ortools", # lightsim2grid,
66 "xlsxwriter", "openpyxl", "cryptography",
67 "matpowercaseframes"
68 ]}, # "shapely", "pyproj", "fiona" are depedencies of geopandas and so already available
69 # "hashlib", "zlib", "base64" produce installing problems, so it is not included
70 packages=find_packages(),
71 include_package_data=True,
72 classifiers=classifiers
73 )
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,7 +45,7 @@
install_requires=["pandas>=1.0",
"networkx>=2.5",
"scipy",
- "numpy>=0.11",
+ "numpy",
"packaging",
"tqdm",
"deepdiff"],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,7 +45,7 @@\n install_requires=[\"pandas>=1.0\",\n \"networkx>=2.5\",\n \"scipy\",\n- \"numpy>=0.11\",\n+ \"numpy\",\n \"packaging\",\n \"tqdm\",\n \"deepdiff\"],\n", "issue": "pandapower.networks: nets have wrong order of columns\nExample for net = nw.case24_ieee_rts():\r\n\r\n```python\r\nnet.bus.head()\r\nOut[43]: \r\n in_service max_vm_pu min_vm_pu name type vn_kv zone\r\n0 True 1.1 0.9 a b 138.0 1.0\r\n1 True 1.1 0.9 b b 138.0 1.0\r\n2 True 1.1 0.9 c b 138.0 1.0\r\n3 True 1.1 0.9 d b 138.0 1.0\r\n4 True 1.1 0.9 e b 138.0 1.0\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2022 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. All rights reserved.\n\nfrom setuptools import setup, find_packages\nimport re\n\nwith open('README.rst', 'rb') as f:\n install = f.read().decode('utf-8')\n\nwith open('CHANGELOG.rst', 'rb') as f:\n changelog = f.read().decode('utf-8')\n\nclassifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3']\n\nwith open('.github/workflows/github_test_action.yml', 'rb') as f:\n lines = f.read().decode('utf-8')\n versions = set(re.findall('3.[7-9]', lines)) | set(re.findall('3.1[0-9]', lines))\n for version in sorted(versions):\n classifiers.append('Programming Language :: Python :: %s' % version)\n\nlong_description = '\\n\\n'.join((install, changelog))\n\nsetup(\n name='pandapower',\n version='2.10.1',\n author='Leon Thurner, Alexander Scheidler',\n author_email='[email protected], [email protected]',\n description='An easy to use open source tool for power system modeling, analysis and optimization with a high degree of automation.',\n long_description=long_description,\n long_description_content_type='text/x-rst',\n url='http://www.pandapower.org',\n license='BSD',\n install_requires=[\"pandas>=1.0\",\n \"networkx>=2.5\",\n \"scipy\",\n \"numpy>=0.11\",\n \"packaging\",\n \"tqdm\",\n \"deepdiff\"],\n extras_require={\n \"docs\": [\"numpydoc\", \"sphinx\", \"sphinx_rtd_theme\"],\n \"plotting\": [\"plotly\", \"matplotlib\", \"python-igraph\", \"geopandas\"],\n # \"shapely\", \"pyproj\" are depedencies of geopandas and so already available;\n # \"base64\", \"hashlib\", \"zlib\" produce installing problems, so they are not included\n \"test\": [\"pytest\", \"pytest-xdist\"],\n \"performance\": [\"ortools\"], # , \"lightsim2grid\"],\n \"fileio\": [\"xlsxwriter\", \"openpyxl\", \"cryptography\", \"geopandas\"],\n # \"fiona\" is a depedency of geopandas and so already available\n \"converter\": [\"matpowercaseframes\"],\n \"all\": [\"numpydoc\", \"sphinx\", \"sphinx_rtd_theme\",\n \"plotly\", \"matplotlib\", \"python-igraph\", \"geopandas\",\n \"pytest\", \"pytest-xdist\",\n \"ortools\", # lightsim2grid,\n \"xlsxwriter\", \"openpyxl\", \"cryptography\",\n \"matpowercaseframes\"\n ]}, # \"shapely\", \"pyproj\", \"fiona\" are depedencies of geopandas and so already available\n # \"hashlib\", \"zlib\", \"base64\" produce installing problems, so it is not included\n packages=find_packages(),\n include_package_data=True,\n 
classifiers=classifiers\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2022 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. All rights reserved.\n\nfrom setuptools import setup, find_packages\nimport re\n\nwith open('README.rst', 'rb') as f:\n install = f.read().decode('utf-8')\n\nwith open('CHANGELOG.rst', 'rb') as f:\n changelog = f.read().decode('utf-8')\n\nclassifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3']\n\nwith open('.github/workflows/github_test_action.yml', 'rb') as f:\n lines = f.read().decode('utf-8')\n versions = set(re.findall('3.[7-9]', lines)) | set(re.findall('3.1[0-9]', lines))\n for version in sorted(versions):\n classifiers.append('Programming Language :: Python :: %s' % version)\n\nlong_description = '\\n\\n'.join((install, changelog))\n\nsetup(\n name='pandapower',\n version='2.10.1',\n author='Leon Thurner, Alexander Scheidler',\n author_email='[email protected], [email protected]',\n description='An easy to use open source tool for power system modeling, analysis and optimization with a high degree of automation.',\n long_description=long_description,\n long_description_content_type='text/x-rst',\n url='http://www.pandapower.org',\n license='BSD',\n install_requires=[\"pandas>=1.0\",\n \"networkx>=2.5\",\n \"scipy\",\n \"numpy\",\n \"packaging\",\n \"tqdm\",\n \"deepdiff\"],\n extras_require={\n \"docs\": [\"numpydoc\", \"sphinx\", \"sphinx_rtd_theme\"],\n \"plotting\": [\"plotly\", \"matplotlib\", \"python-igraph\", \"geopandas\"],\n # \"shapely\", \"pyproj\" are depedencies of geopandas and so already available;\n # \"base64\", \"hashlib\", \"zlib\" produce installing problems, so they are not included\n \"test\": [\"pytest\", \"pytest-xdist\"],\n \"performance\": [\"ortools\"], # , \"lightsim2grid\"],\n \"fileio\": [\"xlsxwriter\", \"openpyxl\", \"cryptography\", \"geopandas\"],\n # \"fiona\" is a depedency of geopandas and so already available\n \"converter\": [\"matpowercaseframes\"],\n \"all\": [\"numpydoc\", \"sphinx\", \"sphinx_rtd_theme\",\n \"plotly\", \"matplotlib\", \"python-igraph\", \"geopandas\",\n \"pytest\", \"pytest-xdist\",\n \"ortools\", # lightsim2grid,\n \"xlsxwriter\", \"openpyxl\", \"cryptography\",\n \"matpowercaseframes\"\n ]}, # \"shapely\", \"pyproj\", \"fiona\" are depedencies of geopandas and so already available\n # \"hashlib\", \"zlib\", \"base64\" produce installing problems, so it is not included\n packages=find_packages(),\n include_package_data=True,\n classifiers=classifiers\n)\n", "path": "setup.py"}]} | 1,423 | 88 |
gh_patches_debug_20828 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reading goal status doesn't set plurals correctly
When someone is only planning to read 1 book, the status should say "1 book" not "1 books"
--- END ISSUE ---
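The offending interpolation is easy to demonstrate in isolation (plain Python, no Django required):

```python
goal = 1
print('set a goal to read %d books in %d' % (goal, 2021))
# -> set a goal to read 1 books in 2021  (the noun is never singularized)
```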
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/goal.py`
Content:
```
1 ''' non-interactive pages '''
2 from django.contrib.auth.decorators import login_required
3 from django.http import HttpResponseNotFound
4 from django.shortcuts import redirect
5 from django.template.response import TemplateResponse
6 from django.utils.decorators import method_decorator
7 from django.views import View
8
9 from bookwyrm import forms, models
10 from bookwyrm.status import create_generated_note
11 from .helpers import get_user_from_username, object_visible_to_user
12
13
14 # pylint: disable= no-self-use
15 @method_decorator(login_required, name='dispatch')
16 class Goal(View):
17 ''' track books for the year '''
18 def get(self, request, username, year):
19 ''' reading goal page '''
20 user = get_user_from_username(username)
21 year = int(year)
22 goal = models.AnnualGoal.objects.filter(
23 year=year, user=user
24 ).first()
25 if not goal and user != request.user:
26 return HttpResponseNotFound()
27
28 if goal and not object_visible_to_user(request.user, goal):
29 return HttpResponseNotFound()
30
31 data = {
32 'title': '%s\'s %d Reading' % (user.display_name, year),
33 'goal_form': forms.GoalForm(instance=goal),
34 'goal': goal,
35 'user': user,
36 'year': year,
37 'is_self': request.user == user,
38 }
39 return TemplateResponse(request, 'goal.html', data)
40
41
42 def post(self, request, username, year):
43 ''' update or create an annual goal '''
44 user = get_user_from_username(username)
45 if user != request.user:
46 return HttpResponseNotFound()
47
48 year = int(year)
49 goal = models.AnnualGoal.objects.filter(
50 year=year, user=request.user
51 ).first()
52 form = forms.GoalForm(request.POST, instance=goal)
53 if not form.is_valid():
54 data = {
55 'title': '%s\'s %d Reading' % (request.user.display_name, year),
56 'goal_form': form,
57 'goal': goal,
58 'year': year,
59 }
60 return TemplateResponse(request, 'goal.html', data)
61 goal = form.save()
62
63 if request.POST.get('post-status'):
64 # create status, if appropraite
65 create_generated_note(
66 request.user,
67 'set a goal to read %d books in %d' % (goal.goal, goal.year),
68 privacy=goal.privacy
69 )
70
71 return redirect(request.headers.get('Referer', '/'))
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/views/goal.py b/bookwyrm/views/goal.py
--- a/bookwyrm/views/goal.py
+++ b/bookwyrm/views/goal.py
@@ -2,6 +2,7 @@
from django.contrib.auth.decorators import login_required
from django.http import HttpResponseNotFound
from django.shortcuts import redirect
+from django.template.loader import get_template
from django.template.response import TemplateResponse
from django.utils.decorators import method_decorator
from django.views import View
@@ -62,9 +63,10 @@
if request.POST.get('post-status'):
# create status, if appropraite
+ template = get_template('snippets/generated_status/goal.html')
create_generated_note(
request.user,
- 'set a goal to read %d books in %d' % (goal.goal, goal.year),
+ template.render({'goal': goal, 'user': request.user}).strip(),
privacy=goal.privacy
)
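The diff moves the message into a template so pluralization can live in presentation logic; the template body itself is not part of the patch. A standalone sketch of what `snippets/generated_status/goal.html` might contain, using Django's built-in `pluralize` filter (hypothetical template text):

```python
import django
from django.conf import settings
from django.template import Context, Template

settings.configure(
    TEMPLATES=[{"BACKEND": "django.template.backends.django.DjangoTemplates"}]
)
django.setup()

# Hypothetical body for snippets/generated_status/goal.html:
t = Template(
    "set a goal to read {{ goal.goal }} book{{ goal.goal|pluralize }} in {{ goal.year }}"
)
print(t.render(Context({"goal": {"goal": 1, "year": 2021}})))
# -> set a goal to read 1 book in 2021
```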
| {"golden_diff": "diff --git a/bookwyrm/views/goal.py b/bookwyrm/views/goal.py\n--- a/bookwyrm/views/goal.py\n+++ b/bookwyrm/views/goal.py\n@@ -2,6 +2,7 @@\n from django.contrib.auth.decorators import login_required\n from django.http import HttpResponseNotFound\n from django.shortcuts import redirect\n+from django.template.loader import get_template\n from django.template.response import TemplateResponse\n from django.utils.decorators import method_decorator\n from django.views import View\n@@ -62,9 +63,10 @@\n \n if request.POST.get('post-status'):\n # create status, if appropraite\n+ template = get_template('snippets/generated_status/goal.html')\n create_generated_note(\n request.user,\n- 'set a goal to read %d books in %d' % (goal.goal, goal.year),\n+ template.render({'goal': goal, 'user': request.user}).strip(),\n privacy=goal.privacy\n )\n", "issue": "Reading goal status doesn't set plurals correctly\nWhen someone is only planning to read 1 book, the status should say \"1 book\" not \"1 books\"\n", "before_files": [{"content": "''' non-interactive pages '''\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import HttpResponseNotFound\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.status import create_generated_note\nfrom .helpers import get_user_from_username, object_visible_to_user\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name='dispatch')\nclass Goal(View):\n ''' track books for the year '''\n def get(self, request, username, year):\n ''' reading goal page '''\n user = get_user_from_username(username)\n year = int(year)\n goal = models.AnnualGoal.objects.filter(\n year=year, user=user\n ).first()\n if not goal and user != request.user:\n return HttpResponseNotFound()\n\n if goal and not object_visible_to_user(request.user, goal):\n return HttpResponseNotFound()\n\n data = {\n 'title': '%s\\'s %d Reading' % (user.display_name, year),\n 'goal_form': forms.GoalForm(instance=goal),\n 'goal': goal,\n 'user': user,\n 'year': year,\n 'is_self': request.user == user,\n }\n return TemplateResponse(request, 'goal.html', data)\n\n\n def post(self, request, username, year):\n ''' update or create an annual goal '''\n user = get_user_from_username(username)\n if user != request.user:\n return HttpResponseNotFound()\n\n year = int(year)\n goal = models.AnnualGoal.objects.filter(\n year=year, user=request.user\n ).first()\n form = forms.GoalForm(request.POST, instance=goal)\n if not form.is_valid():\n data = {\n 'title': '%s\\'s %d Reading' % (request.user.display_name, year),\n 'goal_form': form,\n 'goal': goal,\n 'year': year,\n }\n return TemplateResponse(request, 'goal.html', data)\n goal = form.save()\n\n if request.POST.get('post-status'):\n # create status, if appropraite\n create_generated_note(\n request.user,\n 'set a goal to read %d books in %d' % (goal.goal, goal.year),\n privacy=goal.privacy\n )\n\n return redirect(request.headers.get('Referer', '/'))\n", "path": "bookwyrm/views/goal.py"}], "after_files": [{"content": "''' non-interactive pages '''\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import HttpResponseNotFound\nfrom django.shortcuts import redirect\nfrom django.template.loader import get_template\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import 
method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.status import create_generated_note\nfrom .helpers import get_user_from_username, object_visible_to_user\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name='dispatch')\nclass Goal(View):\n ''' track books for the year '''\n def get(self, request, username, year):\n ''' reading goal page '''\n user = get_user_from_username(username)\n year = int(year)\n goal = models.AnnualGoal.objects.filter(\n year=year, user=user\n ).first()\n if not goal and user != request.user:\n return HttpResponseNotFound()\n\n if goal and not object_visible_to_user(request.user, goal):\n return HttpResponseNotFound()\n\n data = {\n 'title': '%s\\'s %d Reading' % (user.display_name, year),\n 'goal_form': forms.GoalForm(instance=goal),\n 'goal': goal,\n 'user': user,\n 'year': year,\n 'is_self': request.user == user,\n }\n return TemplateResponse(request, 'goal.html', data)\n\n\n def post(self, request, username, year):\n ''' update or create an annual goal '''\n user = get_user_from_username(username)\n if user != request.user:\n return HttpResponseNotFound()\n\n year = int(year)\n goal = models.AnnualGoal.objects.filter(\n year=year, user=request.user\n ).first()\n form = forms.GoalForm(request.POST, instance=goal)\n if not form.is_valid():\n data = {\n 'title': '%s\\'s %d Reading' % (request.user.display_name, year),\n 'goal_form': form,\n 'goal': goal,\n 'year': year,\n }\n return TemplateResponse(request, 'goal.html', data)\n goal = form.save()\n\n if request.POST.get('post-status'):\n # create status, if appropraite\n template = get_template('snippets/generated_status/goal.html')\n create_generated_note(\n request.user,\n template.render({'goal': goal, 'user': request.user}).strip(),\n privacy=goal.privacy\n )\n\n return redirect(request.headers.get('Referer', '/'))\n", "path": "bookwyrm/views/goal.py"}]} | 949 | 209 |
gh_patches_debug_12233 | rasdani/github-patches | git_diff | googleapis__python-bigquery-833 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expand pyarrow to support 5.x releases
Changelog: https://raw.githubusercontent.com/apache/arrow/4591d76fce2846a29dac33bf01e9ba0337b118e9/CHANGELOG.md
--- END ISSUE ---
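For orientation, checking which pyarrow a given environment currently has is a one-liner (assuming pyarrow is installed):

```python
import pyarrow
print(pyarrow.__version__)  # any 5.x release is rejected by the "< 5.0dev" pins below
```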
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 "grpcio >= 1.38.1, < 2.0dev", # https://github.com/googleapis/python-bigquery/issues/695
33 # NOTE: Maintainers, please do not require google-api-core>=2.x.x
34 # Until this issue is closed
35 # https://github.com/googleapis/google-cloud-python/issues/10566
36 "google-api-core[grpc] >= 1.29.0, <3.0.0dev",
37 "proto-plus >= 1.10.0",
38 # NOTE: Maintainers, please do not require google-cloud-core>=2.x.x
39 # Until this issue is closed
40 # https://github.com/googleapis/google-cloud-python/issues/10566
41 "google-cloud-core >= 1.4.1, <3.0.0dev",
42 "google-resumable-media >= 0.6.0, < 3.0dev",
43 "packaging >= 14.3",
44 "protobuf >= 3.12.0",
45 "requests >= 2.18.0, < 3.0.0dev",
46 ]
47 extras = {
48 "bqstorage": [
49 "google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev",
50 # Due to an issue in pip's dependency resolver, the `grpc` extra is not
51 # installed, even though `google-cloud-bigquery-storage` specifies it
52 # as `google-api-core[grpc]`. We thus need to explicitly specify it here.
53 # See: https://github.com/googleapis/python-bigquery/issues/83 The
54 # grpc.Channel.close() method isn't added until 1.32.0.
55 # https://github.com/grpc/grpc/pull/15254
56 "grpcio >= 1.38.1, < 2.0dev",
57 "pyarrow >= 1.0.0, < 5.0dev",
58 ],
59 "pandas": ["pandas>=0.23.0", "pyarrow >= 1.0.0, < 5.0dev"],
60 "bignumeric_type": ["pyarrow >= 3.0.0, < 5.0dev"],
61 "tqdm": ["tqdm >= 4.7.4, <5.0.0dev"],
62 "opentelemetry": [
63 "opentelemetry-api >= 0.11b0",
64 "opentelemetry-sdk >= 0.11b0",
65 "opentelemetry-instrumentation >= 0.11b0",
66 ],
67 }
68
69 all_extras = []
70
71 for extra in extras:
72 # Exclude this extra from all to avoid overly strict dependencies on core
73 # libraries such as pyarrow.
74 # https://github.com/googleapis/python-bigquery/issues/563
75 if extra in {"bignumeric_type"}:
76 continue
77 all_extras.extend(extras[extra])
78
79 extras["all"] = all_extras
80
81 # Setup boilerplate below this line.
82
83 package_root = os.path.abspath(os.path.dirname(__file__))
84
85 readme_filename = os.path.join(package_root, "README.rst")
86 with io.open(readme_filename, encoding="utf-8") as readme_file:
87 readme = readme_file.read()
88
89 version = {}
90 with open(os.path.join(package_root, "google/cloud/bigquery/version.py")) as fp:
91 exec(fp.read(), version)
92 version = version["__version__"]
93
94 # Only include packages under the 'google' namespace. Do not include tests,
95 # benchmarks, etc.
96 packages = [
97 package
98 for package in setuptools.PEP420PackageFinder.find()
99 if package.startswith("google")
100 ]
101
102 # Determine which namespaces are needed.
103 namespaces = ["google"]
104 if "google.cloud" in packages:
105 namespaces.append("google.cloud")
106
107
108 setuptools.setup(
109 name=name,
110 version=version,
111 description=description,
112 long_description=readme,
113 author="Google LLC",
114 author_email="[email protected]",
115 license="Apache 2.0",
116 url="https://github.com/googleapis/python-bigquery",
117 classifiers=[
118 release_status,
119 "Intended Audience :: Developers",
120 "License :: OSI Approved :: Apache Software License",
121 "Programming Language :: Python",
122 "Programming Language :: Python :: 3",
123 "Programming Language :: Python :: 3.6",
124 "Programming Language :: Python :: 3.7",
125 "Programming Language :: Python :: 3.8",
126 "Programming Language :: Python :: 3.9",
127 "Operating System :: OS Independent",
128 "Topic :: Internet",
129 ],
130 platforms="Posix; MacOS X; Windows",
131 packages=packages,
132 namespace_packages=namespaces,
133 install_requires=dependencies,
134 extras_require=extras,
135 python_requires=">=3.6, <3.10",
136 include_package_data=True,
137 zip_safe=False,
138 )
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -54,10 +54,10 @@
# grpc.Channel.close() method isn't added until 1.32.0.
# https://github.com/grpc/grpc/pull/15254
"grpcio >= 1.38.1, < 2.0dev",
- "pyarrow >= 1.0.0, < 5.0dev",
+ "pyarrow >= 1.0.0, < 6.0dev",
],
- "pandas": ["pandas>=0.23.0", "pyarrow >= 1.0.0, < 5.0dev"],
- "bignumeric_type": ["pyarrow >= 3.0.0, < 5.0dev"],
+ "pandas": ["pandas>=0.23.0", "pyarrow >= 1.0.0, < 6.0dev"],
+ "bignumeric_type": ["pyarrow >= 3.0.0, < 6.0dev"],
"tqdm": ["tqdm >= 4.7.4, <5.0.0dev"],
"opentelemetry": [
"opentelemetry-api >= 0.11b0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -54,10 +54,10 @@\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n \"grpcio >= 1.38.1, < 2.0dev\",\n- \"pyarrow >= 1.0.0, < 5.0dev\",\n+ \"pyarrow >= 1.0.0, < 6.0dev\",\n ],\n- \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 5.0dev\"],\n- \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 5.0dev\"],\n+ \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 6.0dev\"],\n+ \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 6.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api >= 0.11b0\",\n", "issue": "Expand pyarrow to support 5.x releases\nChangelog: https://raw.githubusercontent.com/apache/arrow/4591d76fce2846a29dac33bf01e9ba0337b118e9/CHANGELOG.md\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\n\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"grpcio >= 1.38.1, < 2.0dev\", # https://github.com/googleapis/python-bigquery/issues/695\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-api-core[grpc] >= 1.29.0, <3.0.0dev\",\n \"proto-plus >= 1.10.0\",\n # NOTE: Maintainers, please do not require google-cloud-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-cloud-core >= 1.4.1, <3.0.0dev\",\n \"google-resumable-media >= 0.6.0, < 3.0dev\",\n \"packaging >= 14.3\",\n \"protobuf >= 3.12.0\",\n \"requests >= 2.18.0, < 3.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. 
We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83 The\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n \"grpcio >= 1.38.1, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 5.0dev\",\n ],\n \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 5.0dev\"],\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 5.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api >= 0.11b0\",\n \"opentelemetry-sdk >= 0.11b0\",\n \"opentelemetry-instrumentation >= 0.11b0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n # Exclude this extra from all to avoid overly strict dependencies on core\n # libraries such as pyarrow.\n # https://github.com/googleapis/python-bigquery/issues/563\n if extra in {\"bignumeric_type\"}:\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = {}\nwith open(os.path.join(package_root, \"google/cloud/bigquery/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package\n for package in setuptools.PEP420PackageFinder.find()\n if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=3.6, <3.10\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\n\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development 
Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"grpcio >= 1.38.1, < 2.0dev\", # https://github.com/googleapis/python-bigquery/issues/695\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-api-core[grpc] >= 1.29.0, <3.0.0dev\",\n \"proto-plus >= 1.10.0\",\n # NOTE: Maintainers, please do not require google-cloud-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-cloud-core >= 1.4.1, <3.0.0dev\",\n \"google-resumable-media >= 0.6.0, < 3.0dev\",\n \"packaging >= 14.3\",\n \"protobuf >= 3.12.0\",\n \"requests >= 2.18.0, < 3.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83 The\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n \"grpcio >= 1.38.1, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 6.0dev\",\n ],\n \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 6.0dev\"],\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 6.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api >= 0.11b0\",\n \"opentelemetry-sdk >= 0.11b0\",\n \"opentelemetry-instrumentation >= 0.11b0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n # Exclude this extra from all to avoid overly strict dependencies on core\n # libraries such as pyarrow.\n # https://github.com/googleapis/python-bigquery/issues/563\n if extra in {\"bignumeric_type\"}:\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = {}\nwith open(os.path.join(package_root, \"google/cloud/bigquery/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package\n for package in setuptools.PEP420PackageFinder.find()\n if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=3.6, <3.10\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 1,926 | 302 |
gh_patches_debug_6522 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1256 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive: Sub is required if a variable is used in a string in parameter descriptions
*cfn-lint version: 0.26.0*
*Description of issue.*
Parameter descriptions fail E1029 if they contain text which looks like variable substitution:
e.g.
```yaml
MyContentBucket:
Description: "Bucket name for content (usually ${VPCName}-my-content), use 'none' to disable creation"
Type: String
```
Gives an error:
[E1029: Sub is required if a variable is used in a string] (Found an embedded parameter outside of an "Fn::Sub" at Parameters/MyContentBucket/Description)
--- END ISSUE ---
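The false positive follows directly from the rule's generic matcher: the regex used in `match()` (quoted from the file below) accepts any string containing `${...}`, including free-form parameter descriptions:

```python
import re

parameter_search = re.compile(r'^.*(\$\{.*\}.*(\$\{.*\}.*)*)$')
desc = ("Bucket name for content (usually ${VPCName}-my-content), "
        "use 'none' to disable creation")
assert parameter_search.match(desc)  # matches, so E1029 fires on a plain description
```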
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/functions/SubNeeded.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import re
6 from cfnlint.rules import CloudFormationLintRule
7 from cfnlint.rules import RuleMatch
8
9
10 class SubNeeded(CloudFormationLintRule):
11 """Check if a substitution string exists without a substitution function"""
12 id = 'E1029'
13 shortdesc = 'Sub is required if a variable is used in a string'
14 description = 'If a substitution variable exists in a string but isn\'t wrapped with the Fn::Sub function the deployment will fail.'
15 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'
16 tags = ['functions', 'sub']
17
18 # Free-form text properties to exclude from this rule
19 # content is part of AWS::CloudFormation::Init
20 excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',
21 'CloudWatchAlarmDefinition', 'TopicRulePayload']
22 api_excludes = ['Uri', 'Body']
23
24 # IAM Policy has special variables that don't require !Sub, Check for these
25 # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html
26 # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html
27 # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html
28 # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down
29 # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html
30 resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}',
31 '${aws:TokenIssueTime}', '${aws:principaltype}',
32 '${aws:SecureTransport}', '${aws:SourceIp}',
33 '${aws:UserAgent}', '${aws:userid}',
34 '${aws:username}', '${ec2:SourceInstanceARN}',
35 '${iot:Connection.Thing.ThingName}',
36 '${iot:Connection.Thing.ThingTypeName}',
37 '${iot:Connection.Thing.IsAttached}',
38 '${iot:ClientId}', '${transfer:HomeBucket}',
39 '${transfer:HomeDirectory}', '${transfer:HomeFolder}',
40 '${transfer:UserName}', '${redshift:DbUser}',
41 '${cognito-identity.amazonaws.com:aud}',
42 '${cognito-identity.amazonaws.com:sub}',
43 '${cognito-identity.amazonaws.com:amr}']
44
45 # https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html
46 condition_excludes = [
47 '${redshift:DbUser}',
48 ]
49
50 def _match_values(self, searchRegex, cfnelem, path):
51 """Recursively search for values matching the searchRegex"""
52 values = []
53 if isinstance(cfnelem, dict):
54 for key in cfnelem:
55 pathprop = path[:]
56 pathprop.append(key)
57 values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))
58 elif isinstance(cfnelem, list):
59 for index, item in enumerate(cfnelem):
60 pathprop = path[:]
61 pathprop.append(index)
62 values.extend(self._match_values(searchRegex, item, pathprop))
63 else:
64 # Leaf node
65 if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):
66 # Get all variables as seperate paths
67 regex = re.compile(r'(\$\{.*?\.?.*?})')
68 for variable in re.findall(regex, cfnelem):
69 values.append(path + [variable])
70
71 return values
72
73 def match_values(self, searchRegex, cfn):
74 """
75 Search for values in all parts of the templates that match the searchRegex
76 """
77 results = []
78 results.extend(self._match_values(searchRegex, cfn.template, []))
79 # Globals are removed during a transform. They need to be checked manually
80 results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))
81 return results
82
83 def _api_exceptions(self, value):
84 """ Key value exceptions """
85 parameter_search = re.compile(r'^\$\{stageVariables\..*\}$')
86 return re.match(parameter_search, value)
87
88 def match(self, cfn):
89 """Basic Rule Matching"""
90
91 matches = []
92
93 # Generic regex to match a string containing at least one ${parameter}
94 parameter_search = re.compile(r'^.*(\$\{.*\}.*(\$\{.*\}.*)*)$')
95
96 # Get a list of paths to every leaf node string containing at least one ${parameter}
97 parameter_string_paths = self.match_values(parameter_search, cfn)
98
99 # We want to search all of the paths to check if each one contains an 'Fn::Sub'
100 for parameter_string_path in parameter_string_paths:
101 # Exxclude the special IAM variables
102 variable = parameter_string_path[-1]
103
104 if 'Resource' in parameter_string_path:
105 if variable in self.resource_excludes:
106 continue
107 if 'Condition' in parameter_string_path:
108 if variable in self.condition_excludes:
109 continue
110
111 # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)
112 if variable.startswith('${!'):
113 continue
114
115 found_sub = False
116 # Does the path contain an 'Fn::Sub'?
117 for step in parameter_string_path:
118 if step in self.api_excludes:
119 if self._api_exceptions(parameter_string_path[-1]):
120 found_sub = True
121 elif step == 'Fn::Sub' or step in self.excludes:
122 found_sub = True
123
124 # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly
125 if not found_sub:
126 # Remove the last item (the variable) to prevent multiple errors on 1 line errors
127 path = parameter_string_path[:-1]
128 message = 'Found an embedded parameter outside of an "Fn::Sub" at {}'.format(
129 '/'.join(map(str, path)))
130 matches.append(RuleMatch(path, message))
131
132 return matches
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py
--- a/src/cfnlint/rules/functions/SubNeeded.py
+++ b/src/cfnlint/rules/functions/SubNeeded.py
@@ -98,6 +98,8 @@
# We want to search all of the paths to check if each one contains an 'Fn::Sub'
for parameter_string_path in parameter_string_paths:
+ if parameter_string_path[0] in ['Parameters']:
+ continue
# Exxclude the special IAM variables
variable = parameter_string_path[-1]
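The guard works because every path collected by `_match_values` starts at the template root, so any hit rooted under `Parameters` (descriptions, defaults, and so on) is skipped before the `Fn::Sub` check. Roughly, with an illustrative path rather than real rule output:

```python
path = ['Parameters', 'MyContentBucket', 'Description', '${VPCName}']
if path[0] in ['Parameters']:
    pass  # skipped wholesale -- no E1029 reported for parameter descriptions
```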
| {"golden_diff": "diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py\n--- a/src/cfnlint/rules/functions/SubNeeded.py\n+++ b/src/cfnlint/rules/functions/SubNeeded.py\n@@ -98,6 +98,8 @@\n \n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n+ if parameter_string_path[0] in ['Parameters']:\n+ continue\n # Exxclude the special IAM variables\n variable = parameter_string_path[-1]\n", "issue": "False positive: Sub is required if a variable is used in a string in parameter descriptions\n*cfn-lint version: 0.26.0*\r\n\r\n*Description of issue.*\r\nParameter descriptions fail E1029 if they contain text which looks like variable substitution:\r\n\r\ne.g.\r\n\r\n```yaml\r\n MyContentBucket:\r\n Description: \"Bucket name for content (usually ${VPCName}-my-content), use 'none' to disable creation\"\r\n Type: String\r\n```\r\n\r\nGives an error:\r\n\r\n [E1029: Sub is required if a variable is used in a string] (Found an embedded parameter outside of an \"Fn::Sub\" at Parameters/MyContentBucket/Description)\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',\n 'CloudWatchAlarmDefinition', 'TopicRulePayload']\n api_excludes = ['Uri', 'Body']\n\n # IAM Policy has special variables that don't require !Sub, Check for these\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html\n # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html\n resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}',\n '${aws:TokenIssueTime}', '${aws:principaltype}',\n '${aws:SecureTransport}', '${aws:SourceIp}',\n '${aws:UserAgent}', '${aws:userid}',\n '${aws:username}', '${ec2:SourceInstanceARN}',\n '${iot:Connection.Thing.ThingName}',\n '${iot:Connection.Thing.ThingTypeName}',\n '${iot:Connection.Thing.IsAttached}',\n '${iot:ClientId}', '${transfer:HomeBucket}',\n '${transfer:HomeDirectory}', '${transfer:HomeFolder}',\n '${transfer:UserName}', '${redshift:DbUser}',\n '${cognito-identity.amazonaws.com:aud}',\n '${cognito-identity.amazonaws.com:sub}',\n '${cognito-identity.amazonaws.com:amr}']\n\n # https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html\n condition_excludes = [\n '${redshift:DbUser}',\n ]\n\n def _match_values(self, searchRegex, cfnelem, 
path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(searchRegex, item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):\n # Get all variables as seperate paths\n regex = re.compile(r'(\\$\\{.*?\\.?.*?})')\n for variable in re.findall(regex, cfnelem):\n values.append(path + [variable])\n\n return values\n\n def match_values(self, searchRegex, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(searchRegex, cfn.template, []))\n # Globals are removed during a transform. They need to be checked manually\n results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))\n return results\n\n def _api_exceptions(self, value):\n \"\"\" Key value exceptions \"\"\"\n parameter_search = re.compile(r'^\\$\\{stageVariables\\..*\\}$')\n return re.match(parameter_search, value)\n\n def match(self, cfn):\n \"\"\"Basic Rule Matching\"\"\"\n\n matches = []\n\n # Generic regex to match a string containing at least one ${parameter}\n parameter_search = re.compile(r'^.*(\\$\\{.*\\}.*(\\$\\{.*\\}.*)*)$')\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(parameter_search, cfn)\n\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n # Exxclude the special IAM variables\n variable = parameter_string_path[-1]\n\n if 'Resource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n if 'Condition' in parameter_string_path:\n if variable in self.condition_excludes:\n continue\n\n # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)\n if variable.startswith('${!'):\n continue\n\n found_sub = False\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step in self.api_excludes:\n if self._api_exceptions(parameter_string_path[-1]):\n found_sub = True\n elif step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n # Remove the last item (the variable) to prevent multiple errors on 1 line errors\n path = parameter_string_path[:-1]\n message = 'Found an embedded parameter outside of an \"Fn::Sub\" at {}'.format(\n '/'.join(map(str, path)))\n matches.append(RuleMatch(path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}], "after_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',\n 'CloudWatchAlarmDefinition', 'TopicRulePayload']\n api_excludes = ['Uri', 'Body']\n\n # IAM Policy has special variables that don't require !Sub, Check for these\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html\n # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html\n resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}',\n '${aws:TokenIssueTime}', '${aws:principaltype}',\n '${aws:SecureTransport}', '${aws:SourceIp}',\n '${aws:UserAgent}', '${aws:userid}',\n '${aws:username}', '${ec2:SourceInstanceARN}',\n '${iot:Connection.Thing.ThingName}',\n '${iot:Connection.Thing.ThingTypeName}',\n '${iot:Connection.Thing.IsAttached}',\n '${iot:ClientId}', '${transfer:HomeBucket}',\n '${transfer:HomeDirectory}', '${transfer:HomeFolder}',\n '${transfer:UserName}', '${redshift:DbUser}',\n '${cognito-identity.amazonaws.com:aud}',\n '${cognito-identity.amazonaws.com:sub}',\n '${cognito-identity.amazonaws.com:amr}']\n\n # https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html\n condition_excludes = [\n '${redshift:DbUser}',\n ]\n\n def _match_values(self, searchRegex, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(searchRegex, item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):\n # Get all variables as seperate paths\n regex = re.compile(r'(\\$\\{.*?\\.?.*?})')\n for variable in re.findall(regex, cfnelem):\n values.append(path + [variable])\n\n return values\n\n def match_values(self, searchRegex, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(searchRegex, cfn.template, []))\n # Globals are removed during a transform. 
They need to be checked manually\n results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))\n return results\n\n def _api_exceptions(self, value):\n \"\"\" Key value exceptions \"\"\"\n parameter_search = re.compile(r'^\\$\\{stageVariables\\..*\\}$')\n return re.match(parameter_search, value)\n\n def match(self, cfn):\n \"\"\"Basic Rule Matching\"\"\"\n\n matches = []\n\n # Generic regex to match a string containing at least one ${parameter}\n parameter_search = re.compile(r'^.*(\\$\\{.*\\}.*(\\$\\{.*\\}.*)*)$')\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(parameter_search, cfn)\n\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n if parameter_string_path[0] in ['Parameters']:\n continue\n # Exxclude the special IAM variables\n variable = parameter_string_path[-1]\n\n if 'Resource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n if 'Condition' in parameter_string_path:\n if variable in self.condition_excludes:\n continue\n\n # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)\n if variable.startswith('${!'):\n continue\n\n found_sub = False\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step in self.api_excludes:\n if self._api_exceptions(parameter_string_path[-1]):\n found_sub = True\n elif step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n # Remove the last item (the variable) to prevent multiple errors on 1 line errors\n path = parameter_string_path[:-1]\n message = 'Found an embedded parameter outside of an \"Fn::Sub\" at {}'.format(\n '/'.join(map(str, path)))\n matches.append(RuleMatch(path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}]} | 2,040 | 129 |
gh_patches_debug_17133 | rasdani/github-patches | git_diff | python-poetry__poetry-7547 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
add -e/--executable to poetry env info to get the python executable path
- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] I have searched the [FAQ](https://python-poetry.org/docs/faq/) and general [documentation](https://python-poetry.org/docs/) and believe that my question is not already covered.
## Feature Request
in addition to the already present `-p/--path` option, add a `-e/--executable` option to return the python executable path.
My use case: I'm starting to use Taskfile and poetry on some projects; these projects are developed on both Linux and Windows;
I would like to avoid having to install tools such as mypy in the virtual environment, since they can be run from the outside (this also allows me to have faster CI: I have set up a custom Docker image with all the tools needed).
mypy in particular needs the exact path of the python executable in order to work (passed as the `--python-executable` option), so having a new `poetry env info --executable` option that outputs the python path would solve my issue in a cross-platform fashion.
--- END ISSUE ---
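To make the requested workflow concrete, here is a minimal sketch of consuming the proposed option from Python, mirroring the mypy use case described above. It assumes `poetry` and `mypy` are on `PATH`; `src/` is an illustrative target directory.
```python
import subprocess

# Resolve the virtualenv's interpreter via the proposed option.
executable = subprocess.run(
    ["poetry", "env", "info", "--executable"],
    capture_output=True,
    text=True,
    check=True,
).stdout.strip()

# Hand the interpreter path to mypy, as the issue suggests.
subprocess.run(["mypy", "--python-executable", executable, "src/"], check=True)
```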
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/poetry/console/commands/env/info.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from cleo.helpers import option
6
7 from poetry.console.commands.command import Command
8
9
10 if TYPE_CHECKING:
11 from poetry.utils.env import Env
12
13
14 class EnvInfoCommand(Command):
15 name = "env info"
16 description = "Displays information about the current environment."
17
18 options = [option("path", "p", "Only display the environment's path.")]
19
20 def handle(self) -> int:
21 from poetry.utils.env import EnvManager
22
23 env = EnvManager(self.poetry).get()
24
25 if self.option("path"):
26 if not env.is_venv():
27 return 1
28
29 self.line(str(env.path))
30
31 return 0
32
33 self._display_complete_info(env)
34 return 0
35
36 def _display_complete_info(self, env: Env) -> None:
37 env_python_version = ".".join(str(s) for s in env.version_info[:3])
38 self.line("")
39 self.line("<b>Virtualenv</b>")
40 listing = [
41 f"<info>Python</info>: <comment>{env_python_version}</>",
42 f"<info>Implementation</info>: <comment>{env.python_implementation}</>",
43 (
44 "<info>Path</info>: "
45 f" <comment>{env.path if env.is_venv() else 'NA'}</>"
46 ),
47 (
48 "<info>Executable</info>: "
49 f" <comment>{env.python if env.is_venv() else 'NA'}</>"
50 ),
51 ]
52 if env.is_venv():
53 listing.append(
54 "<info>Valid</info>: "
55 f" <{'comment' if env.is_sane() else 'error'}>{env.is_sane()}</>"
56 )
57 self.line("\n".join(listing))
58
59 self.line("")
60
61 system_env = env.parent_env
62 python = ".".join(str(v) for v in system_env.version_info[:3])
63 self.line("<b>System</b>")
64 self.line(
65 "\n".join(
66 [
67 f"<info>Platform</info>: <comment>{env.platform}</>",
68 f"<info>OS</info>: <comment>{env.os}</>",
69 f"<info>Python</info>: <comment>{python}</>",
70 f"<info>Path</info>: <comment>{system_env.path}</>",
71 f"<info>Executable</info>: <comment>{system_env.python}</>",
72 ]
73 )
74 )
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/poetry/console/commands/env/info.py b/src/poetry/console/commands/env/info.py
--- a/src/poetry/console/commands/env/info.py
+++ b/src/poetry/console/commands/env/info.py
@@ -15,7 +15,12 @@
name = "env info"
description = "Displays information about the current environment."
- options = [option("path", "p", "Only display the environment's path.")]
+ options = [
+ option("path", "p", "Only display the environment's path."),
+ option(
+ "executable", "e", "Only display the environment's python executable path."
+ ),
+ ]
def handle(self) -> int:
from poetry.utils.env import EnvManager
@@ -30,6 +35,14 @@
return 0
+ if self.option("executable"):
+ if not env.is_venv():
+ return 1
+
+ self.line(str(env.python))
+
+ return 0
+
self._display_complete_info(env)
return 0
| {"golden_diff": "diff --git a/src/poetry/console/commands/env/info.py b/src/poetry/console/commands/env/info.py\n--- a/src/poetry/console/commands/env/info.py\n+++ b/src/poetry/console/commands/env/info.py\n@@ -15,7 +15,12 @@\n name = \"env info\"\n description = \"Displays information about the current environment.\"\n \n- options = [option(\"path\", \"p\", \"Only display the environment's path.\")]\n+ options = [\n+ option(\"path\", \"p\", \"Only display the environment's path.\"),\n+ option(\n+ \"executable\", \"e\", \"Only display the environment's python executable path.\"\n+ ),\n+ ]\n \n def handle(self) -> int:\n from poetry.utils.env import EnvManager\n@@ -30,6 +35,14 @@\n \n return 0\n \n+ if self.option(\"executable\"):\n+ if not env.is_venv():\n+ return 1\n+\n+ self.line(str(env.python))\n+\n+ return 0\n+\n self._display_complete_info(env)\n return 0\n", "issue": "add -e/--executable to poetry env info to get the python executable path\n- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] I have searched the [FAQ](https://python-poetry.org/docs/faq/) and general [documentation](https://python-poetry.org/docs/) and believe that my question is not already covered.\r\n\r\n## Feature Request\r\n\r\nin addition to the already present `-p/--path` option, add a `-e/--execuatble` option to return the python executable path.\r\n\r\nMy use case: I'm starting to use Taskfile and poetry on some projects; these project are developed on both linux and windows;\r\n\r\nI would like to avoid having to install tools such as mypy in the virtual environment, since they can be run from the outside (this also allows me to have faster CI, I have set up a custom docker image with all the tools needed).\r\n\r\nmypy in particular wants to know the exact path of the python executable to work (passed as `--python-executable` option), so having a new `poetry env info --executable` option that outputs the python path would solve my issue in a cross-platform fashion.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom cleo.helpers import option\n\nfrom poetry.console.commands.command import Command\n\n\nif TYPE_CHECKING:\n from poetry.utils.env import Env\n\n\nclass EnvInfoCommand(Command):\n name = \"env info\"\n description = \"Displays information about the current environment.\"\n\n options = [option(\"path\", \"p\", \"Only display the environment's path.\")]\n\n def handle(self) -> int:\n from poetry.utils.env import EnvManager\n\n env = EnvManager(self.poetry).get()\n\n if self.option(\"path\"):\n if not env.is_venv():\n return 1\n\n self.line(str(env.path))\n\n return 0\n\n self._display_complete_info(env)\n return 0\n\n def _display_complete_info(self, env: Env) -> None:\n env_python_version = \".\".join(str(s) for s in env.version_info[:3])\n self.line(\"\")\n self.line(\"<b>Virtualenv</b>\")\n listing = [\n f\"<info>Python</info>: <comment>{env_python_version}</>\",\n f\"<info>Implementation</info>: <comment>{env.python_implementation}</>\",\n (\n \"<info>Path</info>: \"\n f\" <comment>{env.path if env.is_venv() else 'NA'}</>\"\n ),\n (\n \"<info>Executable</info>: \"\n f\" <comment>{env.python if env.is_venv() else 'NA'}</>\"\n ),\n ]\n if env.is_venv():\n listing.append(\n \"<info>Valid</info>: \"\n f\" <{'comment' if env.is_sane() else 'error'}>{env.is_sane()}</>\"\n )\n self.line(\"\\n\".join(listing))\n\n self.line(\"\")\n\n system_env 
= env.parent_env\n python = \".\".join(str(v) for v in system_env.version_info[:3])\n self.line(\"<b>System</b>\")\n self.line(\n \"\\n\".join(\n [\n f\"<info>Platform</info>: <comment>{env.platform}</>\",\n f\"<info>OS</info>: <comment>{env.os}</>\",\n f\"<info>Python</info>: <comment>{python}</>\",\n f\"<info>Path</info>: <comment>{system_env.path}</>\",\n f\"<info>Executable</info>: <comment>{system_env.python}</>\",\n ]\n )\n )\n", "path": "src/poetry/console/commands/env/info.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom cleo.helpers import option\n\nfrom poetry.console.commands.command import Command\n\n\nif TYPE_CHECKING:\n from poetry.utils.env import Env\n\n\nclass EnvInfoCommand(Command):\n name = \"env info\"\n description = \"Displays information about the current environment.\"\n\n options = [\n option(\"path\", \"p\", \"Only display the environment's path.\"),\n option(\n \"executable\", \"e\", \"Only display the environment's python executable path.\"\n ),\n ]\n\n def handle(self) -> int:\n from poetry.utils.env import EnvManager\n\n env = EnvManager(self.poetry).get()\n\n if self.option(\"path\"):\n if not env.is_venv():\n return 1\n\n self.line(str(env.path))\n\n return 0\n\n if self.option(\"executable\"):\n if not env.is_venv():\n return 1\n\n self.line(str(env.python))\n\n return 0\n\n self._display_complete_info(env)\n return 0\n\n def _display_complete_info(self, env: Env) -> None:\n env_python_version = \".\".join(str(s) for s in env.version_info[:3])\n self.line(\"\")\n self.line(\"<b>Virtualenv</b>\")\n listing = [\n f\"<info>Python</info>: <comment>{env_python_version}</>\",\n f\"<info>Implementation</info>: <comment>{env.python_implementation}</>\",\n (\n \"<info>Path</info>: \"\n f\" <comment>{env.path if env.is_venv() else 'NA'}</>\"\n ),\n (\n \"<info>Executable</info>: \"\n f\" <comment>{env.python if env.is_venv() else 'NA'}</>\"\n ),\n ]\n if env.is_venv():\n listing.append(\n \"<info>Valid</info>: \"\n f\" <{'comment' if env.is_sane() else 'error'}>{env.is_sane()}</>\"\n )\n self.line(\"\\n\".join(listing))\n\n self.line(\"\")\n\n system_env = env.parent_env\n python = \".\".join(str(v) for v in system_env.version_info[:3])\n self.line(\"<b>System</b>\")\n self.line(\n \"\\n\".join(\n [\n f\"<info>Platform</info>: <comment>{env.platform}</>\",\n f\"<info>OS</info>: <comment>{env.os}</>\",\n f\"<info>Python</info>: <comment>{python}</>\",\n f\"<info>Path</info>: <comment>{system_env.path}</>\",\n f\"<info>Executable</info>: <comment>{system_env.python}</>\",\n ]\n )\n )\n", "path": "src/poetry/console/commands/env/info.py"}]} | 1,210 | 245 |
gh_patches_debug_38758 | rasdani/github-patches | git_diff | python-discord__site-1104 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider dropping deploy preview support for redirects app
Do we need previews of the legacy redirects?
If not, we may be able to remove a lot of code from the redirects app.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pydis_site/apps/redirect/urls.py`
Content:
```
1 import dataclasses
2 import re
3
4 import yaml
5 from django import conf
6 from django.http import HttpResponse
7 from django.urls import URLPattern, path
8 from django_distill import distill_path
9
10 from pydis_site import settings
11 from pydis_site.apps.content import urls as pages_urls
12 from pydis_site.apps.redirect.views import CustomRedirectView
13 from pydis_site.apps.resources import urls as resources_urls
14
15 app_name = "redirect"
16
17
18 __PARAMETER_REGEX = re.compile(r"<\w+:\w+>")
19 REDIRECT_TEMPLATE = "<meta http-equiv=\"refresh\" content=\"0; URL={url}\"/>"
20
21
22 @dataclasses.dataclass(frozen=True)
23 class Redirect:
24 """Metadata about a redirect route."""
25
26 original_path: str
27 redirect_route: str
28 redirect_arguments: tuple[str] = tuple()
29
30 prefix_redirect: bool = False
31
32
33 def map_redirect(name: str, data: Redirect) -> list[URLPattern]:
34 """Return a pattern using the Redirects app, or a static HTML redirect for static builds."""
35 if not settings.STATIC_BUILD:
36 # Normal dynamic redirect
37 return [path(
38 data.original_path,
39 CustomRedirectView.as_view(
40 pattern_name=data.redirect_route,
41 static_args=tuple(data.redirect_arguments),
42 prefix_redirect=data.prefix_redirect
43 ),
44 name=name
45 )]
46
47 # Create static HTML redirects for static builds
48 new_app_name = data.redirect_route.split(":")[0]
49
50 if __PARAMETER_REGEX.search(data.original_path):
51 # Redirects for paths which accept parameters
52 # We generate an HTML redirect file for all possible entries
53 paths = []
54
55 class RedirectFunc:
56 def __init__(self, new_url: str, _name: str):
57 self.result = HttpResponse(REDIRECT_TEMPLATE.format(url=new_url))
58 self.__qualname__ = _name
59
60 def __call__(self, *args, **kwargs):
61 return self.result
62
63 if new_app_name == resources_urls.app_name:
64 items = resources_urls.get_all_resources()
65 elif new_app_name == pages_urls.app_name:
66 items = pages_urls.get_all_pages()
67 else:
68 raise ValueError(f"Unknown app in redirect: {new_app_name}")
69
70 for item in items:
71 entry = next(iter(item.values()))
72
73 # Replace dynamic redirect with concrete path
74 concrete_path = __PARAMETER_REGEX.sub(entry, data.original_path)
75 new_redirect = f"/{new_app_name}/{entry}"
76 pattern_name = f"{name}_{entry}"
77
78 paths.append(distill_path(
79 concrete_path,
80 RedirectFunc(new_redirect, pattern_name),
81 name=pattern_name
82 ))
83
84 return paths
85
86 redirect_path_name = "pages" if new_app_name == "content" else new_app_name
87 if len(data.redirect_arguments) > 0:
88 redirect_arg = data.redirect_arguments[0]
89 else:
90 redirect_arg = "resources/"
91 new_redirect = f"/{redirect_path_name}/{redirect_arg}"
92
93 if new_redirect == "/resources/resources/":
94 new_redirect = "/resources/"
95
96 return [distill_path(
97 data.original_path,
98 lambda *args: HttpResponse(REDIRECT_TEMPLATE.format(url=new_redirect)),
99 name=name,
100 )]
101
102
103 urlpatterns = []
104 for _name, _data in yaml.safe_load(conf.settings.REDIRECTIONS_PATH.read_text()).items():
105 urlpatterns.extend(map_redirect(_name, Redirect(**_data)))
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pydis_site/apps/redirect/urls.py b/pydis_site/apps/redirect/urls.py
--- a/pydis_site/apps/redirect/urls.py
+++ b/pydis_site/apps/redirect/urls.py
@@ -3,14 +3,9 @@
import yaml
from django import conf
-from django.http import HttpResponse
from django.urls import URLPattern, path
-from django_distill import distill_path
-from pydis_site import settings
-from pydis_site.apps.content import urls as pages_urls
from pydis_site.apps.redirect.views import CustomRedirectView
-from pydis_site.apps.resources import urls as resources_urls
app_name = "redirect"
@@ -31,72 +26,15 @@
def map_redirect(name: str, data: Redirect) -> list[URLPattern]:
- """Return a pattern using the Redirects app, or a static HTML redirect for static builds."""
- if not settings.STATIC_BUILD:
- # Normal dynamic redirect
- return [path(
- data.original_path,
- CustomRedirectView.as_view(
- pattern_name=data.redirect_route,
- static_args=tuple(data.redirect_arguments),
- prefix_redirect=data.prefix_redirect
- ),
- name=name
- )]
-
- # Create static HTML redirects for static builds
- new_app_name = data.redirect_route.split(":")[0]
-
- if __PARAMETER_REGEX.search(data.original_path):
- # Redirects for paths which accept parameters
- # We generate an HTML redirect file for all possible entries
- paths = []
-
- class RedirectFunc:
- def __init__(self, new_url: str, _name: str):
- self.result = HttpResponse(REDIRECT_TEMPLATE.format(url=new_url))
- self.__qualname__ = _name
-
- def __call__(self, *args, **kwargs):
- return self.result
-
- if new_app_name == resources_urls.app_name:
- items = resources_urls.get_all_resources()
- elif new_app_name == pages_urls.app_name:
- items = pages_urls.get_all_pages()
- else:
- raise ValueError(f"Unknown app in redirect: {new_app_name}")
-
- for item in items:
- entry = next(iter(item.values()))
-
- # Replace dynamic redirect with concrete path
- concrete_path = __PARAMETER_REGEX.sub(entry, data.original_path)
- new_redirect = f"/{new_app_name}/{entry}"
- pattern_name = f"{name}_{entry}"
-
- paths.append(distill_path(
- concrete_path,
- RedirectFunc(new_redirect, pattern_name),
- name=pattern_name
- ))
-
- return paths
-
- redirect_path_name = "pages" if new_app_name == "content" else new_app_name
- if len(data.redirect_arguments) > 0:
- redirect_arg = data.redirect_arguments[0]
- else:
- redirect_arg = "resources/"
- new_redirect = f"/{redirect_path_name}/{redirect_arg}"
-
- if new_redirect == "/resources/resources/":
- new_redirect = "/resources/"
-
- return [distill_path(
+ """Return a pattern using the Redirects app."""
+ return [path(
data.original_path,
- lambda *args: HttpResponse(REDIRECT_TEMPLATE.format(url=new_redirect)),
- name=name,
+ CustomRedirectView.as_view(
+ pattern_name=data.redirect_route,
+ static_args=tuple(data.redirect_arguments),
+ prefix_redirect=data.prefix_redirect
+ ),
+ name=name
)]
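For illustration, a minimal sketch of exercising the simplified `map_redirect` after this change. The entry below is hypothetical (field names follow the `Redirect` dataclass), and the import assumes the site's Django settings are configured, since the module reads `REDIRECTIONS_PATH` at import time.
```python
from pydis_site.apps.redirect.urls import Redirect, map_redirect

# Hypothetical entry, shaped like one item of yaml.safe_load(REDIRECTIONS_PATH).
data = {
    "original_path": "resources/",
    "redirect_route": "resources:index",  # illustrative route name
    "prefix_redirect": False,
}

# With the static-build branch removed, every redirect maps to exactly one
# dynamic URL pattern backed by CustomRedirectView.
urlpatterns = map_redirect("resources_index", Redirect(**data))
```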
| {"golden_diff": "diff --git a/pydis_site/apps/redirect/urls.py b/pydis_site/apps/redirect/urls.py\n--- a/pydis_site/apps/redirect/urls.py\n+++ b/pydis_site/apps/redirect/urls.py\n@@ -3,14 +3,9 @@\n \n import yaml\n from django import conf\n-from django.http import HttpResponse\n from django.urls import URLPattern, path\n-from django_distill import distill_path\n \n-from pydis_site import settings\n-from pydis_site.apps.content import urls as pages_urls\n from pydis_site.apps.redirect.views import CustomRedirectView\n-from pydis_site.apps.resources import urls as resources_urls\n \n app_name = \"redirect\"\n \n@@ -31,72 +26,15 @@\n \n \n def map_redirect(name: str, data: Redirect) -> list[URLPattern]:\n- \"\"\"Return a pattern using the Redirects app, or a static HTML redirect for static builds.\"\"\"\n- if not settings.STATIC_BUILD:\n- # Normal dynamic redirect\n- return [path(\n- data.original_path,\n- CustomRedirectView.as_view(\n- pattern_name=data.redirect_route,\n- static_args=tuple(data.redirect_arguments),\n- prefix_redirect=data.prefix_redirect\n- ),\n- name=name\n- )]\n-\n- # Create static HTML redirects for static builds\n- new_app_name = data.redirect_route.split(\":\")[0]\n-\n- if __PARAMETER_REGEX.search(data.original_path):\n- # Redirects for paths which accept parameters\n- # We generate an HTML redirect file for all possible entries\n- paths = []\n-\n- class RedirectFunc:\n- def __init__(self, new_url: str, _name: str):\n- self.result = HttpResponse(REDIRECT_TEMPLATE.format(url=new_url))\n- self.__qualname__ = _name\n-\n- def __call__(self, *args, **kwargs):\n- return self.result\n-\n- if new_app_name == resources_urls.app_name:\n- items = resources_urls.get_all_resources()\n- elif new_app_name == pages_urls.app_name:\n- items = pages_urls.get_all_pages()\n- else:\n- raise ValueError(f\"Unknown app in redirect: {new_app_name}\")\n-\n- for item in items:\n- entry = next(iter(item.values()))\n-\n- # Replace dynamic redirect with concrete path\n- concrete_path = __PARAMETER_REGEX.sub(entry, data.original_path)\n- new_redirect = f\"/{new_app_name}/{entry}\"\n- pattern_name = f\"{name}_{entry}\"\n-\n- paths.append(distill_path(\n- concrete_path,\n- RedirectFunc(new_redirect, pattern_name),\n- name=pattern_name\n- ))\n-\n- return paths\n-\n- redirect_path_name = \"pages\" if new_app_name == \"content\" else new_app_name\n- if len(data.redirect_arguments) > 0:\n- redirect_arg = data.redirect_arguments[0]\n- else:\n- redirect_arg = \"resources/\"\n- new_redirect = f\"/{redirect_path_name}/{redirect_arg}\"\n-\n- if new_redirect == \"/resources/resources/\":\n- new_redirect = \"/resources/\"\n-\n- return [distill_path(\n+ \"\"\"Return a pattern using the Redirects app.\"\"\"\n+ return [path(\n data.original_path,\n- lambda *args: HttpResponse(REDIRECT_TEMPLATE.format(url=new_redirect)),\n- name=name,\n+ CustomRedirectView.as_view(\n+ pattern_name=data.redirect_route,\n+ static_args=tuple(data.redirect_arguments),\n+ prefix_redirect=data.prefix_redirect\n+ ),\n+ name=name\n )]\n", "issue": "Consider dropping deploy preview support for redirects app\nDo we need previews of the legacy redirects?\n\nIf not, we may be able to remove a lot of code from the redirects app.\n", "before_files": [{"content": "import dataclasses\nimport re\n\nimport yaml\nfrom django import conf\nfrom django.http import HttpResponse\nfrom django.urls import URLPattern, path\nfrom django_distill import distill_path\n\nfrom pydis_site import settings\nfrom pydis_site.apps.content import urls as pages_urls\nfrom 
pydis_site.apps.redirect.views import CustomRedirectView\nfrom pydis_site.apps.resources import urls as resources_urls\n\napp_name = \"redirect\"\n\n\n__PARAMETER_REGEX = re.compile(r\"<\\w+:\\w+>\")\nREDIRECT_TEMPLATE = \"<meta http-equiv=\\\"refresh\\\" content=\\\"0; URL={url}\\\"/>\"\n\n\[email protected](frozen=True)\nclass Redirect:\n \"\"\"Metadata about a redirect route.\"\"\"\n\n original_path: str\n redirect_route: str\n redirect_arguments: tuple[str] = tuple()\n\n prefix_redirect: bool = False\n\n\ndef map_redirect(name: str, data: Redirect) -> list[URLPattern]:\n \"\"\"Return a pattern using the Redirects app, or a static HTML redirect for static builds.\"\"\"\n if not settings.STATIC_BUILD:\n # Normal dynamic redirect\n return [path(\n data.original_path,\n CustomRedirectView.as_view(\n pattern_name=data.redirect_route,\n static_args=tuple(data.redirect_arguments),\n prefix_redirect=data.prefix_redirect\n ),\n name=name\n )]\n\n # Create static HTML redirects for static builds\n new_app_name = data.redirect_route.split(\":\")[0]\n\n if __PARAMETER_REGEX.search(data.original_path):\n # Redirects for paths which accept parameters\n # We generate an HTML redirect file for all possible entries\n paths = []\n\n class RedirectFunc:\n def __init__(self, new_url: str, _name: str):\n self.result = HttpResponse(REDIRECT_TEMPLATE.format(url=new_url))\n self.__qualname__ = _name\n\n def __call__(self, *args, **kwargs):\n return self.result\n\n if new_app_name == resources_urls.app_name:\n items = resources_urls.get_all_resources()\n elif new_app_name == pages_urls.app_name:\n items = pages_urls.get_all_pages()\n else:\n raise ValueError(f\"Unknown app in redirect: {new_app_name}\")\n\n for item in items:\n entry = next(iter(item.values()))\n\n # Replace dynamic redirect with concrete path\n concrete_path = __PARAMETER_REGEX.sub(entry, data.original_path)\n new_redirect = f\"/{new_app_name}/{entry}\"\n pattern_name = f\"{name}_{entry}\"\n\n paths.append(distill_path(\n concrete_path,\n RedirectFunc(new_redirect, pattern_name),\n name=pattern_name\n ))\n\n return paths\n\n redirect_path_name = \"pages\" if new_app_name == \"content\" else new_app_name\n if len(data.redirect_arguments) > 0:\n redirect_arg = data.redirect_arguments[0]\n else:\n redirect_arg = \"resources/\"\n new_redirect = f\"/{redirect_path_name}/{redirect_arg}\"\n\n if new_redirect == \"/resources/resources/\":\n new_redirect = \"/resources/\"\n\n return [distill_path(\n data.original_path,\n lambda *args: HttpResponse(REDIRECT_TEMPLATE.format(url=new_redirect)),\n name=name,\n )]\n\n\nurlpatterns = []\nfor _name, _data in yaml.safe_load(conf.settings.REDIRECTIONS_PATH.read_text()).items():\n urlpatterns.extend(map_redirect(_name, Redirect(**_data)))\n", "path": "pydis_site/apps/redirect/urls.py"}], "after_files": [{"content": "import dataclasses\nimport re\n\nimport yaml\nfrom django import conf\nfrom django.urls import URLPattern, path\n\nfrom pydis_site.apps.redirect.views import CustomRedirectView\n\napp_name = \"redirect\"\n\n\n__PARAMETER_REGEX = re.compile(r\"<\\w+:\\w+>\")\nREDIRECT_TEMPLATE = \"<meta http-equiv=\\\"refresh\\\" content=\\\"0; URL={url}\\\"/>\"\n\n\[email protected](frozen=True)\nclass Redirect:\n \"\"\"Metadata about a redirect route.\"\"\"\n\n original_path: str\n redirect_route: str\n redirect_arguments: tuple[str] = tuple()\n\n prefix_redirect: bool = False\n\n\ndef map_redirect(name: str, data: Redirect) -> list[URLPattern]:\n \"\"\"Return a pattern using the Redirects app.\"\"\"\n return 
[path(\n data.original_path,\n CustomRedirectView.as_view(\n pattern_name=data.redirect_route,\n static_args=tuple(data.redirect_arguments),\n prefix_redirect=data.prefix_redirect\n ),\n name=name\n )]\n\n\nurlpatterns = []\nfor _name, _data in yaml.safe_load(conf.settings.REDIRECTIONS_PATH.read_text()).items():\n urlpatterns.extend(map_redirect(_name, Redirect(**_data)))\n", "path": "pydis_site/apps/redirect/urls.py"}]} | 1,237 | 773 |