| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.1k-10.2k | stringlengths 151-4.94k | stringlengths 582-21k | int64 271-2.05k | int64 47-1.02k |
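
Each record below pairs a GitHub issue and its repository context (`prompt`) with the reference patch (`golden_diff`) and a JSON payload of pre- and post-patch file contents (`verification_info`). Assuming the `source` value doubles as the Hugging Face hub ID (the real hub path may differ), a minimal way to load and inspect a record looks like this:

```python
# Minimal loading sketch. Both the hub ID (taken from the `source` column)
# and the "train" split name are assumptions, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens"])
print(row["prompt"][:300])  # each prompt embeds the issue text and file contents
```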

problem_id: gh_patches_debug_23185 | source: rasdani/github-patches | task_type: git_diff | in_source_id: angr__angr-3508

prompt:

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError angr 9.2.16 and PyPy 7.3.9 (CPython 3.9)
<!--
*Disclaimer:
The angr suite is maintained by a small team of volunteers.
While we cannot guarantee any timeliness for fixes and enhancements, we will do our best.
For more real-time help with angr, from us and the community, join our [Slack.](https://angr.io/invite/)*
-->
---
**Describe the bug.**
<!--
Please include a clear and concise description of what the bug is.
-->
The latest version of angr appears to have issues when ran with the latest version of PyPy (I haven't tested with CPython). This issue affects all angr versions newer than 9.2.11:
```text
$ python --version
Python 3.9.12 (05fbe3aa5b0845e6c37239768aa455451aa5faba, Mar 29 2022, 08:15:34)
[PyPy 7.3.9 with GCC 10.2.1 20210130 (Red Hat 10.2.1-11)]
$ python -c "import angr; p = angr.Project('/bin/ls')"
WARNING | 2022-08-31 09:52:12,054 | cle.loader | The main binary is a position-independent executable. It is being loaded with a base address of 0x400000.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/carter/env/lib/pypy3.9/site-packages/angr/project.py", line 230, in __init__
self.simos.configure_project()
File "/home/carter/env/lib/pypy3.9/site-packages/angr/simos/linux.py", line 161, in configure_project
super(SimLinux, self).configure_project(syscall_abis)
File "/home/carter/env/lib/pypy3.9/site-packages/angr/simos/userland.py", line 49, in configure_project
super().configure_project()
File "/home/carter/env/lib/pypy3.9/site-packages/angr/simos/simos.py", line 82, in configure_project
self.project.loader.perform_irelative_relocs(irelative_resolver)
File "/home/carter/env/lib/pypy3.9/site-packages/cle/loader.py", line 601, in perform_irelative_relocs
val = resolver_func(resolver)
File "/home/carter/env/lib/pypy3.9/site-packages/angr/simos/simos.py", line 72, in irelative_resolver
val = resolver()
File "/home/carter/env/lib/pypy3.9/site-packages/angr/callable.py", line 55, in __call__
self.perform_call(*args, prototype=prototype)
File "/home/carter/env/lib/pypy3.9/site-packages/angr/callable.py", line 78, in perform_call
caller = self._project.factory.simulation_manager(state)
File "/home/carter/env/lib/pypy3.9/site-packages/angr/factory.py", line 181, in simulation_manager
return SimulationManager(self.project, active_states=thing, **kwargs)
File "/home/carter/env/lib/pypy3.9/site-packages/angr/sim_manager.py", line 94, in __init__
self._hierarchy = StateHierarchy() if hierarchy is None else hierarchy
File "/home/carter/env/lib/pypy3.9/site-packages/angr/state_hierarchy.py", line 31, in __init__
self._lock = PicklableRLock()
File "/home/carter/env/lib/pypy3.9/site-packages/angr/misc/picklable_lock.py", line 11, in __init__
self._lock = self._LOCK(*args, **kwargs) # pylint: disable=too-many-function-args
File "/home/carter/pypy3.9/lib/pypy3.9/threading.py", line 93, in RLock
return _CRLock(*args, **kwargs)
TypeError: __new__() takes 1 positional argument but 2 were given
```
**Environment Information.**
<!--
Many common issues are caused by problems with the local Python environment.
Before submitting, double-check that your versions of all modules in the angr suite (angr, cle, pyvex, ...) are up to date.
Please include the output of `python -m angr.misc.bug_report` here.
-->
* Affects angr versions 9.2.12 through 9.2.16
* PyPy version 7.3.9 (latest release at time of writing)
* Debian Bullseye
**To Reproduce.**
<!--
Please include *both a script to reproduce the crash, and attach the binary used, if possible*
-->
1. Install latest PyPy release
2. Install angr version 9.2.16
3. Run: `python -c "import angr; p = angr.Project('/bin/ls')`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `angr/misc/picklable_lock.py`
Content:
```
1 import threading
2
3 class PicklableLock:
4 """
5 Normal thread-locks are not pickleable. This provides a pickleable lock by mandating that the lock is unlocked
6 during serialization.
7 """
8 _LOCK = threading.Lock
9
10 def __init__(self, *args, **kwargs):
11 self._lock = self._LOCK(*args, **kwargs) # pylint: disable=too-many-function-args
12
13 def __enter__(self):
14 return self._lock.__enter__()
15
16 def __exit__(self, exc_type, exc_val, exc_tb):
17 return self._lock.__exit__(exc_type, exc_val, exc_tb)
18
19 def acquire(self, *args, **kwargs):
20 return self._lock.acquire(*args, **kwargs)
21
22 def locked(self):
23 return self._lock.locked()
24
25 def release(self):
26 return self._lock.release()
27
28 def __reduce__(self):
29 if self.locked():
30 raise TypeError("Why are you pickling a locked lock")
31 return type(self), ()
32
33 class PicklableRLock(PicklableLock):
34 """
35 Same as above, but uses RLock instead of Lock for locking. Note that RLock does not provide an interface to tell
36 whether is it presently held by any thread, and thus this class will lie about whether it is locked.
37 """
38 _LOCK = threading.RLock
39
40 def locked(self):
41 return False # ummmmmmmmmmmmmmmm
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:

```diff
diff --git a/angr/misc/picklable_lock.py b/angr/misc/picklable_lock.py
--- a/angr/misc/picklable_lock.py
+++ b/angr/misc/picklable_lock.py
@@ -1,5 +1,6 @@
import threading
+
class PicklableLock:
"""
Normal thread-locks are not pickleable. This provides a pickleable lock by mandating that the lock is unlocked
@@ -8,7 +9,7 @@
_LOCK = threading.Lock
def __init__(self, *args, **kwargs):
- self._lock = self._LOCK(*args, **kwargs) # pylint: disable=too-many-function-args
+ self._lock = self.__class__._LOCK(*args, **kwargs) # pylint: disable=too-many-function-args
def __enter__(self):
return self._lock.__enter__()
@@ -30,6 +31,7 @@
raise TypeError("Why are you pickling a locked lock")
return type(self), ()
+
class PicklableRLock(PicklableLock):
"""
Same as above, but uses RLock instead of Lock for locking. Note that RLock does not provide an interface to tell
| {"golden_diff": "diff --git a/angr/misc/picklable_lock.py b/angr/misc/picklable_lock.py\n--- a/angr/misc/picklable_lock.py\n+++ b/angr/misc/picklable_lock.py\n@@ -1,5 +1,6 @@\n import threading\n \n+\n class PicklableLock:\n \"\"\"\n Normal thread-locks are not pickleable. This provides a pickleable lock by mandating that the lock is unlocked\n@@ -8,7 +9,7 @@\n _LOCK = threading.Lock\n \n def __init__(self, *args, **kwargs):\n- self._lock = self._LOCK(*args, **kwargs) # pylint: disable=too-many-function-args\n+ self._lock = self.__class__._LOCK(*args, **kwargs) # pylint: disable=too-many-function-args\n \n def __enter__(self):\n return self._lock.__enter__()\n@@ -30,6 +31,7 @@\n raise TypeError(\"Why are you pickling a locked lock\")\n return type(self), ()\n \n+\n class PicklableRLock(PicklableLock):\n \"\"\"\n Same as above, but uses RLock instead of Lock for locking. Note that RLock does not provide an interface to tell\n", "issue": "TypeError angr 9.2.16 and PyPy 7.3.9 (CPython 3.9)\n<!--\r\n*Disclaimer:\r\nThe angr suite is maintained by a small team of volunteers.\r\nWhile we cannot guarantee any timeliness for fixes and enhancements, we will do our best.\r\nFor more real-time help with angr, from us and the community, join our [Slack.](https://angr.io/invite/)*\r\n-->\r\n---\r\n\r\n**Describe the bug.**\r\n<!--\r\nPlease include a clear and concise description of what the bug is.\r\n-->\r\n\r\nThe latest version of angr appears to have issues when ran with the latest version of PyPy (I haven't tested with CPython). This issue affects all angr versions newer than 9.2.11:\r\n\r\n```text\r\n$ python --version\r\nPython 3.9.12 (05fbe3aa5b0845e6c37239768aa455451aa5faba, Mar 29 2022, 08:15:34)\r\n[PyPy 7.3.9 with GCC 10.2.1 20210130 (Red Hat 10.2.1-11)]\r\n$ python -c \"import angr; p = angr.Project('/bin/ls')\"\r\nWARNING | 2022-08-31 09:52:12,054 | cle.loader | The main binary is a position-independent executable. 
It is being loaded with a base address of 0x400000.\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/project.py\", line 230, in __init__\r\n self.simos.configure_project()\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/simos/linux.py\", line 161, in configure_project\r\n super(SimLinux, self).configure_project(syscall_abis)\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/simos/userland.py\", line 49, in configure_project\r\n super().configure_project()\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/simos/simos.py\", line 82, in configure_project\r\n self.project.loader.perform_irelative_relocs(irelative_resolver)\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/cle/loader.py\", line 601, in perform_irelative_relocs\r\n val = resolver_func(resolver)\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/simos/simos.py\", line 72, in irelative_resolver\r\n val = resolver()\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/callable.py\", line 55, in __call__\r\n self.perform_call(*args, prototype=prototype)\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/callable.py\", line 78, in perform_call\r\n caller = self._project.factory.simulation_manager(state)\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/factory.py\", line 181, in simulation_manager\r\n return SimulationManager(self.project, active_states=thing, **kwargs)\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/sim_manager.py\", line 94, in __init__\r\n self._hierarchy = StateHierarchy() if hierarchy is None else hierarchy\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/state_hierarchy.py\", line 31, in __init__\r\n self._lock = PicklableRLock()\r\n File \"/home/carter/env/lib/pypy3.9/site-packages/angr/misc/picklable_lock.py\", line 11, in __init__\r\n self._lock = self._LOCK(*args, **kwargs) # pylint: disable=too-many-function-args\r\n File \"/home/carter/pypy3.9/lib/pypy3.9/threading.py\", line 93, in RLock\r\n return _CRLock(*args, **kwargs)\r\nTypeError: __new__() takes 1 positional argument but 2 were given\r\n```\r\n\r\n**Environment Information.**\r\n<!--\r\nMany common issues are caused by problems with the local Python environment.\r\nBefore submitting, double-check that your versions of all modules in the angr suite (angr, cle, pyvex, ...) are up to date.\r\nPlease include the output of `python -m angr.misc.bug_report` here.\r\n-->\r\n\r\n* Affects angr versions 9.2.12 through 9.2.16\r\n* PyPy version 7.3.9 (latest release at time of writing)\r\n* Debian Bullseye\r\n\r\n**To Reproduce.**\r\n<!--\r\nPlease include *both a script to reproduce the crash, and attach the binary used, if possible*\r\n-->\r\n\r\n1. Install latest PyPy release\r\n2. Install angr version 9.2.16\r\n3. Run: `python -c \"import angr; p = angr.Project('/bin/ls')`\n", "before_files": [{"content": "import threading\n\nclass PicklableLock:\n \"\"\"\n Normal thread-locks are not pickleable. 
This provides a pickleable lock by mandating that the lock is unlocked\n during serialization.\n \"\"\"\n _LOCK = threading.Lock\n\n def __init__(self, *args, **kwargs):\n self._lock = self._LOCK(*args, **kwargs) # pylint: disable=too-many-function-args\n\n def __enter__(self):\n return self._lock.__enter__()\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n return self._lock.__exit__(exc_type, exc_val, exc_tb)\n\n def acquire(self, *args, **kwargs):\n return self._lock.acquire(*args, **kwargs)\n\n def locked(self):\n return self._lock.locked()\n\n def release(self):\n return self._lock.release()\n\n def __reduce__(self):\n if self.locked():\n raise TypeError(\"Why are you pickling a locked lock\")\n return type(self), ()\n\nclass PicklableRLock(PicklableLock):\n \"\"\"\n Same as above, but uses RLock instead of Lock for locking. Note that RLock does not provide an interface to tell\n whether is it presently held by any thread, and thus this class will lie about whether it is locked.\n \"\"\"\n _LOCK = threading.RLock\n\n def locked(self):\n return False # ummmmmmmmmmmmmmmm\n", "path": "angr/misc/picklable_lock.py"}], "after_files": [{"content": "import threading\n\n\nclass PicklableLock:\n \"\"\"\n Normal thread-locks are not pickleable. This provides a pickleable lock by mandating that the lock is unlocked\n during serialization.\n \"\"\"\n _LOCK = threading.Lock\n\n def __init__(self, *args, **kwargs):\n self._lock = self.__class__._LOCK(*args, **kwargs) # pylint: disable=too-many-function-args\n\n def __enter__(self):\n return self._lock.__enter__()\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n return self._lock.__exit__(exc_type, exc_val, exc_tb)\n\n def acquire(self, *args, **kwargs):\n return self._lock.acquire(*args, **kwargs)\n\n def locked(self):\n return self._lock.locked()\n\n def release(self):\n return self._lock.release()\n\n def __reduce__(self):\n if self.locked():\n raise TypeError(\"Why are you pickling a locked lock\")\n return type(self), ()\n\n\nclass PicklableRLock(PicklableLock):\n \"\"\"\n Same as above, but uses RLock instead of Lock for locking. Note that RLock does not provide an interface to tell\n whether is it presently held by any thread, and thus this class will lie about whether it is locked.\n \"\"\"\n _LOCK = threading.RLock\n\n def locked(self):\n return False # ummmmmmmmmmmmmmmm\n", "path": "angr/misc/picklable_lock.py"}]} | 1,830 | 275 |
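
The `verification_info` payload above repeats the golden diff next to `before_files` and `after_files`, which suggests one way a candidate patch could be checked: materialize `before_files` in a scratch directory, apply the diff, and compare the result to `after_files`. A sketch of that idea follows; the field names mirror the record above, while the use of `git apply` as the patch tool is an assumption, since the card does not name a verification harness.

```python
# Hypothetical patch checker built on the verification_info layout above.
# `git apply` is assumed; the dataset does not specify a verification tool.
import json
import subprocess
import tempfile
from pathlib import Path


def check_patch(verification_info: str, candidate_diff: str) -> bool:
    info = json.loads(verification_info)
    with tempfile.TemporaryDirectory() as tmp:
        root = Path(tmp)
        for f in info["before_files"]:
            dest = root / f["path"]
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_text(f["content"])
        # Apply the candidate patch on top of the pre-patch tree.
        proc = subprocess.run(
            ["git", "apply", "-p1"],
            input=candidate_diff,
            text=True,
            cwd=root,
            capture_output=True,
        )
        if proc.returncode != 0:
            return False
        # The patched tree must match the expected post-patch files exactly.
        return all(
            (root / f["path"]).read_text() == f["content"]
            for f in info["after_files"]
        )
```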

problem_id: gh_patches_debug_36603 | source: rasdani/github-patches | task_type: git_diff | in_source_id: getsentry__sentry-59486

prompt:

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User avatars don't show in emails
At least for comment notifications, the avatar of the user who commented is just a blue box with a question mark regardless of whether they have a custom avatar or the default gravatar. We should check if this is happening for other notifications or if it's just the comment workflow email.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/notifications/utils/avatar.py`
Content:
```
1 from __future__ import annotations
2
3 from django.urls import reverse
4 from django.utils.html import format_html
5 from django.utils.safestring import SafeString
6
7 from sentry.models.avatars.user_avatar import UserAvatar
8 from sentry.models.user import User
9 from sentry.services.hybrid_cloud.user import RpcUser
10 from sentry.utils.assets import get_asset_url
11 from sentry.utils.avatar import get_email_avatar
12 from sentry.utils.http import absolute_uri
13
14
15 def get_user_avatar_url(user: User | RpcUser, size: int = 20) -> str:
16 ident: str
17 if isinstance(user, User):
18 try:
19 avatar = UserAvatar.objects.get(user=user)
20 ident = avatar.ident
21 except UserAvatar.DoesNotExist:
22 return ""
23 elif user.avatar:
24 if user.avatar is None:
25 return ""
26 ident = user.avatar.ident
27 else:
28 return ""
29
30 url = reverse("sentry-user-avatar-url", args=[ident])
31 if size:
32 url = f"{url}?s={int(size)}"
33 return str(absolute_uri(url))
34
35
36 def get_sentry_avatar_url() -> str:
37 url = "/images/sentry-email-avatar.png"
38 return str(absolute_uri(get_asset_url("sentry", url)))
39
40
41 def avatar_as_html(user: User | RpcUser) -> SafeString:
42 if not user:
43 return format_html(
44 '<img class="avatar" src="{}" width="20px" height="20px" />', get_sentry_avatar_url()
45 )
46 avatar_type = user.get_avatar_type()
47 if avatar_type == "upload":
48 return format_html('<img class="avatar" src="{}" />', get_user_avatar_url(user))
49 elif avatar_type == "letter_avatar":
50 return get_email_avatar(user.get_display_name(), user.get_label(), 20, False)
51 else:
52 return get_email_avatar(user.get_display_name(), user.get_label(), 20, True)
53
```
Path: `src/sentry/notifications/notifications/activity/note.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any, Mapping, Optional
4
5 from sentry.services.hybrid_cloud.actor import RpcActor
6 from sentry.types.integrations import ExternalProviders
7
8 from .base import GroupActivityNotification
9
10
11 class NoteActivityNotification(GroupActivityNotification):
12 message_builder = "SlackNotificationsMessageBuilder"
13 metrics_key = "note_activity"
14 template_path = "sentry/emails/activity/note"
15
16 def get_description(self) -> tuple[str, Optional[str], Mapping[str, Any]]:
17 # Notes may contain {} characters so we should escape them.
18 text = str(self.activity.data["text"]).replace("{", "{{").replace("}", "}}")
19 return text, None, {}
20
21 @property
22 def title(self) -> str:
23 if self.user:
24 author = self.user.get_display_name()
25 else:
26 author = "Unknown"
27 return f"New comment by {author}"
28
29 def get_notification_title(
30 self, provider: ExternalProviders, context: Mapping[str, Any] | None = None
31 ) -> str:
32 return self.title
33
34 def get_message_description(self, recipient: RpcActor, provider: ExternalProviders) -> Any:
35 return self.get_context()["text_description"]
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:

```diff
diff --git a/src/sentry/notifications/notifications/activity/note.py b/src/sentry/notifications/notifications/activity/note.py
--- a/src/sentry/notifications/notifications/activity/note.py
+++ b/src/sentry/notifications/notifications/activity/note.py
@@ -2,6 +2,10 @@
from typing import Any, Mapping, Optional
+from django.utils.html import format_html
+from django.utils.safestring import SafeString
+
+from sentry.notifications.utils.avatar import avatar_as_html
from sentry.services.hybrid_cloud.actor import RpcActor
from sentry.types.integrations import ExternalProviders
@@ -33,3 +37,15 @@
def get_message_description(self, recipient: RpcActor, provider: ExternalProviders) -> Any:
return self.get_context()["text_description"]
+
+ def description_as_html(self, description: str, params: Mapping[str, Any]) -> SafeString:
+ """Note emails are formatted differently from almost all other activity emails.
+ Rather than passing the `description` as a string to be formatted into HTML with
+ `author` and `an_issue` (see base definition and resolved.py's `get_description`
+ as an example) we are simply passed the comment as a string that needs no formatting,
+ and want the avatar on it's own rather than bundled with the author's display name
+ because the display name is already shown in the notification title."""
+ fmt = '<span class="avatar-container">{}</span>'
+ if self.user:
+ return format_html(fmt, avatar_as_html(self.user, 48))
+ return format_html(description)
diff --git a/src/sentry/notifications/utils/avatar.py b/src/sentry/notifications/utils/avatar.py
--- a/src/sentry/notifications/utils/avatar.py
+++ b/src/sentry/notifications/utils/avatar.py
@@ -38,15 +38,18 @@
return str(absolute_uri(get_asset_url("sentry", url)))
-def avatar_as_html(user: User | RpcUser) -> SafeString:
+def avatar_as_html(user: User | RpcUser, size: int = 20) -> SafeString:
if not user:
return format_html(
- '<img class="avatar" src="{}" width="20px" height="20px" />', get_sentry_avatar_url()
+ '<img class="avatar" src="{}" width="{}px" height="{}px" />',
+ get_sentry_avatar_url(),
+ size,
+ size,
)
avatar_type = user.get_avatar_type()
if avatar_type == "upload":
return format_html('<img class="avatar" src="{}" />', get_user_avatar_url(user))
elif avatar_type == "letter_avatar":
- return get_email_avatar(user.get_display_name(), user.get_label(), 20, False)
+ return get_email_avatar(user.get_display_name(), user.get_label(), size, False)
else:
- return get_email_avatar(user.get_display_name(), user.get_label(), 20, True)
+ return get_email_avatar(user.get_display_name(), user.get_label(), size, True)
| {"golden_diff": "diff --git a/src/sentry/notifications/notifications/activity/note.py b/src/sentry/notifications/notifications/activity/note.py\n--- a/src/sentry/notifications/notifications/activity/note.py\n+++ b/src/sentry/notifications/notifications/activity/note.py\n@@ -2,6 +2,10 @@\n \n from typing import Any, Mapping, Optional\n \n+from django.utils.html import format_html\n+from django.utils.safestring import SafeString\n+\n+from sentry.notifications.utils.avatar import avatar_as_html\n from sentry.services.hybrid_cloud.actor import RpcActor\n from sentry.types.integrations import ExternalProviders\n \n@@ -33,3 +37,15 @@\n \n def get_message_description(self, recipient: RpcActor, provider: ExternalProviders) -> Any:\n return self.get_context()[\"text_description\"]\n+\n+ def description_as_html(self, description: str, params: Mapping[str, Any]) -> SafeString:\n+ \"\"\"Note emails are formatted differently from almost all other activity emails.\n+ Rather than passing the `description` as a string to be formatted into HTML with\n+ `author` and `an_issue` (see base definition and resolved.py's `get_description`\n+ as an example) we are simply passed the comment as a string that needs no formatting,\n+ and want the avatar on it's own rather than bundled with the author's display name\n+ because the display name is already shown in the notification title.\"\"\"\n+ fmt = '<span class=\"avatar-container\">{}</span>'\n+ if self.user:\n+ return format_html(fmt, avatar_as_html(self.user, 48))\n+ return format_html(description)\ndiff --git a/src/sentry/notifications/utils/avatar.py b/src/sentry/notifications/utils/avatar.py\n--- a/src/sentry/notifications/utils/avatar.py\n+++ b/src/sentry/notifications/utils/avatar.py\n@@ -38,15 +38,18 @@\n return str(absolute_uri(get_asset_url(\"sentry\", url)))\n \n \n-def avatar_as_html(user: User | RpcUser) -> SafeString:\n+def avatar_as_html(user: User | RpcUser, size: int = 20) -> SafeString:\n if not user:\n return format_html(\n- '<img class=\"avatar\" src=\"{}\" width=\"20px\" height=\"20px\" />', get_sentry_avatar_url()\n+ '<img class=\"avatar\" src=\"{}\" width=\"{}px\" height=\"{}px\" />',\n+ get_sentry_avatar_url(),\n+ size,\n+ size,\n )\n avatar_type = user.get_avatar_type()\n if avatar_type == \"upload\":\n return format_html('<img class=\"avatar\" src=\"{}\" />', get_user_avatar_url(user))\n elif avatar_type == \"letter_avatar\":\n- return get_email_avatar(user.get_display_name(), user.get_label(), 20, False)\n+ return get_email_avatar(user.get_display_name(), user.get_label(), size, False)\n else:\n- return get_email_avatar(user.get_display_name(), user.get_label(), 20, True)\n+ return get_email_avatar(user.get_display_name(), user.get_label(), size, True)\n", "issue": "User avatars don't show in emails\nAt least for comment notifications, the avatar of the user who commented is just a blue box with a question mark regardless of whether they have a custom avatar or the default gravatar. 
We should check if this is happening for other notifications or if it's just the comment workflow email.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom django.urls import reverse\nfrom django.utils.html import format_html\nfrom django.utils.safestring import SafeString\n\nfrom sentry.models.avatars.user_avatar import UserAvatar\nfrom sentry.models.user import User\nfrom sentry.services.hybrid_cloud.user import RpcUser\nfrom sentry.utils.assets import get_asset_url\nfrom sentry.utils.avatar import get_email_avatar\nfrom sentry.utils.http import absolute_uri\n\n\ndef get_user_avatar_url(user: User | RpcUser, size: int = 20) -> str:\n ident: str\n if isinstance(user, User):\n try:\n avatar = UserAvatar.objects.get(user=user)\n ident = avatar.ident\n except UserAvatar.DoesNotExist:\n return \"\"\n elif user.avatar:\n if user.avatar is None:\n return \"\"\n ident = user.avatar.ident\n else:\n return \"\"\n\n url = reverse(\"sentry-user-avatar-url\", args=[ident])\n if size:\n url = f\"{url}?s={int(size)}\"\n return str(absolute_uri(url))\n\n\ndef get_sentry_avatar_url() -> str:\n url = \"/images/sentry-email-avatar.png\"\n return str(absolute_uri(get_asset_url(\"sentry\", url)))\n\n\ndef avatar_as_html(user: User | RpcUser) -> SafeString:\n if not user:\n return format_html(\n '<img class=\"avatar\" src=\"{}\" width=\"20px\" height=\"20px\" />', get_sentry_avatar_url()\n )\n avatar_type = user.get_avatar_type()\n if avatar_type == \"upload\":\n return format_html('<img class=\"avatar\" src=\"{}\" />', get_user_avatar_url(user))\n elif avatar_type == \"letter_avatar\":\n return get_email_avatar(user.get_display_name(), user.get_label(), 20, False)\n else:\n return get_email_avatar(user.get_display_name(), user.get_label(), 20, True)\n", "path": "src/sentry/notifications/utils/avatar.py"}, {"content": "from __future__ import annotations\n\nfrom typing import Any, Mapping, Optional\n\nfrom sentry.services.hybrid_cloud.actor import RpcActor\nfrom sentry.types.integrations import ExternalProviders\n\nfrom .base import GroupActivityNotification\n\n\nclass NoteActivityNotification(GroupActivityNotification):\n message_builder = \"SlackNotificationsMessageBuilder\"\n metrics_key = \"note_activity\"\n template_path = \"sentry/emails/activity/note\"\n\n def get_description(self) -> tuple[str, Optional[str], Mapping[str, Any]]:\n # Notes may contain {} characters so we should escape them.\n text = str(self.activity.data[\"text\"]).replace(\"{\", \"{{\").replace(\"}\", \"}}\")\n return text, None, {}\n\n @property\n def title(self) -> str:\n if self.user:\n author = self.user.get_display_name()\n else:\n author = \"Unknown\"\n return f\"New comment by {author}\"\n\n def get_notification_title(\n self, provider: ExternalProviders, context: Mapping[str, Any] | None = None\n ) -> str:\n return self.title\n\n def get_message_description(self, recipient: RpcActor, provider: ExternalProviders) -> Any:\n return self.get_context()[\"text_description\"]\n", "path": "src/sentry/notifications/notifications/activity/note.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom django.urls import reverse\nfrom django.utils.html import format_html\nfrom django.utils.safestring import SafeString\n\nfrom sentry.models.avatars.user_avatar import UserAvatar\nfrom sentry.models.user import User\nfrom sentry.services.hybrid_cloud.user import RpcUser\nfrom sentry.utils.assets import get_asset_url\nfrom sentry.utils.avatar import get_email_avatar\nfrom sentry.utils.http import 
absolute_uri\n\n\ndef get_user_avatar_url(user: User | RpcUser, size: int = 20) -> str:\n ident: str\n if isinstance(user, User):\n try:\n avatar = UserAvatar.objects.get(user=user)\n ident = avatar.ident\n except UserAvatar.DoesNotExist:\n return \"\"\n elif user.avatar:\n if user.avatar is None:\n return \"\"\n ident = user.avatar.ident\n else:\n return \"\"\n\n url = reverse(\"sentry-user-avatar-url\", args=[ident])\n if size:\n url = f\"{url}?s={int(size)}\"\n return str(absolute_uri(url))\n\n\ndef get_sentry_avatar_url() -> str:\n url = \"/images/sentry-email-avatar.png\"\n return str(absolute_uri(get_asset_url(\"sentry\", url)))\n\n\ndef avatar_as_html(user: User | RpcUser, size: int = 20) -> SafeString:\n if not user:\n return format_html(\n '<img class=\"avatar\" src=\"{}\" width=\"{}px\" height=\"{}px\" />',\n get_sentry_avatar_url(),\n size,\n size,\n )\n avatar_type = user.get_avatar_type()\n if avatar_type == \"upload\":\n return format_html('<img class=\"avatar\" src=\"{}\" />', get_user_avatar_url(user))\n elif avatar_type == \"letter_avatar\":\n return get_email_avatar(user.get_display_name(), user.get_label(), size, False)\n else:\n return get_email_avatar(user.get_display_name(), user.get_label(), size, True)\n", "path": "src/sentry/notifications/utils/avatar.py"}, {"content": "from __future__ import annotations\n\nfrom typing import Any, Mapping, Optional\n\nfrom django.utils.html import format_html\nfrom django.utils.safestring import SafeString\n\nfrom sentry.notifications.utils.avatar import avatar_as_html\nfrom sentry.services.hybrid_cloud.actor import RpcActor\nfrom sentry.types.integrations import ExternalProviders\n\nfrom .base import GroupActivityNotification\n\n\nclass NoteActivityNotification(GroupActivityNotification):\n message_builder = \"SlackNotificationsMessageBuilder\"\n metrics_key = \"note_activity\"\n template_path = \"sentry/emails/activity/note\"\n\n def get_description(self) -> tuple[str, Optional[str], Mapping[str, Any]]:\n # Notes may contain {} characters so we should escape them.\n text = str(self.activity.data[\"text\"]).replace(\"{\", \"{{\").replace(\"}\", \"}}\")\n return text, None, {}\n\n @property\n def title(self) -> str:\n if self.user:\n author = self.user.get_display_name()\n else:\n author = \"Unknown\"\n return f\"New comment by {author}\"\n\n def get_notification_title(\n self, provider: ExternalProviders, context: Mapping[str, Any] | None = None\n ) -> str:\n return self.title\n\n def get_message_description(self, recipient: RpcActor, provider: ExternalProviders) -> Any:\n return self.get_context()[\"text_description\"]\n\n def description_as_html(self, description: str, params: Mapping[str, Any]) -> SafeString:\n \"\"\"Note emails are formatted differently from almost all other activity emails.\n Rather than passing the `description` as a string to be formatted into HTML with\n `author` and `an_issue` (see base definition and resolved.py's `get_description`\n as an example) we are simply passed the comment as a string that needs no formatting,\n and want the avatar on it's own rather than bundled with the author's display name\n because the display name is already shown in the notification title.\"\"\"\n fmt = '<span class=\"avatar-container\">{}</span>'\n if self.user:\n return format_html(fmt, avatar_as_html(self.user, 48))\n return format_html(description)\n", "path": "src/sentry/notifications/notifications/activity/note.py"}]} | 1,190 | 682 |
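
Every prompt in this dataset wraps its sections in literal markers (`--- BEGIN ISSUE ---`, `--- END ISSUE ---`, `--- BEGIN FILES ---`, `--- END FILES ---`), so individual sections can be recovered with plain string slicing. A small helper using the markers exactly as they appear in the records:

```python
def extract_section(prompt: str, name: str) -> str:
    """Pull one marked section (e.g. "ISSUE" or "FILES") out of a prompt,
    using the literal --- BEGIN .. --- / --- END .. --- markers shown above."""
    begin = f"--- BEGIN {name} ---"
    end = f"--- END {name} ---"
    start = prompt.index(begin) + len(begin)
    return prompt[start:prompt.index(end)].strip()

# Example: issue_text = extract_section(row["prompt"], "ISSUE")
```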

problem_id: gh_patches_debug_19243 | source: rasdani/github-patches | task_type: git_diff | in_source_id: akvo__akvo-rsr-2204

prompt:

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API crashes when a non-valid date is entered
E.g. `http://rsr.akvo.org/rest/v1/project_update_extra/?created_at__gt=2015-07`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/views/project_update.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Akvo RSR is covered by the GNU Affero General Public License.
3
4 See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 """
7
8 from akvo.rsr.models import ProjectUpdate
9
10 from ..serializers import ProjectUpdateSerializer, ProjectUpdateExtraSerializer
11 from ..viewsets import PublicProjectViewSet
12
13 from rest_framework.decorators import api_view, permission_classes
14 from rest_framework.permissions import IsAuthenticated
15 from rest_framework.response import Response
16 from rest_framework.exceptions import ParseError
17 from re import match
18
19
20 class ProjectUpdateViewSet(PublicProjectViewSet):
21
22 """."""
23 queryset = ProjectUpdate.objects.select_related('project',
24 'user').prefetch_related('locations')
25 serializer_class = ProjectUpdateSerializer
26
27 paginate_by_param = 'limit'
28 max_paginate_by = 1000
29
30 def get_queryset(self):
31 """
32 Allow simple filtering on selected fields.
33 We don't use the default filter_fields, because Up filters on
34 datetime for last_modified_at, and they only support a date, not datetime.
35 """
36 created_at__gt = validate_date(self.request.QUERY_PARAMS.get('created_at__gt', None))
37 if created_at__gt is not None:
38 self.queryset = self.queryset.filter(created_at__gt=created_at__gt)
39 created_at__lt = validate_date(self.request.QUERY_PARAMS.get('created_at__lt', None))
40 if created_at__lt is not None:
41 self.queryset = self.queryset.filter(created_at__lt=created_at__lt)
42 last_modified_at__gt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__gt', None))
43 if last_modified_at__gt is not None:
44 self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)
45 last_modified_at__lt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__lt', None))
46 if last_modified_at__lt is not None:
47 self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)
48 # Get updates per organisation
49 project__partners = self.request.QUERY_PARAMS.get('project__partners', None)
50 if project__partners:
51 self.queryset = self.queryset.filter(project__partners=project__partners)
52 user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)
53 if user__organisations:
54 self.queryset = self.queryset.filter(user__organisations=user__organisations)
55 return super(ProjectUpdateViewSet, self).get_queryset()
56
57
58 class ProjectUpdateExtraViewSet(PublicProjectViewSet):
59
60 """Project update extra resource."""
61
62 max_paginate_by = 30
63 paginate_by = 10
64
65 queryset = ProjectUpdate.objects.select_related(
66 'primary_location',
67 'primary_location__location_target',
68 'primary_location__location_target__project',
69 'primary_location__location_target__user',
70 'primary_location__location_target__primary_location',
71 'primary_location__location_target__country',
72 'project',
73 'user',
74 'user__organisation',
75 'user__organisation__primary_location',
76 'user__organisation__primary_location__country',
77 'user__organisation__primary_location__location_target',
78 'user__organisation__primary_location__location_target__internal_org_ids',
79
80 ).prefetch_related(
81 'user__organisations',
82 'user__organisations__primary_location',
83 'user__organisations__primary_location__country',
84 'user__organisations__primary_location__location_target')
85 serializer_class = ProjectUpdateExtraSerializer
86
87 def get_queryset(self):
88 """
89 Allow simple filtering on selected fields.
90 We don't use the default filter_fields, because Up filters on
91 datetime for last_modified_at, and they only support a date, not datetime.
92 """
93 created_at__gt = validate_date(self.request.QUERY_PARAMS.get('created_at__gt', None))
94 if created_at__gt is not None:
95 self.queryset = self.queryset.filter(created_at__gt=created_at__gt)
96 created_at__lt = validate_date(self.request.QUERY_PARAMS.get('created_at__lt', None))
97 if created_at__lt is not None:
98 self.queryset = self.queryset.filter(created_at__lt=created_at__lt)
99 last_modified_at__gt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__gt', None))
100 if last_modified_at__gt is not None:
101 self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)
102 last_modified_at__lt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__lt', None))
103 if last_modified_at__lt is not None:
104 self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)
105 # Get updates per organisation
106 project__partners = self.request.QUERY_PARAMS.get('project__partners', None)
107 if project__partners:
108 self.queryset = self.queryset.filter(project__partners=project__partners)
109 user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)
110 if user__organisations:
111 self.queryset = self.queryset.filter(user__organisations=user__organisations)
112 return super(ProjectUpdateExtraViewSet, self).get_queryset()
113
114
115 # validate date strings from URL
116 def validate_date(date):
117
118 if date is None:
119 return None
120 # if yyyy-mm-dd
121 elif match('^\d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])$', date) is not None:
122 return date
123 # if yyyy-mm
124 elif match('^\d{4}\-(0?[1-9]|1[012])$', date) is not None:
125 return date + '-01'
126 else:
127 raise ParseError('created_at and last_modified_at dates must be in format: yyyy-mm-dd')
128
129
130 @api_view(['POST'])
131 @permission_classes((IsAuthenticated, ))
132 def upload_indicator_update_photo(request, pk=None):
133 update = ProjectUpdate.objects.get(pk=pk)
134 user = request.user
135
136 # TODO: permissions
137
138 files = request.FILES
139
140 if 'photo' in files.keys():
141 update.photo = files['photo']
142 update.save(update_fields=['photo'])
143
144 return Response(ProjectUpdateExtraSerializer(update).data)
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:

```diff
diff --git a/akvo/rest/views/project_update.py b/akvo/rest/views/project_update.py
--- a/akvo/rest/views/project_update.py
+++ b/akvo/rest/views/project_update.py
@@ -117,6 +117,11 @@
if date is None:
return None
+ # if yyyy-mm-ddThh:mm:ss
+ elif match(
+ '^\d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])T[0-2]\d{1}:[0-5]\d{1}:[0-5]\d{1}$',
+ date) is not None:
+ return date
# if yyyy-mm-dd
elif match('^\d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])$', date) is not None:
return date
@@ -124,7 +129,10 @@
elif match('^\d{4}\-(0?[1-9]|1[012])$', date) is not None:
return date + '-01'
else:
- raise ParseError('created_at and last_modified_at dates must be in format: yyyy-mm-dd')
+ raise ParseError(
+ 'Invalid date: created_at and last_modified_at dates must be in one of the following '
+ 'formats: yyyy-mm, yyyy-mm-dd or yyyy-mm-ddThh:mm:ss'
+ )
@api_view(['POST'])
| {"golden_diff": "diff --git a/akvo/rest/views/project_update.py b/akvo/rest/views/project_update.py\n--- a/akvo/rest/views/project_update.py\n+++ b/akvo/rest/views/project_update.py\n@@ -117,6 +117,11 @@\n \n if date is None:\n return None\n+ # if yyyy-mm-ddThh:mm:ss\n+ elif match(\n+ '^\\d{4}\\-(0?[1-9]|1[012])\\-(0?[1-9]|[12][0-9]|3[01])T[0-2]\\d{1}:[0-5]\\d{1}:[0-5]\\d{1}$',\n+ date) is not None:\n+ return date\n # if yyyy-mm-dd\n elif match('^\\d{4}\\-(0?[1-9]|1[012])\\-(0?[1-9]|[12][0-9]|3[01])$', date) is not None:\n return date\n@@ -124,7 +129,10 @@\n elif match('^\\d{4}\\-(0?[1-9]|1[012])$', date) is not None:\n return date + '-01'\n else:\n- raise ParseError('created_at and last_modified_at dates must be in format: yyyy-mm-dd')\n+ raise ParseError(\n+ 'Invalid date: created_at and last_modified_at dates must be in one of the following '\n+ 'formats: yyyy-mm, yyyy-mm-dd or yyyy-mm-ddThh:mm:ss'\n+ )\n \n \n @api_view(['POST'])\n", "issue": "API crashes when a non-valid date is entered\nE.g. `http://rsr.akvo.org/rest/v1/project_update_extra/?created_at__gt=2015-07`\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rsr.models import ProjectUpdate\n\nfrom ..serializers import ProjectUpdateSerializer, ProjectUpdateExtraSerializer\nfrom ..viewsets import PublicProjectViewSet\n\nfrom rest_framework.decorators import api_view, permission_classes\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.exceptions import ParseError\nfrom re import match\n\n\nclass ProjectUpdateViewSet(PublicProjectViewSet):\n\n \"\"\".\"\"\"\n queryset = ProjectUpdate.objects.select_related('project',\n 'user').prefetch_related('locations')\n serializer_class = ProjectUpdateSerializer\n\n paginate_by_param = 'limit'\n max_paginate_by = 1000\n\n def get_queryset(self):\n \"\"\"\n Allow simple filtering on selected fields.\n We don't use the default filter_fields, because Up filters on\n datetime for last_modified_at, and they only support a date, not datetime.\n \"\"\"\n created_at__gt = validate_date(self.request.QUERY_PARAMS.get('created_at__gt', None))\n if created_at__gt is not None:\n self.queryset = self.queryset.filter(created_at__gt=created_at__gt)\n created_at__lt = validate_date(self.request.QUERY_PARAMS.get('created_at__lt', None))\n if created_at__lt is not None:\n self.queryset = self.queryset.filter(created_at__lt=created_at__lt)\n last_modified_at__gt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__gt', None))\n if last_modified_at__gt is not None:\n self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)\n last_modified_at__lt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__lt', None))\n if last_modified_at__lt is not None:\n self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)\n # Get updates per organisation\n project__partners = self.request.QUERY_PARAMS.get('project__partners', None)\n if project__partners:\n self.queryset = self.queryset.filter(project__partners=project__partners)\n user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)\n if user__organisations:\n self.queryset = 
self.queryset.filter(user__organisations=user__organisations)\n return super(ProjectUpdateViewSet, self).get_queryset()\n\n\nclass ProjectUpdateExtraViewSet(PublicProjectViewSet):\n\n \"\"\"Project update extra resource.\"\"\"\n\n max_paginate_by = 30\n paginate_by = 10\n\n queryset = ProjectUpdate.objects.select_related(\n 'primary_location',\n 'primary_location__location_target',\n 'primary_location__location_target__project',\n 'primary_location__location_target__user',\n 'primary_location__location_target__primary_location',\n 'primary_location__location_target__country',\n 'project',\n 'user',\n 'user__organisation',\n 'user__organisation__primary_location',\n 'user__organisation__primary_location__country',\n 'user__organisation__primary_location__location_target',\n 'user__organisation__primary_location__location_target__internal_org_ids',\n\n ).prefetch_related(\n 'user__organisations',\n 'user__organisations__primary_location',\n 'user__organisations__primary_location__country',\n 'user__organisations__primary_location__location_target')\n serializer_class = ProjectUpdateExtraSerializer\n\n def get_queryset(self):\n \"\"\"\n Allow simple filtering on selected fields.\n We don't use the default filter_fields, because Up filters on\n datetime for last_modified_at, and they only support a date, not datetime.\n \"\"\"\n created_at__gt = validate_date(self.request.QUERY_PARAMS.get('created_at__gt', None))\n if created_at__gt is not None:\n self.queryset = self.queryset.filter(created_at__gt=created_at__gt)\n created_at__lt = validate_date(self.request.QUERY_PARAMS.get('created_at__lt', None))\n if created_at__lt is not None:\n self.queryset = self.queryset.filter(created_at__lt=created_at__lt)\n last_modified_at__gt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__gt', None))\n if last_modified_at__gt is not None:\n self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)\n last_modified_at__lt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__lt', None))\n if last_modified_at__lt is not None:\n self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)\n # Get updates per organisation\n project__partners = self.request.QUERY_PARAMS.get('project__partners', None)\n if project__partners:\n self.queryset = self.queryset.filter(project__partners=project__partners)\n user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)\n if user__organisations:\n self.queryset = self.queryset.filter(user__organisations=user__organisations)\n return super(ProjectUpdateExtraViewSet, self).get_queryset()\n\n\n# validate date strings from URL\ndef validate_date(date):\n\n if date is None:\n return None\n # if yyyy-mm-dd\n elif match('^\\d{4}\\-(0?[1-9]|1[012])\\-(0?[1-9]|[12][0-9]|3[01])$', date) is not None:\n return date\n # if yyyy-mm\n elif match('^\\d{4}\\-(0?[1-9]|1[012])$', date) is not None:\n return date + '-01'\n else:\n raise ParseError('created_at and last_modified_at dates must be in format: yyyy-mm-dd')\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef upload_indicator_update_photo(request, pk=None):\n update = ProjectUpdate.objects.get(pk=pk)\n user = request.user\n\n # TODO: permissions\n\n files = request.FILES\n\n if 'photo' in files.keys():\n update.photo = files['photo']\n update.save(update_fields=['photo'])\n\n return Response(ProjectUpdateExtraSerializer(update).data)\n", "path": "akvo/rest/views/project_update.py"}], "after_files": [{"content": "# -*- coding: 
utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rsr.models import ProjectUpdate\n\nfrom ..serializers import ProjectUpdateSerializer, ProjectUpdateExtraSerializer\nfrom ..viewsets import PublicProjectViewSet\n\nfrom rest_framework.decorators import api_view, permission_classes\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.exceptions import ParseError\nfrom re import match\n\n\nclass ProjectUpdateViewSet(PublicProjectViewSet):\n\n \"\"\".\"\"\"\n queryset = ProjectUpdate.objects.select_related('project',\n 'user').prefetch_related('locations')\n serializer_class = ProjectUpdateSerializer\n\n paginate_by_param = 'limit'\n max_paginate_by = 1000\n\n def get_queryset(self):\n \"\"\"\n Allow simple filtering on selected fields.\n We don't use the default filter_fields, because Up filters on\n datetime for last_modified_at, and they only support a date, not datetime.\n \"\"\"\n created_at__gt = validate_date(self.request.QUERY_PARAMS.get('created_at__gt', None))\n if created_at__gt is not None:\n self.queryset = self.queryset.filter(created_at__gt=created_at__gt)\n created_at__lt = validate_date(self.request.QUERY_PARAMS.get('created_at__lt', None))\n if created_at__lt is not None:\n self.queryset = self.queryset.filter(created_at__lt=created_at__lt)\n last_modified_at__gt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__gt', None))\n if last_modified_at__gt is not None:\n self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)\n last_modified_at__lt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__lt', None))\n if last_modified_at__lt is not None:\n self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)\n # Get updates per organisation\n project__partners = self.request.QUERY_PARAMS.get('project__partners', None)\n if project__partners:\n self.queryset = self.queryset.filter(project__partners=project__partners)\n user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)\n if user__organisations:\n self.queryset = self.queryset.filter(user__organisations=user__organisations)\n return super(ProjectUpdateViewSet, self).get_queryset()\n\n\nclass ProjectUpdateExtraViewSet(PublicProjectViewSet):\n\n \"\"\"Project update extra resource.\"\"\"\n\n max_paginate_by = 30\n paginate_by = 10\n\n queryset = ProjectUpdate.objects.select_related(\n 'primary_location',\n 'primary_location__location_target',\n 'primary_location__location_target__project',\n 'primary_location__location_target__user',\n 'primary_location__location_target__primary_location',\n 'primary_location__location_target__country',\n 'project',\n 'user',\n 'user__organisation',\n 'user__organisation__primary_location',\n 'user__organisation__primary_location__country',\n 'user__organisation__primary_location__location_target',\n 'user__organisation__primary_location__location_target__internal_org_ids',\n\n ).prefetch_related(\n 'user__organisations',\n 'user__organisations__primary_location',\n 'user__organisations__primary_location__country',\n 'user__organisations__primary_location__location_target')\n serializer_class = ProjectUpdateExtraSerializer\n\n def get_queryset(self):\n \"\"\"\n Allow simple filtering on selected 
fields.\n We don't use the default filter_fields, because Up filters on\n datetime for last_modified_at, and they only support a date, not datetime.\n \"\"\"\n created_at__gt = validate_date(self.request.QUERY_PARAMS.get('created_at__gt', None))\n if created_at__gt is not None:\n self.queryset = self.queryset.filter(created_at__gt=created_at__gt)\n created_at__lt = validate_date(self.request.QUERY_PARAMS.get('created_at__lt', None))\n if created_at__lt is not None:\n self.queryset = self.queryset.filter(created_at__lt=created_at__lt)\n last_modified_at__gt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__gt', None))\n if last_modified_at__gt is not None:\n self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)\n last_modified_at__lt = validate_date(self.request.QUERY_PARAMS.get('last_modified_at__lt', None))\n if last_modified_at__lt is not None:\n self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)\n # Get updates per organisation\n project__partners = self.request.QUERY_PARAMS.get('project__partners', None)\n if project__partners:\n self.queryset = self.queryset.filter(project__partners=project__partners)\n user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)\n if user__organisations:\n self.queryset = self.queryset.filter(user__organisations=user__organisations)\n return super(ProjectUpdateExtraViewSet, self).get_queryset()\n\n\n# validate date strings from URL\ndef validate_date(date):\n\n if date is None:\n return None\n # if yyyy-mm-ddThh:mm:ss\n elif match(\n '^\\d{4}\\-(0?[1-9]|1[012])\\-(0?[1-9]|[12][0-9]|3[01])T[0-2]\\d{1}:[0-5]\\d{1}:[0-5]\\d{1}$',\n date) is not None:\n return date\n # if yyyy-mm-dd\n elif match('^\\d{4}\\-(0?[1-9]|1[012])\\-(0?[1-9]|[12][0-9]|3[01])$', date) is not None:\n return date\n # if yyyy-mm\n elif match('^\\d{4}\\-(0?[1-9]|1[012])$', date) is not None:\n return date + '-01'\n else:\n raise ParseError(\n 'Invalid date: created_at and last_modified_at dates must be in one of the following '\n 'formats: yyyy-mm, yyyy-mm-dd or yyyy-mm-ddThh:mm:ss'\n )\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef upload_indicator_update_photo(request, pk=None):\n update = ProjectUpdate.objects.get(pk=pk)\n user = request.user\n\n # TODO: permissions\n\n files = request.FILES\n\n if 'photo' in files.keys():\n update.photo = files['photo']\n update.save(update_fields=['photo'])\n\n return Response(ProjectUpdateExtraSerializer(update).data)\n", "path": "akvo/rest/views/project_update.py"}]} | 2,047 | 364 |
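
The precomputed `num_tokens` and `num_tokens_diff` columns (prompt and diff lengths under a tokenizer the card does not name) make it cheap to filter records to a context budget without re-tokenizing, for example:

```python
# Filter sketch; the 4,096-token budget is an arbitrary example value,
# and the tokenizer behind `num_tokens` is left unstated by this card.
from datasets import load_dataset

MAX_PROMPT_TOKENS = 4096

ds = load_dataset("rasdani/github-patches", split="train")  # IDs assumed, see above
small = ds.filter(lambda row: row["num_tokens"] <= MAX_PROMPT_TOKENS)
print(f"{len(small)} of {len(ds)} records fit within {MAX_PROMPT_TOKENS} prompt tokens")
```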

problem_id: gh_patches_debug_7838 | source: rasdani/github-patches | task_type: git_diff | in_source_id: facebookresearch__hydra-472

prompt:

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] setuptools finds and installs tests/ as a top-level package in site-packages/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # type: ignore
3 import codecs
4 import os
5 import pathlib
6 import re
7 import shutil
8 from distutils import cmd
9 from os.path import exists, isdir, join
10 from typing import Any, List
11
12 import pkg_resources
13 from setuptools import find_packages, setup
14
15 here = os.path.abspath(os.path.dirname(__file__))
16
17
18 def read(*parts):
19 with codecs.open(os.path.join(here, *parts), "r") as fp:
20 return fp.read()
21
22
23 def find_version(*file_paths):
24 version_file = read(*file_paths)
25 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
26 if version_match:
27 return version_match.group(1)
28 raise RuntimeError("Unable to find version string.")
29
30
31 with pathlib.Path("requirements/requirements.txt").open() as requirements_txt:
32 install_requires = [
33 str(requirement)
34 for requirement in pkg_resources.parse_requirements(requirements_txt)
35 ]
36
37
38 class CleanCommand(cmd.Command):
39 """
40 Our custom command to clean out junk files.
41 """
42
43 description = "Cleans out junk files we don't want in the repo"
44 user_options: List[Any] = []
45
46 def initialize_options(self):
47 pass
48
49 def finalize_options(self):
50 pass
51
52 @staticmethod
53 def find(root, includes, excludes=[]):
54 res = []
55 for parent, dirs, files in os.walk(root):
56 for f in dirs + files:
57 add = list()
58 for include in includes:
59 if re.findall(include, f):
60 add.append(join(parent, f))
61 res.extend(add)
62 final_list = []
63 # Exclude things that matches an exclude pattern
64 for ex in excludes:
65 for file in res:
66 if not re.findall(ex, file):
67 final_list.append(file)
68 return final_list
69
70 def run(self):
71 delete_patterns = [
72 ".eggs",
73 ".egg-info",
74 ".pytest_cache",
75 "build",
76 "dist",
77 "__pycache__",
78 ".pyc",
79 ]
80 deletion_list = CleanCommand.find(
81 ".", includes=delete_patterns, excludes=["\\.nox/.*"]
82 )
83
84 for f in deletion_list:
85 if exists(f):
86 if isdir(f):
87 shutil.rmtree(f, ignore_errors=True)
88 else:
89 os.unlink(f)
90
91
92 with open("README.md", "r") as fh:
93 LONG_DESC = fh.read()
94 setup(
95 cmdclass={"clean": CleanCommand},
96 name="hydra-core",
97 version=find_version("hydra", "__init__.py"),
98 author="Omry Yadan",
99 author_email="[email protected]",
100 description="A framework for elegantly configuring complex applications",
101 long_description=LONG_DESC,
102 long_description_content_type="text/markdown",
103 url="https://github.com/facebookresearch/hydra",
104 keywords="command-line configuration yaml tab-completion",
105 packages=find_packages(),
106 include_package_data=True,
107 classifiers=[
108 "License :: OSI Approved :: MIT License",
109 "Development Status :: 4 - Beta",
110 "Programming Language :: Python :: 3.6",
111 "Programming Language :: Python :: 3.7",
112 "Programming Language :: Python :: 3.8",
113 "Operating System :: POSIX :: Linux",
114 "Operating System :: MacOS",
115 "Operating System :: Microsoft :: Windows",
116 ],
117 install_requires=install_requires,
118 # Install development dependencies with
119 # pip install -r requirements/dev.txt -e .
120 )
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -102,7 +102,7 @@
long_description_content_type="text/markdown",
url="https://github.com/facebookresearch/hydra",
keywords="command-line configuration yaml tab-completion",
- packages=find_packages(),
+ packages=find_packages(include=["hydra"]),
include_package_data=True,
classifiers=[
"License :: OSI Approved :: MIT License",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -102,7 +102,7 @@\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra\",\n keywords=\"command-line configuration yaml tab-completion\",\n- packages=find_packages(),\n+ packages=find_packages(include=[\"hydra\"]),\n include_package_data=True,\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n", "issue": "[Bug] setuptools finds and installs tests/ as a top-level package in site-packages/\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nimport codecs\nimport os\nimport pathlib\nimport re\nimport shutil\nfrom distutils import cmd\nfrom os.path import exists, isdir, join\nfrom typing import Any, List\n\nimport pkg_resources\nfrom setuptools import find_packages, setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n with codecs.open(os.path.join(here, *parts), \"r\") as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nwith pathlib.Path(\"requirements/requirements.txt\").open() as requirements_txt:\n install_requires = [\n str(requirement)\n for requirement in pkg_resources.parse_requirements(requirements_txt)\n ]\n\n\nclass CleanCommand(cmd.Command):\n \"\"\"\n Our custom command to clean out junk files.\n \"\"\"\n\n description = \"Cleans out junk files we don't want in the repo\"\n user_options: List[Any] = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n @staticmethod\n def find(root, includes, excludes=[]):\n res = []\n for parent, dirs, files in os.walk(root):\n for f in dirs + files:\n add = list()\n for include in includes:\n if re.findall(include, f):\n add.append(join(parent, f))\n res.extend(add)\n final_list = []\n # Exclude things that matches an exclude pattern\n for ex in excludes:\n for file in res:\n if not re.findall(ex, file):\n final_list.append(file)\n return final_list\n\n def run(self):\n delete_patterns = [\n \".eggs\",\n \".egg-info\",\n \".pytest_cache\",\n \"build\",\n \"dist\",\n \"__pycache__\",\n \".pyc\",\n ]\n deletion_list = CleanCommand.find(\n \".\", includes=delete_patterns, excludes=[\"\\\\.nox/.*\"]\n )\n\n for f in deletion_list:\n if exists(f):\n if isdir(f):\n shutil.rmtree(f, ignore_errors=True)\n else:\n os.unlink(f)\n\n\nwith open(\"README.md\", \"r\") as fh:\n LONG_DESC = fh.read()\n setup(\n cmdclass={\"clean\": CleanCommand},\n name=\"hydra-core\",\n version=find_version(\"hydra\", \"__init__.py\"),\n author=\"Omry Yadan\",\n author_email=\"[email protected]\",\n description=\"A framework for elegantly configuring complex applications\",\n long_description=LONG_DESC,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra\",\n keywords=\"command-line configuration yaml tab-completion\",\n packages=find_packages(),\n include_package_data=True,\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 4 - Beta\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS\",\n \"Operating System :: 
Microsoft :: Windows\",\n ],\n install_requires=install_requires,\n # Install development dependencies with\n # pip install -r requirements/dev.txt -e .\n )\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nimport codecs\nimport os\nimport pathlib\nimport re\nimport shutil\nfrom distutils import cmd\nfrom os.path import exists, isdir, join\nfrom typing import Any, List\n\nimport pkg_resources\nfrom setuptools import find_packages, setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n with codecs.open(os.path.join(here, *parts), \"r\") as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nwith pathlib.Path(\"requirements/requirements.txt\").open() as requirements_txt:\n install_requires = [\n str(requirement)\n for requirement in pkg_resources.parse_requirements(requirements_txt)\n ]\n\n\nclass CleanCommand(cmd.Command):\n \"\"\"\n Our custom command to clean out junk files.\n \"\"\"\n\n description = \"Cleans out junk files we don't want in the repo\"\n user_options: List[Any] = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n @staticmethod\n def find(root, includes, excludes=[]):\n res = []\n for parent, dirs, files in os.walk(root):\n for f in dirs + files:\n add = list()\n for include in includes:\n if re.findall(include, f):\n add.append(join(parent, f))\n res.extend(add)\n final_list = []\n # Exclude things that matches an exclude pattern\n for ex in excludes:\n for file in res:\n if not re.findall(ex, file):\n final_list.append(file)\n return final_list\n\n def run(self):\n delete_patterns = [\n \".eggs\",\n \".egg-info\",\n \".pytest_cache\",\n \"build\",\n \"dist\",\n \"__pycache__\",\n \".pyc\",\n ]\n deletion_list = CleanCommand.find(\n \".\", includes=delete_patterns, excludes=[\"\\\\.nox/.*\"]\n )\n\n for f in deletion_list:\n if exists(f):\n if isdir(f):\n shutil.rmtree(f, ignore_errors=True)\n else:\n os.unlink(f)\n\n\nwith open(\"README.md\", \"r\") as fh:\n LONG_DESC = fh.read()\n setup(\n cmdclass={\"clean\": CleanCommand},\n name=\"hydra-core\",\n version=find_version(\"hydra\", \"__init__.py\"),\n author=\"Omry Yadan\",\n author_email=\"[email protected]\",\n description=\"A framework for elegantly configuring complex applications\",\n long_description=LONG_DESC,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra\",\n keywords=\"command-line configuration yaml tab-completion\",\n packages=find_packages(include=[\"hydra\"]),\n include_package_data=True,\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 4 - Beta\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS\",\n \"Operating System :: Microsoft :: Windows\",\n ],\n install_requires=install_requires,\n # Install development dependencies with\n # pip install -r requirements/dev.txt -e .\n )\n", "path": "setup.py"}]} | 1,298 | 103 |
gh_patches_debug_2075 | rasdani/github-patches | git_diff | litestar-org__litestar-2433 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: `2.2.0` does not have `[full]` group
### Description
The move from `poetry` to `pdm` in 2.2.0 has a regression for the `[full]` group.
### URL to code causing the issue
_No response_
### MCVE
```shell
pip install litestar[full]==2.2.0 && pip show pydantic
```
### Steps to reproduce
- `pip install litestar[full]`
- Observe that no `[full]` extra is installed, and `pip show $package` does not show the expected packages (a stdlib check is sketched below)
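
One way to confirm which extras a built distribution actually declares, from Python (a hypothetical check that uses only the standard library):

```python
import importlib.metadata as md

meta = md.metadata("litestar")
# A healthy build should list 'full' among its extras; None, or a list
# without 'full', reproduces the packaging regression described above.
print(meta.get_all("Provides-Extra"))
```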
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.2.0
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/types/internal_types.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Any, Callable, Literal, NamedTuple
4
5 from litestar.utils.deprecation import warn_deprecation
6
7 __all__ = (
8 "ControllerRouterHandler",
9 "PathParameterDefinition",
10 "PathParameterDefinition",
11 "ReservedKwargs",
12 "ResponseType",
13 "RouteHandlerMapItem",
14 "RouteHandlerType",
15 )
16
17 if TYPE_CHECKING:
18 from typing_extensions import TypeAlias
19
20 from litestar.app import Litestar
21 from litestar.controller import Controller
22 from litestar.handlers.asgi_handlers import ASGIRouteHandler
23 from litestar.handlers.http_handlers import HTTPRouteHandler
24 from litestar.handlers.websocket_handlers import WebsocketRouteHandler
25 from litestar.response import Response
26 from litestar.router import Router
27 from litestar.types import Method
28
29 ReservedKwargs: TypeAlias = Literal["request", "socket", "headers", "query", "cookies", "state", "data"]
30 RouteHandlerType: TypeAlias = "HTTPRouteHandler | WebsocketRouteHandler | ASGIRouteHandler"
31 ResponseType: TypeAlias = "type[Response]"
32 ControllerRouterHandler: TypeAlias = "type[Controller] | RouteHandlerType | Router | Callable[..., Any]"
33 RouteHandlerMapItem: TypeAlias = 'dict[Method | Literal["websocket", "asgi"], RouteHandlerType]'
34
35 # deprecated
36 _LitestarType: TypeAlias = "Litestar"
37
38
39 class PathParameterDefinition(NamedTuple):
40 """Path parameter tuple."""
41
42 name: str
43 full: str
44 type: type
45 parser: Callable[[str], Any] | None
46
47
48 def __getattr__(name: str) -> Any:
49 if name == "LitestarType":
50 warn_deprecation(
51 "2.3.0",
52 "LitestarType",
53 "import",
54 removal_in="3.0.0",
55 alternative="Litestar",
56 )
57 return _LitestarType
58 raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/types/internal_types.py b/litestar/types/internal_types.py
--- a/litestar/types/internal_types.py
+++ b/litestar/types/internal_types.py
@@ -48,7 +48,7 @@
def __getattr__(name: str) -> Any:
if name == "LitestarType":
warn_deprecation(
- "2.3.0",
+ "2.2.1",
"LitestarType",
"import",
removal_in="3.0.0",
| {"golden_diff": "diff --git a/litestar/types/internal_types.py b/litestar/types/internal_types.py\n--- a/litestar/types/internal_types.py\n+++ b/litestar/types/internal_types.py\n@@ -48,7 +48,7 @@\n def __getattr__(name: str) -> Any:\n if name == \"LitestarType\":\n warn_deprecation(\n- \"2.3.0\",\n+ \"2.2.1\",\n \"LitestarType\",\n \"import\",\n removal_in=\"3.0.0\",\n", "issue": "Bug: `2.2.0` does not have `[full]` group\n### Description\r\n\r\nThe move from `poetry` to `pdm` in 2.2.0 has a regression for the `[full]` group.\r\n\r\n### URL to code causing the issue\r\n\r\n_No response_\r\n\r\n### MCVE\r\n\r\n```python\r\npip install litestar[full]==2.2.0 && pip show pydantic\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n- `pip install litestar[full]`\r\n- Observe no `[full]` group is available, and `pip show $package` does not show expected pacakges\r\n\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Litestar Version\r\n\r\n2.2.0\r\n\r\n### Platform\r\n\r\n- [ ] Linux\r\n- [ ] Mac\r\n- [ ] Windows\r\n- [X] Other (Please specify in the description above)\r\n\r\n<!-- POLAR PLEDGE BADGE START -->\r\n> [!NOTE] \r\n> Check out all issues funded or available for funding here: https://polar.sh/litestar-org\r\n> * If you would like to see an issue prioritized, make a pledge towards it!\r\n> * We receive the pledge once the issue is completed & verified\r\n\r\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2434\">\r\n<picture>\r\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2434/pledge.svg?darkmode=1\">\r\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2434/pledge.svg\">\r\n</picture>\r\n</a>\r\n<!-- POLAR PLEDGE BADGE END -->\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Callable, Literal, NamedTuple\n\nfrom litestar.utils.deprecation import warn_deprecation\n\n__all__ = (\n \"ControllerRouterHandler\",\n \"PathParameterDefinition\",\n \"PathParameterDefinition\",\n \"ReservedKwargs\",\n \"ResponseType\",\n \"RouteHandlerMapItem\",\n \"RouteHandlerType\",\n)\n\nif TYPE_CHECKING:\n from typing_extensions import TypeAlias\n\n from litestar.app import Litestar\n from litestar.controller import Controller\n from litestar.handlers.asgi_handlers import ASGIRouteHandler\n from litestar.handlers.http_handlers import HTTPRouteHandler\n from litestar.handlers.websocket_handlers import WebsocketRouteHandler\n from litestar.response import Response\n from litestar.router import Router\n from litestar.types import Method\n\nReservedKwargs: TypeAlias = Literal[\"request\", \"socket\", \"headers\", \"query\", \"cookies\", \"state\", \"data\"]\nRouteHandlerType: TypeAlias = \"HTTPRouteHandler | WebsocketRouteHandler | ASGIRouteHandler\"\nResponseType: TypeAlias = \"type[Response]\"\nControllerRouterHandler: TypeAlias = \"type[Controller] | RouteHandlerType | Router | Callable[..., Any]\"\nRouteHandlerMapItem: TypeAlias = 'dict[Method | Literal[\"websocket\", \"asgi\"], RouteHandlerType]'\n\n# deprecated\n_LitestarType: TypeAlias = \"Litestar\"\n\n\nclass PathParameterDefinition(NamedTuple):\n \"\"\"Path parameter tuple.\"\"\"\n\n name: str\n full: str\n type: type\n parser: Callable[[str], Any] | None\n\n\ndef __getattr__(name: str) -> Any:\n if name == \"LitestarType\":\n warn_deprecation(\n \"2.3.0\",\n \"LitestarType\",\n \"import\",\n removal_in=\"3.0.0\",\n 
alternative=\"Litestar\",\n )\n return _LitestarType\n raise AttributeError(f\"module {__name__!r} has no attribute {name!r}\")\n", "path": "litestar/types/internal_types.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Callable, Literal, NamedTuple\n\nfrom litestar.utils.deprecation import warn_deprecation\n\n__all__ = (\n \"ControllerRouterHandler\",\n \"PathParameterDefinition\",\n \"PathParameterDefinition\",\n \"ReservedKwargs\",\n \"ResponseType\",\n \"RouteHandlerMapItem\",\n \"RouteHandlerType\",\n)\n\nif TYPE_CHECKING:\n from typing_extensions import TypeAlias\n\n from litestar.app import Litestar\n from litestar.controller import Controller\n from litestar.handlers.asgi_handlers import ASGIRouteHandler\n from litestar.handlers.http_handlers import HTTPRouteHandler\n from litestar.handlers.websocket_handlers import WebsocketRouteHandler\n from litestar.response import Response\n from litestar.router import Router\n from litestar.types import Method\n\nReservedKwargs: TypeAlias = Literal[\"request\", \"socket\", \"headers\", \"query\", \"cookies\", \"state\", \"data\"]\nRouteHandlerType: TypeAlias = \"HTTPRouteHandler | WebsocketRouteHandler | ASGIRouteHandler\"\nResponseType: TypeAlias = \"type[Response]\"\nControllerRouterHandler: TypeAlias = \"type[Controller] | RouteHandlerType | Router | Callable[..., Any]\"\nRouteHandlerMapItem: TypeAlias = 'dict[Method | Literal[\"websocket\", \"asgi\"], RouteHandlerType]'\n\n# deprecated\n_LitestarType: TypeAlias = \"Litestar\"\n\n\nclass PathParameterDefinition(NamedTuple):\n \"\"\"Path parameter tuple.\"\"\"\n\n name: str\n full: str\n type: type\n parser: Callable[[str], Any] | None\n\n\ndef __getattr__(name: str) -> Any:\n if name == \"LitestarType\":\n warn_deprecation(\n \"2.2.1\",\n \"LitestarType\",\n \"import\",\n removal_in=\"3.0.0\",\n alternative=\"Litestar\",\n )\n return _LitestarType\n raise AttributeError(f\"module {__name__!r} has no attribute {name!r}\")\n", "path": "litestar/types/internal_types.py"}]} | 1,195 | 115 |
gh_patches_debug_32626 | rasdani/github-patches | git_diff | uccser__cs-unplugged-147 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Store custom Kordac templates
The custom Kordac templates for Markdown conversion need to be stored within the repository.
Gut instinct is to store these within the `templates` directory under `markdown_templates` and then exclude this folder from the Django template loader (to avoid loading unused templates when serving webpages).
These can then be loaded for Kordac (possibly a Django loader would do the job).
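
A minimal sketch of that idea, scanning a template directory and handing the raw strings to Kordac (the directory name and file layout here are assumptions):

```python
import os
import re
from kordac import Kordac

def load_template_files(template_path):
    """Read every *.html file in template_path into a dict keyed by template name."""
    templates = {}
    for filename in os.listdir(template_path):
        match = re.search(r"(.*?)\.html$", filename)
        if match:
            with open(os.path.join(template_path, filename)) as f:
                templates[match.group(1)] = f.read()
    return templates

converter = Kordac(html_templates=load_template_files("templates/markdown_templates/"))
```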
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/utils/BaseLoader.py`
Content:
```
1 import yaml
2 import mdx_math
3 import abc
4 import sys
5 from kordac import Kordac
6 from .check_converter_required_files import check_required_files
7
8
9 class BaseLoader():
10 """Base loader class for individual loaders"""
11
12 def __init__(self, BASE_PATH='', load_log=[]):
13 if load_log:
14 self.load_log = load_log
15 else:
16 self.load_log = list(load_log)
17 self.BASE_PATH = BASE_PATH
18 self.setup_md_to_html_converter()
19
20 def setup_md_to_html_converter(self):
21 """Create Kordac converter with custom processors, html templates,
22 and extensions.
23 """
24 templates = dict()
25 templates['scratch'] = '<div><object data="{% autoescape false -%}{{ "{% get_static_prefix %}" }}img/scratch-blocks-{{ hash }}.svg{%- endautoescape %}" type="image/svg+xml" /></div>' # noqa: E501 Fixed in #77
26 templates['iframe'] = '<iframe allowtransparency="true" width="485" height="402" src="{{ link }}" frameborder="0" allowfullscreen="true"></iframe>' # noqa: E501 Fixed in #77
27 templates['heading'] = '<{{ heading_type }} id="{{ title_slug }}">{{ title }}</{{ heading_type }}>' # noqa: E501 Fixed in #77
28 extensions = [
29 'markdown.extensions.fenced_code',
30 'markdown.extensions.codehilite',
31 'markdown.extensions.sane_lists',
32 'markdown.extensions.tables',
33 mdx_math.MathExtension(enable_dollar_delimiter=True)
34 ]
35 self.converter = Kordac(html_templates=templates, extensions=extensions)
36 custom_processors = self.converter.processor_defaults()
37 custom_processors.add('remove-title')
38 self.converter.update_processors(custom_processors)
39
40 def convert_md_file(self, md_file_path):
41 """Returns the Kordac object for a given Markdown file
42
43 Args:
44 file_path: location of md file to convert
45
46 Returns:
47 Kordac result object
48 """
49 content = open(md_file_path, encoding='UTF-8').read()
50 result = self.converter.convert(content)
51 check_required_files(result.required_files)
52 return result
53
54 def log(self, log_message, indent_amount=0):
55 """Adds the log message to the load log with the specified indent"""
56 self.load_log.append((log_message, indent_amount))
57
58 def print_load_log(self):
59 """Output log messages from loader to console"""
60 for (log, indent_amount) in self.load_log:
61 indent = ' ' * indent_amount
62 sys.stdout.write('{indent}{text}\n'.format(indent=indent, text=log))
63 sys.stdout.write('\n')
64 self.load_log = []
65
66 def load_yaml_file(self, yaml_file_path):
67 """Loads and reads yaml file
68
69 Args:
70 file_path: location of yaml file to read
71
72 Returns:
73 Either list or string, depending on structure of given yaml file
74 """
75 yaml_file = open(yaml_file_path, encoding='UTF-8').read()
76 return yaml.load(yaml_file)
77
78 @abc.abstractmethod
79 def load(self):
80 raise NotImplementedError('subclass does not implement this method')
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/utils/BaseLoader.py b/csunplugged/utils/BaseLoader.py
--- a/csunplugged/utils/BaseLoader.py
+++ b/csunplugged/utils/BaseLoader.py
@@ -2,6 +2,9 @@
import mdx_math
import abc
import sys
+import re
+import os.path
+from os import listdir
from kordac import Kordac
from .check_converter_required_files import check_required_files
@@ -21,10 +24,7 @@
"""Create Kordac converter with custom processors, html templates,
and extensions.
"""
- templates = dict()
- templates['scratch'] = '<div><object data="{% autoescape false -%}{{ "{% get_static_prefix %}" }}img/scratch-blocks-{{ hash }}.svg{%- endautoescape %}" type="image/svg+xml" /></div>' # noqa: E501 Fixed in #77
- templates['iframe'] = '<iframe allowtransparency="true" width="485" height="402" src="{{ link }}" frameborder="0" allowfullscreen="true"></iframe>' # noqa: E501 Fixed in #77
- templates['heading'] = '<{{ heading_type }} id="{{ title_slug }}">{{ title }}</{{ heading_type }}>' # noqa: E501 Fixed in #77
+ templates = self.load_template_files()
extensions = [
'markdown.extensions.fenced_code',
'markdown.extensions.codehilite',
@@ -75,6 +75,19 @@
yaml_file = open(yaml_file_path, encoding='UTF-8').read()
return yaml.load(yaml_file)
+ def load_template_files(self):
+ templates = dict()
+ template_path = os.path.join(
+ os.path.dirname(__file__),
+ 'custom_converter_templates/'
+ )
+ for file in listdir(template_path):
+ template_file = re.search(r'(.*?).html$', file)
+ if template_file:
+ template_name = template_file.groups()[0]
+ templates[template_name] = open(template_path + file).read()
+ return templates
+
@abc.abstractmethod
def load(self):
raise NotImplementedError('subclass does not implement this method')
| {"golden_diff": "diff --git a/csunplugged/utils/BaseLoader.py b/csunplugged/utils/BaseLoader.py\n--- a/csunplugged/utils/BaseLoader.py\n+++ b/csunplugged/utils/BaseLoader.py\n@@ -2,6 +2,9 @@\n import mdx_math\n import abc\n import sys\n+import re\n+import os.path\n+from os import listdir\n from kordac import Kordac\n from .check_converter_required_files import check_required_files\n \n@@ -21,10 +24,7 @@\n \"\"\"Create Kordac converter with custom processors, html templates,\n and extensions.\n \"\"\"\n- templates = dict()\n- templates['scratch'] = '<div><object data=\"{% autoescape false -%}{{ \"{% get_static_prefix %}\" }}img/scratch-blocks-{{ hash }}.svg{%- endautoescape %}\" type=\"image/svg+xml\" /></div>' # noqa: E501 Fixed in #77\n- templates['iframe'] = '<iframe allowtransparency=\"true\" width=\"485\" height=\"402\" src=\"{{ link }}\" frameborder=\"0\" allowfullscreen=\"true\"></iframe>' # noqa: E501 Fixed in #77\n- templates['heading'] = '<{{ heading_type }} id=\"{{ title_slug }}\">{{ title }}</{{ heading_type }}>' # noqa: E501 Fixed in #77\n+ templates = self.load_template_files()\n extensions = [\n 'markdown.extensions.fenced_code',\n 'markdown.extensions.codehilite',\n@@ -75,6 +75,19 @@\n yaml_file = open(yaml_file_path, encoding='UTF-8').read()\n return yaml.load(yaml_file)\n \n+ def load_template_files(self):\n+ templates = dict()\n+ template_path = os.path.join(\n+ os.path.dirname(__file__),\n+ 'custom_converter_templates/'\n+ )\n+ for file in listdir(template_path):\n+ template_file = re.search(r'(.*?).html$', file)\n+ if template_file:\n+ template_name = template_file.groups()[0]\n+ templates[template_name] = open(template_path + file).read()\n+ return templates\n+\n @abc.abstractmethod\n def load(self):\n raise NotImplementedError('subclass does not implement this method')\n", "issue": "Store custom Kordac templates\nThe custom Kordac templates for Markdown conversion need to be stored within the repository.\r\nGut instinct is to store these within the `templates` directory under `markdown_templates` and then exclude this folder from the Django template loader (to avoid loading unused templates in serving webpages).\r\n\r\nThese can then be loaded for Kordac (possibly a Django loader would do the job).\n", "before_files": [{"content": "import yaml\nimport mdx_math\nimport abc\nimport sys\nfrom kordac import Kordac\nfrom .check_converter_required_files import check_required_files\n\n\nclass BaseLoader():\n \"\"\"Base loader class for individual loaders\"\"\"\n\n def __init__(self, BASE_PATH='', load_log=[]):\n if load_log:\n self.load_log = load_log\n else:\n self.load_log = list(load_log)\n self.BASE_PATH = BASE_PATH\n self.setup_md_to_html_converter()\n\n def setup_md_to_html_converter(self):\n \"\"\"Create Kordac converter with custom processors, html templates,\n and extensions.\n \"\"\"\n templates = dict()\n templates['scratch'] = '<div><object data=\"{% autoescape false -%}{{ \"{% get_static_prefix %}\" }}img/scratch-blocks-{{ hash }}.svg{%- endautoescape %}\" type=\"image/svg+xml\" /></div>' # noqa: E501 Fixed in #77\n templates['iframe'] = '<iframe allowtransparency=\"true\" width=\"485\" height=\"402\" src=\"{{ link }}\" frameborder=\"0\" allowfullscreen=\"true\"></iframe>' # noqa: E501 Fixed in #77\n templates['heading'] = '<{{ heading_type }} id=\"{{ title_slug }}\">{{ title }}</{{ heading_type }}>' # noqa: E501 Fixed in #77\n extensions = [\n 'markdown.extensions.fenced_code',\n 'markdown.extensions.codehilite',\n 'markdown.extensions.sane_lists',\n 
'markdown.extensions.tables',\n mdx_math.MathExtension(enable_dollar_delimiter=True)\n ]\n self.converter = Kordac(html_templates=templates, extensions=extensions)\n custom_processors = self.converter.processor_defaults()\n custom_processors.add('remove-title')\n self.converter.update_processors(custom_processors)\n\n def convert_md_file(self, md_file_path):\n \"\"\"Returns the Kordac object for a given Markdown file\n\n Args:\n file_path: location of md file to convert\n\n Returns:\n Kordac result object\n \"\"\"\n content = open(md_file_path, encoding='UTF-8').read()\n result = self.converter.convert(content)\n check_required_files(result.required_files)\n return result\n\n def log(self, log_message, indent_amount=0):\n \"\"\"Adds the log message to the load log with the specified indent\"\"\"\n self.load_log.append((log_message, indent_amount))\n\n def print_load_log(self):\n \"\"\"Output log messages from loader to console\"\"\"\n for (log, indent_amount) in self.load_log:\n indent = ' ' * indent_amount\n sys.stdout.write('{indent}{text}\\n'.format(indent=indent, text=log))\n sys.stdout.write('\\n')\n self.load_log = []\n\n def load_yaml_file(self, yaml_file_path):\n \"\"\"Loads and reads yaml file\n\n Args:\n file_path: location of yaml file to read\n\n Returns:\n Either list or string, depending on structure of given yaml file\n \"\"\"\n yaml_file = open(yaml_file_path, encoding='UTF-8').read()\n return yaml.load(yaml_file)\n\n @abc.abstractmethod\n def load(self):\n raise NotImplementedError('subclass does not implement this method')\n", "path": "csunplugged/utils/BaseLoader.py"}], "after_files": [{"content": "import yaml\nimport mdx_math\nimport abc\nimport sys\nimport re\nimport os.path\nfrom os import listdir\nfrom kordac import Kordac\nfrom .check_converter_required_files import check_required_files\n\n\nclass BaseLoader():\n \"\"\"Base loader class for individual loaders\"\"\"\n\n def __init__(self, BASE_PATH='', load_log=[]):\n if load_log:\n self.load_log = load_log\n else:\n self.load_log = list(load_log)\n self.BASE_PATH = BASE_PATH\n self.setup_md_to_html_converter()\n\n def setup_md_to_html_converter(self):\n \"\"\"Create Kordac converter with custom processors, html templates,\n and extensions.\n \"\"\"\n templates = self.load_template_files()\n extensions = [\n 'markdown.extensions.fenced_code',\n 'markdown.extensions.codehilite',\n 'markdown.extensions.sane_lists',\n 'markdown.extensions.tables',\n mdx_math.MathExtension(enable_dollar_delimiter=True)\n ]\n self.converter = Kordac(html_templates=templates, extensions=extensions)\n custom_processors = self.converter.processor_defaults()\n custom_processors.add('remove-title')\n self.converter.update_processors(custom_processors)\n\n def convert_md_file(self, md_file_path):\n \"\"\"Returns the Kordac object for a given Markdown file\n\n Args:\n file_path: location of md file to convert\n\n Returns:\n Kordac result object\n \"\"\"\n content = open(md_file_path, encoding='UTF-8').read()\n result = self.converter.convert(content)\n check_required_files(result.required_files)\n return result\n\n def log(self, log_message, indent_amount=0):\n \"\"\"Adds the log message to the load log with the specified indent\"\"\"\n self.load_log.append((log_message, indent_amount))\n\n def print_load_log(self):\n \"\"\"Output log messages from loader to console\"\"\"\n for (log, indent_amount) in self.load_log:\n indent = ' ' * indent_amount\n sys.stdout.write('{indent}{text}\\n'.format(indent=indent, text=log))\n sys.stdout.write('\\n')\n 
self.load_log = []\n\n def load_yaml_file(self, yaml_file_path):\n \"\"\"Loads and reads yaml file\n\n Args:\n file_path: location of yaml file to read\n\n Returns:\n Either list or string, depending on structure of given yaml file\n \"\"\"\n yaml_file = open(yaml_file_path, encoding='UTF-8').read()\n return yaml.load(yaml_file)\n\n def load_template_files(self):\n templates = dict()\n template_path = os.path.join(\n os.path.dirname(__file__),\n 'custom_converter_templates/'\n )\n for file in listdir(template_path):\n template_file = re.search(r'(.*?).html$', file)\n if template_file:\n template_name = template_file.groups()[0]\n templates[template_name] = open(template_path + file).read()\n return templates\n\n @abc.abstractmethod\n def load(self):\n raise NotImplementedError('subclass does not implement this method')\n", "path": "csunplugged/utils/BaseLoader.py"}]} | 1,197 | 501 |
gh_patches_debug_2679 | rasdani/github-patches | git_diff | TileDB-Inc__TileDB-Py-501 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Four components should be three components?
In the recently created example "writing_dense_rgb.py" there is this fragment:
https://github.com/TileDB-Inc/TileDB-Py/blob/75ddcf56ed80ba5e1a1237b7e527ec4fbd87abb9/examples/writing_dense_rgb.py#L56-L57
It says four int32 components where it seems it should be three: after all, the values of the attribute are RGB, not RGBA.
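
A quick way to see the component count, using plain NumPy (no TileDB required):

```python
import numpy as np

dtype = np.dtype("i4, i4, i4")
print(len(dtype.names))  # 3: one int32 field per RGB channel
print(dtype.names)       # ('f0', 'f1', 'f2')
```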
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/writing_dense_rgb.py`
Content:
```
1 # writing_dense_rgb.py
2 #
3 # LICENSE
4 #
5 # The MIT License
6 #
7 # Copyright (c) 2021 TileDB, Inc.
8 #
9 # Permission is hereby granted, free of charge, to any person obtaining a copy
10 # of this software and associated documentation files (the "Software"), to deal
11 # in the Software without restriction, including without limitation the rights
12 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
13 # copies of the Software, and to permit persons to whom the Software is
14 # furnished to do so, subject to the following conditions:
15 #
16 # The above copyright notice and this permission notice shall be included in
17 # all copies or substantial portions of the Software.
18 #
19 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
20 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
21 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
22 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
23 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
24 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
25 # THE SOFTWARE.
26 #
27 # DESCRIPTION
28 #
29 # Please see the TileDB documentation for more information:
30 # https://docs.tiledb.com/main/solutions/tiledb-embedded/api-usage/writing-arrays/writing-in-dense-subarrays
31 #
32 # When run, this program will create a 2D+1 multi-component (eg RGB) dense array, write some
33 # data to it, and read the entire array data.
34
35 import tiledb, numpy as np
36
37 img_shape = (100, 224, 224)
38 img_uri = "writing_dense_rgb"
39
40 image_data = np.random.randint(low=0, high=100, size=(*img_shape, 3), dtype=np.int32)
41
42
43 def create_array():
44 domain = tiledb.Domain(
45 tiledb.Dim(
46 name="image_id", domain=(0, img_shape[0] - 1), tile=4, dtype=np.int32
47 ),
48 tiledb.Dim(
49 name="x", domain=(0, img_shape[1] - 1), tile=img_shape[1], dtype=np.int32
50 ),
51 tiledb.Dim(
52 name="y", domain=(0, img_shape[2] - 1), tile=img_shape[2], dtype=np.int32
53 ),
54 )
55
56 # create multi-component attribute with four int32 components
57 attr = tiledb.Attr(dtype=np.dtype("i4, i4, i4"))
58
59 schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])
60
61 tiledb.Array.create(img_uri, schema)
62
63 image_data_rgb = image_data.view(np.dtype("i4, i4, i4"))
64
65 with tiledb.open(img_uri, "w") as A:
66 # write data to 1st image_id slot
67 A[:] = image_data_rgb
68
69
70 def read_array():
71 with tiledb.open(img_uri) as A:
72 print(A[:].shape)
73
74
75 if __name__ == "__main__":
76 create_array()
77 read_array()
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/writing_dense_rgb.py b/examples/writing_dense_rgb.py
--- a/examples/writing_dense_rgb.py
+++ b/examples/writing_dense_rgb.py
@@ -53,7 +53,7 @@
),
)
- # create multi-component attribute with four int32 components
+ # create multi-component attribute with three int32 components
attr = tiledb.Attr(dtype=np.dtype("i4, i4, i4"))
schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])
| {"golden_diff": "diff --git a/examples/writing_dense_rgb.py b/examples/writing_dense_rgb.py\n--- a/examples/writing_dense_rgb.py\n+++ b/examples/writing_dense_rgb.py\n@@ -53,7 +53,7 @@\n ),\n )\n \n- # create multi-component attribute with four int32 components\n+ # create multi-component attribute with three int32 components\n attr = tiledb.Attr(dtype=np.dtype(\"i4, i4, i4\"))\n \n schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])\n", "issue": "Four components should be three components?\nIn the recently created example \"writing_dense_rgb.py\" there is this fragment:\r\nhttps://github.com/TileDB-Inc/TileDB-Py/blob/75ddcf56ed80ba5e1a1237b7e527ec4fbd87abb9/examples/writing_dense_rgb.py#L56-L57\r\n\r\nIt says four int32 components where it seems like it should be three int32 components. After all the values of the attribute are RGB and not RGBA.\n", "before_files": [{"content": "# writing_dense_rgb.py\n#\n# LICENSE\n#\n# The MIT License\n#\n# Copyright (c) 2021 TileDB, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n# DESCRIPTION\n#\n# Please see the TileDB documentation for more information:\n# https://docs.tiledb.com/main/solutions/tiledb-embedded/api-usage/writing-arrays/writing-in-dense-subarrays\n#\n# When run, this program will create a 2D+1 multi-component (eg RGB) dense array, write some\n# data to it, and read the entire array data.\n\nimport tiledb, numpy as np\n\nimg_shape = (100, 224, 224)\nimg_uri = \"writing_dense_rgb\"\n\nimage_data = np.random.randint(low=0, high=100, size=(*img_shape, 3), dtype=np.int32)\n\n\ndef create_array():\n domain = tiledb.Domain(\n tiledb.Dim(\n name=\"image_id\", domain=(0, img_shape[0] - 1), tile=4, dtype=np.int32\n ),\n tiledb.Dim(\n name=\"x\", domain=(0, img_shape[1] - 1), tile=img_shape[1], dtype=np.int32\n ),\n tiledb.Dim(\n name=\"y\", domain=(0, img_shape[2] - 1), tile=img_shape[2], dtype=np.int32\n ),\n )\n\n # create multi-component attribute with four int32 components\n attr = tiledb.Attr(dtype=np.dtype(\"i4, i4, i4\"))\n\n schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])\n\n tiledb.Array.create(img_uri, schema)\n\n image_data_rgb = image_data.view(np.dtype(\"i4, i4, i4\"))\n\n with tiledb.open(img_uri, \"w\") as A:\n # write data to 1st image_id slot\n A[:] = image_data_rgb\n\n\ndef read_array():\n with tiledb.open(img_uri) as A:\n print(A[:].shape)\n\n\nif __name__ == \"__main__\":\n create_array()\n read_array()\n", "path": "examples/writing_dense_rgb.py"}], "after_files": [{"content": "# writing_dense_rgb.py\n#\n# LICENSE\n#\n# The MIT License\n#\n# Copyright (c) 2021 TileDB, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n# DESCRIPTION\n#\n# Please see the TileDB documentation for more information:\n# https://docs.tiledb.com/main/solutions/tiledb-embedded/api-usage/writing-arrays/writing-in-dense-subarrays\n#\n# When run, this program will create a 2D+1 multi-component (eg RGB) dense array, write some\n# data to it, and read the entire array data.\n\nimport tiledb, numpy as np\n\nimg_shape = (100, 224, 224)\nimg_uri = \"writing_dense_rgb\"\n\nimage_data = np.random.randint(low=0, high=100, size=(*img_shape, 3), dtype=np.int32)\n\n\ndef create_array():\n domain = tiledb.Domain(\n tiledb.Dim(\n name=\"image_id\", domain=(0, img_shape[0] - 1), tile=4, dtype=np.int32\n ),\n tiledb.Dim(\n name=\"x\", domain=(0, img_shape[1] - 1), tile=img_shape[1], dtype=np.int32\n ),\n tiledb.Dim(\n name=\"y\", domain=(0, img_shape[2] - 1), tile=img_shape[2], dtype=np.int32\n ),\n )\n\n # create multi-component attribute with three int32 components\n attr = tiledb.Attr(dtype=np.dtype(\"i4, i4, i4\"))\n\n schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])\n\n tiledb.Array.create(img_uri, schema)\n\n image_data_rgb = image_data.view(np.dtype(\"i4, i4, i4\"))\n\n with tiledb.open(img_uri, \"w\") as A:\n # write data to 1st image_id slot\n A[:] = image_data_rgb\n\n\ndef read_array():\n with tiledb.open(img_uri) as A:\n print(A[:].shape)\n\n\nif __name__ == \"__main__\":\n create_array()\n read_array()\n", "path": "examples/writing_dense_rgb.py"}]} | 1,221 | 120 |
gh_patches_debug_7792 | rasdani/github-patches | git_diff | locustio__locust-401 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
requests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))
I wanted to offer this up not as an issue, but as a solution to one that I found today.
I had a test that, when run on a specific server, would always fail with this unhelpful message:
requests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))
The test had multiple requests to the same client within a single task, and a colleague suspected it was something to do with the connection from the first request not being properly closed.
After a lot of playing around with timeouts and attempting to close out the first connection before the next one was sent (both of which did not solve the issue), I found a stackoverflow article with the same issue:
http://stackoverflow.com/questions/30033516/single-session-multiple-post-get-in-python-requests
The quick and dirty solution was to update to requests 2.7.0. At the time I hit this error I was on 2.6.2. I also noticed that Locust itself only pins `requests>=2.4.1` in its setup.py. If you are experiencing this issue, simply update to requests 2.7 and you should be good!
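
For reference, the failing pattern was essentially several requests over the same client session inside one task. A minimal sketch against the Locust 0.7-era API (the URLs are made up):

```python
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def browse(self):
        # Two requests over one underlying session; with requests < 2.7
        # the second call could fail with ResponseNotReady on some servers.
        self.client.get("/")
        self.client.get("/about")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000
    max_wait = 2000
```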
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # encoding: utf-8
2
3 from setuptools import setup, find_packages, Command
4 import sys, os
5
6 version = '0.7.3'
7
8
9 class Unit2Discover(Command):
10 user_options = []
11
12 def initialize_options(self):
13 pass
14
15 def finalize_options(self):
16 pass
17
18 def run(self):
19 import sys, subprocess
20 basecmd = ['unit2', 'discover']
21 errno = subprocess.call(basecmd)
22 raise SystemExit(errno)
23
24
25 setup(
26 name='locustio',
27 version=version,
28 description="Website load testing framework",
29 long_description="""Locust is a python utility for doing easy, distributed load testing of a web site""",
30 classifiers=[
31 "Topic :: Software Development :: Testing :: Traffic Generation",
32 "Development Status :: 4 - Beta",
33 "License :: OSI Approved :: MIT License",
34 "Operating System :: OS Independent",
35 "Programming Language :: Python",
36 "Programming Language :: Python :: 2",
37 "Programming Language :: Python :: 2.6",
38 "Programming Language :: Python :: 2.7",
39 "Intended Audience :: Developers",
40 "Intended Audience :: System Administrators",
41 ],
42 keywords='',
43 author='Jonatan Heyman, Carl Bystrom, Joakim Hamrén, Hugo Heyman',
44 author_email='',
45 url='http://locust.io',
46 license='MIT',
47 packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),
48 include_package_data=True,
49 zip_safe=False,
50 install_requires=["gevent==1.0.1", "flask>=0.10.1", "requests>=2.4.1", "msgpack-python>=0.4.2"],
51 tests_require=['unittest2', 'mock', 'pyzmq'],
52 entry_points={
53 'console_scripts': [
54 'locust = locust.main:main',
55 ]
56 },
57 test_suite='unittest2.collector',
58 )
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -47,7 +47,7 @@
packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),
include_package_data=True,
zip_safe=False,
- install_requires=["gevent==1.0.1", "flask>=0.10.1", "requests>=2.4.1", "msgpack-python>=0.4.2"],
+ install_requires=["gevent==1.0.1", "flask>=0.10.1", "requests>=2.9.1", "msgpack-python>=0.4.2"],
tests_require=['unittest2', 'mock', 'pyzmq'],
entry_points={
'console_scripts': [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -47,7 +47,7 @@\n packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),\n include_package_data=True,\n zip_safe=False,\n- install_requires=[\"gevent==1.0.1\", \"flask>=0.10.1\", \"requests>=2.4.1\", \"msgpack-python>=0.4.2\"],\n+ install_requires=[\"gevent==1.0.1\", \"flask>=0.10.1\", \"requests>=2.9.1\", \"msgpack-python>=0.4.2\"],\n tests_require=['unittest2', 'mock', 'pyzmq'],\n entry_points={\n 'console_scripts': [\n", "issue": "requests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))\nI wanted to offer this up not as an issue, but as a solution to one that I found today.\n\nI had a test that when run on a specific server would always fail with this unhelpful message:\nrequests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))\n\nThe test had multiple requests to the same client within a single task and a colleague suspected it was something to do with the connection from the first request not being properly closed.\n\nAfter a lot of playing around with timeouts and attempting to close out the first connection before the next one was sent (both of which did not solve the issue), I found a stackoverflow article with the same issue:\nhttp://stackoverflow.com/questions/30033516/single-session-multiple-post-get-in-python-requests\n\nThe quick and dirty solution was to update to requests 2.7.0. At the time of getting this error I was on 2.6.2. I also noticed that the default version for locust is on 2.4. If you are experiencing this issue, simply update to 2.7 and you should be good!\n\n", "before_files": [{"content": "# encoding: utf-8\n\nfrom setuptools import setup, find_packages, Command\nimport sys, os\n\nversion = '0.7.3'\n\n\nclass Unit2Discover(Command):\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n import sys, subprocess\n basecmd = ['unit2', 'discover']\n errno = subprocess.call(basecmd)\n raise SystemExit(errno)\n\n\nsetup(\n name='locustio',\n version=version,\n description=\"Website load testing framework\",\n long_description=\"\"\"Locust is a python utility for doing easy, distributed load testing of a web site\"\"\",\n classifiers=[\n \"Topic :: Software Development :: Testing :: Traffic Generation\",\n \"Development Status :: 4 - Beta\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n ],\n keywords='',\n author='Jonatan Heyman, Carl Bystrom, Joakim Hamr\u00e9n, Hugo Heyman',\n author_email='',\n url='http://locust.io',\n license='MIT',\n packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=[\"gevent==1.0.1\", \"flask>=0.10.1\", \"requests>=2.4.1\", \"msgpack-python>=0.4.2\"],\n tests_require=['unittest2', 'mock', 'pyzmq'],\n entry_points={\n 'console_scripts': [\n 'locust = locust.main:main',\n ]\n },\n test_suite='unittest2.collector',\n)\n", "path": "setup.py"}], "after_files": [{"content": "# encoding: utf-8\n\nfrom setuptools import setup, find_packages, Command\nimport sys, os\n\nversion = '0.7.3'\n\n\nclass Unit2Discover(Command):\n user_options = 
[]\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n import sys, subprocess\n basecmd = ['unit2', 'discover']\n errno = subprocess.call(basecmd)\n raise SystemExit(errno)\n\n\nsetup(\n name='locustio',\n version=version,\n description=\"Website load testing framework\",\n long_description=\"\"\"Locust is a python utility for doing easy, distributed load testing of a web site\"\"\",\n classifiers=[\n \"Topic :: Software Development :: Testing :: Traffic Generation\",\n \"Development Status :: 4 - Beta\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n ],\n keywords='',\n author='Jonatan Heyman, Carl Bystrom, Joakim Hamr\u00e9n, Hugo Heyman',\n author_email='',\n url='http://locust.io',\n license='MIT',\n packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=[\"gevent==1.0.1\", \"flask>=0.10.1\", \"requests>=2.9.1\", \"msgpack-python>=0.4.2\"],\n tests_require=['unittest2', 'mock', 'pyzmq'],\n entry_points={\n 'console_scripts': [\n 'locust = locust.main:main',\n ]\n },\n test_suite='unittest2.collector',\n)\n", "path": "setup.py"}]} | 1,036 | 174 |
gh_patches_debug_34594 | rasdani/github-patches | git_diff | scrapy__scrapy-4746 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[WIP] fix xmliter namespace on selected node
This PR was triggered by a thread on [scrapy-users](https://groups.google.com/forum/#!topic/scrapy-users/VN6409UHexQ)
Currently `xmliter` populates each `Selector` with everything from position 0 up to the start of the matched tag, so if there are 100 MB of data before the tag we want to iterate, those 100 MB are copied into every `Selector` object. It also extracts this header information only for the first tag and embeds the rest inside it, which can let information cross between nodes.
In this PR I kept the regex-based approach, even though I think we should use something like [`iterparse`](https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse); a sketch of that alternative is included below.
Currently the `xmliter_lxml` tests are failing because it has a different API.
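
For comparison, a minimal `iterparse` sketch that streams matching nodes and clears each one after use so memory stays flat (the tag name is a placeholder):

```python
from lxml import etree

def iter_nodes(fileobj, nodename):
    for _, node in etree.iterparse(fileobj, tag=nodename):
        yield etree.tostring(node, encoding="unicode")
        node.clear()  # release the finished subtree so the tree never grows
```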
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/iterators.py`
Content:
```
1 import csv
2 import logging
3 import re
4 from io import StringIO
5
6 from scrapy.http import TextResponse, Response
7 from scrapy.selector import Selector
8 from scrapy.utils.python import re_rsearch, to_unicode
9
10
11 logger = logging.getLogger(__name__)
12
13
14 def xmliter(obj, nodename):
15 """Return a iterator of Selector's over all nodes of a XML document,
16 given the name of the node to iterate. Useful for parsing XML feeds.
17
18 obj can be:
19 - a Response object
20 - a unicode string
21 - a string encoded as utf-8
22 """
23 nodename_patt = re.escape(nodename)
24
25 HEADER_START_RE = re.compile(fr'^(.*?)<\s*{nodename_patt}(?:\s|>)', re.S)
26 HEADER_END_RE = re.compile(fr'<\s*/{nodename_patt}\s*>', re.S)
27 text = _body_or_str(obj)
28
29 header_start = re.search(HEADER_START_RE, text)
30 header_start = header_start.group(1).strip() if header_start else ''
31 header_end = re_rsearch(HEADER_END_RE, text)
32 header_end = text[header_end[1]:].strip() if header_end else ''
33
34 r = re.compile(fr'<{nodename_patt}[\s>].*?</{nodename_patt}>', re.DOTALL)
35 for match in r.finditer(text):
36 nodetext = header_start + match.group() + header_end
37 yield Selector(text=nodetext, type='xml').xpath('//' + nodename)[0]
38
39
40 def xmliter_lxml(obj, nodename, namespace=None, prefix='x'):
41 from lxml import etree
42 reader = _StreamReader(obj)
43 tag = f'{{{namespace}}}{nodename}'if namespace else nodename
44 iterable = etree.iterparse(reader, tag=tag, encoding=reader.encoding)
45 selxpath = '//' + (f'{prefix}:{nodename}' if namespace else nodename)
46 for _, node in iterable:
47 nodetext = etree.tostring(node, encoding='unicode')
48 node.clear()
49 xs = Selector(text=nodetext, type='xml')
50 if namespace:
51 xs.register_namespace(prefix, namespace)
52 yield xs.xpath(selxpath)[0]
53
54
55 class _StreamReader:
56
57 def __init__(self, obj):
58 self._ptr = 0
59 if isinstance(obj, Response):
60 self._text, self.encoding = obj.body, obj.encoding
61 else:
62 self._text, self.encoding = obj, 'utf-8'
63 self._is_unicode = isinstance(self._text, str)
64
65 def read(self, n=65535):
66 self.read = self._read_unicode if self._is_unicode else self._read_string
67 return self.read(n).lstrip()
68
69 def _read_string(self, n=65535):
70 s, e = self._ptr, self._ptr + n
71 self._ptr = e
72 return self._text[s:e]
73
74 def _read_unicode(self, n=65535):
75 s, e = self._ptr, self._ptr + n
76 self._ptr = e
77 return self._text[s:e].encode('utf-8')
78
79
80 def csviter(obj, delimiter=None, headers=None, encoding=None, quotechar=None):
81 """ Returns an iterator of dictionaries from the given csv object
82
83 obj can be:
84 - a Response object
85 - a unicode string
86 - a string encoded as utf-8
87
88 delimiter is the character used to separate fields on the given obj.
89
90 headers is an iterable that when provided offers the keys
91 for the returned dictionaries, if not the first row is used.
92
93 quotechar is the character used to enclosure fields on the given obj.
94 """
95
96 encoding = obj.encoding if isinstance(obj, TextResponse) else encoding or 'utf-8'
97
98 def row_to_unicode(row_):
99 return [to_unicode(field, encoding) for field in row_]
100
101 lines = StringIO(_body_or_str(obj, unicode=True))
102
103 kwargs = {}
104 if delimiter:
105 kwargs["delimiter"] = delimiter
106 if quotechar:
107 kwargs["quotechar"] = quotechar
108 csv_r = csv.reader(lines, **kwargs)
109
110 if not headers:
111 try:
112 row = next(csv_r)
113 except StopIteration:
114 return
115 headers = row_to_unicode(row)
116
117 for row in csv_r:
118 row = row_to_unicode(row)
119 if len(row) != len(headers):
120 logger.warning("ignoring row %(csvlnum)d (length: %(csvrow)d, "
121 "should be: %(csvheader)d)",
122 {'csvlnum': csv_r.line_num, 'csvrow': len(row),
123 'csvheader': len(headers)})
124 continue
125 else:
126 yield dict(zip(headers, row))
127
128
129 def _body_or_str(obj, unicode=True):
130 expected_types = (Response, str, bytes)
131 if not isinstance(obj, expected_types):
132 expected_types_str = " or ".join(t.__name__ for t in expected_types)
133 raise TypeError(
134 f"Object {obj!r} must be {expected_types_str}, not {type(obj).__name__}"
135 )
136 if isinstance(obj, Response):
137 if not unicode:
138 return obj.body
139 elif isinstance(obj, TextResponse):
140 return obj.text
141 else:
142 return obj.body.decode('utf-8')
143 elif isinstance(obj, str):
144 return obj if unicode else obj.encode('utf-8')
145 else:
146 return obj.decode('utf-8') if unicode else obj
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/utils/iterators.py b/scrapy/utils/iterators.py
--- a/scrapy/utils/iterators.py
+++ b/scrapy/utils/iterators.py
@@ -22,25 +22,41 @@
"""
nodename_patt = re.escape(nodename)
- HEADER_START_RE = re.compile(fr'^(.*?)<\s*{nodename_patt}(?:\s|>)', re.S)
+ DOCUMENT_HEADER_RE = re.compile(r'<\?xml[^>]+>\s*', re.S)
HEADER_END_RE = re.compile(fr'<\s*/{nodename_patt}\s*>', re.S)
+ END_TAG_RE = re.compile(r'<\s*/([^\s>]+)\s*>', re.S)
+ NAMESPACE_RE = re.compile(r'((xmlns[:A-Za-z]*)=[^>\s]+)', re.S)
text = _body_or_str(obj)
- header_start = re.search(HEADER_START_RE, text)
- header_start = header_start.group(1).strip() if header_start else ''
- header_end = re_rsearch(HEADER_END_RE, text)
- header_end = text[header_end[1]:].strip() if header_end else ''
+ document_header = re.search(DOCUMENT_HEADER_RE, text)
+ document_header = document_header.group().strip() if document_header else ''
+ header_end_idx = re_rsearch(HEADER_END_RE, text)
+ header_end = text[header_end_idx[1]:].strip() if header_end_idx else ''
+ namespaces = {}
+ if header_end:
+ for tagname in reversed(re.findall(END_TAG_RE, header_end)):
+ tag = re.search(fr'<\s*{tagname}.*?xmlns[:=][^>]*>', text[:header_end_idx[1]], re.S)
+ if tag:
+ namespaces.update(reversed(x) for x in re.findall(NAMESPACE_RE, tag.group()))
r = re.compile(fr'<{nodename_patt}[\s>].*?</{nodename_patt}>', re.DOTALL)
for match in r.finditer(text):
- nodetext = header_start + match.group() + header_end
- yield Selector(text=nodetext, type='xml').xpath('//' + nodename)[0]
+ nodetext = (
+ document_header
+ + match.group().replace(
+ nodename,
+ f'{nodename} {" ".join(namespaces.values())}',
+ 1
+ )
+ + header_end
+ )
+ yield Selector(text=nodetext, type='xml')
def xmliter_lxml(obj, nodename, namespace=None, prefix='x'):
from lxml import etree
reader = _StreamReader(obj)
- tag = f'{{{namespace}}}{nodename}'if namespace else nodename
+ tag = f'{{{namespace}}}{nodename}' if namespace else nodename
iterable = etree.iterparse(reader, tag=tag, encoding=reader.encoding)
selxpath = '//' + (f'{prefix}:{nodename}' if namespace else nodename)
for _, node in iterable:
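
The patch stops copying everything that precedes the first match into every yielded node (the memory problem described in the PR) and instead re-attaches only the XML declaration plus the xmlns declarations it collects from the enclosing tags. Below is a standalone sketch of that namespace-collection idea — it reuses `NAMESPACE_RE` from the patch but, for simplicity, scans the whole document rather than just the enclosing end tags; the feed content is made up:

```python
import re

text = (
    '<?xml version="1.0"?>'
    '<feed xmlns:g="http://base.google.com/ns/1.0">'
    '<product><g:price>5.00</g:price></product>'
    '<product><g:price>6.00</g:price></product>'
    '</feed>'
)

NAMESPACE_RE = re.compile(r'((xmlns[:A-Za-z]*)=[^>\s]+)', re.S)
# Collect {prefix: full declaration} pairs, as the patch does.
namespaces = dict(reversed(m) for m in NAMESPACE_RE.findall(text))
node = re.search(r'<product[\s>].*?</product>', text, re.S).group()
# Re-attach the declarations to the extracted node before re-parsing it.
patched = node.replace('product', 'product ' + ' '.join(namespaces.values()), 1)
print(patched)
# <product xmlns:g="http://base.google.com/ns/1.0"><g:price>5.00</g:price></product>
```

Re-attaching the declarations keeps namespaced XPath lookups working on the per-node `Selector` objects without re-parsing the full document preamble for every node.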
| {"golden_diff": "diff --git a/scrapy/utils/iterators.py b/scrapy/utils/iterators.py\n--- a/scrapy/utils/iterators.py\n+++ b/scrapy/utils/iterators.py\n@@ -22,25 +22,41 @@\n \"\"\"\n nodename_patt = re.escape(nodename)\n \n- HEADER_START_RE = re.compile(fr'^(.*?)<\\s*{nodename_patt}(?:\\s|>)', re.S)\n+ DOCUMENT_HEADER_RE = re.compile(r'<\\?xml[^>]+>\\s*', re.S)\n HEADER_END_RE = re.compile(fr'<\\s*/{nodename_patt}\\s*>', re.S)\n+ END_TAG_RE = re.compile(r'<\\s*/([^\\s>]+)\\s*>', re.S)\n+ NAMESPACE_RE = re.compile(r'((xmlns[:A-Za-z]*)=[^>\\s]+)', re.S)\n text = _body_or_str(obj)\n \n- header_start = re.search(HEADER_START_RE, text)\n- header_start = header_start.group(1).strip() if header_start else ''\n- header_end = re_rsearch(HEADER_END_RE, text)\n- header_end = text[header_end[1]:].strip() if header_end else ''\n+ document_header = re.search(DOCUMENT_HEADER_RE, text)\n+ document_header = document_header.group().strip() if document_header else ''\n+ header_end_idx = re_rsearch(HEADER_END_RE, text)\n+ header_end = text[header_end_idx[1]:].strip() if header_end_idx else ''\n+ namespaces = {}\n+ if header_end:\n+ for tagname in reversed(re.findall(END_TAG_RE, header_end)):\n+ tag = re.search(fr'<\\s*{tagname}.*?xmlns[:=][^>]*>', text[:header_end_idx[1]], re.S)\n+ if tag:\n+ namespaces.update(reversed(x) for x in re.findall(NAMESPACE_RE, tag.group()))\n \n r = re.compile(fr'<{nodename_patt}[\\s>].*?</{nodename_patt}>', re.DOTALL)\n for match in r.finditer(text):\n- nodetext = header_start + match.group() + header_end\n- yield Selector(text=nodetext, type='xml').xpath('//' + nodename)[0]\n+ nodetext = (\n+ document_header\n+ + match.group().replace(\n+ nodename,\n+ f'{nodename} {\" \".join(namespaces.values())}',\n+ 1\n+ )\n+ + header_end\n+ )\n+ yield Selector(text=nodetext, type='xml')\n \n \n def xmliter_lxml(obj, nodename, namespace=None, prefix='x'):\n from lxml import etree\n reader = _StreamReader(obj)\n- tag = f'{{{namespace}}}{nodename}'if namespace else nodename\n+ tag = f'{{{namespace}}}{nodename}' if namespace else nodename\n iterable = etree.iterparse(reader, tag=tag, encoding=reader.encoding)\n selxpath = '//' + (f'{prefix}:{nodename}' if namespace else nodename)\n for _, node in iterable:\n", "issue": "[WIP] fix xmliter namespace on selected node\nThis PR was triggered by [scrapy-users](https://groups.google.com/forum/#!topic/scrapy-users/VN6409UHexQ)\n\nActually `xmliter` populates a `Selector` with everything from the position 0 to the tag start, so if we had 100mb before the tag we want to iter it copy those 100mb across all the `Selector` objects. Also it just extract this info for the first tag and embed the rest on that, this can cause info crossing.\n\nIn this PR I kept the regex stuff even tho I think we should use something like [`iterparse`](https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse).\n\nCurrently `xmliter_lxml` tests are failing due to it has a different API.\n\n", "before_files": [{"content": "import csv\nimport logging\nimport re\nfrom io import StringIO\n\nfrom scrapy.http import TextResponse, Response\nfrom scrapy.selector import Selector\nfrom scrapy.utils.python import re_rsearch, to_unicode\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef xmliter(obj, nodename):\n \"\"\"Return a iterator of Selector's over all nodes of a XML document,\n given the name of the node to iterate. 
Useful for parsing XML feeds.\n\n obj can be:\n - a Response object\n - a unicode string\n - a string encoded as utf-8\n \"\"\"\n nodename_patt = re.escape(nodename)\n\n HEADER_START_RE = re.compile(fr'^(.*?)<\\s*{nodename_patt}(?:\\s|>)', re.S)\n HEADER_END_RE = re.compile(fr'<\\s*/{nodename_patt}\\s*>', re.S)\n text = _body_or_str(obj)\n\n header_start = re.search(HEADER_START_RE, text)\n header_start = header_start.group(1).strip() if header_start else ''\n header_end = re_rsearch(HEADER_END_RE, text)\n header_end = text[header_end[1]:].strip() if header_end else ''\n\n r = re.compile(fr'<{nodename_patt}[\\s>].*?</{nodename_patt}>', re.DOTALL)\n for match in r.finditer(text):\n nodetext = header_start + match.group() + header_end\n yield Selector(text=nodetext, type='xml').xpath('//' + nodename)[0]\n\n\ndef xmliter_lxml(obj, nodename, namespace=None, prefix='x'):\n from lxml import etree\n reader = _StreamReader(obj)\n tag = f'{{{namespace}}}{nodename}'if namespace else nodename\n iterable = etree.iterparse(reader, tag=tag, encoding=reader.encoding)\n selxpath = '//' + (f'{prefix}:{nodename}' if namespace else nodename)\n for _, node in iterable:\n nodetext = etree.tostring(node, encoding='unicode')\n node.clear()\n xs = Selector(text=nodetext, type='xml')\n if namespace:\n xs.register_namespace(prefix, namespace)\n yield xs.xpath(selxpath)[0]\n\n\nclass _StreamReader:\n\n def __init__(self, obj):\n self._ptr = 0\n if isinstance(obj, Response):\n self._text, self.encoding = obj.body, obj.encoding\n else:\n self._text, self.encoding = obj, 'utf-8'\n self._is_unicode = isinstance(self._text, str)\n\n def read(self, n=65535):\n self.read = self._read_unicode if self._is_unicode else self._read_string\n return self.read(n).lstrip()\n\n def _read_string(self, n=65535):\n s, e = self._ptr, self._ptr + n\n self._ptr = e\n return self._text[s:e]\n\n def _read_unicode(self, n=65535):\n s, e = self._ptr, self._ptr + n\n self._ptr = e\n return self._text[s:e].encode('utf-8')\n\n\ndef csviter(obj, delimiter=None, headers=None, encoding=None, quotechar=None):\n \"\"\" Returns an iterator of dictionaries from the given csv object\n\n obj can be:\n - a Response object\n - a unicode string\n - a string encoded as utf-8\n\n delimiter is the character used to separate fields on the given obj.\n\n headers is an iterable that when provided offers the keys\n for the returned dictionaries, if not the first row is used.\n\n quotechar is the character used to enclosure fields on the given obj.\n \"\"\"\n\n encoding = obj.encoding if isinstance(obj, TextResponse) else encoding or 'utf-8'\n\n def row_to_unicode(row_):\n return [to_unicode(field, encoding) for field in row_]\n\n lines = StringIO(_body_or_str(obj, unicode=True))\n\n kwargs = {}\n if delimiter:\n kwargs[\"delimiter\"] = delimiter\n if quotechar:\n kwargs[\"quotechar\"] = quotechar\n csv_r = csv.reader(lines, **kwargs)\n\n if not headers:\n try:\n row = next(csv_r)\n except StopIteration:\n return\n headers = row_to_unicode(row)\n\n for row in csv_r:\n row = row_to_unicode(row)\n if len(row) != len(headers):\n logger.warning(\"ignoring row %(csvlnum)d (length: %(csvrow)d, \"\n \"should be: %(csvheader)d)\",\n {'csvlnum': csv_r.line_num, 'csvrow': len(row),\n 'csvheader': len(headers)})\n continue\n else:\n yield dict(zip(headers, row))\n\n\ndef _body_or_str(obj, unicode=True):\n expected_types = (Response, str, bytes)\n if not isinstance(obj, expected_types):\n expected_types_str = \" or \".join(t.__name__ for t in expected_types)\n raise 
TypeError(\n f\"Object {obj!r} must be {expected_types_str}, not {type(obj).__name__}\"\n )\n if isinstance(obj, Response):\n if not unicode:\n return obj.body\n elif isinstance(obj, TextResponse):\n return obj.text\n else:\n return obj.body.decode('utf-8')\n elif isinstance(obj, str):\n return obj if unicode else obj.encode('utf-8')\n else:\n return obj.decode('utf-8') if unicode else obj\n", "path": "scrapy/utils/iterators.py"}], "after_files": [{"content": "import csv\nimport logging\nimport re\nfrom io import StringIO\n\nfrom scrapy.http import TextResponse, Response\nfrom scrapy.selector import Selector\nfrom scrapy.utils.python import re_rsearch, to_unicode\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef xmliter(obj, nodename):\n \"\"\"Return a iterator of Selector's over all nodes of a XML document,\n given the name of the node to iterate. Useful for parsing XML feeds.\n\n obj can be:\n - a Response object\n - a unicode string\n - a string encoded as utf-8\n \"\"\"\n nodename_patt = re.escape(nodename)\n\n DOCUMENT_HEADER_RE = re.compile(r'<\\?xml[^>]+>\\s*', re.S)\n HEADER_END_RE = re.compile(fr'<\\s*/{nodename_patt}\\s*>', re.S)\n END_TAG_RE = re.compile(r'<\\s*/([^\\s>]+)\\s*>', re.S)\n NAMESPACE_RE = re.compile(r'((xmlns[:A-Za-z]*)=[^>\\s]+)', re.S)\n text = _body_or_str(obj)\n\n document_header = re.search(DOCUMENT_HEADER_RE, text)\n document_header = document_header.group().strip() if document_header else ''\n header_end_idx = re_rsearch(HEADER_END_RE, text)\n header_end = text[header_end_idx[1]:].strip() if header_end_idx else ''\n namespaces = {}\n if header_end:\n for tagname in reversed(re.findall(END_TAG_RE, header_end)):\n tag = re.search(fr'<\\s*{tagname}.*?xmlns[:=][^>]*>', text[:header_end_idx[1]], re.S)\n if tag:\n namespaces.update(reversed(x) for x in re.findall(NAMESPACE_RE, tag.group()))\n\n r = re.compile(fr'<{nodename_patt}[\\s>].*?</{nodename_patt}>', re.DOTALL)\n for match in r.finditer(text):\n nodetext = (\n document_header\n + match.group().replace(\n nodename,\n f'{nodename} {\" \".join(namespaces.values())}',\n 1\n )\n + header_end\n )\n yield Selector(text=nodetext, type='xml')\n\n\ndef xmliter_lxml(obj, nodename, namespace=None, prefix='x'):\n from lxml import etree\n reader = _StreamReader(obj)\n tag = f'{{{namespace}}}{nodename}' if namespace else nodename\n iterable = etree.iterparse(reader, tag=tag, encoding=reader.encoding)\n selxpath = '//' + (f'{prefix}:{nodename}' if namespace else nodename)\n for _, node in iterable:\n nodetext = etree.tostring(node, encoding='unicode')\n node.clear()\n xs = Selector(text=nodetext, type='xml')\n if namespace:\n xs.register_namespace(prefix, namespace)\n yield xs.xpath(selxpath)[0]\n\n\nclass _StreamReader:\n\n def __init__(self, obj):\n self._ptr = 0\n if isinstance(obj, Response):\n self._text, self.encoding = obj.body, obj.encoding\n else:\n self._text, self.encoding = obj, 'utf-8'\n self._is_unicode = isinstance(self._text, str)\n\n def read(self, n=65535):\n self.read = self._read_unicode if self._is_unicode else self._read_string\n return self.read(n).lstrip()\n\n def _read_string(self, n=65535):\n s, e = self._ptr, self._ptr + n\n self._ptr = e\n return self._text[s:e]\n\n def _read_unicode(self, n=65535):\n s, e = self._ptr, self._ptr + n\n self._ptr = e\n return self._text[s:e].encode('utf-8')\n\n\ndef csviter(obj, delimiter=None, headers=None, encoding=None, quotechar=None):\n \"\"\" Returns an iterator of dictionaries from the given csv object\n\n obj can be:\n - a Response object\n - a unicode 
string\n - a string encoded as utf-8\n\n delimiter is the character used to separate fields on the given obj.\n\n headers is an iterable that when provided offers the keys\n for the returned dictionaries, if not the first row is used.\n\n quotechar is the character used to enclosure fields on the given obj.\n \"\"\"\n\n encoding = obj.encoding if isinstance(obj, TextResponse) else encoding or 'utf-8'\n\n def row_to_unicode(row_):\n return [to_unicode(field, encoding) for field in row_]\n\n lines = StringIO(_body_or_str(obj, unicode=True))\n\n kwargs = {}\n if delimiter:\n kwargs[\"delimiter\"] = delimiter\n if quotechar:\n kwargs[\"quotechar\"] = quotechar\n csv_r = csv.reader(lines, **kwargs)\n\n if not headers:\n try:\n row = next(csv_r)\n except StopIteration:\n return\n headers = row_to_unicode(row)\n\n for row in csv_r:\n row = row_to_unicode(row)\n if len(row) != len(headers):\n logger.warning(\"ignoring row %(csvlnum)d (length: %(csvrow)d, \"\n \"should be: %(csvheader)d)\",\n {'csvlnum': csv_r.line_num, 'csvrow': len(row),\n 'csvheader': len(headers)})\n continue\n else:\n yield dict(zip(headers, row))\n\n\ndef _body_or_str(obj, unicode=True):\n expected_types = (Response, str, bytes)\n if not isinstance(obj, expected_types):\n expected_types_str = \" or \".join(t.__name__ for t in expected_types)\n raise TypeError(\n f\"Object {obj!r} must be {expected_types_str}, not {type(obj).__name__}\"\n )\n if isinstance(obj, Response):\n if not unicode:\n return obj.body\n elif isinstance(obj, TextResponse):\n return obj.text\n else:\n return obj.body.decode('utf-8')\n elif isinstance(obj, str):\n return obj if unicode else obj.encode('utf-8')\n else:\n return obj.decode('utf-8') if unicode else obj\n", "path": "scrapy/utils/iterators.py"}]} | 1,994 | 695 |
gh_patches_debug_35304 | rasdani/github-patches | git_diff | vaexio__vaex-757 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bool values get flipped when converting Arrow table to DataFrame
Using the latest version:
`vaex==2.6.1`
Just realised that when converting an Arrow table to a DataFrame, bool columns get flipped and converted to integers:
```python
import vaex
import pandas as pd
from pyarrow import feather
bool_array = [False, True, True, False]
pdf = pd.DataFrame({"col1": bool_array})
pdf.to_feather("test_data.feather")
arrow_table = feather.read_table("test_data.feather")
vaex.from_arrow_table(arrow_table)
```
```
# | col1
-- | --
0 | 1
1 | 0
2 | 0
3 | 1
```
--- END ISSUE ---
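
For context on the flipped values: Arrow stores boolean arrays bit-packed, one bit per value, least-significant bit first within each byte. The pre-patch code decoded that buffer with the helper meant for validity *masks*, which computes `1 - unpackbits(...)` and therefore inverts every value. A minimal sketch of the correct decoding, assuming pyarrow and numpy are installed:

```python
import numpy as np
import pyarrow as pa

arr = pa.array([False, True, True, False])
validity, data = arr.buffers()  # boolean arrays are bit-packed

bitmap = np.frombuffer(data, np.uint8, len(data))
# Arrow packs bits least-significant-first, so reverse the bit order
# within each byte before slicing to the logical length.
values = np.unpackbits(bitmap).reshape(-1, 8)[:, ::-1].reshape(-1)[:len(arr)]
print(values.astype(bool))  # [False  True  True False]
```

NumPy 1.17+ also accepts `np.unpackbits(bitmap, bitorder='little')`, which expresses the same per-byte reversal directly.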
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packages/vaex-arrow/vaex_arrow/convert.py`
Content:
```
1 """Convert between arrow and vaex/numpy columns/arrays without doing memory copies."""
2 import pyarrow
3 import numpy as np
4 from vaex.column import ColumnStringArrow
5
6 def arrow_array_from_numpy_array(array):
7 dtype = array.dtype
8 mask = None
9 if np.ma.isMaskedArray(array):
10 mask = array.mask
11 # arrow 0.16 behaves weird in this case https://github.com/vaexio/vaex/pull/639
12 if mask is np.False_:
13 mask = None
14 elif mask is np.True_:
15 raise ValueError('not sure what pyarrow does with mask=True')
16 array = array.data
17 if dtype.kind == 'S':
18 type = pyarrow.binary(dtype.itemsize)
19 arrow_array = pyarrow.array(array, type, mask=mask)
20 else:
21 if not dtype.isnative:
22 array = array.astype(dtype.newbyteorder('='))
23 arrow_array = pyarrow.Array.from_pandas(array, mask=mask)
24 return arrow_array
25
26 from vaex.dataframe import Column
27
28
29 def column_from_arrow_array(arrow_array):
30 arrow_type = arrow_array.type
31 buffers = arrow_array.buffers()
32 if len(buffers) == 2:
33 return numpy_array_from_arrow_array(arrow_array)
34 elif len(buffers) == 3 and isinstance(arrow_array.type, type(pyarrow.string())):
35 bitmap_buffer, offsets, string_bytes = arrow_array.buffers()
36 if arrow_array.null_count == 0:
37 null_bitmap = None # we drop any null_bitmap when there are no null counts
38 else:
39 null_bitmap = np.frombuffer(bitmap_buffer, 'uint8', len(bitmap_buffer))
40 offsets = np.frombuffer(offsets, np.int32, len(offsets)//4)
41 if string_bytes is None:
42 string_bytes = np.array([], dtype='S1')
43 else:
44 string_bytes = np.frombuffer(string_bytes, 'S1', len(string_bytes))
45 column = ColumnStringArrow(offsets, string_bytes, len(arrow_array), null_bitmap=null_bitmap)
46 return column
47 else:
48 raise TypeError('type unsupported: %r' % arrow_type)
49
50
51 def numpy_array_from_arrow_array(arrow_array):
52 arrow_type = arrow_array.type
53 buffers = arrow_array.buffers()
54 assert len(buffers) == 2
55 bitmap_buffer, data_buffer = buffers
56 if isinstance(arrow_type, type(pyarrow.binary(1))): # todo, is there a better way to typecheck?
57 # mimics python/pyarrow/array.pxi::Array::to_numpy
58 assert len(buffers) == 2
59 dtype = "S" + str(arrow_type.byte_width)
60 # arrow seems to do padding, check if it is all ok
61 expected_length = arrow_type.byte_width * len(arrow_array)
62 actual_length = len(buffers[-1])
63 if actual_length < expected_length:
64 raise ValueError('buffer is smaller (%d) than expected (%d)' % (actual_length, expected_length))
65 array = np.frombuffer(buffers[-1], dtype, len(arrow_array))# TODO: deal with offset ? [arrow_array.offset:arrow_array.offset + len(arrow_array)]
66 else:
67 dtype = arrow_array.type.to_pandas_dtype()
68 if np.bool_ == dtype:
69 # TODO: this will also be a copy, we probably want to support bitmasks as well
70 bitmap = np.frombuffer(data_buffer, np.uint8, len(data_buffer))
71 array = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))
72 else:
73 array = np.frombuffer(data_buffer, dtype, len(arrow_array))
74
75 if bitmap_buffer is not None:
76 bitmap = np.frombuffer(bitmap_buffer, np.uint8, len(bitmap_buffer))
77 mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))
78 array = np.ma.MaskedArray(array, mask=mask)
79 return array
80
81 def numpy_mask_from_arrow_mask(bitmap, length):
82 # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md
83 # we do have to change the ordering of the bits
84 return 1-np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length]
85
86
87
88 def arrow_table_from_vaex_df(ds, column_names=None, selection=None, strings=True, virtual=False):
89 """Implementation of Dataset.to_arrow_table"""
90 names = []
91 arrays = []
92 for name, array in ds.to_items(column_names=column_names, selection=selection, strings=strings, virtual=virtual):
93 names.append(name)
94 arrays.append(arrow_array_from_numpy_array(array))
95 return pyarrow.Table.from_arrays(arrays, names)
96
97 def vaex_df_from_arrow_table(table):
98 from .dataset import DatasetArrow
99 return DatasetArrow(table=table)
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/packages/vaex-arrow/vaex_arrow/convert.py b/packages/vaex-arrow/vaex_arrow/convert.py
--- a/packages/vaex-arrow/vaex_arrow/convert.py
+++ b/packages/vaex-arrow/vaex_arrow/convert.py
@@ -53,6 +53,7 @@
buffers = arrow_array.buffers()
assert len(buffers) == 2
bitmap_buffer, data_buffer = buffers
+ offset = arrow_array.offset
if isinstance(arrow_type, type(pyarrow.binary(1))): # todo, is there a better way to typecheck?
# mimics python/pyarrow/array.pxi::Array::to_numpy
assert len(buffers) == 2
@@ -68,13 +69,13 @@
if np.bool_ == dtype:
# TODO: this will also be a copy, we probably want to support bitmasks as well
bitmap = np.frombuffer(data_buffer, np.uint8, len(data_buffer))
- array = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))
+ array = numpy_bool_from_arrow_bitmap(bitmap, len(arrow_array) + offset)[offset:]
else:
- array = np.frombuffer(data_buffer, dtype, len(arrow_array))
+ array = np.frombuffer(data_buffer, dtype, len(arrow_array) + offset)[offset:]
if bitmap_buffer is not None:
bitmap = np.frombuffer(bitmap_buffer, np.uint8, len(bitmap_buffer))
- mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))
+ mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array) + offset)[offset:]
array = np.ma.MaskedArray(array, mask=mask)
return array
@@ -83,7 +84,10 @@
# we do have to change the ordering of the bits
return 1-np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length]
-
+def numpy_bool_from_arrow_bitmap(bitmap, length):
+ # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md
+ # we do have to change the ordering of the bits
+ return np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length].view(np.bool_)
def arrow_table_from_vaex_df(ds, column_names=None, selection=None, strings=True, virtual=False):
"""Implementation of Dataset.to_arrow_table"""
| {"golden_diff": "diff --git a/packages/vaex-arrow/vaex_arrow/convert.py b/packages/vaex-arrow/vaex_arrow/convert.py\n--- a/packages/vaex-arrow/vaex_arrow/convert.py\n+++ b/packages/vaex-arrow/vaex_arrow/convert.py\n@@ -53,6 +53,7 @@\n buffers = arrow_array.buffers()\n assert len(buffers) == 2\n bitmap_buffer, data_buffer = buffers\n+ offset = arrow_array.offset\n if isinstance(arrow_type, type(pyarrow.binary(1))): # todo, is there a better way to typecheck?\n # mimics python/pyarrow/array.pxi::Array::to_numpy\n assert len(buffers) == 2\n@@ -68,13 +69,13 @@\n if np.bool_ == dtype:\n # TODO: this will also be a copy, we probably want to support bitmasks as well\n bitmap = np.frombuffer(data_buffer, np.uint8, len(data_buffer))\n- array = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))\n+ array = numpy_bool_from_arrow_bitmap(bitmap, len(arrow_array) + offset)[offset:]\n else:\n- array = np.frombuffer(data_buffer, dtype, len(arrow_array))\n+ array = np.frombuffer(data_buffer, dtype, len(arrow_array) + offset)[offset:]\n \n if bitmap_buffer is not None:\n bitmap = np.frombuffer(bitmap_buffer, np.uint8, len(bitmap_buffer))\n- mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))\n+ mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array) + offset)[offset:]\n array = np.ma.MaskedArray(array, mask=mask)\n return array\n \n@@ -83,7 +84,10 @@\n # we do have to change the ordering of the bits\n return 1-np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length]\n \n-\n+def numpy_bool_from_arrow_bitmap(bitmap, length):\n+ # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md\n+ # we do have to change the ordering of the bits\n+ return np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length].view(np.bool_)\n \n def arrow_table_from_vaex_df(ds, column_names=None, selection=None, strings=True, virtual=False):\n \"\"\"Implementation of Dataset.to_arrow_table\"\"\"\n", "issue": "Bool values get flipped when converting Arrow table to DataFrame\nUsing the latest version:\r\n`vaex==2.6.1`\r\n\r\nJust realised that when converting an Arrow table to a DataFrame, bool columns get flipped and converted to integers:\r\n\r\n```python\r\nimport vaex\r\nfrom pyarrow import feather\r\n\r\nbool_array = [False, True, True, False]\r\npdf = pd.DataFrame({\"col1\": bool_array})\r\npdf.to_feather(\"test_data.feather\")\r\narrow_table = feather.read_table(\"test_data.feather\")\r\nvaex.from_arrow_table(arrow_table)\r\n```\r\n\r\n```\r\n# | col1\r\n-- | --\r\n0 | 1\r\n1 | 0\r\n2 | 0\r\n3 | 1\r\n```\n", "before_files": [{"content": "\"\"\"Convert between arrow and vaex/numpy columns/arrays without doing memory copies.\"\"\"\nimport pyarrow\nimport numpy as np\nfrom vaex.column import ColumnStringArrow\n\ndef arrow_array_from_numpy_array(array):\n dtype = array.dtype\n mask = None\n if np.ma.isMaskedArray(array):\n mask = array.mask\n # arrow 0.16 behaves weird in this case https://github.com/vaexio/vaex/pull/639\n if mask is np.False_:\n mask = None\n elif mask is np.True_:\n raise ValueError('not sure what pyarrow does with mask=True')\n array = array.data\n if dtype.kind == 'S':\n type = pyarrow.binary(dtype.itemsize)\n arrow_array = pyarrow.array(array, type, mask=mask)\n else:\n if not dtype.isnative:\n array = array.astype(dtype.newbyteorder('='))\n arrow_array = pyarrow.Array.from_pandas(array, mask=mask)\n return arrow_array\n\nfrom vaex.dataframe import Column\n\n\ndef column_from_arrow_array(arrow_array):\n arrow_type = arrow_array.type\n 
buffers = arrow_array.buffers()\n if len(buffers) == 2:\n return numpy_array_from_arrow_array(arrow_array)\n elif len(buffers) == 3 and isinstance(arrow_array.type, type(pyarrow.string())):\n bitmap_buffer, offsets, string_bytes = arrow_array.buffers()\n if arrow_array.null_count == 0:\n null_bitmap = None # we drop any null_bitmap when there are no null counts\n else:\n null_bitmap = np.frombuffer(bitmap_buffer, 'uint8', len(bitmap_buffer))\n offsets = np.frombuffer(offsets, np.int32, len(offsets)//4)\n if string_bytes is None:\n string_bytes = np.array([], dtype='S1')\n else:\n string_bytes = np.frombuffer(string_bytes, 'S1', len(string_bytes))\n column = ColumnStringArrow(offsets, string_bytes, len(arrow_array), null_bitmap=null_bitmap)\n return column\n else:\n raise TypeError('type unsupported: %r' % arrow_type)\n\n\ndef numpy_array_from_arrow_array(arrow_array):\n arrow_type = arrow_array.type\n buffers = arrow_array.buffers()\n assert len(buffers) == 2\n bitmap_buffer, data_buffer = buffers\n if isinstance(arrow_type, type(pyarrow.binary(1))): # todo, is there a better way to typecheck?\n # mimics python/pyarrow/array.pxi::Array::to_numpy\n assert len(buffers) == 2\n dtype = \"S\" + str(arrow_type.byte_width)\n # arrow seems to do padding, check if it is all ok\n expected_length = arrow_type.byte_width * len(arrow_array)\n actual_length = len(buffers[-1])\n if actual_length < expected_length:\n raise ValueError('buffer is smaller (%d) than expected (%d)' % (actual_length, expected_length))\n array = np.frombuffer(buffers[-1], dtype, len(arrow_array))# TODO: deal with offset ? [arrow_array.offset:arrow_array.offset + len(arrow_array)]\n else:\n dtype = arrow_array.type.to_pandas_dtype()\n if np.bool_ == dtype:\n # TODO: this will also be a copy, we probably want to support bitmasks as well\n bitmap = np.frombuffer(data_buffer, np.uint8, len(data_buffer))\n array = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))\n else:\n array = np.frombuffer(data_buffer, dtype, len(arrow_array))\n\n if bitmap_buffer is not None:\n bitmap = np.frombuffer(bitmap_buffer, np.uint8, len(bitmap_buffer))\n mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))\n array = np.ma.MaskedArray(array, mask=mask)\n return array\n\ndef numpy_mask_from_arrow_mask(bitmap, length):\n # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md\n # we do have to change the ordering of the bits\n return 1-np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length]\n\n\n\ndef arrow_table_from_vaex_df(ds, column_names=None, selection=None, strings=True, virtual=False):\n \"\"\"Implementation of Dataset.to_arrow_table\"\"\"\n names = []\n arrays = []\n for name, array in ds.to_items(column_names=column_names, selection=selection, strings=strings, virtual=virtual):\n names.append(name)\n arrays.append(arrow_array_from_numpy_array(array))\n return pyarrow.Table.from_arrays(arrays, names)\n\ndef vaex_df_from_arrow_table(table):\n from .dataset import DatasetArrow\n return DatasetArrow(table=table)\n", "path": "packages/vaex-arrow/vaex_arrow/convert.py"}], "after_files": [{"content": "\"\"\"Convert between arrow and vaex/numpy columns/arrays without doing memory copies.\"\"\"\nimport pyarrow\nimport numpy as np\nfrom vaex.column import ColumnStringArrow\n\ndef arrow_array_from_numpy_array(array):\n dtype = array.dtype\n mask = None\n if np.ma.isMaskedArray(array):\n mask = array.mask\n # arrow 0.16 behaves weird in this case https://github.com/vaexio/vaex/pull/639\n if mask is 
np.False_:\n mask = None\n elif mask is np.True_:\n raise ValueError('not sure what pyarrow does with mask=True')\n array = array.data\n if dtype.kind == 'S':\n type = pyarrow.binary(dtype.itemsize)\n arrow_array = pyarrow.array(array, type, mask=mask)\n else:\n if not dtype.isnative:\n array = array.astype(dtype.newbyteorder('='))\n arrow_array = pyarrow.Array.from_pandas(array, mask=mask)\n return arrow_array\n\nfrom vaex.dataframe import Column\n\n\ndef column_from_arrow_array(arrow_array):\n arrow_type = arrow_array.type\n buffers = arrow_array.buffers()\n if len(buffers) == 2:\n return numpy_array_from_arrow_array(arrow_array)\n elif len(buffers) == 3 and isinstance(arrow_array.type, type(pyarrow.string())):\n bitmap_buffer, offsets, string_bytes = arrow_array.buffers()\n if arrow_array.null_count == 0:\n null_bitmap = None # we drop any null_bitmap when there are no null counts\n else:\n null_bitmap = np.frombuffer(bitmap_buffer, 'uint8', len(bitmap_buffer))\n offsets = np.frombuffer(offsets, np.int32, len(offsets)//4)\n if string_bytes is None:\n string_bytes = np.array([], dtype='S1')\n else:\n string_bytes = np.frombuffer(string_bytes, 'S1', len(string_bytes))\n column = ColumnStringArrow(offsets, string_bytes, len(arrow_array), null_bitmap=null_bitmap)\n return column\n else:\n raise TypeError('type unsupported: %r' % arrow_type)\n\n\ndef numpy_array_from_arrow_array(arrow_array):\n arrow_type = arrow_array.type\n buffers = arrow_array.buffers()\n assert len(buffers) == 2\n bitmap_buffer, data_buffer = buffers\n offset = arrow_array.offset\n if isinstance(arrow_type, type(pyarrow.binary(1))): # todo, is there a better way to typecheck?\n # mimics python/pyarrow/array.pxi::Array::to_numpy\n assert len(buffers) == 2\n dtype = \"S\" + str(arrow_type.byte_width)\n # arrow seems to do padding, check if it is all ok\n expected_length = arrow_type.byte_width * len(arrow_array)\n actual_length = len(buffers[-1])\n if actual_length < expected_length:\n raise ValueError('buffer is smaller (%d) than expected (%d)' % (actual_length, expected_length))\n array = np.frombuffer(buffers[-1], dtype, len(arrow_array))# TODO: deal with offset ? 
[arrow_array.offset:arrow_array.offset + len(arrow_array)]\n else:\n dtype = arrow_array.type.to_pandas_dtype()\n if np.bool_ == dtype:\n # TODO: this will also be a copy, we probably want to support bitmasks as well\n bitmap = np.frombuffer(data_buffer, np.uint8, len(data_buffer))\n array = numpy_bool_from_arrow_bitmap(bitmap, len(arrow_array) + offset)[offset:]\n else:\n array = np.frombuffer(data_buffer, dtype, len(arrow_array) + offset)[offset:]\n\n if bitmap_buffer is not None:\n bitmap = np.frombuffer(bitmap_buffer, np.uint8, len(bitmap_buffer))\n mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array) + offset)[offset:]\n array = np.ma.MaskedArray(array, mask=mask)\n return array\n\ndef numpy_mask_from_arrow_mask(bitmap, length):\n # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md\n # we do have to change the ordering of the bits\n return 1-np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length]\n\ndef numpy_bool_from_arrow_bitmap(bitmap, length):\n # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md\n # we do have to change the ordering of the bits\n return np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length].view(np.bool_)\n\ndef arrow_table_from_vaex_df(ds, column_names=None, selection=None, strings=True, virtual=False):\n \"\"\"Implementation of Dataset.to_arrow_table\"\"\"\n names = []\n arrays = []\n for name, array in ds.to_items(column_names=column_names, selection=selection, strings=strings, virtual=virtual):\n names.append(name)\n arrays.append(arrow_array_from_numpy_array(array))\n return pyarrow.Table.from_arrays(arrays, names)\n\ndef vaex_df_from_arrow_table(table):\n from .dataset import DatasetArrow\n return DatasetArrow(table=table)\n", "path": "packages/vaex-arrow/vaex_arrow/convert.py"}]} | 1,649 | 543 |
gh_patches_debug_64467 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-3019 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
testing 2958: bplan verification mail
**URL:** mail
**user:** administration staff working via imperia
**expected behaviour:** /
**behaviour:** wording changed, see below
**important screensize:**/
**device & browser:** /
**Comment/Question:**
- cross out the word "Betreff" in e-mail-subject
- correct "Projektü**n**ersicht" to "Projektübersicht"
- can you write "Uhr" behind date and time?
- I already know that it is complicated to separate date and time via comma, I guess this hasn't changed?
- the word "identifier" shouldn't be there but I guess it is only there because you entered it into the field together with the identifier itself, right?
Screenshot: [Bildschirmfoto 2020-07-02 um 12 25 14](https://user-images.githubusercontent.com/35491681/86348098-7ccdd280-bc5f-11ea-9fb7-010f71c1a1a9.png)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/bplan/signals.py`
Content:
```
1 from django.db.models.signals import post_save
2 from django.dispatch import receiver
3
4 from . import emails
5 from . import tasks
6 from .models import Bplan
7 from .models import Statement
8
9
10 @receiver(post_save, sender=Bplan)
11 def get_location(sender, instance, update_fields, **kwargs):
12 if instance.identifier and (not update_fields or
13 'point' not in update_fields):
14 tasks.get_location_information(instance.pk)
15
16
17 @receiver(post_save, sender=Statement)
18 def send_notification(sender, instance, created, **kwargs):
19 if created:
20 emails.OfficeWorkerNotification.send(instance)
21
22 if instance.email:
23 emails.SubmitterConfirmation.send(instance)
24
25
26 @receiver(post_save, sender=Bplan)
27 def send_update(sender, instance, update_fields, **kwargs):
28 if update_fields:
29 emails.OfficeWorkerUpdateConfirmation.send(instance)
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/bplan/signals.py b/meinberlin/apps/bplan/signals.py
--- a/meinberlin/apps/bplan/signals.py
+++ b/meinberlin/apps/bplan/signals.py
@@ -25,5 +25,5 @@
@receiver(post_save, sender=Bplan)
def send_update(sender, instance, update_fields, **kwargs):
- if update_fields:
+ if not update_fields or 'point' not in update_fields:
emails.OfficeWorkerUpdateConfirmation.send(instance)
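
A note on the condition: Django forwards the `update_fields` argument of `Model.save()` to `post_save` receivers (`None` for a plain save, a frozenset otherwise). With the patched check, the office-worker confirmation goes out for ordinary saves and is skipped only when the background geolocation task writes just the `point` field. A sketch of the effect, with `bplan` standing in for any existing `Bplan` instance:

```python
# Effect of the patched receiver, assuming `bplan` is an existing Bplan:
bplan.save()                          # update_fields is None   -> mail sent
bplan.save(update_fields=["point"])   # geolocation-only save   -> no mail
bplan.save(update_fields=["name"])    # any other partial save  -> mail sent
```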
| {"golden_diff": "diff --git a/meinberlin/apps/bplan/signals.py b/meinberlin/apps/bplan/signals.py\n--- a/meinberlin/apps/bplan/signals.py\n+++ b/meinberlin/apps/bplan/signals.py\n@@ -25,5 +25,5 @@\n \n @receiver(post_save, sender=Bplan)\n def send_update(sender, instance, update_fields, **kwargs):\n- if update_fields:\n+ if not update_fields or 'point' not in update_fields:\n emails.OfficeWorkerUpdateConfirmation.send(instance)\n", "issue": "testing 2958: bplan verification mail\n**URL:** mail\r\n**user:** administration staff working via imperia\r\n**expected behaviour:** /\r\n**behaviour:** wording changed, see below\r\n**important screensize:**/\r\n**device & browser:** /\r\n**Comment/Question:**\r\n\r\n- cross out the word \"Betreff\" in e-mail-subject\r\n\r\n- correct \"Projekt\u00fc**n**ersicht\" to \"Projekt\u00fcbersicht\"\r\n\r\n- can you write \"Uhr\" behind date and time?\r\n\r\n- I already know that it is complicated to separate date and time via comma, I guess this hasn't changed?\r\n\r\n- the word \"identifier\" shouldn't be there but I guess it is only there because you entered it into the field together with the identifier itself, right?\r\n\r\nScreenshot?\r\n<img width=\"707\" alt=\"Bildschirmfoto 2020-07-02 um 12 25 14\" src=\"https://user-images.githubusercontent.com/35491681/86348098-7ccdd280-bc5f-11ea-9fb7-010f71c1a1a9.png\">\r\n\r\n\r\n\n", "before_files": [{"content": "from django.db.models.signals import post_save\nfrom django.dispatch import receiver\n\nfrom . import emails\nfrom . import tasks\nfrom .models import Bplan\nfrom .models import Statement\n\n\n@receiver(post_save, sender=Bplan)\ndef get_location(sender, instance, update_fields, **kwargs):\n if instance.identifier and (not update_fields or\n 'point' not in update_fields):\n tasks.get_location_information(instance.pk)\n\n\n@receiver(post_save, sender=Statement)\ndef send_notification(sender, instance, created, **kwargs):\n if created:\n emails.OfficeWorkerNotification.send(instance)\n\n if instance.email:\n emails.SubmitterConfirmation.send(instance)\n\n\n@receiver(post_save, sender=Bplan)\ndef send_update(sender, instance, update_fields, **kwargs):\n if update_fields:\n emails.OfficeWorkerUpdateConfirmation.send(instance)\n", "path": "meinberlin/apps/bplan/signals.py"}], "after_files": [{"content": "from django.db.models.signals import post_save\nfrom django.dispatch import receiver\n\nfrom . import emails\nfrom . import tasks\nfrom .models import Bplan\nfrom .models import Statement\n\n\n@receiver(post_save, sender=Bplan)\ndef get_location(sender, instance, update_fields, **kwargs):\n if instance.identifier and (not update_fields or\n 'point' not in update_fields):\n tasks.get_location_information(instance.pk)\n\n\n@receiver(post_save, sender=Statement)\ndef send_notification(sender, instance, created, **kwargs):\n if created:\n emails.OfficeWorkerNotification.send(instance)\n\n if instance.email:\n emails.SubmitterConfirmation.send(instance)\n\n\n@receiver(post_save, sender=Bplan)\ndef send_update(sender, instance, update_fields, **kwargs):\n if not update_fields or 'point' not in update_fields:\n emails.OfficeWorkerUpdateConfirmation.send(instance)\n", "path": "meinberlin/apps/bplan/signals.py"}]} | 762 | 117 |
gh_patches_debug_11254 | rasdani/github-patches | git_diff | nipy__nipype-3007 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mne.WatershedBEM creates incorrect command line
### Summary
The mne.WatershedBEM interface `_cmd` do not correspond to the behavior of the stable release of mne [see documentation](https://martinos.org/mne/stable/generated/commands.html#create-bem-surfaces-using-the-watershed-algorithm-included-with-freesurfer)
[This line](https://github.com/nipy/nipype/blob/f79581edc042ed38064f48e85b6dcc38bc30a084/nipype/interfaces/mne/base.py#L97) `_cmd = 'mne_watershed_bem'` should be `_cmd = 'mne watershed_bem'`
--- END ISSUE ---
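
Since nipype composes the shell command from `_cmd` plus each input trait's `argstr`, the one-token change is all that is needed; the interface then targets the `watershed_bem` subcommand of the modern `mne` CLI. The resulting command line, mirroring the doctest in the patch below:

```python
from nipype.interfaces.mne import WatershedBEM

bem = WatershedBEM()
bem.inputs.subject_id = 'subj1'
bem.inputs.subjects_dir = '.'
print(bem.cmdline)
# mne watershed_bem --overwrite --subject subj1 --volume T1
```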
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nipype/interfaces/mne/base.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import (print_function, division, unicode_literals,
3 absolute_import)
4 from builtins import str, bytes
5
6 import os.path as op
7 import glob
8
9 from ... import logging
10 from ...utils.filemanip import simplify_list
11 from ..base import (traits, File, Directory, TraitedSpec, OutputMultiPath)
12 from ..freesurfer.base import FSCommand, FSTraitedSpec
13
14 iflogger = logging.getLogger('nipype.interface')
15
16
17 class WatershedBEMInputSpec(FSTraitedSpec):
18 subject_id = traits.Str(
19 argstr='--subject %s',
20 mandatory=True,
21 desc='Subject ID (must have a complete Freesurfer directory)')
22 subjects_dir = Directory(
23 exists=True,
24 mandatory=True,
25 usedefault=True,
26 desc='Path to Freesurfer subjects directory')
27 volume = traits.Enum(
28 'T1',
29 'aparc+aseg',
30 'aseg',
31 'brain',
32 'orig',
33 'brainmask',
34 'ribbon',
35 argstr='--volume %s',
36 usedefault=True,
37 desc='The volume from the "mri" directory to use (defaults to T1)')
38 overwrite = traits.Bool(
39 True,
40 usedefault=True,
41 argstr='--overwrite',
42 desc='Overwrites the existing files')
43 atlas_mode = traits.Bool(
44 argstr='--atlas',
45 desc='Use atlas mode for registration (default: no rigid alignment)')
46
47
48 class WatershedBEMOutputSpec(TraitedSpec):
49 mesh_files = OutputMultiPath(
50 File(exists=True),
51 desc=('Paths to the output meshes (brain, inner '
52 'skull, outer skull, outer skin)'))
53 brain_surface = File(
54 exists=True,
55 loc='bem/watershed',
56 desc='Brain surface (in Freesurfer format)')
57 inner_skull_surface = File(
58 exists=True,
59 loc='bem/watershed',
60 desc='Inner skull surface (in Freesurfer format)')
61 outer_skull_surface = File(
62 exists=True,
63 loc='bem/watershed',
64 desc='Outer skull surface (in Freesurfer format)')
65 outer_skin_surface = File(
66 exists=True,
67 loc='bem/watershed',
68 desc='Outer skin surface (in Freesurfer format)')
69 fif_file = File(
70 exists=True,
71 loc='bem',
72 altkey='fif',
73 desc='"fif" format file for EEG processing in MNE')
74 cor_files = OutputMultiPath(
75 File(exists=True),
76 loc='bem/watershed/ws',
77 altkey='COR',
78 desc='"COR" format files')
79
80
81 class WatershedBEM(FSCommand):
82 """Uses mne_watershed_bem to get information from dicom directories
83
84 Examples
85 --------
86
87 >>> from nipype.interfaces.mne import WatershedBEM
88 >>> bem = WatershedBEM()
89 >>> bem.inputs.subject_id = 'subj1'
90 >>> bem.inputs.subjects_dir = '.'
91 >>> bem.cmdline
92 'mne_watershed_bem --overwrite --subject subj1 --volume T1'
93 >>> bem.run() # doctest: +SKIP
94
95 """
96
97 _cmd = 'mne_watershed_bem'
98 input_spec = WatershedBEMInputSpec
99 output_spec = WatershedBEMOutputSpec
100 _additional_metadata = ['loc', 'altkey']
101
102 def _get_files(self, path, key, dirval, altkey=None):
103 globsuffix = '*'
104 globprefix = '*'
105 keydir = op.join(path, dirval)
106 if altkey:
107 key = altkey
108 globpattern = op.join(keydir, ''.join((globprefix, key, globsuffix)))
109 return glob.glob(globpattern)
110
111 def _list_outputs(self):
112 outputs = self.output_spec().get()
113 subjects_dir = self.inputs.subjects_dir
114 subject_path = op.join(subjects_dir, self.inputs.subject_id)
115 output_traits = self._outputs()
116 mesh_paths = []
117 for k in list(outputs.keys()):
118 if k != 'mesh_files':
119 val = self._get_files(subject_path, k,
120 output_traits.traits()[k].loc,
121 output_traits.traits()[k].altkey)
122 if val:
123 value_list = simplify_list(val)
124 if isinstance(value_list, list):
125 out_files = []
126 for value in value_list:
127 out_files.append(op.abspath(value))
128 elif isinstance(value_list, (str, bytes)):
129 out_files = op.abspath(value_list)
130 else:
131 raise TypeError
132 outputs[k] = out_files
133 if not k.rfind('surface') == -1:
134 mesh_paths.append(out_files)
135 outputs['mesh_files'] = mesh_paths
136 return outputs
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nipype/interfaces/mne/base.py b/nipype/interfaces/mne/base.py
--- a/nipype/interfaces/mne/base.py
+++ b/nipype/interfaces/mne/base.py
@@ -89,12 +89,12 @@
>>> bem.inputs.subject_id = 'subj1'
>>> bem.inputs.subjects_dir = '.'
>>> bem.cmdline
- 'mne_watershed_bem --overwrite --subject subj1 --volume T1'
+ 'mne watershed_bem --overwrite --subject subj1 --volume T1'
>>> bem.run() # doctest: +SKIP
"""
- _cmd = 'mne_watershed_bem'
+ _cmd = 'mne watershed_bem'
input_spec = WatershedBEMInputSpec
output_spec = WatershedBEMOutputSpec
_additional_metadata = ['loc', 'altkey']
| {"golden_diff": "diff --git a/nipype/interfaces/mne/base.py b/nipype/interfaces/mne/base.py\n--- a/nipype/interfaces/mne/base.py\n+++ b/nipype/interfaces/mne/base.py\n@@ -89,12 +89,12 @@\n >>> bem.inputs.subject_id = 'subj1'\n >>> bem.inputs.subjects_dir = '.'\n >>> bem.cmdline\n- 'mne_watershed_bem --overwrite --subject subj1 --volume T1'\n+ 'mne watershed_bem --overwrite --subject subj1 --volume T1'\n >>> bem.run() \t\t\t\t# doctest: +SKIP\n \n \"\"\"\n \n- _cmd = 'mne_watershed_bem'\n+ _cmd = 'mne watershed_bem'\n input_spec = WatershedBEMInputSpec\n output_spec = WatershedBEMOutputSpec\n _additional_metadata = ['loc', 'altkey']\n", "issue": "mne.WatershedBEM creates incorrect command line\n### Summary\r\nThe mne.WatershedBEM interface `_cmd` do not correspond to the behavior of the stable release of mne [see documentation](https://martinos.org/mne/stable/generated/commands.html#create-bem-surfaces-using-the-watershed-algorithm-included-with-freesurfer)\r\n [This line](https://github.com/nipy/nipype/blob/f79581edc042ed38064f48e85b6dcc38bc30a084/nipype/interfaces/mne/base.py#L97) `_cmd = 'mne_watershed_bem'` should be `_cmd = 'mne watershed_bem'`\nmne.WatershedBEM creates incorrect command line\n### Summary\r\nThe mne.WatershedBEM interface `_cmd` do not correspond to the behavior of the stable release of mne [see documentation](https://martinos.org/mne/stable/generated/commands.html#create-bem-surfaces-using-the-watershed-algorithm-included-with-freesurfer)\r\n [This line](https://github.com/nipy/nipype/blob/f79581edc042ed38064f48e85b6dcc38bc30a084/nipype/interfaces/mne/base.py#L97) `_cmd = 'mne_watershed_bem'` should be `_cmd = 'mne watershed_bem'`\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\nfrom builtins import str, bytes\n\nimport os.path as op\nimport glob\n\nfrom ... 
import logging\nfrom ...utils.filemanip import simplify_list\nfrom ..base import (traits, File, Directory, TraitedSpec, OutputMultiPath)\nfrom ..freesurfer.base import FSCommand, FSTraitedSpec\n\niflogger = logging.getLogger('nipype.interface')\n\n\nclass WatershedBEMInputSpec(FSTraitedSpec):\n subject_id = traits.Str(\n argstr='--subject %s',\n mandatory=True,\n desc='Subject ID (must have a complete Freesurfer directory)')\n subjects_dir = Directory(\n exists=True,\n mandatory=True,\n usedefault=True,\n desc='Path to Freesurfer subjects directory')\n volume = traits.Enum(\n 'T1',\n 'aparc+aseg',\n 'aseg',\n 'brain',\n 'orig',\n 'brainmask',\n 'ribbon',\n argstr='--volume %s',\n usedefault=True,\n desc='The volume from the \"mri\" directory to use (defaults to T1)')\n overwrite = traits.Bool(\n True,\n usedefault=True,\n argstr='--overwrite',\n desc='Overwrites the existing files')\n atlas_mode = traits.Bool(\n argstr='--atlas',\n desc='Use atlas mode for registration (default: no rigid alignment)')\n\n\nclass WatershedBEMOutputSpec(TraitedSpec):\n mesh_files = OutputMultiPath(\n File(exists=True),\n desc=('Paths to the output meshes (brain, inner '\n 'skull, outer skull, outer skin)'))\n brain_surface = File(\n exists=True,\n loc='bem/watershed',\n desc='Brain surface (in Freesurfer format)')\n inner_skull_surface = File(\n exists=True,\n loc='bem/watershed',\n desc='Inner skull surface (in Freesurfer format)')\n outer_skull_surface = File(\n exists=True,\n loc='bem/watershed',\n desc='Outer skull surface (in Freesurfer format)')\n outer_skin_surface = File(\n exists=True,\n loc='bem/watershed',\n desc='Outer skin surface (in Freesurfer format)')\n fif_file = File(\n exists=True,\n loc='bem',\n altkey='fif',\n desc='\"fif\" format file for EEG processing in MNE')\n cor_files = OutputMultiPath(\n File(exists=True),\n loc='bem/watershed/ws',\n altkey='COR',\n desc='\"COR\" format files')\n\n\nclass WatershedBEM(FSCommand):\n \"\"\"Uses mne_watershed_bem to get information from dicom directories\n\n Examples\n --------\n\n >>> from nipype.interfaces.mne import WatershedBEM\n >>> bem = WatershedBEM()\n >>> bem.inputs.subject_id = 'subj1'\n >>> bem.inputs.subjects_dir = '.'\n >>> bem.cmdline\n 'mne_watershed_bem --overwrite --subject subj1 --volume T1'\n >>> bem.run() \t\t\t\t# doctest: +SKIP\n\n \"\"\"\n\n _cmd = 'mne_watershed_bem'\n input_spec = WatershedBEMInputSpec\n output_spec = WatershedBEMOutputSpec\n _additional_metadata = ['loc', 'altkey']\n\n def _get_files(self, path, key, dirval, altkey=None):\n globsuffix = '*'\n globprefix = '*'\n keydir = op.join(path, dirval)\n if altkey:\n key = altkey\n globpattern = op.join(keydir, ''.join((globprefix, key, globsuffix)))\n return glob.glob(globpattern)\n\n def _list_outputs(self):\n outputs = self.output_spec().get()\n subjects_dir = self.inputs.subjects_dir\n subject_path = op.join(subjects_dir, self.inputs.subject_id)\n output_traits = self._outputs()\n mesh_paths = []\n for k in list(outputs.keys()):\n if k != 'mesh_files':\n val = self._get_files(subject_path, k,\n output_traits.traits()[k].loc,\n output_traits.traits()[k].altkey)\n if val:\n value_list = simplify_list(val)\n if isinstance(value_list, list):\n out_files = []\n for value in value_list:\n out_files.append(op.abspath(value))\n elif isinstance(value_list, (str, bytes)):\n out_files = op.abspath(value_list)\n else:\n raise TypeError\n outputs[k] = out_files\n if not k.rfind('surface') == -1:\n mesh_paths.append(out_files)\n outputs['mesh_files'] = mesh_paths\n return 
outputs\n", "path": "nipype/interfaces/mne/base.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\nfrom builtins import str, bytes\n\nimport os.path as op\nimport glob\n\nfrom ... import logging\nfrom ...utils.filemanip import simplify_list\nfrom ..base import (traits, File, Directory, TraitedSpec, OutputMultiPath)\nfrom ..freesurfer.base import FSCommand, FSTraitedSpec\n\niflogger = logging.getLogger('nipype.interface')\n\n\nclass WatershedBEMInputSpec(FSTraitedSpec):\n subject_id = traits.Str(\n argstr='--subject %s',\n mandatory=True,\n desc='Subject ID (must have a complete Freesurfer directory)')\n subjects_dir = Directory(\n exists=True,\n mandatory=True,\n usedefault=True,\n desc='Path to Freesurfer subjects directory')\n volume = traits.Enum(\n 'T1',\n 'aparc+aseg',\n 'aseg',\n 'brain',\n 'orig',\n 'brainmask',\n 'ribbon',\n argstr='--volume %s',\n usedefault=True,\n desc='The volume from the \"mri\" directory to use (defaults to T1)')\n overwrite = traits.Bool(\n True,\n usedefault=True,\n argstr='--overwrite',\n desc='Overwrites the existing files')\n atlas_mode = traits.Bool(\n argstr='--atlas',\n desc='Use atlas mode for registration (default: no rigid alignment)')\n\n\nclass WatershedBEMOutputSpec(TraitedSpec):\n mesh_files = OutputMultiPath(\n File(exists=True),\n desc=('Paths to the output meshes (brain, inner '\n 'skull, outer skull, outer skin)'))\n brain_surface = File(\n exists=True,\n loc='bem/watershed',\n desc='Brain surface (in Freesurfer format)')\n inner_skull_surface = File(\n exists=True,\n loc='bem/watershed',\n desc='Inner skull surface (in Freesurfer format)')\n outer_skull_surface = File(\n exists=True,\n loc='bem/watershed',\n desc='Outer skull surface (in Freesurfer format)')\n outer_skin_surface = File(\n exists=True,\n loc='bem/watershed',\n desc='Outer skin surface (in Freesurfer format)')\n fif_file = File(\n exists=True,\n loc='bem',\n altkey='fif',\n desc='\"fif\" format file for EEG processing in MNE')\n cor_files = OutputMultiPath(\n File(exists=True),\n loc='bem/watershed/ws',\n altkey='COR',\n desc='\"COR\" format files')\n\n\nclass WatershedBEM(FSCommand):\n \"\"\"Uses mne_watershed_bem to get information from dicom directories\n\n Examples\n --------\n\n >>> from nipype.interfaces.mne import WatershedBEM\n >>> bem = WatershedBEM()\n >>> bem.inputs.subject_id = 'subj1'\n >>> bem.inputs.subjects_dir = '.'\n >>> bem.cmdline\n 'mne watershed_bem --overwrite --subject subj1 --volume T1'\n >>> bem.run() \t\t\t\t# doctest: +SKIP\n\n \"\"\"\n\n _cmd = 'mne watershed_bem'\n input_spec = WatershedBEMInputSpec\n output_spec = WatershedBEMOutputSpec\n _additional_metadata = ['loc', 'altkey']\n\n def _get_files(self, path, key, dirval, altkey=None):\n globsuffix = '*'\n globprefix = '*'\n keydir = op.join(path, dirval)\n if altkey:\n key = altkey\n globpattern = op.join(keydir, ''.join((globprefix, key, globsuffix)))\n return glob.glob(globpattern)\n\n def _list_outputs(self):\n outputs = self.output_spec().get()\n subjects_dir = self.inputs.subjects_dir\n subject_path = op.join(subjects_dir, self.inputs.subject_id)\n output_traits = self._outputs()\n mesh_paths = []\n for k in list(outputs.keys()):\n if k != 'mesh_files':\n val = self._get_files(subject_path, k,\n output_traits.traits()[k].loc,\n output_traits.traits()[k].altkey)\n if val:\n value_list = simplify_list(val)\n if isinstance(value_list, list):\n out_files = []\n for value in value_list:\n 
out_files.append(op.abspath(value))\n elif isinstance(value_list, (str, bytes)):\n out_files = op.abspath(value_list)\n else:\n raise TypeError\n outputs[k] = out_files\n if not k.rfind('surface') == -1:\n mesh_paths.append(out_files)\n outputs['mesh_files'] = mesh_paths\n return outputs\n", "path": "nipype/interfaces/mne/base.py"}]} | 1,952 | 202 |
gh_patches_debug_2177 | rasdani/github-patches | git_diff | yt-project__yt-2259 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IndexError when updating from YT-3.4.0 to YT-3.5.1
### Bug report
**Bug summary**
Index error after yt upgrade
**Code for reproduction**
```
import yt
from yt.units import kpc
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(threshold=1500)
filename="/lunarc/nobackup/users/samvad/FINAL-50-0.5/output/output_00018/info_00018.txt"
ds=yt.load(filename)
for i in sorted(ds.derived_field_list):
print(i)
```
**Actual outcome**
```
File "fields.py", line 10, in <module>
for i in sorted(ds.derived_field_list):
File "yt/data_objects/static_output.py", line 216, in ireq
self.index
File "yt/data_objects/static_output.py", line 509, in index
self, dataset_type=self.dataset_type)
File "yt/frontends/ramses/data_structures.py", line 236, in __init__
super(RAMSESIndex, self).__init__(ds, dataset_type)
File "yt/geometry/geometry_handler.py", line 50, in __init__
self._setup_geometry()
File "yt/geometry/oct_geometry_handler.py", line 25, in _setup_geometry
self._initialize_oct_handler()
File "yt/frontends/ramses/data_structures.py", line 245, in _initialize_oct_handler
for i in cpu_list]
File "yt/frontends/ramses/data_structures.py", line 245, in <listcomp>
for i in cpu_list]
File "yt/frontends/ramses/data_structures.py", line 82, in __init__
self._read_amr_header()
File "yt/frontends/ramses/data_structures.py", line 141, in _read_amr_header
hvals.update(f.read_attrs(header))
File "yt/utilities/cython_fortran_utils.pyx", line 223, in yt.utilities.cython_fortran_utils.FortranFile.read_attrs
IndexError: index 0 is out of bounds for axis 0 with size 0
```
**Expected outcome**
It should print the fields in the data; this worked with yt 3.4.0.
**Version Information**
* Operating System: Mac
* Python Version: 3.6
* yt version: 3.5.1
* Other Libraries (if applicable):
Installed Anaconda separately, then installed yt via conda using the 'forge' channel.
--- END ISSUE ---
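For context, the message in the last frame is exactly what NumPy raises when a size-0 array is indexed, which suggests `read_attrs` got back zero values for one of the header fields. A minimal sketch reproducing the message (illustrative only — not the actual `cython_fortran_utils` internals):
```python
import numpy as np

# A size-0 array is what a read past the last Fortran record could yield.
empty = np.empty(0, dtype="i")
empty[0]  # IndexError: index 0 is out of bounds for axis 0 with size 0
```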
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `yt/frontends/ramses/definitions.py`
Content:
```
1 """
2 Definitions for RAMSES files
3
4
5
6
7 """
8
9 #-----------------------------------------------------------------------------
10 # Copyright (c) 2013, yt Development Team.
11 #
12 # Distributed under the terms of the Modified BSD License.
13 #
14 # The full license is in the file COPYING.txt, distributed with this software.
15 #-----------------------------------------------------------------------------
16
17 # These functions are RAMSES-specific
18 from yt.config import ytcfg
19 from yt.funcs import mylog
20 import re
21
22 def ramses_header(hvals):
23 header = ( ('ncpu', 1, 'i'),
24 ('ndim', 1, 'i'),
25 ('nx', 3, 'i'),
26 ('nlevelmax', 1, 'i'),
27 ('ngridmax', 1, 'i'),
28 ('nboundary', 1, 'i'),
29 ('ngrid_current', 1, 'i'),
30 ('boxlen', 1, 'd'),
31 ('nout', 3, 'i')
32 )
33 yield header
34 # TODO: REMOVE
35 noutput, iout, ifout = hvals['nout']
36 next_set = ( ('tout', noutput, 'd'),
37 ('aout', noutput, 'd'),
38 ('t', 1, 'd'),
39 ('dtold', hvals['nlevelmax'], 'd'),
40 ('dtnew', hvals['nlevelmax'], 'd'),
41 ('nstep', 2, 'i'),
42 ('stat', 3, 'd'),
43 ('cosm', 7, 'd'),
44 ('timing', 5, 'd'),
45 ('mass_sph', 1, 'd') )
46 yield next_set
47
48 field_aliases = {
49 'standard_five': ('Density',
50 'x-velocity',
51 'y-velocity',
52 'z-velocity',
53 'Pressure'),
54 'standard_six': ('Density',
55 'x-velocity',
56 'y-velocity',
57 'z-velocity',
58 'Pressure',
59 'Metallicity'),
60
61 }
62
63 ## Regular expressions used to parse file descriptors
64 VERSION_RE = re.compile(r'# version: *(\d+)')
65 # This will match comma-separated strings, discarding whitespaces
66 # on the left hand side
67 VAR_DESC_RE = re.compile(r'\s*([^\s]+),\s*([^\s]+),\s*([^\s]+)')
68
69
70 ## Configure family mapping
71 particle_families = {
72 'DM': 1,
73 'star': 2,
74 'cloud': 3,
75 'dust': 4,
76 'star_tracer': -2,
77 'cloud_tracer': -3,
78 'dust_tracer': -4,
79 'gas_tracer': 0
80 }
81
82 if ytcfg.has_section('ramses-families'):
83 for key in particle_families.keys():
84 val = ytcfg.getint('ramses-families', key, fallback=None)
85 if val is not None:
86 mylog.info('Changing family %s from %s to %s' % (key, particle_families[key], val))
87 particle_families[key] = val
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/yt/frontends/ramses/definitions.py b/yt/frontends/ramses/definitions.py
--- a/yt/frontends/ramses/definitions.py
+++ b/yt/frontends/ramses/definitions.py
@@ -42,7 +42,8 @@
('stat', 3, 'd'),
('cosm', 7, 'd'),
('timing', 5, 'd'),
- ('mass_sph', 1, 'd') )
+ ('mass_sph', 1, 'd', True)
+ )
yield next_set
field_aliases = {
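The patch above extends the `mass_sph` descriptor with a fourth element flagging it as optional, so outputs that lack this trailing record no longer abort the header read. A minimal sketch of how a reader could honor such a flag (hypothetical — the real logic lives in the Cython `FortranFile.read_attrs`):
```python
def read_attrs(read_record, header):
    # Each descriptor is (name, count, dtype) with an optional 4th
    # element marking the field as allowed to be absent.
    values = {}
    for spec in header:
        name, count, dtype = spec[:3]
        optional = len(spec) == 4 and spec[3]
        record = read_record(count, dtype)  # hypothetical low-level reader
        if record.size == 0:
            if optional:
                continue  # e.g. 'mass_sph' missing from some RAMSES outputs
            raise IOError("missing required field %r" % name)
        values[name] = record if count > 1 else record[0]
    return values
```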
| {"golden_diff": "diff --git a/yt/frontends/ramses/definitions.py b/yt/frontends/ramses/definitions.py\n--- a/yt/frontends/ramses/definitions.py\n+++ b/yt/frontends/ramses/definitions.py\n@@ -42,7 +42,8 @@\n ('stat', 3, 'd'),\n ('cosm', 7, 'd'),\n ('timing', 5, 'd'),\n- ('mass_sph', 1, 'd') )\n+ ('mass_sph', 1, 'd', True)\n+ )\n yield next_set\n \n field_aliases = {\n", "issue": "Index Error updating from YT-3.4.0 to YT-3.5.1\n<!--To help us understand and resolve your issue, please fill out the form to\r\nthe best of your ability.-->\r\n<!--You can feel free to delete the sections that do not apply.-->\r\n\r\n### Bug report\r\n\r\n**Bug summary**\r\n\r\nIndex error after yt upgrade \r\n\r\n**Code for reproduction**\r\n\r\n<!--A minimum code snippet required to reproduce the bug, also minimizing the\r\nnumber of dependencies required.-->\r\n\r\n<!-- If you need to use a data file to trigger the issue you're having, consider\r\nusing one of the datasets from the yt data hub (http://yt-project.org/data). If\r\nyour issue cannot be triggered using a public dataset, you can use the yt\r\ncurldrop (https://docs.hub.yt/services.html#curldrop) to share data\r\nfiles. Please include a link to the dataset in the issue if you use the\r\ncurldrop.-->\r\n\r\n```\r\nimport yt\r\nfrom yt.units import kpc\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\nnp.set_printoptions(threshold=1500)\r\nfilename=\"/lunarc/nobackup/users/samvad/FINAL-50-0.5/output/output_00018/info_00018.txt\"\r\nds=yt.load(filename)\r\n\r\nfor i in sorted(ds.derived_field_list):\r\n print(i)\r\n```\r\n\r\n**Actual outcome**\r\n\r\n<!--The output produced by the above code, which may be a screenshot, console\r\noutput, etc.-->\r\n\r\n```\r\nFile \"fields.py\", line 10, in <module>\r\n for i in sorted(ds.derived_field_list):\r\n File \"yt/data_objects/static_output.py\", line 216, in ireq\r\n self.index\r\n File \"yt/data_objects/static_output.py\", line 509, in index\r\n self, dataset_type=self.dataset_type)\r\n File \"yt/frontends/ramses/data_structures.py\", line 236, in __init__\r\n super(RAMSESIndex, self).__init__(ds, dataset_type)\r\n File \"yt/geometry/geometry_handler.py\", line 50, in __init__\r\n self._setup_geometry()\r\n File \"yt/geometry/oct_geometry_handler.py\", line 25, in _setup_geometry\r\n self._initialize_oct_handler()\r\n File \"yt/frontends/ramses/data_structures.py\", line 245, in _initialize_oct_handler\r\n for i in cpu_list]\r\n File \"yt/frontends/ramses/data_structures.py\", line 245, in <listcomp>\r\n for i in cpu_list]\r\n File \"yt/frontends/ramses/data_structures.py\", line 82, in __init__\r\n self._read_amr_header()\r\n File \"yt/frontends/ramses/data_structures.py\", line 141, in _read_amr_header\r\n hvals.update(f.read_attrs(header))\r\n File \"yt/utilities/cython_fortran_utils.pyx\", line 223, in yt.utilities.cython_fortran_utils.FortranFile.read_attrs\r\nIndexError: index 0 is out of bounds for axis 0 with size 0\r\n```\r\n\r\n**Expected outcome**\r\n\r\nhas to print the fields in the data. Was working with yt 3.4.0\r\n\r\n**Version Information**\r\n<!--Please specify your platform and versions of the relevant libraries you are\r\nusing:-->\r\n * Operating System: Mac\r\n * Python Version: 3.6\r\n * yt version: 3.5.1\r\n * Other Libraries (if applicable): \r\n\r\ninstalled Anaconda separately and then did conda installation of YT using 'forge'\r\n<!--Please tell us how you installed yt and python e.g., from source,\r\npip, conda. 
If you installed from conda, please specify which channel you used\r\nif not the default-->\r\n\n", "before_files": [{"content": "\"\"\"\nDefinitions for RAMSES files\n\n\n\n\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (c) 2013, yt Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n# These functions are RAMSES-specific\nfrom yt.config import ytcfg\nfrom yt.funcs import mylog\nimport re\n\ndef ramses_header(hvals):\n header = ( ('ncpu', 1, 'i'),\n ('ndim', 1, 'i'),\n ('nx', 3, 'i'),\n ('nlevelmax', 1, 'i'),\n ('ngridmax', 1, 'i'),\n ('nboundary', 1, 'i'),\n ('ngrid_current', 1, 'i'),\n ('boxlen', 1, 'd'),\n ('nout', 3, 'i')\n )\n yield header\n # TODO: REMOVE\n noutput, iout, ifout = hvals['nout']\n next_set = ( ('tout', noutput, 'd'),\n ('aout', noutput, 'd'),\n ('t', 1, 'd'),\n ('dtold', hvals['nlevelmax'], 'd'),\n ('dtnew', hvals['nlevelmax'], 'd'),\n ('nstep', 2, 'i'),\n ('stat', 3, 'd'),\n ('cosm', 7, 'd'),\n ('timing', 5, 'd'),\n ('mass_sph', 1, 'd') )\n yield next_set\n\nfield_aliases = {\n 'standard_five': ('Density',\n 'x-velocity',\n 'y-velocity',\n 'z-velocity',\n 'Pressure'),\n 'standard_six': ('Density',\n 'x-velocity',\n 'y-velocity',\n 'z-velocity',\n 'Pressure',\n 'Metallicity'),\n\n}\n\n## Regular expressions used to parse file descriptors\nVERSION_RE = re.compile(r'# version: *(\\d+)')\n# This will match comma-separated strings, discarding whitespaces\n# on the left hand side\nVAR_DESC_RE = re.compile(r'\\s*([^\\s]+),\\s*([^\\s]+),\\s*([^\\s]+)')\n\n\n## Configure family mapping\nparticle_families = {\n 'DM': 1,\n 'star': 2,\n 'cloud': 3,\n 'dust': 4,\n 'star_tracer': -2,\n 'cloud_tracer': -3,\n 'dust_tracer': -4,\n 'gas_tracer': 0\n}\n\nif ytcfg.has_section('ramses-families'):\n for key in particle_families.keys():\n val = ytcfg.getint('ramses-families', key, fallback=None)\n if val is not None:\n mylog.info('Changing family %s from %s to %s' % (key, particle_families[key], val))\n particle_families[key] = val\n", "path": "yt/frontends/ramses/definitions.py"}], "after_files": [{"content": "\"\"\"\nDefinitions for RAMSES files\n\n\n\n\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (c) 2013, yt Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n# These functions are RAMSES-specific\nfrom yt.config import ytcfg\nfrom yt.funcs import mylog\nimport re\n\ndef ramses_header(hvals):\n header = ( ('ncpu', 1, 'i'),\n ('ndim', 1, 'i'),\n ('nx', 3, 'i'),\n ('nlevelmax', 1, 'i'),\n ('ngridmax', 1, 'i'),\n ('nboundary', 1, 'i'),\n ('ngrid_current', 1, 'i'),\n ('boxlen', 1, 'd'),\n ('nout', 3, 'i')\n )\n yield header\n # TODO: REMOVE\n noutput, iout, ifout = hvals['nout']\n next_set = ( ('tout', noutput, 'd'),\n ('aout', noutput, 'd'),\n ('t', 1, 'd'),\n ('dtold', hvals['nlevelmax'], 'd'),\n ('dtnew', hvals['nlevelmax'], 'd'),\n ('nstep', 2, 'i'),\n ('stat', 3, 'd'),\n ('cosm', 7, 'd'),\n ('timing', 5, 'd'),\n ('mass_sph', 1, 'd', True)\n )\n yield next_set\n\nfield_aliases = {\n 'standard_five': ('Density',\n 'x-velocity',\n 'y-velocity',\n 'z-velocity',\n 'Pressure'),\n 'standard_six': 
('Density',\n 'x-velocity',\n 'y-velocity',\n 'z-velocity',\n 'Pressure',\n 'Metallicity'),\n\n}\n\n## Regular expressions used to parse file descriptors\nVERSION_RE = re.compile(r'# version: *(\\d+)')\n# This will match comma-separated strings, discarding whitespaces\n# on the left hand side\nVAR_DESC_RE = re.compile(r'\\s*([^\\s]+),\\s*([^\\s]+),\\s*([^\\s]+)')\n\n\n## Configure family mapping\nparticle_families = {\n 'DM': 1,\n 'star': 2,\n 'cloud': 3,\n 'dust': 4,\n 'star_tracer': -2,\n 'cloud_tracer': -3,\n 'dust_tracer': -4,\n 'gas_tracer': 0\n}\n\nif ytcfg.has_section('ramses-families'):\n for key in particle_families.keys():\n val = ytcfg.getint('ramses-families', key, fallback=None)\n if val is not None:\n mylog.info('Changing family %s from %s to %s' % (key, particle_families[key], val))\n particle_families[key] = val\n", "path": "yt/frontends/ramses/definitions.py"}]} | 1,930 | 136 |
gh_patches_debug_28484 | rasdani/github-patches | git_diff | pytorch__TensorRT-2198 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unclear comparisons in conditional operator implementation
https://github.com/pytorch/TensorRT/blob/918e9832207d462c2a3aa42f9e7d3ab7aa7415aa/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py#L68
https://github.com/pytorch/TensorRT/blob/918e9832207d462c2a3aa42f9e7d3ab7aa7415aa/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py#L75-L76
https://github.com/pytorch/TensorRT/blob/918e9832207d462c2a3aa42f9e7d3ab7aa7415aa/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py
The above lines are flagged by mypy as follows:
```sh
py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py:68: error: Non-overlapping equality check (left operand type: "list[Any]", right operand type: "int") [comparison-overlap]
py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py:76: error: Non-overlapping equality check (left operand type: "list[Any]", right operand type: "int") [comparison-overlap]
py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py:98: error: Non-overlapping equality check (left operand type: "list[Any]", right operand type: "int") [comparison-overlap]
```
I can't really figure out what is being checked here, but it is likely a bug.
--- END ISSUE ---
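For the record, mypy's `comparison-overlap` category flags comparisons whose operand types can never be equal: a `list` is never `==` an `int` in Python, so each flagged check is unconditionally true. A quick demonstration with illustrative values:
```python
x_shape = [2, 3, 4]   # list(input.shape) in the converter
input_dim = 3         # len(tuple(input.shape))

# A list never equals an int, so this is True for every input,
# which means the guarded branch is always taken.
print(x_shape != input_dim)  # True
```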
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py`
Content:
```
1 from typing import Optional
2
3 import torch
4 from torch.fx.node import Target
5 from torch_tensorrt.dynamo._SourceIR import SourceIR
6 from torch_tensorrt.dynamo.conversion.converter_utils import broadcastable
7 from torch_tensorrt.dynamo.conversion.impl.slice import expand
8 from torch_tensorrt.fx.converters.converter_utils import (
9 broadcast,
10 get_trt_tensor,
11 set_layer_name,
12 )
13 from torch_tensorrt.fx.types import TRTNetwork, TRTTensor
14
15 import tensorrt as trt
16
17
18 def where(
19 network: TRTNetwork,
20 target: Target,
21 source_ir: Optional[SourceIR],
22 name: str,
23 input: TRTTensor,
24 other: TRTTensor,
25 condition: TRTTensor,
26 ) -> TRTTensor:
27 input_dim = len(tuple(input.shape))
28 other_dim = len(tuple(other.shape))
29 condition_dim = len(tuple(condition.shape))
30
31 if type(input) != TRTTensor:
32 assert type(input) is torch.Tensor, f"value {input} is not torch.Tensor!"
33
34 if type(other) != TRTTensor:
35 assert type(other) is torch.Tensor, f"value {other} is not torch.Tensor!"
36
37 if not (broadcastable(input, other)):
38 assert "The two torch tensors should be broadcastable"
39
40 # get output shape
41 # purpose of this is to bring input and other rank same as
42 # output_shape to input it to the add_expand operation
43 # condition will have dimension of either input or other
44 input, other = broadcast(network, input, other, f"{name}_x", f"{name}_y")
45 if len(tuple(condition.shape)) != len(tuple(input.shape)):
46 condition, input = broadcast(
47 network, condition, input, f"{name}_condition", f"{name}_x"
48 )
49
50 x_shape = list(input.shape)
51 y_shape = list(other.shape)
52 condition_shape = list(condition.shape)
53 output_shape = list(torch.broadcast_shapes(condition_shape, x_shape, y_shape))
54
55 # expand shape
56 if type(condition) != TRTTensor:
57 assert condition.dtype == torch.bool, "condition dtype is not bool"
58 if condition_shape != output_shape:
59 condition.expand(output_shape)
60 condition = condition.to(torch.int32)
61 condition_const = get_trt_tensor(network, condition, f"{name}_condition")
62 condition_layer = network.add_identity(condition_const)
63 condition_layer.set_output_type(0, trt.bool)
64 set_layer_name(condition_layer, target, f"{name}_condition")
65 condition_val = condition_layer.get_output(0)
66 else:
67 assert condition.dtype == trt.bool, "mask dtype is not bool!"
68 if condition_shape != condition_dim: # TODO: What is this checking?
69 condition_val = expand(
70 network, target, source_ir, f"{name}_expand", condition, output_shape
71 )
72 else:
73 condition_val = condition
74
75 if type(input) != TRTTensor:
76 if x_shape != input_dim: # TODO: What is this checking?
77 # special case where 1 element in input
78 if len(input.shape) == 0:
79 input = input.unsqueeze(0)
80 input = input.expand(output_shape)
81 x_val = get_trt_tensor(network, input, f"{name}_x")
82 else:
83 x_val = input
84 if x_shape != output_shape:
85 x_val = expand(
86 network, target, source_ir, f"{name}_x_expand", input, output_shape
87 )
88
89 if type(other) != TRTTensor:
90 if y_shape != output_shape:
91 # special case where 1 element in other
92 if len(other.shape) == 0:
93 other = other.unsqueeze(0)
94 other = other.expand(output_shape)
95 y_val = get_trt_tensor(network, other, f"{name}_y")
96 else:
97 y_val = other
98 if y_shape != other_dim: # TODO: What is this checking?
99 y_val = expand(
100 network, target, source_ir, f"{name}_y_expand", y_val, output_shape
101 )
102
103 select_layer = network.add_select(condition_val, x_val, y_val)
104
105 set_layer_name(select_layer, target, f"{name}_select")
106
107 return select_layer.get_output(0)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py b/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py
--- a/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py
+++ b/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py
@@ -65,7 +65,7 @@
condition_val = condition_layer.get_output(0)
else:
assert condition.dtype == trt.bool, "mask dtype is not bool!"
- if condition_shape != condition_dim: # TODO: What is this checking?
+ if len(condition_shape) != condition_dim:
condition_val = expand(
network, target, source_ir, f"{name}_expand", condition, output_shape
)
@@ -73,7 +73,7 @@
condition_val = condition
if type(input) != TRTTensor:
- if x_shape != input_dim: # TODO: What is this checking?
+ if x_shape != output_shape:
# special case where 1 element in input
if len(input.shape) == 0:
input = input.unsqueeze(0)
@@ -95,7 +95,7 @@
y_val = get_trt_tensor(network, other, f"{name}_y")
else:
y_val = other
- if y_shape != other_dim: # TODO: What is this checking?
+ if y_shape != output_shape:
y_val = expand(
network, target, source_ir, f"{name}_y_expand", y_val, output_shape
)
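After the patch, each predicate compares like with like — the condition check compares rank against rank, and the operand checks compare shape lists against the broadcast output shape. A sketch of the corrected predicates in isolation (values illustrative):
```python
condition_shape = [1, 3, 4]
condition_dim = 3            # rank recorded before broadcasting
output_shape = [2, 3, 4]
x_shape = [2, 1, 4]

needs_condition_expand = len(condition_shape) != condition_dim  # int vs int
needs_x_expand = x_shape != output_shape                        # list vs list
print(needs_condition_expand, needs_x_expand)  # False True
```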
| {"golden_diff": "diff --git a/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py b/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py\n--- a/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py\n+++ b/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py\n@@ -65,7 +65,7 @@\n condition_val = condition_layer.get_output(0)\n else:\n assert condition.dtype == trt.bool, \"mask dtype is not bool!\"\n- if condition_shape != condition_dim: # TODO: What is this checking?\n+ if len(condition_shape) != condition_dim:\n condition_val = expand(\n network, target, source_ir, f\"{name}_expand\", condition, output_shape\n )\n@@ -73,7 +73,7 @@\n condition_val = condition\n \n if type(input) != TRTTensor:\n- if x_shape != input_dim: # TODO: What is this checking?\n+ if x_shape != output_shape:\n # special case where 1 element in input\n if len(input.shape) == 0:\n input = input.unsqueeze(0)\n@@ -95,7 +95,7 @@\n y_val = get_trt_tensor(network, other, f\"{name}_y\")\n else:\n y_val = other\n- if y_shape != other_dim: # TODO: What is this checking?\n+ if y_shape != output_shape:\n y_val = expand(\n network, target, source_ir, f\"{name}_y_expand\", y_val, output_shape\n )\n", "issue": "Unclear comparisons in conditional operator implementation\n<!-- Edit the body of your new issue then click the \u2713 \"Create Issue\" button in the top right of the editor. The first line will be the issue title. Assignees and Labels follow after a blank\nline. Leave an empty line before beginning the body of the issue. -->\n\nhttps://github.com/pytorch/TensorRT/blob/918e9832207d462c2a3aa42f9e7d3ab7aa7415aa/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py#L68\nhttps://github.com/pytorch/TensorRT/blob/918e9832207d462c2a3aa42f9e7d3ab7aa7415aa/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py#L75-L76\nhttps://github.com/pytorch/TensorRT/blob/918e9832207d462c2a3aa42f9e7d3ab7aa7415aa/py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py\n\nThe above lines are being flagged by mypy as the following:\n\n```sh\npy/torch_tensorrt/dynamo/conversion/impl/condition/ops.py:68: error: Non-overlapping equality check (left operand type: \"list[Any]\", right operand type: \"int\") [comparison-overlap]\npy/torch_tensorrt/dynamo/conversion/impl/condition/ops.py:76: error: Non-overlapping equality check (left operand type: \"list[Any]\", right operand type: \"int\") [comparison-overlap]\npy/torch_tensorrt/dynamo/conversion/impl/condition/ops.py:98: error: Non-overlapping equality check (left operand type: \"list[Any]\", right operand type: \"int\") [comparison-overlap]\n```\n\nI cant really figure out what is being checked here but it is likely a bug.\n", "before_files": [{"content": "from typing import Optional\n\nimport torch\nfrom torch.fx.node import Target\nfrom torch_tensorrt.dynamo._SourceIR import SourceIR\nfrom torch_tensorrt.dynamo.conversion.converter_utils import broadcastable\nfrom torch_tensorrt.dynamo.conversion.impl.slice import expand\nfrom torch_tensorrt.fx.converters.converter_utils import (\n broadcast,\n get_trt_tensor,\n set_layer_name,\n)\nfrom torch_tensorrt.fx.types import TRTNetwork, TRTTensor\n\nimport tensorrt as trt\n\n\ndef where(\n network: TRTNetwork,\n target: Target,\n source_ir: Optional[SourceIR],\n name: str,\n input: TRTTensor,\n other: TRTTensor,\n condition: TRTTensor,\n) -> TRTTensor:\n input_dim = len(tuple(input.shape))\n other_dim = len(tuple(other.shape))\n condition_dim = len(tuple(condition.shape))\n\n if type(input) != TRTTensor:\n assert type(input) is 
torch.Tensor, f\"value {input} is not torch.Tensor!\"\n\n if type(other) != TRTTensor:\n assert type(other) is torch.Tensor, f\"value {other} is not torch.Tensor!\"\n\n if not (broadcastable(input, other)):\n assert \"The two torch tensors should be broadcastable\"\n\n # get output shape\n # purpose of this is to bring input and other rank same as\n # output_shape to input it to the add_expand operation\n # condition will have dimension of either input or other\n input, other = broadcast(network, input, other, f\"{name}_x\", f\"{name}_y\")\n if len(tuple(condition.shape)) != len(tuple(input.shape)):\n condition, input = broadcast(\n network, condition, input, f\"{name}_condition\", f\"{name}_x\"\n )\n\n x_shape = list(input.shape)\n y_shape = list(other.shape)\n condition_shape = list(condition.shape)\n output_shape = list(torch.broadcast_shapes(condition_shape, x_shape, y_shape))\n\n # expand shape\n if type(condition) != TRTTensor:\n assert condition.dtype == torch.bool, \"condition dtype is not bool\"\n if condition_shape != output_shape:\n condition.expand(output_shape)\n condition = condition.to(torch.int32)\n condition_const = get_trt_tensor(network, condition, f\"{name}_condition\")\n condition_layer = network.add_identity(condition_const)\n condition_layer.set_output_type(0, trt.bool)\n set_layer_name(condition_layer, target, f\"{name}_condition\")\n condition_val = condition_layer.get_output(0)\n else:\n assert condition.dtype == trt.bool, \"mask dtype is not bool!\"\n if condition_shape != condition_dim: # TODO: What is this checking?\n condition_val = expand(\n network, target, source_ir, f\"{name}_expand\", condition, output_shape\n )\n else:\n condition_val = condition\n\n if type(input) != TRTTensor:\n if x_shape != input_dim: # TODO: What is this checking?\n # special case where 1 element in input\n if len(input.shape) == 0:\n input = input.unsqueeze(0)\n input = input.expand(output_shape)\n x_val = get_trt_tensor(network, input, f\"{name}_x\")\n else:\n x_val = input\n if x_shape != output_shape:\n x_val = expand(\n network, target, source_ir, f\"{name}_x_expand\", input, output_shape\n )\n\n if type(other) != TRTTensor:\n if y_shape != output_shape:\n # special case where 1 element in other\n if len(other.shape) == 0:\n other = other.unsqueeze(0)\n other = other.expand(output_shape)\n y_val = get_trt_tensor(network, other, f\"{name}_y\")\n else:\n y_val = other\n if y_shape != other_dim: # TODO: What is this checking?\n y_val = expand(\n network, target, source_ir, f\"{name}_y_expand\", y_val, output_shape\n )\n\n select_layer = network.add_select(condition_val, x_val, y_val)\n\n set_layer_name(select_layer, target, f\"{name}_select\")\n\n return select_layer.get_output(0)\n", "path": "py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py"}], "after_files": [{"content": "from typing import Optional\n\nimport torch\nfrom torch.fx.node import Target\nfrom torch_tensorrt.dynamo._SourceIR import SourceIR\nfrom torch_tensorrt.dynamo.conversion.converter_utils import broadcastable\nfrom torch_tensorrt.dynamo.conversion.impl.slice import expand\nfrom torch_tensorrt.fx.converters.converter_utils import (\n broadcast,\n get_trt_tensor,\n set_layer_name,\n)\nfrom torch_tensorrt.fx.types import TRTNetwork, TRTTensor\n\nimport tensorrt as trt\n\n\ndef where(\n network: TRTNetwork,\n target: Target,\n source_ir: Optional[SourceIR],\n name: str,\n input: TRTTensor,\n other: TRTTensor,\n condition: TRTTensor,\n) -> TRTTensor:\n input_dim = len(tuple(input.shape))\n other_dim = 
len(tuple(other.shape))\n condition_dim = len(tuple(condition.shape))\n\n if type(input) != TRTTensor:\n assert type(input) is torch.Tensor, f\"value {input} is not torch.Tensor!\"\n\n if type(other) != TRTTensor:\n assert type(other) is torch.Tensor, f\"value {other} is not torch.Tensor!\"\n\n if not (broadcastable(input, other)):\n assert \"The two torch tensors should be broadcastable\"\n\n # get output shape\n # purpose of this is to bring input and other rank same as\n # output_shape to input it to the add_expand operation\n # condition will have dimension of either input or other\n input, other = broadcast(network, input, other, f\"{name}_x\", f\"{name}_y\")\n if len(tuple(condition.shape)) != len(tuple(input.shape)):\n condition, input = broadcast(\n network, condition, input, f\"{name}_condition\", f\"{name}_x\"\n )\n\n x_shape = list(input.shape)\n y_shape = list(other.shape)\n condition_shape = list(condition.shape)\n output_shape = list(torch.broadcast_shapes(condition_shape, x_shape, y_shape))\n\n # expand shape\n if type(condition) != TRTTensor:\n assert condition.dtype == torch.bool, \"condition dtype is not bool\"\n if condition_shape != output_shape:\n condition.expand(output_shape)\n condition = condition.to(torch.int32)\n condition_const = get_trt_tensor(network, condition, f\"{name}_condition\")\n condition_layer = network.add_identity(condition_const)\n condition_layer.set_output_type(0, trt.bool)\n set_layer_name(condition_layer, target, f\"{name}_condition\")\n condition_val = condition_layer.get_output(0)\n else:\n assert condition.dtype == trt.bool, \"mask dtype is not bool!\"\n if len(condition_shape) != condition_dim:\n condition_val = expand(\n network, target, source_ir, f\"{name}_expand\", condition, output_shape\n )\n else:\n condition_val = condition\n\n if type(input) != TRTTensor:\n if x_shape != output_shape:\n # special case where 1 element in input\n if len(input.shape) == 0:\n input = input.unsqueeze(0)\n input = input.expand(output_shape)\n x_val = get_trt_tensor(network, input, f\"{name}_x\")\n else:\n x_val = input\n if x_shape != output_shape:\n x_val = expand(\n network, target, source_ir, f\"{name}_x_expand\", input, output_shape\n )\n\n if type(other) != TRTTensor:\n if y_shape != output_shape:\n # special case where 1 element in other\n if len(other.shape) == 0:\n other = other.unsqueeze(0)\n other = other.expand(output_shape)\n y_val = get_trt_tensor(network, other, f\"{name}_y\")\n else:\n y_val = other\n if y_shape != output_shape:\n y_val = expand(\n network, target, source_ir, f\"{name}_y_expand\", y_val, output_shape\n )\n\n select_layer = network.add_select(condition_val, x_val, y_val)\n\n set_layer_name(select_layer, target, f\"{name}_select\")\n\n return select_layer.get_output(0)\n", "path": "py/torch_tensorrt/dynamo/conversion/impl/condition/ops.py"}]} | 1,858 | 353 |
gh_patches_debug_13787 | rasdani/github-patches | git_diff | saleor__saleor-2851 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Creating empty order draft causes API to explode
### What I'm trying to achieve
To get order draft details from API.
### Steps to reproduce the problem
Execute this query
```
{
orders {
edges {
node {
id
userEmail
}
}
}
}
```
### What I expected to happen
Definitely not to throw an error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/order/types.py`
Content:
```
1 import graphene
2 from graphene import relay
3
4 from ...order import OrderEvents, models
5 from ..account.types import User
6 from ..core.types.common import CountableDjangoObjectType
7 from ..core.types.money import Money, TaxedMoney
8 from decimal import Decimal
9
10 OrderEventsEnum = graphene.Enum.from_enum(OrderEvents)
11
12
13 class OrderEvent(CountableDjangoObjectType):
14 date = graphene.types.datetime.DateTime(
15 description='Date when event happened at in ISO 8601 format.')
16 type = OrderEventsEnum(description='Order event type')
17 user = graphene.Field(
18 User, id=graphene.Argument(graphene.ID),
19 description='User who performed the action.')
20 message = graphene.String(
21 description='Content of a note added to the order.')
22 email = graphene.String(description='Email of the customer')
23 email_type = graphene.String(
24 description='Type of an email sent to the customer')
25 amount = graphene.Float(description='Amount of money.')
26 quantity = graphene.Int(description='Number of items.')
27 composed_id = graphene.String(
28 description='Composed id of the Fulfillment.')
29
30 class Meta:
31 description = 'History log of the order.'
32 model = models.OrderEvent
33 interfaces = [relay.Node]
34 exclude_fields = ['order', 'parameters']
35
36 def resolve_email(self, info):
37 return self.parameters.get('email', None)
38
39 def resolve_email_type(self, info):
40 return self.parameters.get('email_type', None)
41
42 def resolve_amount(self, info):
43 amount = self.parameters.get('amount', None)
44 return Decimal(amount) if amount else None
45
46 def resolve_quantity(self, info):
47 quantity = self.parameters.get('quantity', None)
48 return int(quantity) if quantity else None
49
50 def resolve_message(self, info):
51 return self.parameters.get('message', None)
52
53 def resolve_composed_id(self, info):
54 return self.parameters.get('composed_id', None)
55
56
57 class Fulfillment(CountableDjangoObjectType):
58 status_display = graphene.String(
59 description='User-friendly fulfillment status.')
60
61 class Meta:
62 description = 'Represents order fulfillment.'
63 interfaces = [relay.Node]
64 model = models.Fulfillment
65 exclude_fields = ['order']
66
67 def resolve_status_display(self, info):
68 return self.get_status_display()
69
70
71 class FulfillmentLine(CountableDjangoObjectType):
72 class Meta:
73 description = 'Represents line of the fulfillment.'
74 interfaces = [relay.Node]
75 model = models.FulfillmentLine
76 exclude_fields = ['fulfillment']
77
78
79 class Order(CountableDjangoObjectType):
80 fulfillments = graphene.List(
81 Fulfillment,
82 required=True,
83 description='List of shipments for the order.')
84 is_paid = graphene.Boolean(
85 description='Informs if an order is fully paid.')
86 number = graphene.String(description='User-friendly number of an order.')
87 payment_status = graphene.String(description='Internal payment status.')
88 payment_status_display = graphene.String(
89 description='User-friendly payment status.')
90 subtotal = graphene.Field(
91 TaxedMoney,
92 description='The sum of line prices not including shipping.')
93 status_display = graphene.String(description='User-friendly order status.')
94 total_authorized = graphene.Field(
95 Money, description='Amount authorized for the order.')
96 total_captured = graphene.Field(
97 Money, description='Amount captured by payment.')
98 events = graphene.List(
99 OrderEvent,
100 description='List of events associated with the order.')
101
102 class Meta:
103 description = 'Represents an order in the shop.'
104 interfaces = [relay.Node]
105 model = models.Order
106 exclude_fields = [
107 'shipping_price_gross', 'shipping_price_net', 'total_gross',
108 'total_net']
109
110 @staticmethod
111 def resolve_subtotal(obj, info):
112 return obj.get_subtotal()
113
114 @staticmethod
115 def resolve_total_authorized(obj, info):
116 payment = obj.get_last_payment()
117 if payment:
118 return payment.get_total_price().gross
119
120 @staticmethod
121 def resolve_total_captured(obj, info):
122 payment = obj.get_last_payment()
123 if payment:
124 return payment.get_captured_price()
125
126 @staticmethod
127 def resolve_fulfillments(obj, info):
128 return obj.fulfillments.all()
129
130 @staticmethod
131 def resolve_events(obj, info):
132 return obj.events.all()
133
134 @staticmethod
135 def resolve_is_paid(obj, info):
136 return obj.is_fully_paid()
137
138 @staticmethod
139 def resolve_number(obj, info):
140 return str(obj.pk)
141
142 @staticmethod
143 def resolve_payment_status(obj, info):
144 return obj.get_last_payment_status()
145
146 @staticmethod
147 def resolve_payment_status_display(obj, info):
148 return obj.get_last_payment_status_display()
149
150 @staticmethod
151 def resolve_status_display(obj, info):
152 return obj.get_status_display()
153
154 @staticmethod
155 def resolve_user_email(obj, info):
156 if obj.user_email:
157 return obj.user_email
158 if obj.user_id:
159 return obj.user.email
160
161
162 class OrderLine(CountableDjangoObjectType):
163 class Meta:
164 description = 'Represents order line of particular order.'
165 model = models.OrderLine
166 interfaces = [relay.Node]
167 exclude_fields = [
168 'order', 'unit_price_gross', 'unit_price_net', 'variant']
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/graphql/order/types.py b/saleor/graphql/order/types.py
--- a/saleor/graphql/order/types.py
+++ b/saleor/graphql/order/types.py
@@ -98,6 +98,8 @@
events = graphene.List(
OrderEvent,
description='List of events associated with the order.')
+ user_email = graphene.String(
+ required=False, description='Email address of the customer.')
class Meta:
description = 'Represents an order in the shop.'
@@ -157,6 +159,7 @@
return obj.user_email
if obj.user_id:
return obj.user.email
+ return None
class OrderLine(CountableDjangoObjectType):
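The patch declares `user_email` as an explicitly nullable `graphene.String` and returns `None` when neither the stored email nor a related user is present; previously the field was presumably auto-mapped from the Django model as non-null, so a draft order without a customer made the resolver's implicit `None` invalid. A minimal sketch of the nullable-field pattern in standalone graphene (not the full saleor schema):
```python
import graphene

class Order(graphene.ObjectType):
    # required=False mirrors the patch: the field may legitimately be null.
    user_email = graphene.String(required=False)

    @staticmethod
    def resolve_user_email(obj, info):
        if obj.user_email:
            return obj.user_email
        if obj.user_id:
            return obj.user.email
        return None  # explicit and valid for a nullable field
```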
| {"golden_diff": "diff --git a/saleor/graphql/order/types.py b/saleor/graphql/order/types.py\n--- a/saleor/graphql/order/types.py\n+++ b/saleor/graphql/order/types.py\n@@ -98,6 +98,8 @@\n events = graphene.List(\n OrderEvent,\n description='List of events associated with the order.')\n+ user_email = graphene.String(\n+ required=False, description='Email address of the customer.')\n \n class Meta:\n description = 'Represents an order in the shop.'\n@@ -157,6 +159,7 @@\n return obj.user_email\n if obj.user_id:\n return obj.user.email\n+ return None\n \n \n class OrderLine(CountableDjangoObjectType):\n", "issue": "Creating empty order draft causes API to explode\n### What I'm trying to achieve\r\nTo get order draft details from API.\r\n\r\n### Steps to reproduce the problem\r\nExecute this query\r\n```\r\n{\r\n orders {\r\n edges {\r\n node {\r\n id\r\n userEmail\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n### What I expected to happen\r\nDefinitely not to throw an error.\n", "before_files": [{"content": "import graphene\nfrom graphene import relay\n\nfrom ...order import OrderEvents, models\nfrom ..account.types import User\nfrom ..core.types.common import CountableDjangoObjectType\nfrom ..core.types.money import Money, TaxedMoney\nfrom decimal import Decimal\n\nOrderEventsEnum = graphene.Enum.from_enum(OrderEvents)\n\n\nclass OrderEvent(CountableDjangoObjectType):\n date = graphene.types.datetime.DateTime(\n description='Date when event happened at in ISO 8601 format.')\n type = OrderEventsEnum(description='Order event type')\n user = graphene.Field(\n User, id=graphene.Argument(graphene.ID),\n description='User who performed the action.')\n message = graphene.String(\n description='Content of a note added to the order.')\n email = graphene.String(description='Email of the customer')\n email_type = graphene.String(\n description='Type of an email sent to the customer')\n amount = graphene.Float(description='Amount of money.')\n quantity = graphene.Int(description='Number of items.')\n composed_id = graphene.String(\n description='Composed id of the Fulfillment.')\n\n class Meta:\n description = 'History log of the order.'\n model = models.OrderEvent\n interfaces = [relay.Node]\n exclude_fields = ['order', 'parameters']\n\n def resolve_email(self, info):\n return self.parameters.get('email', None)\n\n def resolve_email_type(self, info):\n return self.parameters.get('email_type', None)\n\n def resolve_amount(self, info):\n amount = self.parameters.get('amount', None)\n return Decimal(amount) if amount else None\n\n def resolve_quantity(self, info):\n quantity = self.parameters.get('quantity', None)\n return int(quantity) if quantity else None\n\n def resolve_message(self, info):\n return self.parameters.get('message', None)\n\n def resolve_composed_id(self, info):\n return self.parameters.get('composed_id', None)\n\n\nclass Fulfillment(CountableDjangoObjectType):\n status_display = graphene.String(\n description='User-friendly fulfillment status.')\n\n class Meta:\n description = 'Represents order fulfillment.'\n interfaces = [relay.Node]\n model = models.Fulfillment\n exclude_fields = ['order']\n\n def resolve_status_display(self, info):\n return self.get_status_display()\n\n\nclass FulfillmentLine(CountableDjangoObjectType):\n class Meta:\n description = 'Represents line of the fulfillment.'\n interfaces = [relay.Node]\n model = models.FulfillmentLine\n exclude_fields = ['fulfillment']\n\n\nclass Order(CountableDjangoObjectType):\n fulfillments = graphene.List(\n Fulfillment,\n required=True,\n 
description='List of shipments for the order.')\n is_paid = graphene.Boolean(\n description='Informs if an order is fully paid.')\n number = graphene.String(description='User-friendly number of an order.')\n payment_status = graphene.String(description='Internal payment status.')\n payment_status_display = graphene.String(\n description='User-friendly payment status.')\n subtotal = graphene.Field(\n TaxedMoney,\n description='The sum of line prices not including shipping.')\n status_display = graphene.String(description='User-friendly order status.')\n total_authorized = graphene.Field(\n Money, description='Amount authorized for the order.')\n total_captured = graphene.Field(\n Money, description='Amount captured by payment.')\n events = graphene.List(\n OrderEvent,\n description='List of events associated with the order.')\n\n class Meta:\n description = 'Represents an order in the shop.'\n interfaces = [relay.Node]\n model = models.Order\n exclude_fields = [\n 'shipping_price_gross', 'shipping_price_net', 'total_gross',\n 'total_net']\n\n @staticmethod\n def resolve_subtotal(obj, info):\n return obj.get_subtotal()\n\n @staticmethod\n def resolve_total_authorized(obj, info):\n payment = obj.get_last_payment()\n if payment:\n return payment.get_total_price().gross\n\n @staticmethod\n def resolve_total_captured(obj, info):\n payment = obj.get_last_payment()\n if payment:\n return payment.get_captured_price()\n\n @staticmethod\n def resolve_fulfillments(obj, info):\n return obj.fulfillments.all()\n\n @staticmethod\n def resolve_events(obj, info):\n return obj.events.all()\n\n @staticmethod\n def resolve_is_paid(obj, info):\n return obj.is_fully_paid()\n\n @staticmethod\n def resolve_number(obj, info):\n return str(obj.pk)\n\n @staticmethod\n def resolve_payment_status(obj, info):\n return obj.get_last_payment_status()\n\n @staticmethod\n def resolve_payment_status_display(obj, info):\n return obj.get_last_payment_status_display()\n\n @staticmethod\n def resolve_status_display(obj, info):\n return obj.get_status_display()\n\n @staticmethod\n def resolve_user_email(obj, info):\n if obj.user_email:\n return obj.user_email\n if obj.user_id:\n return obj.user.email\n\n\nclass OrderLine(CountableDjangoObjectType):\n class Meta:\n description = 'Represents order line of particular order.'\n model = models.OrderLine\n interfaces = [relay.Node]\n exclude_fields = [\n 'order', 'unit_price_gross', 'unit_price_net', 'variant']\n", "path": "saleor/graphql/order/types.py"}], "after_files": [{"content": "import graphene\nfrom graphene import relay\n\nfrom ...order import OrderEvents, models\nfrom ..account.types import User\nfrom ..core.types.common import CountableDjangoObjectType\nfrom ..core.types.money import Money, TaxedMoney\nfrom decimal import Decimal\n\nOrderEventsEnum = graphene.Enum.from_enum(OrderEvents)\n\n\nclass OrderEvent(CountableDjangoObjectType):\n date = graphene.types.datetime.DateTime(\n description='Date when event happened at in ISO 8601 format.')\n type = OrderEventsEnum(description='Order event type')\n user = graphene.Field(\n User, id=graphene.Argument(graphene.ID),\n description='User who performed the action.')\n message = graphene.String(\n description='Content of a note added to the order.')\n email = graphene.String(description='Email of the customer')\n email_type = graphene.String(\n description='Type of an email sent to the customer')\n amount = graphene.Float(description='Amount of money.')\n quantity = graphene.Int(description='Number of items.')\n composed_id = 
graphene.String(\n description='Composed id of the Fulfillment.')\n\n class Meta:\n description = 'History log of the order.'\n model = models.OrderEvent\n interfaces = [relay.Node]\n exclude_fields = ['order', 'parameters']\n\n def resolve_email(self, info):\n return self.parameters.get('email', None)\n\n def resolve_email_type(self, info):\n return self.parameters.get('email_type', None)\n\n def resolve_amount(self, info):\n amount = self.parameters.get('amount', None)\n return Decimal(amount) if amount else None\n\n def resolve_quantity(self, info):\n quantity = self.parameters.get('quantity', None)\n return int(quantity) if quantity else None\n\n def resolve_message(self, info):\n return self.parameters.get('message', None)\n\n def resolve_composed_id(self, info):\n return self.parameters.get('composed_id', None)\n\n\nclass Fulfillment(CountableDjangoObjectType):\n status_display = graphene.String(\n description='User-friendly fulfillment status.')\n\n class Meta:\n description = 'Represents order fulfillment.'\n interfaces = [relay.Node]\n model = models.Fulfillment\n exclude_fields = ['order']\n\n def resolve_status_display(self, info):\n return self.get_status_display()\n\n\nclass FulfillmentLine(CountableDjangoObjectType):\n class Meta:\n description = 'Represents line of the fulfillment.'\n interfaces = [relay.Node]\n model = models.FulfillmentLine\n exclude_fields = ['fulfillment']\n\n\nclass Order(CountableDjangoObjectType):\n fulfillments = graphene.List(\n Fulfillment,\n required=True,\n description='List of shipments for the order.')\n is_paid = graphene.Boolean(\n description='Informs if an order is fully paid.')\n number = graphene.String(description='User-friendly number of an order.')\n payment_status = graphene.String(description='Internal payment status.')\n payment_status_display = graphene.String(\n description='User-friendly payment status.')\n subtotal = graphene.Field(\n TaxedMoney,\n description='The sum of line prices not including shipping.')\n status_display = graphene.String(description='User-friendly order status.')\n total_authorized = graphene.Field(\n Money, description='Amount authorized for the order.')\n total_captured = graphene.Field(\n Money, description='Amount captured by payment.')\n events = graphene.List(\n OrderEvent,\n description='List of events associated with the order.')\n user_email = graphene.String(\n required=False, description='Email address of the customer.')\n\n class Meta:\n description = 'Represents an order in the shop.'\n interfaces = [relay.Node]\n model = models.Order\n exclude_fields = [\n 'shipping_price_gross', 'shipping_price_net', 'total_gross',\n 'total_net']\n\n @staticmethod\n def resolve_subtotal(obj, info):\n return obj.get_subtotal()\n\n @staticmethod\n def resolve_total_authorized(obj, info):\n payment = obj.get_last_payment()\n if payment:\n return payment.get_total_price().gross\n\n @staticmethod\n def resolve_total_captured(obj, info):\n payment = obj.get_last_payment()\n if payment:\n return payment.get_captured_price()\n\n @staticmethod\n def resolve_fulfillments(obj, info):\n return obj.fulfillments.all()\n\n @staticmethod\n def resolve_events(obj, info):\n return obj.events.all()\n\n @staticmethod\n def resolve_is_paid(obj, info):\n return obj.is_fully_paid()\n\n @staticmethod\n def resolve_number(obj, info):\n return str(obj.pk)\n\n @staticmethod\n def resolve_payment_status(obj, info):\n return obj.get_last_payment_status()\n\n @staticmethod\n def resolve_payment_status_display(obj, info):\n return 
obj.get_last_payment_status_display()\n\n @staticmethod\n def resolve_status_display(obj, info):\n return obj.get_status_display()\n\n @staticmethod\n def resolve_user_email(obj, info):\n if obj.user_email:\n return obj.user_email\n if obj.user_id:\n return obj.user.email\n return None\n\n\nclass OrderLine(CountableDjangoObjectType):\n class Meta:\n description = 'Represents order line of particular order.'\n model = models.OrderLine\n interfaces = [relay.Node]\n exclude_fields = [\n 'order', 'unit_price_gross', 'unit_price_net', 'variant']\n", "path": "saleor/graphql/order/types.py"}]} | 1,878 | 159 |
gh_patches_debug_61112 | rasdani/github-patches | git_diff | pre-commit__pre-commit-933 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pre-commit autoupdate fails when config is empty
Running `pre-commit autoupdate` with an empty `.pre-commit-config.yaml` results in the below error:
```
An unexpected error has occurred: IndexError: list index out of range
Traceback (most recent call last):
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/error_handler.py", line 46, in error_handler
yield
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/main.py", line 286, in main
repos=args.repos,
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/autoupdate.py", line 117, in autoupdate
migrate_config(config_file, quiet=True)
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/migrate_config.py", line 52, in migrate_config
contents = _migrate_map(contents)
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/migrate_config.py", line 24, in _migrate_map
while _is_header_line(lines[i]):
IndexError: list index out of range
```
--- END ISSUE ---
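The crash follows directly from how an empty file splits: `''.splitlines(True)` is `[]`, so the very first `lines[i]` access in the header loop fails. A two-line demonstration:
```python
contents = ""                       # an empty .pre-commit-config.yaml
lines = contents.splitlines(True)
print(lines)                        # []
lines[0]                            # IndexError: list index out of range
```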
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/commands/migrate_config.py`
Content:
```
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import io
5 import re
6
7 import yaml
8 from aspy.yaml import ordered_load
9
10
11 def _indent(s):
12 lines = s.splitlines(True)
13 return ''.join(' ' * 4 + line if line.strip() else line for line in lines)
14
15
16 def _is_header_line(line):
17 return (line.startswith(('#', '---')) or not line.strip())
18
19
20 def _migrate_map(contents):
21 # Find the first non-header line
22 lines = contents.splitlines(True)
23 i = 0
24 while _is_header_line(lines[i]):
25 i += 1
26
27 header = ''.join(lines[:i])
28 rest = ''.join(lines[i:])
29
30 if isinstance(ordered_load(contents), list):
31 # If they are using the "default" flow style of yaml, this operation
32 # will yield a valid configuration
33 try:
34 trial_contents = header + 'repos:\n' + rest
35 ordered_load(trial_contents)
36 contents = trial_contents
37 except yaml.YAMLError:
38 contents = header + 'repos:\n' + _indent(rest)
39
40 return contents
41
42
43 def _migrate_sha_to_rev(contents):
44 reg = re.compile(r'(\n\s+)sha:')
45 return reg.sub(r'\1rev:', contents)
46
47
48 def migrate_config(config_file, quiet=False):
49 with io.open(config_file) as f:
50 orig_contents = contents = f.read()
51
52 contents = _migrate_map(contents)
53 contents = _migrate_sha_to_rev(contents)
54
55 if contents != orig_contents:
56 with io.open(config_file, 'w') as f:
57 f.write(contents)
58
59 print('Configuration has been migrated.')
60 elif not quiet:
61 print('Configuration is already migrated.')
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/commands/migrate_config.py b/pre_commit/commands/migrate_config.py
--- a/pre_commit/commands/migrate_config.py
+++ b/pre_commit/commands/migrate_config.py
@@ -21,7 +21,8 @@
# Find the first non-header line
lines = contents.splitlines(True)
i = 0
- while _is_header_line(lines[i]):
+ # Only loop on non empty configuration file
+ while i < len(lines) and _is_header_line(lines[i]):
i += 1
header = ''.join(lines[:i])
| {"golden_diff": "diff --git a/pre_commit/commands/migrate_config.py b/pre_commit/commands/migrate_config.py\n--- a/pre_commit/commands/migrate_config.py\n+++ b/pre_commit/commands/migrate_config.py\n@@ -21,7 +21,8 @@\n # Find the first non-header line\n lines = contents.splitlines(True)\n i = 0\n- while _is_header_line(lines[i]):\n+ # Only loop on non empty configuration file\n+ while i < len(lines) and _is_header_line(lines[i]):\n i += 1\n \n header = ''.join(lines[:i])\n", "issue": "pre-commit autoupdate fails when config is empty\nRunning `pre-commit autoupdate` with an empty `.pre-commit-config.yaml` results in the below error:\r\n```An unexpected error has occurred: IndexError: list index out of range\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/error_handler.py\", line 46, in error_handler\r\n yield\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/main.py\", line 286, in main\r\n repos=args.repos,\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/autoupdate.py\", line 117, in autoupdate\r\n migrate_config(config_file, quiet=True)\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/migrate_config.py\", line 52, in migrate_config\r\n contents = _migrate_map(contents)\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/migrate_config.py\", line 24, in _migrate_map\r\n while _is_header_line(lines[i]):\r\nIndexError: list index out of range\r\n```\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport re\n\nimport yaml\nfrom aspy.yaml import ordered_load\n\n\ndef _indent(s):\n lines = s.splitlines(True)\n return ''.join(' ' * 4 + line if line.strip() else line for line in lines)\n\n\ndef _is_header_line(line):\n return (line.startswith(('#', '---')) or not line.strip())\n\n\ndef _migrate_map(contents):\n # Find the first non-header line\n lines = contents.splitlines(True)\n i = 0\n while _is_header_line(lines[i]):\n i += 1\n\n header = ''.join(lines[:i])\n rest = ''.join(lines[i:])\n\n if isinstance(ordered_load(contents), list):\n # If they are using the \"default\" flow style of yaml, this operation\n # will yield a valid configuration\n try:\n trial_contents = header + 'repos:\\n' + rest\n ordered_load(trial_contents)\n contents = trial_contents\n except yaml.YAMLError:\n contents = header + 'repos:\\n' + _indent(rest)\n\n return contents\n\n\ndef _migrate_sha_to_rev(contents):\n reg = re.compile(r'(\\n\\s+)sha:')\n return reg.sub(r'\\1rev:', contents)\n\n\ndef migrate_config(config_file, quiet=False):\n with io.open(config_file) as f:\n orig_contents = contents = f.read()\n\n contents = _migrate_map(contents)\n contents = _migrate_sha_to_rev(contents)\n\n if contents != orig_contents:\n with io.open(config_file, 'w') as f:\n f.write(contents)\n\n print('Configuration has been migrated.')\n elif not quiet:\n print('Configuration is already migrated.')\n", "path": "pre_commit/commands/migrate_config.py"}], "after_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport re\n\nimport yaml\nfrom aspy.yaml import ordered_load\n\n\ndef _indent(s):\n lines = s.splitlines(True)\n return ''.join(' ' * 4 + line if line.strip() else line for line in lines)\n\n\ndef 
_is_header_line(line):\n return (line.startswith(('#', '---')) or not line.strip())\n\n\ndef _migrate_map(contents):\n # Find the first non-header line\n lines = contents.splitlines(True)\n i = 0\n # Only loop on non empty configuration file\n while i < len(lines) and _is_header_line(lines[i]):\n i += 1\n\n header = ''.join(lines[:i])\n rest = ''.join(lines[i:])\n\n if isinstance(ordered_load(contents), list):\n # If they are using the \"default\" flow style of yaml, this operation\n # will yield a valid configuration\n try:\n trial_contents = header + 'repos:\\n' + rest\n ordered_load(trial_contents)\n contents = trial_contents\n except yaml.YAMLError:\n contents = header + 'repos:\\n' + _indent(rest)\n\n return contents\n\n\ndef _migrate_sha_to_rev(contents):\n reg = re.compile(r'(\\n\\s+)sha:')\n return reg.sub(r'\\1rev:', contents)\n\n\ndef migrate_config(config_file, quiet=False):\n with io.open(config_file) as f:\n orig_contents = contents = f.read()\n\n contents = _migrate_map(contents)\n contents = _migrate_sha_to_rev(contents)\n\n if contents != orig_contents:\n with io.open(config_file, 'w') as f:\n f.write(contents)\n\n print('Configuration has been migrated.')\n elif not quiet:\n print('Configuration is already migrated.')\n", "path": "pre_commit/commands/migrate_config.py"}]} | 1,089 | 131 |
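The fix above is a one-line bounds check on the header scan. A minimal, self-contained sketch of the patched loop, with `_find_header_end` as a hypothetical helper name extracted for testing:

```python
def _is_header_line(line):
    return line.startswith(('#', '---')) or not line.strip()

def _find_header_end(contents):
    # Scan past leading comments, '---' markers, and blank lines; the
    # `i < len(lines)` guard is what prevents the IndexError when the
    # configuration file is empty.
    lines = contents.splitlines(True)
    i = 0
    while i < len(lines) and _is_header_line(lines[i]):
        i += 1
    return i

assert _find_header_end("") == 0                      # empty config: no crash
assert _find_header_end("# header\nrepos: []\n") == 1
```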
gh_patches_debug_18466 | rasdani/github-patches | git_diff | vyperlang__vyper-2399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't import interface using structs
### Version Information
* vyper Version (output of `vyper --version`): 0.2.12+commit.2c6842c
* OS: linux
* Python Version (output of `python --version`): 3.8.5
* Environment (output of `pip freeze`):
```
asttokens==2.0.4
pycryptodome==3.10.1
semantic-version==2.8.5
six==1.15.0
vyper==0.2.12
```
### What's your issue about?
Can't import an interface if it uses structs. Simple example:
foo.vy:
```
struct Widget:
name: String[8]
count: uint256
widget: Widget
@external
def show() -> (String[8], uint256):
return (self.widget.name, self.widget.count)
@external
def __init__():
self.widget = Widget({
name: "thing",
count: 1
})
```
bar.vy
```
import foo as Foo
@external
def __init__():
pass
```
Throw both in the same dir.
`vyper foo.vy` results in a successful compilation
`vyper bar.vy` results in:
```
Error compiling: bar.vy
vyper.exceptions.InvalidType: Invalid base type: Widget
contract "Foo", line 5:8
4
---> 5 widget: Widget
---------------^
6
```
### How can it be fixed?
Haven't spent time fixing yet
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/ast/signatures/interface.py`
Content:
```
1 # TODO does this module not get imported?
2
3 import importlib
4 import pkgutil
5
6 import vyper.builtin_interfaces
7 from vyper import ast as vy_ast
8 from vyper.ast.signatures import sig_utils
9 from vyper.ast.signatures.function_signature import FunctionSignature
10 from vyper.exceptions import StructureException
11 from vyper.old_codegen.global_context import GlobalContext
12
13
14 # Populate built-in interfaces.
15 def get_builtin_interfaces():
16 interface_names = [x.name for x in pkgutil.iter_modules(vyper.builtin_interfaces.__path__)]
17 return {
18 name: extract_sigs(
19 {
20 "type": "vyper",
21 "code": importlib.import_module(f"vyper.builtin_interfaces.{name}",).interface_code,
22 },
23 name,
24 )
25 for name in interface_names
26 }
27
28
29 def abi_type_to_ast(atype, expected_size):
30 if atype in ("int128", "uint256", "bool", "address", "bytes32"):
31 return vy_ast.Name(id=atype)
32 elif atype == "fixed168x10":
33 return vy_ast.Name(id="decimal")
34 elif atype in ("bytes", "string"):
35 # expected_size is the maximum length for inputs, minimum length for outputs
36 return vy_ast.Subscript(
37 value=vy_ast.Name(id=atype.capitalize()),
38 slice=vy_ast.Index(value=vy_ast.Int(value=expected_size)),
39 )
40 else:
41 raise StructureException(f"Type {atype} not supported by vyper.")
42
43
44 # Vyper defines a maximum length for bytes and string types, but Solidity does not.
45 # To maximize interoperability, we internally considers these types to have a
46 # a length of 1Mb (1024 * 1024 * 1 byte) for inputs, and 1 for outputs.
47 # Ths approach solves the issue because Vyper allows for an implicit casting
48 # from a lower length into a higher one. (@iamdefinitelyahuman)
49 def mk_full_signature_from_json(abi):
50 funcs = [func for func in abi if func["type"] == "function"]
51 sigs = []
52
53 for func in funcs:
54 args = []
55 returns = None
56 for a in func["inputs"]:
57 arg = vy_ast.arg(
58 arg=a["name"],
59 annotation=abi_type_to_ast(a["type"], 1048576),
60 lineno=0,
61 col_offset=0,
62 )
63 args.append(arg)
64
65 if len(func["outputs"]) == 1:
66 returns = abi_type_to_ast(func["outputs"][0]["type"], 1)
67 elif len(func["outputs"]) > 1:
68 returns = vy_ast.Tuple(
69 elements=[abi_type_to_ast(a["type"], 1) for a in func["outputs"]]
70 )
71
72 decorator_list = [vy_ast.Name(id="external")]
73 # Handle either constant/payable or stateMutability field
74 if ("constant" in func and func["constant"]) or (
75 "stateMutability" in func and func["stateMutability"] == "view"
76 ):
77 decorator_list.append(vy_ast.Name(id="view"))
78 if ("payable" in func and func["payable"]) or (
79 "stateMutability" in func and func["stateMutability"] == "payable"
80 ):
81 decorator_list.append(vy_ast.Name(id="payable"))
82
83 sig = FunctionSignature.from_definition(
84 code=vy_ast.FunctionDef(
85 name=func["name"],
86 args=vy_ast.arguments(args=args),
87 decorator_list=decorator_list,
88 returns=returns,
89 ),
90 custom_structs=dict(),
91 is_from_json=True,
92 )
93 sigs.append(sig)
94 return sigs
95
96
97 def extract_sigs(sig_code, interface_name=None):
98 if sig_code["type"] == "vyper":
99 interface_ast = [
100 i
101 for i in vy_ast.parse_to_ast(sig_code["code"], contract_name=interface_name)
102 if isinstance(i, vy_ast.FunctionDef)
103 or isinstance(i, vy_ast.EventDef)
104 or (isinstance(i, vy_ast.AnnAssign) and i.target.id != "implements")
105 ]
106 global_ctx = GlobalContext.get_global_context(interface_ast)
107 return sig_utils.mk_full_signature(global_ctx, sig_formatter=lambda x: x)
108 elif sig_code["type"] == "json":
109 return mk_full_signature_from_json(sig_code["code"])
110 else:
111 raise Exception(
112 (
113 f"Unknown interface signature type '{sig_code['type']}' supplied. "
114 "'vyper' & 'json' are supported"
115 )
116 )
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vyper/ast/signatures/interface.py b/vyper/ast/signatures/interface.py
--- a/vyper/ast/signatures/interface.py
+++ b/vyper/ast/signatures/interface.py
@@ -99,8 +99,20 @@
interface_ast = [
i
for i in vy_ast.parse_to_ast(sig_code["code"], contract_name=interface_name)
- if isinstance(i, vy_ast.FunctionDef)
- or isinstance(i, vy_ast.EventDef)
+ # all the nodes visited by ModuleNodeVisitor.
+ if isinstance(
+ i,
+ (
+ vy_ast.FunctionDef,
+ vy_ast.EventDef,
+ vy_ast.StructDef,
+ vy_ast.InterfaceDef,
+ # parsing import statements at this stage
+ # causes issues with recursive imports
+ # vy_ast.Import,
+ # vy_ast.ImportFrom,
+ ),
+ )
or (isinstance(i, vy_ast.AnnAssign) and i.target.id != "implements")
]
global_ctx = GlobalContext.get_global_context(interface_ast)
| {"golden_diff": "diff --git a/vyper/ast/signatures/interface.py b/vyper/ast/signatures/interface.py\n--- a/vyper/ast/signatures/interface.py\n+++ b/vyper/ast/signatures/interface.py\n@@ -99,8 +99,20 @@\n interface_ast = [\n i\n for i in vy_ast.parse_to_ast(sig_code[\"code\"], contract_name=interface_name)\n- if isinstance(i, vy_ast.FunctionDef)\n- or isinstance(i, vy_ast.EventDef)\n+ # all the nodes visited by ModuleNodeVisitor.\n+ if isinstance(\n+ i,\n+ (\n+ vy_ast.FunctionDef,\n+ vy_ast.EventDef,\n+ vy_ast.StructDef,\n+ vy_ast.InterfaceDef,\n+ # parsing import statements at this stage\n+ # causes issues with recursive imports\n+ # vy_ast.Import,\n+ # vy_ast.ImportFrom,\n+ ),\n+ )\n or (isinstance(i, vy_ast.AnnAssign) and i.target.id != \"implements\")\n ]\n global_ctx = GlobalContext.get_global_context(interface_ast)\n", "issue": "Can't import interface using structs\n### Version Information\r\n\r\n* vyper Version (output of `vyper --version`): 0.2.12+commit.2c6842c\r\n* OS: linux\r\n* Python Version (output of `python --version`): 3.8.5\r\n* Environment (output of `pip freeze`):\r\n```\r\nasttokens==2.0.4\r\npycryptodome==3.10.1\r\nsemantic-version==2.8.5\r\nsix==1.15.0\r\nvyper==0.2.12\r\n```\r\n\r\n### What's your issue about?\r\n\r\nCan't import an interface if it uses structs. Simple example:\r\n\r\nfoo.vy:\r\n```\r\nstruct Widget:\r\n name: String[8]\r\n count: uint256\r\n\r\nwidget: Widget\r\n\r\n@external\r\ndef show() -> (String[8], uint256):\r\n return (self.widget.name, self.widget.count)\r\n\r\n@external\r\ndef __init__():\r\n self.widget = Widget({\r\n name: \"thing\",\r\n count: 1\r\n })\r\n```\r\nbar.vy\r\n```\r\nimport foo as Foo\r\n\r\n@external\r\ndef __init__():\r\n pass\r\n```\r\n\r\nThrow both in the same dir.\r\n\r\n`vyper foo.vy` results in a successful compilation\r\n\r\n`vyper bar.vy` results in:\r\n```\r\nError compiling: bar.vy\r\nvyper.exceptions.InvalidType: Invalid base type: Widget\r\n contract \"Foo\", line 5:8 \r\n 4\r\n ---> 5 widget: Widget\r\n ---------------^\r\n 6\r\n```\r\n\r\n### How can it be fixed?\r\n\r\nHaven't spent time fixing yet\n", "before_files": [{"content": "# TODO does this module not get imported?\n\nimport importlib\nimport pkgutil\n\nimport vyper.builtin_interfaces\nfrom vyper import ast as vy_ast\nfrom vyper.ast.signatures import sig_utils\nfrom vyper.ast.signatures.function_signature import FunctionSignature\nfrom vyper.exceptions import StructureException\nfrom vyper.old_codegen.global_context import GlobalContext\n\n\n# Populate built-in interfaces.\ndef get_builtin_interfaces():\n interface_names = [x.name for x in pkgutil.iter_modules(vyper.builtin_interfaces.__path__)]\n return {\n name: extract_sigs(\n {\n \"type\": \"vyper\",\n \"code\": importlib.import_module(f\"vyper.builtin_interfaces.{name}\",).interface_code,\n },\n name,\n )\n for name in interface_names\n }\n\n\ndef abi_type_to_ast(atype, expected_size):\n if atype in (\"int128\", \"uint256\", \"bool\", \"address\", \"bytes32\"):\n return vy_ast.Name(id=atype)\n elif atype == \"fixed168x10\":\n return vy_ast.Name(id=\"decimal\")\n elif atype in (\"bytes\", \"string\"):\n # expected_size is the maximum length for inputs, minimum length for outputs\n return vy_ast.Subscript(\n value=vy_ast.Name(id=atype.capitalize()),\n slice=vy_ast.Index(value=vy_ast.Int(value=expected_size)),\n )\n else:\n raise StructureException(f\"Type {atype} not supported by vyper.\")\n\n\n# Vyper defines a maximum length for bytes and string types, but Solidity does not.\n# To maximize 
interoperability, we internally considers these types to have a\n# a length of 1Mb (1024 * 1024 * 1 byte) for inputs, and 1 for outputs.\n# Ths approach solves the issue because Vyper allows for an implicit casting\n# from a lower length into a higher one. (@iamdefinitelyahuman)\ndef mk_full_signature_from_json(abi):\n funcs = [func for func in abi if func[\"type\"] == \"function\"]\n sigs = []\n\n for func in funcs:\n args = []\n returns = None\n for a in func[\"inputs\"]:\n arg = vy_ast.arg(\n arg=a[\"name\"],\n annotation=abi_type_to_ast(a[\"type\"], 1048576),\n lineno=0,\n col_offset=0,\n )\n args.append(arg)\n\n if len(func[\"outputs\"]) == 1:\n returns = abi_type_to_ast(func[\"outputs\"][0][\"type\"], 1)\n elif len(func[\"outputs\"]) > 1:\n returns = vy_ast.Tuple(\n elements=[abi_type_to_ast(a[\"type\"], 1) for a in func[\"outputs\"]]\n )\n\n decorator_list = [vy_ast.Name(id=\"external\")]\n # Handle either constant/payable or stateMutability field\n if (\"constant\" in func and func[\"constant\"]) or (\n \"stateMutability\" in func and func[\"stateMutability\"] == \"view\"\n ):\n decorator_list.append(vy_ast.Name(id=\"view\"))\n if (\"payable\" in func and func[\"payable\"]) or (\n \"stateMutability\" in func and func[\"stateMutability\"] == \"payable\"\n ):\n decorator_list.append(vy_ast.Name(id=\"payable\"))\n\n sig = FunctionSignature.from_definition(\n code=vy_ast.FunctionDef(\n name=func[\"name\"],\n args=vy_ast.arguments(args=args),\n decorator_list=decorator_list,\n returns=returns,\n ),\n custom_structs=dict(),\n is_from_json=True,\n )\n sigs.append(sig)\n return sigs\n\n\ndef extract_sigs(sig_code, interface_name=None):\n if sig_code[\"type\"] == \"vyper\":\n interface_ast = [\n i\n for i in vy_ast.parse_to_ast(sig_code[\"code\"], contract_name=interface_name)\n if isinstance(i, vy_ast.FunctionDef)\n or isinstance(i, vy_ast.EventDef)\n or (isinstance(i, vy_ast.AnnAssign) and i.target.id != \"implements\")\n ]\n global_ctx = GlobalContext.get_global_context(interface_ast)\n return sig_utils.mk_full_signature(global_ctx, sig_formatter=lambda x: x)\n elif sig_code[\"type\"] == \"json\":\n return mk_full_signature_from_json(sig_code[\"code\"])\n else:\n raise Exception(\n (\n f\"Unknown interface signature type '{sig_code['type']}' supplied. 
\"\n \"'vyper' & 'json' are supported\"\n )\n )\n", "path": "vyper/ast/signatures/interface.py"}], "after_files": [{"content": "# TODO does this module not get imported?\n\nimport importlib\nimport pkgutil\n\nimport vyper.builtin_interfaces\nfrom vyper import ast as vy_ast\nfrom vyper.ast.signatures import sig_utils\nfrom vyper.ast.signatures.function_signature import FunctionSignature\nfrom vyper.exceptions import StructureException\nfrom vyper.old_codegen.global_context import GlobalContext\n\n\n# Populate built-in interfaces.\ndef get_builtin_interfaces():\n interface_names = [x.name for x in pkgutil.iter_modules(vyper.builtin_interfaces.__path__)]\n return {\n name: extract_sigs(\n {\n \"type\": \"vyper\",\n \"code\": importlib.import_module(f\"vyper.builtin_interfaces.{name}\",).interface_code,\n },\n name,\n )\n for name in interface_names\n }\n\n\ndef abi_type_to_ast(atype, expected_size):\n if atype in (\"int128\", \"uint256\", \"bool\", \"address\", \"bytes32\"):\n return vy_ast.Name(id=atype)\n elif atype == \"fixed168x10\":\n return vy_ast.Name(id=\"decimal\")\n elif atype in (\"bytes\", \"string\"):\n # expected_size is the maximum length for inputs, minimum length for outputs\n return vy_ast.Subscript(\n value=vy_ast.Name(id=atype.capitalize()),\n slice=vy_ast.Index(value=vy_ast.Int(value=expected_size)),\n )\n else:\n raise StructureException(f\"Type {atype} not supported by vyper.\")\n\n\n# Vyper defines a maximum length for bytes and string types, but Solidity does not.\n# To maximize interoperability, we internally considers these types to have a\n# a length of 1Mb (1024 * 1024 * 1 byte) for inputs, and 1 for outputs.\n# Ths approach solves the issue because Vyper allows for an implicit casting\n# from a lower length into a higher one. 
(@iamdefinitelyahuman)\ndef mk_full_signature_from_json(abi):\n funcs = [func for func in abi if func[\"type\"] == \"function\"]\n sigs = []\n\n for func in funcs:\n args = []\n returns = None\n for a in func[\"inputs\"]:\n arg = vy_ast.arg(\n arg=a[\"name\"],\n annotation=abi_type_to_ast(a[\"type\"], 1048576),\n lineno=0,\n col_offset=0,\n )\n args.append(arg)\n\n if len(func[\"outputs\"]) == 1:\n returns = abi_type_to_ast(func[\"outputs\"][0][\"type\"], 1)\n elif len(func[\"outputs\"]) > 1:\n returns = vy_ast.Tuple(\n elements=[abi_type_to_ast(a[\"type\"], 1) for a in func[\"outputs\"]]\n )\n\n decorator_list = [vy_ast.Name(id=\"external\")]\n # Handle either constant/payable or stateMutability field\n if (\"constant\" in func and func[\"constant\"]) or (\n \"stateMutability\" in func and func[\"stateMutability\"] == \"view\"\n ):\n decorator_list.append(vy_ast.Name(id=\"view\"))\n if (\"payable\" in func and func[\"payable\"]) or (\n \"stateMutability\" in func and func[\"stateMutability\"] == \"payable\"\n ):\n decorator_list.append(vy_ast.Name(id=\"payable\"))\n\n sig = FunctionSignature.from_definition(\n code=vy_ast.FunctionDef(\n name=func[\"name\"],\n args=vy_ast.arguments(args=args),\n decorator_list=decorator_list,\n returns=returns,\n ),\n custom_structs=dict(),\n is_from_json=True,\n )\n sigs.append(sig)\n return sigs\n\n\ndef extract_sigs(sig_code, interface_name=None):\n if sig_code[\"type\"] == \"vyper\":\n interface_ast = [\n i\n for i in vy_ast.parse_to_ast(sig_code[\"code\"], contract_name=interface_name)\n # all the nodes visited by ModuleNodeVisitor.\n if isinstance(\n i,\n (\n vy_ast.FunctionDef,\n vy_ast.EventDef,\n vy_ast.StructDef,\n vy_ast.InterfaceDef,\n # parsing import statements at this stage\n # causes issues with recursive imports\n # vy_ast.Import,\n # vy_ast.ImportFrom,\n ),\n )\n or (isinstance(i, vy_ast.AnnAssign) and i.target.id != \"implements\")\n ]\n global_ctx = GlobalContext.get_global_context(interface_ast)\n return sig_utils.mk_full_signature(global_ctx, sig_formatter=lambda x: x)\n elif sig_code[\"type\"] == \"json\":\n return mk_full_signature_from_json(sig_code[\"code\"])\n else:\n raise Exception(\n (\n f\"Unknown interface signature type '{sig_code['type']}' supplied. \"\n \"'vyper' & 'json' are supported\"\n )\n )\n", "path": "vyper/ast/signatures/interface.py"}]} | 1,859 | 234 |
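The golden diff widens a type filter in `extract_sigs`. A sketch of the same pattern with hypothetical stand-in classes (the real code uses the `vy_ast` node types):

```python
# Hypothetical stand-ins for the vyper AST node classes, used only to
# show the shape of the widened isinstance filter.
class FunctionDef: pass
class EventDef: pass
class StructDef: pass
class InterfaceDef: pass

class AnnAssign:
    def __init__(self, target_id):
        self.target = type("Name", (), {"id": target_id})()

VISITED = (FunctionDef, EventDef, StructDef, InterfaceDef)

def keep(node):
    # StructDef and InterfaceDef now pass the filter, so an imported
    # interface that declares a struct keeps that definition.
    return isinstance(node, VISITED) or (
        isinstance(node, AnnAssign) and node.target.id != "implements"
    )

nodes = [FunctionDef(), StructDef(), AnnAssign("implements"), AnnAssign("widget")]
assert [keep(n) for n in nodes] == [True, True, False, True]
```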
gh_patches_debug_8989 | rasdani/github-patches | git_diff | certbot__certbot-4248 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
returned non-string (type Error)
Hey there, I installed certbot as per the docs from letsencrypt on Debian Jessie, and certbot in manual mode returns:
certbot certonly --manual -d mydomain.com
```
An unexpected error occurred:
TypeError: __str__ returned non-string (type Error)
```
```
pip2 list
acme (0.9.3)
...
certbot (0.9.3)
cryptography (1.5.3)
...
pyOpenSSL (16.0.0)
```
Anyone seen this before and can offer a solution? Thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `acme/setup.py`
Content:
```
1 import sys
2
3 from setuptools import setup
4 from setuptools import find_packages
5
6
7 version = '0.12.0.dev0'
8
9 # Please update tox.ini when modifying dependency version requirements
10 install_requires = [
11 # load_pem_private/public_key (>=0.6)
12 # rsa_recover_prime_factors (>=0.8)
13 'cryptography>=0.8',
14 # Connection.set_tlsext_host_name (>=0.13)
15 'PyOpenSSL>=0.13',
16 'pyrfc3339',
17 'pytz',
18 'requests[security]>=2.4.1', # security extras added in 2.4.1
19 # For pkg_resources. >=1.0 so pip resolves it to a version cryptography
20 # will tolerate; see #2599:
21 'setuptools>=1.0',
22 'six',
23 ]
24
25 # env markers in extras_require cause problems with older pip: #517
26 # Keep in sync with conditional_requirements.py.
27 if sys.version_info < (2, 7):
28 install_requires.extend([
29 # only some distros recognize stdlib argparse as already satisfying
30 'argparse',
31 'mock<1.1.0',
32 ])
33 else:
34 install_requires.append('mock')
35
36 dev_extras = [
37 'nose',
38 'tox',
39 ]
40
41 docs_extras = [
42 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
43 'sphinx_rtd_theme',
44 ]
45
46
47 setup(
48 name='acme',
49 version=version,
50 description='ACME protocol implementation in Python',
51 url='https://github.com/letsencrypt/letsencrypt',
52 author="Certbot Project",
53 author_email='[email protected]',
54 license='Apache License 2.0',
55 classifiers=[
56 'Development Status :: 3 - Alpha',
57 'Intended Audience :: Developers',
58 'License :: OSI Approved :: Apache Software License',
59 'Programming Language :: Python',
60 'Programming Language :: Python :: 2',
61 'Programming Language :: Python :: 2.6',
62 'Programming Language :: Python :: 2.7',
63 'Programming Language :: Python :: 3',
64 'Programming Language :: Python :: 3.3',
65 'Programming Language :: Python :: 3.4',
66 'Programming Language :: Python :: 3.5',
67 'Topic :: Internet :: WWW/HTTP',
68 'Topic :: Security',
69 ],
70
71 packages=find_packages(),
72 include_package_data=True,
73 install_requires=install_requires,
74 extras_require={
75 'dev': dev_extras,
76 'docs': docs_extras,
77 },
78 entry_points={
79 'console_scripts': [
80 'jws = acme.jose.jws:CLI.run',
81 ],
82 },
83 test_suite='acme',
84 )
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/acme/setup.py b/acme/setup.py
--- a/acme/setup.py
+++ b/acme/setup.py
@@ -15,7 +15,11 @@
'PyOpenSSL>=0.13',
'pyrfc3339',
'pytz',
- 'requests[security]>=2.4.1', # security extras added in 2.4.1
+ # requests>=2.10 is required to fix
+ # https://github.com/shazow/urllib3/issues/556. This requirement can be
+ # relaxed to 'requests[security]>=2.4.1', however, less useful errors
+ # will be raised for some network/SSL errors.
+ 'requests[security]>=2.10',
# For pkg_resources. >=1.0 so pip resolves it to a version cryptography
# will tolerate; see #2599:
'setuptools>=1.0',
| {"golden_diff": "diff --git a/acme/setup.py b/acme/setup.py\n--- a/acme/setup.py\n+++ b/acme/setup.py\n@@ -15,7 +15,11 @@\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n- 'requests[security]>=2.4.1', # security extras added in 2.4.1\n+ # requests>=2.10 is required to fix\n+ # https://github.com/shazow/urllib3/issues/556. This requirement can be\n+ # relaxed to 'requests[security]>=2.4.1', however, less useful errors\n+ # will be raised for some network/SSL errors.\n+ 'requests[security]>=2.10',\n # For pkg_resources. >=1.0 so pip resolves it to a version cryptography\n # will tolerate; see #2599:\n 'setuptools>=1.0',\n", "issue": "returned non-string (type Error)\nHey there, I installed certbot as per the doc's from letsencrypt on Debian Jessie and certbot in manual mode returns:\r\n\r\ncertbot certonly --manual -d mydomain.com\r\n\r\n```\r\nAn unexpected error occurred:\r\nTypeError: __str__ returned non-string (type Error)\r\n```\r\n\r\n```\r\npip2 list\r\nacme (0.9.3)\r\n...\r\ncertbot (0.9.3)\r\ncryptography (1.5.3)\r\n...\r\npyOpenSSL (16.0.0)\r\n```\r\n\r\nAnyone seen this before and can offer a solution? Thanks\r\n\n", "before_files": [{"content": "import sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n\nversion = '0.12.0.dev0'\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n # load_pem_private/public_key (>=0.6)\n # rsa_recover_prime_factors (>=0.8)\n 'cryptography>=0.8',\n # Connection.set_tlsext_host_name (>=0.13)\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n 'requests[security]>=2.4.1', # security extras added in 2.4.1\n # For pkg_resources. >=1.0 so pip resolves it to a version cryptography\n # will tolerate; see #2599:\n 'setuptools>=1.0',\n 'six',\n]\n\n# env markers in extras_require cause problems with older pip: #517\n# Keep in sync with conditional_requirements.py.\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'mock<1.1.0',\n ])\nelse:\n install_requires.append('mock')\n\ndev_extras = [\n 'nose',\n 'tox',\n]\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n]\n\n\nsetup(\n name='acme',\n version=version,\n description='ACME protocol implementation in Python',\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n },\n entry_points={\n 'console_scripts': [\n 'jws = acme.jose.jws:CLI.run',\n ],\n },\n test_suite='acme',\n)\n", "path": "acme/setup.py"}], "after_files": [{"content": "import sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n\nversion = '0.12.0.dev0'\n\n# Please update tox.ini when modifying dependency 
version requirements\ninstall_requires = [\n # load_pem_private/public_key (>=0.6)\n # rsa_recover_prime_factors (>=0.8)\n 'cryptography>=0.8',\n # Connection.set_tlsext_host_name (>=0.13)\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n # requests>=2.10 is required to fix\n # https://github.com/shazow/urllib3/issues/556. This requirement can be\n # relaxed to 'requests[security]>=2.4.1', however, less useful errors\n # will be raised for some network/SSL errors.\n 'requests[security]>=2.10',\n # For pkg_resources. >=1.0 so pip resolves it to a version cryptography\n # will tolerate; see #2599:\n 'setuptools>=1.0',\n 'six',\n]\n\n# env markers in extras_require cause problems with older pip: #517\n# Keep in sync with conditional_requirements.py.\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'mock<1.1.0',\n ])\nelse:\n install_requires.append('mock')\n\ndev_extras = [\n 'nose',\n 'tox',\n]\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n]\n\n\nsetup(\n name='acme',\n version=version,\n description='ACME protocol implementation in Python',\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n },\n entry_points={\n 'console_scripts': [\n 'jws = acme.jose.jws:CLI.run',\n ],\n },\n test_suite='acme',\n)\n", "path": "acme/setup.py"}]} | 1,164 | 219 |
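The patch here is a dependency pin rather than a code change. A quick way to sanity-check the pinned requirement string, assuming the third-party `packaging` library is available:

```python
from packaging.requirements import Requirement

# Parse the pinned requirement exactly as pip would.
req = Requirement("requests[security]>=2.10")
assert req.name == "requests"
assert req.extras == {"security"}
assert str(req.specifier) == ">=2.10"
```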
gh_patches_debug_13762 | rasdani/github-patches | git_diff | CTFd__CTFd-1876 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The REST API for deleting files does not remove the file's directory and does not update the Media Library list
**Environment**:
- CTFd Version/Commit: 3.2.1
- Operating System: Docker (`python:3.6-slim-buster`)
- Web Browser and Version: NA
**What happened?**
I am using the REST API for deleting files (e.g. `"DELETE /api/v1/files/41 HTTP/1.1"`) and it seems to work fine. The file is removed. However, two things go wrong, I think (at least relative to what I expected).
1. The file's directory (which has a hash-based name) is not deleted. This means that after a while there will be a lot of empty directories.
1. The list of files used by the Media Library is not updated (i.e. the deleted file is not removed from the list), which means the list grows constantly. The result is a list with many non-existing files, since the underlying files were deleted.
The REST API returns a successful `200` code which seems to match with the expected behaviour.
**What did you expect to happen?**
When a file is deleted using the REST API, I expect the directory (the hash-based name) to be deleted and that the list used by the Media Library is updated accordingly.
**How to reproduce your issue**
1. Upload a file (e.g. via the web interface or REST API).
1. Use the REST API to delete this file.
1. Check the `upload/` folder and the Media Library for the behaviour described above.
**Any associated stack traces or error logs**
None
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/uploads/uploaders.py`
Content:
```
1 import os
2 import posixpath
3 import string
4 from shutil import copyfileobj
5
6 import boto3
7 from flask import current_app, redirect, send_file
8 from flask.helpers import safe_join
9 from werkzeug.utils import secure_filename
10
11 from CTFd.utils import get_app_config
12 from CTFd.utils.encoding import hexencode
13
14
15 class BaseUploader(object):
16 def __init__(self):
17 raise NotImplementedError
18
19 def store(self, fileobj, filename):
20 raise NotImplementedError
21
22 def upload(self, file_obj, filename):
23 raise NotImplementedError
24
25 def download(self, filename):
26 raise NotImplementedError
27
28 def delete(self, filename):
29 raise NotImplementedError
30
31 def sync(self):
32 raise NotImplementedError
33
34
35 class FilesystemUploader(BaseUploader):
36 def __init__(self, base_path=None):
37 super(BaseUploader, self).__init__()
38 self.base_path = base_path or current_app.config.get("UPLOAD_FOLDER")
39
40 def store(self, fileobj, filename):
41 location = os.path.join(self.base_path, filename)
42 directory = os.path.dirname(location)
43
44 if not os.path.exists(directory):
45 os.makedirs(directory)
46
47 with open(location, "wb") as dst:
48 copyfileobj(fileobj, dst, 16384)
49
50 return filename
51
52 def upload(self, file_obj, filename):
53 if len(filename) == 0:
54 raise Exception("Empty filenames cannot be used")
55
56 filename = secure_filename(filename)
57 md5hash = hexencode(os.urandom(16))
58 file_path = posixpath.join(md5hash, filename)
59
60 return self.store(file_obj, file_path)
61
62 def download(self, filename):
63 return send_file(safe_join(self.base_path, filename), as_attachment=True)
64
65 def delete(self, filename):
66 if os.path.exists(os.path.join(self.base_path, filename)):
67 os.unlink(os.path.join(self.base_path, filename))
68 return True
69 return False
70
71 def sync(self):
72 pass
73
74
75 class S3Uploader(BaseUploader):
76 def __init__(self):
77 super(BaseUploader, self).__init__()
78 self.s3 = self._get_s3_connection()
79 self.bucket = get_app_config("AWS_S3_BUCKET")
80
81 def _get_s3_connection(self):
82 access_key = get_app_config("AWS_ACCESS_KEY_ID")
83 secret_key = get_app_config("AWS_SECRET_ACCESS_KEY")
84 endpoint = get_app_config("AWS_S3_ENDPOINT_URL")
85 client = boto3.client(
86 "s3",
87 aws_access_key_id=access_key,
88 aws_secret_access_key=secret_key,
89 endpoint_url=endpoint,
90 )
91 return client
92
93 def _clean_filename(self, c):
94 if c in string.ascii_letters + string.digits + "-" + "_" + ".":
95 return True
96
97 def store(self, fileobj, filename):
98 self.s3.upload_fileobj(fileobj, self.bucket, filename)
99 return filename
100
101 def upload(self, file_obj, filename):
102 filename = filter(
103 self._clean_filename, secure_filename(filename).replace(" ", "_")
104 )
105 filename = "".join(filename)
106 if len(filename) <= 0:
107 return False
108
109 md5hash = hexencode(os.urandom(16))
110
111 dst = md5hash + "/" + filename
112 self.s3.upload_fileobj(file_obj, self.bucket, dst)
113 return dst
114
115 def download(self, filename):
116 key = filename
117 filename = filename.split("/").pop()
118 url = self.s3.generate_presigned_url(
119 "get_object",
120 Params={
121 "Bucket": self.bucket,
122 "Key": key,
123 "ResponseContentDisposition": "attachment; filename={}".format(
124 filename
125 ),
126 },
127 )
128 return redirect(url)
129
130 def delete(self, filename):
131 self.s3.delete_object(Bucket=self.bucket, Key=filename)
132 return True
133
134 def sync(self):
135 local_folder = current_app.config.get("UPLOAD_FOLDER")
136 # If the bucket is empty then Contents will not be in the response
137 bucket_list = self.s3.list_objects(Bucket=self.bucket).get("Contents", [])
138
139 for s3_key in bucket_list:
140 s3_object = s3_key["Key"]
141 # We don't want to download any directories
142 if s3_object.endswith("/") is False:
143 local_path = os.path.join(local_folder, s3_object)
144 directory = os.path.dirname(local_path)
145 if not os.path.exists(directory):
146 os.makedirs(directory)
147
148 self.s3.download_file(self.bucket, s3_object, local_path)
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/utils/uploads/uploaders.py b/CTFd/utils/uploads/uploaders.py
--- a/CTFd/utils/uploads/uploaders.py
+++ b/CTFd/utils/uploads/uploaders.py
@@ -1,7 +1,8 @@
import os
import posixpath
import string
-from shutil import copyfileobj
+from pathlib import PurePath
+from shutil import copyfileobj, rmtree
import boto3
from flask import current_app, redirect, send_file
@@ -64,7 +65,8 @@
def delete(self, filename):
if os.path.exists(os.path.join(self.base_path, filename)):
- os.unlink(os.path.join(self.base_path, filename))
+ file_path = PurePath(filename).parts[0]
+ rmtree(os.path.join(self.base_path, file_path))
return True
return False
| {"golden_diff": "diff --git a/CTFd/utils/uploads/uploaders.py b/CTFd/utils/uploads/uploaders.py\n--- a/CTFd/utils/uploads/uploaders.py\n+++ b/CTFd/utils/uploads/uploaders.py\n@@ -1,7 +1,8 @@\n import os\n import posixpath\n import string\n-from shutil import copyfileobj\n+from pathlib import PurePath\n+from shutil import copyfileobj, rmtree\n \n import boto3\n from flask import current_app, redirect, send_file\n@@ -64,7 +65,8 @@\n \n def delete(self, filename):\n if os.path.exists(os.path.join(self.base_path, filename)):\n- os.unlink(os.path.join(self.base_path, filename))\n+ file_path = PurePath(filename).parts[0]\n+ rmtree(os.path.join(self.base_path, file_path))\n return True\n return False\n", "issue": "The REST API for deleting files does not remove the file's directory and does not update the Media Library list\n**Environment**: \r\n\r\n- CTFd Version/Commit: 3.2.1\r\n- Operating System: Docker (`python:3.6-slim-buster`)\r\n- Web Browser and Version: NA\r\n\r\n**What happened?**\r\n\r\nI am using the REST API for deleting files (e.g. `\"DELETE /api/v1/files/41 HTTP/1.1\"`) and it seems to work fine. The file is removed. However, two things do go wrong I think (at least to my expectation).\r\n\r\n1. The file's directory (which has a hash based name) is not deleted. This means, after a while there will be a lot of empty directories.\r\n1. The list of files used by the Media Library is not updated (i.e. file is removed from the list) which means the list grows constantly. The result is a list with many non-existing files as they are deleted.\r\n\r\nThe REST API returns a successful `200` code which seems to match with the expected behaviour.\r\n\r\n**What did you expect to happen?**\r\n\r\nWhen a file is deleted using the REST API, I expect the directory (the hash-based name) to be deleted and that the list used by the Media Library is updated accordingly.\r\n\r\n**How to reproduce your issue**\r\n\r\n1. Upload a file (e.g. via the web interface or REST API). \r\n1. Use the REST API to delete this file.\r\n1. 
Check the `upload/` folder and the Media Library for the behaviour described above.\r\n\r\n**Any associated stack traces or error logs**\r\n\r\nNone\r\n\n", "before_files": [{"content": "import os\nimport posixpath\nimport string\nfrom shutil import copyfileobj\n\nimport boto3\nfrom flask import current_app, redirect, send_file\nfrom flask.helpers import safe_join\nfrom werkzeug.utils import secure_filename\n\nfrom CTFd.utils import get_app_config\nfrom CTFd.utils.encoding import hexencode\n\n\nclass BaseUploader(object):\n def __init__(self):\n raise NotImplementedError\n\n def store(self, fileobj, filename):\n raise NotImplementedError\n\n def upload(self, file_obj, filename):\n raise NotImplementedError\n\n def download(self, filename):\n raise NotImplementedError\n\n def delete(self, filename):\n raise NotImplementedError\n\n def sync(self):\n raise NotImplementedError\n\n\nclass FilesystemUploader(BaseUploader):\n def __init__(self, base_path=None):\n super(BaseUploader, self).__init__()\n self.base_path = base_path or current_app.config.get(\"UPLOAD_FOLDER\")\n\n def store(self, fileobj, filename):\n location = os.path.join(self.base_path, filename)\n directory = os.path.dirname(location)\n\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n with open(location, \"wb\") as dst:\n copyfileobj(fileobj, dst, 16384)\n\n return filename\n\n def upload(self, file_obj, filename):\n if len(filename) == 0:\n raise Exception(\"Empty filenames cannot be used\")\n\n filename = secure_filename(filename)\n md5hash = hexencode(os.urandom(16))\n file_path = posixpath.join(md5hash, filename)\n\n return self.store(file_obj, file_path)\n\n def download(self, filename):\n return send_file(safe_join(self.base_path, filename), as_attachment=True)\n\n def delete(self, filename):\n if os.path.exists(os.path.join(self.base_path, filename)):\n os.unlink(os.path.join(self.base_path, filename))\n return True\n return False\n\n def sync(self):\n pass\n\n\nclass S3Uploader(BaseUploader):\n def __init__(self):\n super(BaseUploader, self).__init__()\n self.s3 = self._get_s3_connection()\n self.bucket = get_app_config(\"AWS_S3_BUCKET\")\n\n def _get_s3_connection(self):\n access_key = get_app_config(\"AWS_ACCESS_KEY_ID\")\n secret_key = get_app_config(\"AWS_SECRET_ACCESS_KEY\")\n endpoint = get_app_config(\"AWS_S3_ENDPOINT_URL\")\n client = boto3.client(\n \"s3\",\n aws_access_key_id=access_key,\n aws_secret_access_key=secret_key,\n endpoint_url=endpoint,\n )\n return client\n\n def _clean_filename(self, c):\n if c in string.ascii_letters + string.digits + \"-\" + \"_\" + \".\":\n return True\n\n def store(self, fileobj, filename):\n self.s3.upload_fileobj(fileobj, self.bucket, filename)\n return filename\n\n def upload(self, file_obj, filename):\n filename = filter(\n self._clean_filename, secure_filename(filename).replace(\" \", \"_\")\n )\n filename = \"\".join(filename)\n if len(filename) <= 0:\n return False\n\n md5hash = hexencode(os.urandom(16))\n\n dst = md5hash + \"/\" + filename\n self.s3.upload_fileobj(file_obj, self.bucket, dst)\n return dst\n\n def download(self, filename):\n key = filename\n filename = filename.split(\"/\").pop()\n url = self.s3.generate_presigned_url(\n \"get_object\",\n Params={\n \"Bucket\": self.bucket,\n \"Key\": key,\n \"ResponseContentDisposition\": \"attachment; filename={}\".format(\n filename\n ),\n },\n )\n return redirect(url)\n\n def delete(self, filename):\n self.s3.delete_object(Bucket=self.bucket, Key=filename)\n return True\n\n def sync(self):\n 
local_folder = current_app.config.get(\"UPLOAD_FOLDER\")\n # If the bucket is empty then Contents will not be in the response\n bucket_list = self.s3.list_objects(Bucket=self.bucket).get(\"Contents\", [])\n\n for s3_key in bucket_list:\n s3_object = s3_key[\"Key\"]\n # We don't want to download any directories\n if s3_object.endswith(\"/\") is False:\n local_path = os.path.join(local_folder, s3_object)\n directory = os.path.dirname(local_path)\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n self.s3.download_file(self.bucket, s3_object, local_path)\n", "path": "CTFd/utils/uploads/uploaders.py"}], "after_files": [{"content": "import os\nimport posixpath\nimport string\nfrom pathlib import PurePath\nfrom shutil import copyfileobj, rmtree\n\nimport boto3\nfrom flask import current_app, redirect, send_file\nfrom flask.helpers import safe_join\nfrom werkzeug.utils import secure_filename\n\nfrom CTFd.utils import get_app_config\nfrom CTFd.utils.encoding import hexencode\n\n\nclass BaseUploader(object):\n def __init__(self):\n raise NotImplementedError\n\n def store(self, fileobj, filename):\n raise NotImplementedError\n\n def upload(self, file_obj, filename):\n raise NotImplementedError\n\n def download(self, filename):\n raise NotImplementedError\n\n def delete(self, filename):\n raise NotImplementedError\n\n def sync(self):\n raise NotImplementedError\n\n\nclass FilesystemUploader(BaseUploader):\n def __init__(self, base_path=None):\n super(BaseUploader, self).__init__()\n self.base_path = base_path or current_app.config.get(\"UPLOAD_FOLDER\")\n\n def store(self, fileobj, filename):\n location = os.path.join(self.base_path, filename)\n directory = os.path.dirname(location)\n\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n with open(location, \"wb\") as dst:\n copyfileobj(fileobj, dst, 16384)\n\n return filename\n\n def upload(self, file_obj, filename):\n if len(filename) == 0:\n raise Exception(\"Empty filenames cannot be used\")\n\n filename = secure_filename(filename)\n md5hash = hexencode(os.urandom(16))\n file_path = posixpath.join(md5hash, filename)\n\n return self.store(file_obj, file_path)\n\n def download(self, filename):\n return send_file(safe_join(self.base_path, filename), as_attachment=True)\n\n def delete(self, filename):\n if os.path.exists(os.path.join(self.base_path, filename)):\n file_path = PurePath(filename).parts[0]\n rmtree(os.path.join(self.base_path, file_path))\n return True\n return False\n\n def sync(self):\n pass\n\n\nclass S3Uploader(BaseUploader):\n def __init__(self):\n super(BaseUploader, self).__init__()\n self.s3 = self._get_s3_connection()\n self.bucket = get_app_config(\"AWS_S3_BUCKET\")\n\n def _get_s3_connection(self):\n access_key = get_app_config(\"AWS_ACCESS_KEY_ID\")\n secret_key = get_app_config(\"AWS_SECRET_ACCESS_KEY\")\n endpoint = get_app_config(\"AWS_S3_ENDPOINT_URL\")\n client = boto3.client(\n \"s3\",\n aws_access_key_id=access_key,\n aws_secret_access_key=secret_key,\n endpoint_url=endpoint,\n )\n return client\n\n def _clean_filename(self, c):\n if c in string.ascii_letters + string.digits + \"-\" + \"_\" + \".\":\n return True\n\n def store(self, fileobj, filename):\n self.s3.upload_fileobj(fileobj, self.bucket, filename)\n return filename\n\n def upload(self, file_obj, filename):\n filename = filter(\n self._clean_filename, secure_filename(filename).replace(\" \", \"_\")\n )\n filename = \"\".join(filename)\n if len(filename) <= 0:\n return False\n\n md5hash = hexencode(os.urandom(16))\n\n dst = md5hash + 
\"/\" + filename\n self.s3.upload_fileobj(file_obj, self.bucket, dst)\n return dst\n\n def download(self, filename):\n key = filename\n filename = filename.split(\"/\").pop()\n url = self.s3.generate_presigned_url(\n \"get_object\",\n Params={\n \"Bucket\": self.bucket,\n \"Key\": key,\n \"ResponseContentDisposition\": \"attachment; filename={}\".format(\n filename\n ),\n },\n )\n return redirect(url)\n\n def delete(self, filename):\n self.s3.delete_object(Bucket=self.bucket, Key=filename)\n return True\n\n def sync(self):\n local_folder = current_app.config.get(\"UPLOAD_FOLDER\")\n # If the bucket is empty then Contents will not be in the response\n bucket_list = self.s3.list_objects(Bucket=self.bucket).get(\"Contents\", [])\n\n for s3_key in bucket_list:\n s3_object = s3_key[\"Key\"]\n # We don't want to download any directories\n if s3_object.endswith(\"/\") is False:\n local_path = os.path.join(local_folder, s3_object)\n directory = os.path.dirname(local_path)\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n self.s3.download_file(self.bucket, s3_object, local_path)\n", "path": "CTFd/utils/uploads/uploaders.py"}]} | 1,932 | 186 |
gh_patches_debug_21404 | rasdani/github-patches | git_diff | matrix-org__synapse-6578 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fatal 'Failed to upgrade database' error on startup
As of Synapse 1.7.0, when I start synapse with an old database version, I get this rather cryptic error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `synapse/storage/engines/sqlite.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015, 2016 OpenMarket Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import struct
17 import threading
18
19 from synapse.storage.prepare_database import prepare_database
20
21
22 class Sqlite3Engine(object):
23 single_threaded = True
24
25 def __init__(self, database_module, database_config):
26 self.module = database_module
27
28 # The current max state_group, or None if we haven't looked
29 # in the DB yet.
30 self._current_state_group_id = None
31 self._current_state_group_id_lock = threading.Lock()
32
33 @property
34 def can_native_upsert(self):
35 """
36 Do we support native UPSERTs? This requires SQLite3 3.24+, plus some
37 more work we haven't done yet to tell what was inserted vs updated.
38 """
39 return self.module.sqlite_version_info >= (3, 24, 0)
40
41 @property
42 def supports_tuple_comparison(self):
43 """
44 Do we support comparing tuples, i.e. `(a, b) > (c, d)`? This requires
45 SQLite 3.15+.
46 """
47 return self.module.sqlite_version_info >= (3, 15, 0)
48
49 @property
50 def supports_using_any_list(self):
51 """Do we support using `a = ANY(?)` and passing a list
52 """
53 return False
54
55 def check_database(self, txn):
56 pass
57
58 def convert_param_style(self, sql):
59 return sql
60
61 def on_new_connection(self, db_conn):
62 prepare_database(db_conn, self, config=None)
63 db_conn.create_function("rank", 1, _rank)
64
65 def is_deadlock(self, error):
66 return False
67
68 def is_connection_closed(self, conn):
69 return False
70
71 def lock_table(self, txn, table):
72 return
73
74 def get_next_state_group_id(self, txn):
75 """Returns an int that can be used as a new state_group ID
76 """
77 # We do application locking here since if we're using sqlite then
78 # we are a single process synapse.
79 with self._current_state_group_id_lock:
80 if self._current_state_group_id is None:
81 txn.execute("SELECT COALESCE(max(id), 0) FROM state_groups")
82 self._current_state_group_id = txn.fetchone()[0]
83
84 self._current_state_group_id += 1
85 return self._current_state_group_id
86
87 @property
88 def server_version(self):
89 """Gets a string giving the server version. For example: '3.22.0'
90
91 Returns:
92 string
93 """
94 return "%i.%i.%i" % self.module.sqlite_version_info
95
96
97 # Following functions taken from: https://github.com/coleifer/peewee
98
99
100 def _parse_match_info(buf):
101 bufsize = len(buf)
102 return [struct.unpack("@I", buf[i : i + 4])[0] for i in range(0, bufsize, 4)]
103
104
105 def _rank(raw_match_info):
106 """Handle match_info called w/default args 'pcx' - based on the example rank
107 function http://sqlite.org/fts3.html#appendix_a
108 """
109 match_info = _parse_match_info(raw_match_info)
110 score = 0.0
111 p, c = match_info[:2]
112 for phrase_num in range(p):
113 phrase_info_idx = 2 + (phrase_num * c * 3)
114 for col_num in range(c):
115 col_idx = phrase_info_idx + (col_num * 3)
116 x1, x2 = match_info[col_idx : col_idx + 2]
117 if x1 > 0:
118 score += float(x1) / x2
119 return score
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/synapse/storage/engines/sqlite.py b/synapse/storage/engines/sqlite.py
--- a/synapse/storage/engines/sqlite.py
+++ b/synapse/storage/engines/sqlite.py
@@ -25,6 +25,9 @@
def __init__(self, database_module, database_config):
self.module = database_module
+ database = database_config.get("args", {}).get("database")
+ self._is_in_memory = database in (None, ":memory:",)
+
# The current max state_group, or None if we haven't looked
# in the DB yet.
self._current_state_group_id = None
@@ -59,7 +62,12 @@
return sql
def on_new_connection(self, db_conn):
- prepare_database(db_conn, self, config=None)
+ if self._is_in_memory:
+ # In memory databases need to be rebuilt each time. Ideally we'd
+ # reuse the same connection as we do when starting up, but that
+ # would involve using adbapi before we have started the reactor.
+ prepare_database(db_conn, self, config=None)
+
db_conn.create_function("rank", 1, _rank)
def is_deadlock(self, error):
| {"golden_diff": "diff --git a/synapse/storage/engines/sqlite.py b/synapse/storage/engines/sqlite.py\n--- a/synapse/storage/engines/sqlite.py\n+++ b/synapse/storage/engines/sqlite.py\n@@ -25,6 +25,9 @@\n def __init__(self, database_module, database_config):\n self.module = database_module\n \n+ database = database_config.get(\"args\", {}).get(\"database\")\n+ self._is_in_memory = database in (None, \":memory:\",)\n+\n # The current max state_group, or None if we haven't looked\n # in the DB yet.\n self._current_state_group_id = None\n@@ -59,7 +62,12 @@\n return sql\n \n def on_new_connection(self, db_conn):\n- prepare_database(db_conn, self, config=None)\n+ if self._is_in_memory:\n+ # In memory databases need to be rebuilt each time. Ideally we'd\n+ # reuse the same connection as we do when starting up, but that\n+ # would involve using adbapi before we have started the reactor.\n+ prepare_database(db_conn, self, config=None)\n+\n db_conn.create_function(\"rank\", 1, _rank)\n \n def is_deadlock(self, error):\n", "issue": "Fatal 'Failed to upgrade database' error on startup\nAs of Synapse 1.7.0, when I start synapse with an old database version, I get this rather cryptic error.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015, 2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport struct\nimport threading\n\nfrom synapse.storage.prepare_database import prepare_database\n\n\nclass Sqlite3Engine(object):\n single_threaded = True\n\n def __init__(self, database_module, database_config):\n self.module = database_module\n\n # The current max state_group, or None if we haven't looked\n # in the DB yet.\n self._current_state_group_id = None\n self._current_state_group_id_lock = threading.Lock()\n\n @property\n def can_native_upsert(self):\n \"\"\"\n Do we support native UPSERTs? This requires SQLite3 3.24+, plus some\n more work we haven't done yet to tell what was inserted vs updated.\n \"\"\"\n return self.module.sqlite_version_info >= (3, 24, 0)\n\n @property\n def supports_tuple_comparison(self):\n \"\"\"\n Do we support comparing tuples, i.e. `(a, b) > (c, d)`? 
This requires\n SQLite 3.15+.\n \"\"\"\n return self.module.sqlite_version_info >= (3, 15, 0)\n\n @property\n def supports_using_any_list(self):\n \"\"\"Do we support using `a = ANY(?)` and passing a list\n \"\"\"\n return False\n\n def check_database(self, txn):\n pass\n\n def convert_param_style(self, sql):\n return sql\n\n def on_new_connection(self, db_conn):\n prepare_database(db_conn, self, config=None)\n db_conn.create_function(\"rank\", 1, _rank)\n\n def is_deadlock(self, error):\n return False\n\n def is_connection_closed(self, conn):\n return False\n\n def lock_table(self, txn, table):\n return\n\n def get_next_state_group_id(self, txn):\n \"\"\"Returns an int that can be used as a new state_group ID\n \"\"\"\n # We do application locking here since if we're using sqlite then\n # we are a single process synapse.\n with self._current_state_group_id_lock:\n if self._current_state_group_id is None:\n txn.execute(\"SELECT COALESCE(max(id), 0) FROM state_groups\")\n self._current_state_group_id = txn.fetchone()[0]\n\n self._current_state_group_id += 1\n return self._current_state_group_id\n\n @property\n def server_version(self):\n \"\"\"Gets a string giving the server version. For example: '3.22.0'\n\n Returns:\n string\n \"\"\"\n return \"%i.%i.%i\" % self.module.sqlite_version_info\n\n\n# Following functions taken from: https://github.com/coleifer/peewee\n\n\ndef _parse_match_info(buf):\n bufsize = len(buf)\n return [struct.unpack(\"@I\", buf[i : i + 4])[0] for i in range(0, bufsize, 4)]\n\n\ndef _rank(raw_match_info):\n \"\"\"Handle match_info called w/default args 'pcx' - based on the example rank\n function http://sqlite.org/fts3.html#appendix_a\n \"\"\"\n match_info = _parse_match_info(raw_match_info)\n score = 0.0\n p, c = match_info[:2]\n for phrase_num in range(p):\n phrase_info_idx = 2 + (phrase_num * c * 3)\n for col_num in range(c):\n col_idx = phrase_info_idx + (col_num * 3)\n x1, x2 = match_info[col_idx : col_idx + 2]\n if x1 > 0:\n score += float(x1) / x2\n return score\n", "path": "synapse/storage/engines/sqlite.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015, 2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport struct\nimport threading\n\nfrom synapse.storage.prepare_database import prepare_database\n\n\nclass Sqlite3Engine(object):\n single_threaded = True\n\n def __init__(self, database_module, database_config):\n self.module = database_module\n\n database = database_config.get(\"args\", {}).get(\"database\")\n self._is_in_memory = database in (None, \":memory:\",)\n\n # The current max state_group, or None if we haven't looked\n # in the DB yet.\n self._current_state_group_id = None\n self._current_state_group_id_lock = threading.Lock()\n\n @property\n def can_native_upsert(self):\n \"\"\"\n Do we support native UPSERTs? 
This requires SQLite3 3.24+, plus some\n more work we haven't done yet to tell what was inserted vs updated.\n \"\"\"\n return self.module.sqlite_version_info >= (3, 24, 0)\n\n @property\n def supports_tuple_comparison(self):\n \"\"\"\n Do we support comparing tuples, i.e. `(a, b) > (c, d)`? This requires\n SQLite 3.15+.\n \"\"\"\n return self.module.sqlite_version_info >= (3, 15, 0)\n\n @property\n def supports_using_any_list(self):\n \"\"\"Do we support using `a = ANY(?)` and passing a list\n \"\"\"\n return False\n\n def check_database(self, txn):\n pass\n\n def convert_param_style(self, sql):\n return sql\n\n def on_new_connection(self, db_conn):\n if self._is_in_memory:\n # In memory databases need to be rebuilt each time. Ideally we'd\n # reuse the same connection as we do when starting up, but that\n # would involve using adbapi before we have started the reactor.\n prepare_database(db_conn, self, config=None)\n\n db_conn.create_function(\"rank\", 1, _rank)\n\n def is_deadlock(self, error):\n return False\n\n def is_connection_closed(self, conn):\n return False\n\n def lock_table(self, txn, table):\n return\n\n def get_next_state_group_id(self, txn):\n \"\"\"Returns an int that can be used as a new state_group ID\n \"\"\"\n # We do application locking here since if we're using sqlite then\n # we are a single process synapse.\n with self._current_state_group_id_lock:\n if self._current_state_group_id is None:\n txn.execute(\"SELECT COALESCE(max(id), 0) FROM state_groups\")\n self._current_state_group_id = txn.fetchone()[0]\n\n self._current_state_group_id += 1\n return self._current_state_group_id\n\n @property\n def server_version(self):\n \"\"\"Gets a string giving the server version. For example: '3.22.0'\n\n Returns:\n string\n \"\"\"\n return \"%i.%i.%i\" % self.module.sqlite_version_info\n\n\n# Following functions taken from: https://github.com/coleifer/peewee\n\n\ndef _parse_match_info(buf):\n bufsize = len(buf)\n return [struct.unpack(\"@I\", buf[i : i + 4])[0] for i in range(0, bufsize, 4)]\n\n\ndef _rank(raw_match_info):\n \"\"\"Handle match_info called w/default args 'pcx' - based on the example rank\n function http://sqlite.org/fts3.html#appendix_a\n \"\"\"\n match_info = _parse_match_info(raw_match_info)\n score = 0.0\n p, c = match_info[:2]\n for phrase_num in range(p):\n phrase_info_idx = 2 + (phrase_num * c * 3)\n for col_num in range(c):\n col_idx = phrase_info_idx + (col_num * 3)\n x1, x2 = match_info[col_idx : col_idx + 2]\n if x1 > 0:\n score += float(x1) / x2\n return score\n", "path": "synapse/storage/engines/sqlite.py"}]} | 1,499 | 284 |
gh_patches_debug_17006 | rasdani/github-patches | git_diff | sublimelsp__LSP-1950 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Side by side option for symbol action links in hover popup doesn't work if location is in same file
**Describe the bug**
The side by side icon link for "Definition" / "Type Definition" / "Declaration" from the hover popup doesn't work if the location of the definition/declaration is in the same file.
**To Reproduce**
Steps to reproduce the behavior:
1. Have `"show_symbol_action_links": true` in the settings (this is the default value)
2. Hover over symbol (e.g. function call) which has a definition in the same file
3. Click on ◨ next to "Definition", or use <kbd>Ctrl</kbd> + click on the text link
4. See that the view scrolls to the location, instead of opening the location in a new tab to the right
**Expected behavior**
LSP should open the definition in a new tab to the right, similar to how the built-in definitions popup from ST does
**Environment (please complete the following information):**
- OS: Win 10
- LSP version: main
**Additional context**
Seems like the `flags` argument, which includes the "side_by_side" information, is lost/ignored here:
https://github.com/sublimelsp/LSP/blob/1bcd518102c1516c9d808c974b7d2a5eba7d0b80/plugin/core/open.py#L30-L31
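A possible direction for a fix, sketched purely as an illustration (the specific flag bits checked are assumptions about which cases must bypass the early return): only reuse the already-open view when neither the target `group` nor the `flags` ask for a separate view.
```py
view = window.find_open_file(file)
if view:
    # Hypothetical sketch: fall through to window.open_file() whenever the
    # caller requested another group or a new selection (side by side).
    opens_in_current_group = group == -1 or window.active_group() == group
    wants_new_selection = (flags & (sublime.ADD_TO_SELECTION | sublime.REPLACE_MRU)) != 0
    if opens_in_current_group and not wants_new_selection:
        return Promise.resolve(view)
```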
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/core/open.py`
Content:
```
1 from .logging import exception_log
2 from .promise import Promise
3 from .promise import ResolveFunc
4 from .protocol import DocumentUri
5 from .protocol import Range
6 from .protocol import RangeLsp
7 from .typing import Dict, Tuple, Optional
8 from .url import parse_uri
9 from .views import range_to_region
10 import os
11 import sublime
12 import subprocess
13
14
15 opening_files = {} # type: Dict[str, Tuple[Promise[Optional[sublime.View]], ResolveFunc[Optional[sublime.View]]]]
16
17
18 def open_file(
19 window: sublime.Window, uri: DocumentUri, flags: int = 0, group: int = -1
20 ) -> Promise[Optional[sublime.View]]:
21 """
22 Open a file asynchronously.
23 It is only safe to call this function from the UI thread.
24 The provided uri MUST be a file URI
25 """
26 file = parse_uri(uri)[1]
27 # window.open_file brings the file to focus if it's already opened, which we don't want.
28 # So we first check if there's already a view for that file.
29 view = window.find_open_file(file)
30 if view:
31 return Promise.resolve(view)
32
33 view = window.open_file(file, flags, group)
34 if not view.is_loading():
35 # It's already loaded. Possibly already open in a tab.
36 return Promise.resolve(view)
37
38 # Is the view opening right now? Then return the associated unresolved promise
39 for fn, value in opening_files.items():
40 if fn == file or os.path.samefile(fn, file):
41 # Return the unresolved promise. A future on_load event will resolve the promise.
42 return value[0]
43
44 # Prepare a new promise to be resolved by a future on_load event (see the event listener in main.py)
45 def fullfill(resolve: ResolveFunc[Optional[sublime.View]]) -> None:
46 global opening_files
47 # Save the promise in the first element of the tuple -- except we cannot yet do that here
48 opening_files[file] = (None, resolve) # type: ignore
49
50 promise = Promise(fullfill)
51 tup = opening_files[file]
52 # Save the promise in the first element of the tuple so that the for-loop above can return it
53 opening_files[file] = (promise, tup[1])
54 return promise
55
56
57 def center_selection(v: sublime.View, r: RangeLsp) -> sublime.View:
58 selection = range_to_region(Range.from_lsp(r), v)
59 v.run_command("lsp_selection_set", {"regions": [(selection.a, selection.a)]})
60 window = v.window()
61 if window:
62 window.focus_view(v)
63 if int(sublime.version()) >= 4124:
64 v.show_at_center(selection, animate=False)
65 else:
66 # TODO: remove later when a stable build lands
67 v.show_at_center(selection) # type: ignore
68 return v
69
70
71 def open_externally(uri: str, take_focus: bool) -> bool:
72 """
73 A blocking function that invokes the OS's "open with default extension"
74 """
75 try:
76 # TODO: handle take_focus
77 if sublime.platform() == "windows":
78 os.startfile(uri) # type: ignore
79 elif sublime.platform() == "osx":
80 subprocess.check_call(("/usr/bin/open", uri))
81 else: # linux
82 subprocess.check_call(("xdg-open", uri))
83 return True
84 except Exception as ex:
85 exception_log("Failed to open {}".format(uri), ex)
86 return False
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugin/core/open.py b/plugin/core/open.py
--- a/plugin/core/open.py
+++ b/plugin/core/open.py
@@ -24,11 +24,15 @@
The provided uri MUST be a file URI
"""
file = parse_uri(uri)[1]
- # window.open_file brings the file to focus if it's already opened, which we don't want.
- # So we first check if there's already a view for that file.
+ # window.open_file brings the file to focus if it's already opened, which we don't want (unless it's supposed
+ # to open as a separate view).
view = window.find_open_file(file)
if view:
- return Promise.resolve(view)
+ opens_in_current_group = group == -1 or window.active_group() == group
+ opens_as_new_selection = (flags & (sublime.ADD_TO_SELECTION | sublime.REPLACE_MRU)) != 0
+ return_existing_view = opens_in_current_group and not opens_as_new_selection
+ if return_existing_view:
+ return Promise.resolve(view)
view = window.open_file(file, flags, group)
if not view.is_loading():
| {"golden_diff": "diff --git a/plugin/core/open.py b/plugin/core/open.py\n--- a/plugin/core/open.py\n+++ b/plugin/core/open.py\n@@ -24,11 +24,15 @@\n The provided uri MUST be a file URI\n \"\"\"\n file = parse_uri(uri)[1]\n- # window.open_file brings the file to focus if it's already opened, which we don't want.\n- # So we first check if there's already a view for that file.\n+ # window.open_file brings the file to focus if it's already opened, which we don't want (unless it's supposed\n+ # to open as a separate view).\n view = window.find_open_file(file)\n if view:\n- return Promise.resolve(view)\n+ opens_in_current_group = group == -1 or window.active_group() == group\n+ opens_as_new_selection = (flags & (sublime.ADD_TO_SELECTION | sublime.REPLACE_MRU)) != 0\n+ return_existing_view = opens_in_current_group and not opens_as_new_selection\n+ if return_existing_view:\n+ return Promise.resolve(view)\n \n view = window.open_file(file, flags, group)\n if not view.is_loading():\n", "issue": "Side by side option for symbol action links in hover popup doesn't work if location is in same file\n**Describe the bug**\r\nThe side by side icon link for \"Definition\" / \"Type Definition\" / \"Declaration\" from the hover popup doesn't work if the location of the definition/declaration is in the same file.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Have `\"show_symbol_action_links\": true` in the settings (this is the default value)\r\n2. Hover over symbol (e.g. function call) which has a definition in the same file\r\n3. Click on \u25e8 next to \"Definition\", or use <kbd>Ctrl</kbd> + click on the text link\r\n4. See that the view scrolls to the location, instead of opening the location in a new tab to the right\r\n\r\n**Expected behavior**\r\nLSP should open the definition in a new new to the right, similar to how the built-in definitions popup from ST does\r\n\r\n**Environment (please complete the following information):**\r\n- OS: Win 10\r\n- LSP version: main\r\n\r\n**Additional context**\r\n\r\nSeems like the `flags` argument which includes the \"side_by_side\" information is lost/ignored here:\r\nhttps://github.com/sublimelsp/LSP/blob/1bcd518102c1516c9d808c974b7d2a5eba7d0b80/plugin/core/open.py#L30-L31\n", "before_files": [{"content": "from .logging import exception_log\nfrom .promise import Promise\nfrom .promise import ResolveFunc\nfrom .protocol import DocumentUri\nfrom .protocol import Range\nfrom .protocol import RangeLsp\nfrom .typing import Dict, Tuple, Optional\nfrom .url import parse_uri\nfrom .views import range_to_region\nimport os\nimport sublime\nimport subprocess\n\n\nopening_files = {} # type: Dict[str, Tuple[Promise[Optional[sublime.View]], ResolveFunc[Optional[sublime.View]]]]\n\n\ndef open_file(\n window: sublime.Window, uri: DocumentUri, flags: int = 0, group: int = -1\n) -> Promise[Optional[sublime.View]]:\n \"\"\"\n Open a file asynchronously.\n It is only safe to call this function from the UI thread.\n The provided uri MUST be a file URI\n \"\"\"\n file = parse_uri(uri)[1]\n # window.open_file brings the file to focus if it's already opened, which we don't want.\n # So we first check if there's already a view for that file.\n view = window.find_open_file(file)\n if view:\n return Promise.resolve(view)\n\n view = window.open_file(file, flags, group)\n if not view.is_loading():\n # It's already loaded. Possibly already open in a tab.\n return Promise.resolve(view)\n\n # Is the view opening right now? 
Then return the associated unresolved promise\n for fn, value in opening_files.items():\n if fn == file or os.path.samefile(fn, file):\n # Return the unresolved promise. A future on_load event will resolve the promise.\n return value[0]\n\n # Prepare a new promise to be resolved by a future on_load event (see the event listener in main.py)\n def fullfill(resolve: ResolveFunc[Optional[sublime.View]]) -> None:\n global opening_files\n # Save the promise in the first element of the tuple -- except we cannot yet do that here\n opening_files[file] = (None, resolve) # type: ignore\n\n promise = Promise(fullfill)\n tup = opening_files[file]\n # Save the promise in the first element of the tuple so that the for-loop above can return it\n opening_files[file] = (promise, tup[1])\n return promise\n\n\ndef center_selection(v: sublime.View, r: RangeLsp) -> sublime.View:\n selection = range_to_region(Range.from_lsp(r), v)\n v.run_command(\"lsp_selection_set\", {\"regions\": [(selection.a, selection.a)]})\n window = v.window()\n if window:\n window.focus_view(v)\n if int(sublime.version()) >= 4124:\n v.show_at_center(selection, animate=False)\n else:\n # TODO: remove later when a stable build lands\n v.show_at_center(selection) # type: ignore\n return v\n\n\ndef open_externally(uri: str, take_focus: bool) -> bool:\n \"\"\"\n A blocking function that invokes the OS's \"open with default extension\"\n \"\"\"\n try:\n # TODO: handle take_focus\n if sublime.platform() == \"windows\":\n os.startfile(uri) # type: ignore\n elif sublime.platform() == \"osx\":\n subprocess.check_call((\"/usr/bin/open\", uri))\n else: # linux\n subprocess.check_call((\"xdg-open\", uri))\n return True\n except Exception as ex:\n exception_log(\"Failed to open {}\".format(uri), ex)\n return False\n", "path": "plugin/core/open.py"}], "after_files": [{"content": "from .logging import exception_log\nfrom .promise import Promise\nfrom .promise import ResolveFunc\nfrom .protocol import DocumentUri\nfrom .protocol import Range\nfrom .protocol import RangeLsp\nfrom .typing import Dict, Tuple, Optional\nfrom .url import parse_uri\nfrom .views import range_to_region\nimport os\nimport sublime\nimport subprocess\n\n\nopening_files = {} # type: Dict[str, Tuple[Promise[Optional[sublime.View]], ResolveFunc[Optional[sublime.View]]]]\n\n\ndef open_file(\n window: sublime.Window, uri: DocumentUri, flags: int = 0, group: int = -1\n) -> Promise[Optional[sublime.View]]:\n \"\"\"\n Open a file asynchronously.\n It is only safe to call this function from the UI thread.\n The provided uri MUST be a file URI\n \"\"\"\n file = parse_uri(uri)[1]\n # window.open_file brings the file to focus if it's already opened, which we don't want (unless it's supposed\n # to open as a separate view).\n view = window.find_open_file(file)\n if view:\n opens_in_current_group = group == -1 or window.active_group() == group\n opens_as_new_selection = (flags & (sublime.ADD_TO_SELECTION | sublime.REPLACE_MRU)) != 0\n return_existing_view = opens_in_current_group and not opens_as_new_selection\n if return_existing_view:\n return Promise.resolve(view)\n\n view = window.open_file(file, flags, group)\n if not view.is_loading():\n # It's already loaded. Possibly already open in a tab.\n return Promise.resolve(view)\n\n # Is the view opening right now? Then return the associated unresolved promise\n for fn, value in opening_files.items():\n if fn == file or os.path.samefile(fn, file):\n # Return the unresolved promise. 
A future on_load event will resolve the promise.\n return value[0]\n\n # Prepare a new promise to be resolved by a future on_load event (see the event listener in main.py)\n def fullfill(resolve: ResolveFunc[Optional[sublime.View]]) -> None:\n global opening_files\n # Save the promise in the first element of the tuple -- except we cannot yet do that here\n opening_files[file] = (None, resolve) # type: ignore\n\n promise = Promise(fullfill)\n tup = opening_files[file]\n # Save the promise in the first element of the tuple so that the for-loop above can return it\n opening_files[file] = (promise, tup[1])\n return promise\n\n\ndef center_selection(v: sublime.View, r: RangeLsp) -> sublime.View:\n selection = range_to_region(Range.from_lsp(r), v)\n v.run_command(\"lsp_selection_set\", {\"regions\": [(selection.a, selection.a)]})\n window = v.window()\n if window:\n window.focus_view(v)\n if int(sublime.version()) >= 4124:\n v.show_at_center(selection, animate=False)\n else:\n # TODO: remove later when a stable build lands\n v.show_at_center(selection) # type: ignore\n return v\n\n\ndef open_externally(uri: str, take_focus: bool) -> bool:\n \"\"\"\n A blocking function that invokes the OS's \"open with default extension\"\n \"\"\"\n try:\n # TODO: handle take_focus\n if sublime.platform() == \"windows\":\n os.startfile(uri) # type: ignore\n elif sublime.platform() == \"osx\":\n subprocess.check_call((\"/usr/bin/open\", uri))\n else: # linux\n subprocess.check_call((\"xdg-open\", uri))\n return True\n except Exception as ex:\n exception_log(\"Failed to open {}\".format(uri), ex)\n return False\n", "path": "plugin/core/open.py"}]} | 1,490 | 258 |
gh_patches_debug_42600 | rasdani/github-patches | git_diff | pyro-ppl__pyro-145 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Provide default implementation of batch_log_pdf
Could we provide a default implementation of `batch_log_pdf` as a simple for loop?
```py
class Distribution(object):
...
def batch_log_pdf(self, x, batch_size, *args, **kwargs):
        result = torch.Tensor(batch_size)  # a tensor of size batch_size, not a 1-element tensor
for i in range(batch_size):
result[i] = self.log_pdf(x[i], *args, **kwargs)
return torch.autograd.Variable(result) # Caller decides whether to .sum().
```
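A natural companion default, assuming `batch_log_pdf` returns the per-sample log densities, would be to derive `log_pdf` from it rather than the reverse:
```py
def log_pdf(self, x, *args, **kwargs):
    # Collapse the per-sample log densities into a single scalar.
    return torch.sum(self.batch_log_pdf(x, *args, **kwargs))
```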
Or do we want to instead implement correct handling of `NotImplementedError`s everywhere `batch_log_pdf` is used?
Disclaimer: I don't understand what `batch_log_pdf` does, and there is no docstring.
Edited to not sum the result.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/distributions/distribution.py`
Content:
```
1 class Distribution(object):
2 """
3 Distribution abstract base class
4 """
5
6 def __init__(self, *args, **kwargs):
7 """
8 Constructor for base distribution class.
9
10 Currently takes no explicit arguments.
11 """
12 self.reparameterized = False
13
14 def __call__(self, *args, **kwargs):
15 """
16 Samples on call
17 """
18 return self.sample(*args, **kwargs)
19
20 def sample(self, *args, **kwargs):
21 """
22 Virtual sample method.
23 """
24 raise NotImplementedError()
25
26 def log_pdf(self, x):
27 raise NotImplementedError()
28
29 def batch_log_pdf(self, x, batch_size):
30 raise NotImplementedError()
31
32 def support(self):
33 raise NotImplementedError("Support not supported for {}".format(str(type(self))))
34
35 def analytic_mean(self, *args, **kwargs):
36 """
37 Analytic mean of the distribution, to be implemented by derived classes.
38 Note that this is optional, and currently only used for testing distributions.
39 :return: Analytic mean, assuming it can be computed analytically given the distribution parameters
40 :rtype: torch.autograd.Variable.
41 """
42 raise NotImplementedError("Method not implemented by the subclass {}".format(str(type(self))))
43
44 def analytic_var(self, *args, **kwargs):
45 """
46 Analytic variance of the distribution, to be implemented by derived classes.
47 Note that this is optional, and currently only used for testing distributions.
48 :return: Analytic variance, assuming it can be computed analytically given the distribution parameters
49 :rtype: torch.autograd.Variable.
50 """
51 raise NotImplementedError("Method not implemented by the subclass {}".format(str(type(self))))
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyro/distributions/distribution.py b/pyro/distributions/distribution.py
--- a/pyro/distributions/distribution.py
+++ b/pyro/distributions/distribution.py
@@ -1,6 +1,17 @@
+import torch
+
+
class Distribution(object):
"""
- Distribution abstract base class
+ Abstract base class for probability distributions.
+
+ Instances can either be constructed from a fixed parameter and called without paramters,
+ or constructed without a parameter and called with a paramter.
+ It is not allowed to specify a parameter both during construction and when calling.
+ When calling with a parameter, it is preferred to use one of the singleton instances
+ in pyro.distributions rather than constructing a new instance without a parameter.
+
+ Derived classes must implement the `sample`, and `batch_log_pdf` methods.
"""
def __init__(self, *args, **kwargs):
@@ -13,39 +24,69 @@
def __call__(self, *args, **kwargs):
"""
- Samples on call
+ Samples a random value.
+
+ :return: A random value.
+ :rtype: torch.autograd.Variable
"""
return self.sample(*args, **kwargs)
def sample(self, *args, **kwargs):
"""
- Virtual sample method.
+ Samples a random value.
+
+ :return: A random value.
+ :rtype: torch.autograd.Variable
"""
- raise NotImplementedError()
+ raise NotImplementedError
- def log_pdf(self, x):
- raise NotImplementedError()
+ def log_pdf(self, x, *args, **kwargs):
+ """
+ Evaluates total log probability density for one or a batch of samples and parameters.
- def batch_log_pdf(self, x, batch_size):
- raise NotImplementedError()
+ :param torch.autograd.Variable x: A value.
+ :return: total log probability density as a one-dimensional torch.autograd.Variable of size 1.
+ :rtype: torch.autograd.Variable
+ """
+ return torch.sum(self.batch_log_pdf(x, *args, **kwargs))
- def support(self):
- raise NotImplementedError("Support not supported for {}".format(str(type(self))))
+ def batch_log_pdf(self, x, *args, **kwargs):
+ """
+ Evaluates log probability densities for one or a batch of samples and parameters.
+
+ :param torch.autograd.Variable x: A single value or a batch of values batched along axis 0.
+ :return: log probability densities as a one-dimensional torch.autograd.Variable.
+ :rtype: torch.autograd.Variable
+ """
+ raise NotImplementedError
+
+ def support(self, *args, **kwargs):
+ """
+ Returns a representation of the distribution's support.
+
+ :return: A representation of the distribution's support.
+ :rtype: torch.Tensor
+ """
+ raise NotImplementedError("Support not implemented for {}".format(type(self)))
def analytic_mean(self, *args, **kwargs):
"""
Analytic mean of the distribution, to be implemented by derived classes.
+
Note that this is optional, and currently only used for testing distributions.
+
:return: Analytic mean, assuming it can be computed analytically given the distribution parameters
:rtype: torch.autograd.Variable.
"""
- raise NotImplementedError("Method not implemented by the subclass {}".format(str(type(self))))
+ raise NotImplementedError("Method not implemented by the subclass {}".format(type(self)))
def analytic_var(self, *args, **kwargs):
"""
Analytic variance of the distribution, to be implemented by derived classes.
+
Note that this is optional, and currently only used for testing distributions.
+
:return: Analytic variance, assuming it can be computed analytically given the distribution parameters
:rtype: torch.autograd.Variable.
"""
- raise NotImplementedError("Method not implemented by the subclass {}".format(str(type(self))))
+ raise NotImplementedError("Method not implemented by the subclass {}".format(type(self)))
| {"golden_diff": "diff --git a/pyro/distributions/distribution.py b/pyro/distributions/distribution.py\n--- a/pyro/distributions/distribution.py\n+++ b/pyro/distributions/distribution.py\n@@ -1,6 +1,17 @@\n+import torch\n+\n+\n class Distribution(object):\n \"\"\"\n- Distribution abstract base class\n+ Abstract base class for probability distributions.\n+\n+ Instances can either be constructed from a fixed parameter and called without paramters,\n+ or constructed without a parameter and called with a paramter.\n+ It is not allowed to specify a parameter both during construction and when calling.\n+ When calling with a parameter, it is preferred to use one of the singleton instances\n+ in pyro.distributions rather than constructing a new instance without a parameter.\n+\n+ Derived classes must implement the `sample`, and `batch_log_pdf` methods.\n \"\"\"\n \n def __init__(self, *args, **kwargs):\n@@ -13,39 +24,69 @@\n \n def __call__(self, *args, **kwargs):\n \"\"\"\n- Samples on call\n+ Samples a random value.\n+\n+ :return: A random value.\n+ :rtype: torch.autograd.Variable\n \"\"\"\n return self.sample(*args, **kwargs)\n \n def sample(self, *args, **kwargs):\n \"\"\"\n- Virtual sample method.\n+ Samples a random value.\n+\n+ :return: A random value.\n+ :rtype: torch.autograd.Variable\n \"\"\"\n- raise NotImplementedError()\n+ raise NotImplementedError\n \n- def log_pdf(self, x):\n- raise NotImplementedError()\n+ def log_pdf(self, x, *args, **kwargs):\n+ \"\"\"\n+ Evaluates total log probability density for one or a batch of samples and parameters.\n \n- def batch_log_pdf(self, x, batch_size):\n- raise NotImplementedError()\n+ :param torch.autograd.Variable x: A value.\n+ :return: total log probability density as a one-dimensional torch.autograd.Variable of size 1.\n+ :rtype: torch.autograd.Variable\n+ \"\"\"\n+ return torch.sum(self.batch_log_pdf(x, *args, **kwargs))\n \n- def support(self):\n- raise NotImplementedError(\"Support not supported for {}\".format(str(type(self))))\n+ def batch_log_pdf(self, x, *args, **kwargs):\n+ \"\"\"\n+ Evaluates log probability densities for one or a batch of samples and parameters.\n+\n+ :param torch.autograd.Variable x: A single value or a batch of values batched along axis 0.\n+ :return: log probability densities as a one-dimensional torch.autograd.Variable.\n+ :rtype: torch.autograd.Variable\n+ \"\"\"\n+ raise NotImplementedError\n+\n+ def support(self, *args, **kwargs):\n+ \"\"\"\n+ Returns a representation of the distribution's support.\n+\n+ :return: A representation of the distribution's support.\n+ :rtype: torch.Tensor\n+ \"\"\"\n+ raise NotImplementedError(\"Support not implemented for {}\".format(type(self)))\n \n def analytic_mean(self, *args, **kwargs):\n \"\"\"\n Analytic mean of the distribution, to be implemented by derived classes.\n+\n Note that this is optional, and currently only used for testing distributions.\n+\n :return: Analytic mean, assuming it can be computed analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n- raise NotImplementedError(\"Method not implemented by the subclass {}\".format(str(type(self))))\n+ raise NotImplementedError(\"Method not implemented by the subclass {}\".format(type(self)))\n \n def analytic_var(self, *args, **kwargs):\n \"\"\"\n Analytic variance of the distribution, to be implemented by derived classes.\n+\n Note that this is optional, and currently only used for testing distributions.\n+\n :return: Analytic variance, assuming it can be computed 
analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n- raise NotImplementedError(\"Method not implemented by the subclass {}\".format(str(type(self))))\n+ raise NotImplementedError(\"Method not implemented by the subclass {}\".format(type(self)))\n", "issue": "Provide default implementation of batch_log_pdf\nCould we provide a default implementation of `batch_log_pdf` as a simple for loop?\r\n```py\r\nclass Distribution(object):\r\n ...\r\n def batch_log_pdf(self, x, batch_size, *args, **kwargs):\r\n result = torch.Tensor([batch_size])\r\n for i in range(batch_size):\r\n result[i] = self.log_pdf(x[i], *args, **kwargs)\r\n return torch.autograd.Variable(result) # Caller decides whether to .sum().\r\n```\r\nOr do we want to instead implement correct handling of `NotImplementedError`s everywhere `batch_log_pdf` is used?\r\n\r\nDisclaimer: I don't understand what `batch_log_pdf` does, and there is no docstring.\r\n\r\nEdited to not sum the result.\n", "before_files": [{"content": "class Distribution(object):\n \"\"\"\n Distribution abstract base class\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Constructor for base distribution class.\n\n Currently takes no explicit arguments.\n \"\"\"\n self.reparameterized = False\n\n def __call__(self, *args, **kwargs):\n \"\"\"\n Samples on call\n \"\"\"\n return self.sample(*args, **kwargs)\n\n def sample(self, *args, **kwargs):\n \"\"\"\n Virtual sample method.\n \"\"\"\n raise NotImplementedError()\n\n def log_pdf(self, x):\n raise NotImplementedError()\n\n def batch_log_pdf(self, x, batch_size):\n raise NotImplementedError()\n\n def support(self):\n raise NotImplementedError(\"Support not supported for {}\".format(str(type(self))))\n\n def analytic_mean(self, *args, **kwargs):\n \"\"\"\n Analytic mean of the distribution, to be implemented by derived classes.\n Note that this is optional, and currently only used for testing distributions.\n :return: Analytic mean, assuming it can be computed analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n raise NotImplementedError(\"Method not implemented by the subclass {}\".format(str(type(self))))\n\n def analytic_var(self, *args, **kwargs):\n \"\"\"\n Analytic variance of the distribution, to be implemented by derived classes.\n Note that this is optional, and currently only used for testing distributions.\n :return: Analytic variance, assuming it can be computed analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n raise NotImplementedError(\"Method not implemented by the subclass {}\".format(str(type(self))))\n", "path": "pyro/distributions/distribution.py"}], "after_files": [{"content": "import torch\n\n\nclass Distribution(object):\n \"\"\"\n Abstract base class for probability distributions.\n\n Instances can either be constructed from a fixed parameter and called without paramters,\n or constructed without a parameter and called with a paramter.\n It is not allowed to specify a parameter both during construction and when calling.\n When calling with a parameter, it is preferred to use one of the singleton instances\n in pyro.distributions rather than constructing a new instance without a parameter.\n\n Derived classes must implement the `sample`, and `batch_log_pdf` methods.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Constructor for base distribution class.\n\n Currently takes no explicit arguments.\n \"\"\"\n self.reparameterized = False\n\n def 
__call__(self, *args, **kwargs):\n \"\"\"\n Samples a random value.\n\n :return: A random value.\n :rtype: torch.autograd.Variable\n \"\"\"\n return self.sample(*args, **kwargs)\n\n def sample(self, *args, **kwargs):\n \"\"\"\n Samples a random value.\n\n :return: A random value.\n :rtype: torch.autograd.Variable\n \"\"\"\n raise NotImplementedError\n\n def log_pdf(self, x, *args, **kwargs):\n \"\"\"\n Evaluates total log probability density for one or a batch of samples and parameters.\n\n :param torch.autograd.Variable x: A value.\n :return: total log probability density as a one-dimensional torch.autograd.Variable of size 1.\n :rtype: torch.autograd.Variable\n \"\"\"\n return torch.sum(self.batch_log_pdf(x, *args, **kwargs))\n\n def batch_log_pdf(self, x, *args, **kwargs):\n \"\"\"\n Evaluates log probability densities for one or a batch of samples and parameters.\n\n :param torch.autograd.Variable x: A single value or a batch of values batched along axis 0.\n :return: log probability densities as a one-dimensional torch.autograd.Variable.\n :rtype: torch.autograd.Variable\n \"\"\"\n raise NotImplementedError\n\n def support(self, *args, **kwargs):\n \"\"\"\n Returns a representation of the distribution's support.\n\n :return: A representation of the distribution's support.\n :rtype: torch.Tensor\n \"\"\"\n raise NotImplementedError(\"Support not implemented for {}\".format(type(self)))\n\n def analytic_mean(self, *args, **kwargs):\n \"\"\"\n Analytic mean of the distribution, to be implemented by derived classes.\n\n Note that this is optional, and currently only used for testing distributions.\n\n :return: Analytic mean, assuming it can be computed analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n raise NotImplementedError(\"Method not implemented by the subclass {}\".format(type(self)))\n\n def analytic_var(self, *args, **kwargs):\n \"\"\"\n Analytic variance of the distribution, to be implemented by derived classes.\n\n Note that this is optional, and currently only used for testing distributions.\n\n :return: Analytic variance, assuming it can be computed analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n raise NotImplementedError(\"Method not implemented by the subclass {}\".format(type(self)))\n", "path": "pyro/distributions/distribution.py"}]} | 865 | 876 |
gh_patches_debug_32842 | rasdani/github-patches | git_diff | TheAlgorithms__Python-9069 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Delete base85 algorithm
### Describe your change:
Re #6216
Normally, I'm not in favour of just deleting algorithms, but I would make the argument that this is not an algorithm, rather just a snippet of code that utilises another library.
Per `CONTRIBUTING.md`
> Algorithms in this repo should not be how-to examples for existing Python packages. Instead, they should perform internal calculations or manipulations to convert input values into different output values
This `base85` algorithm is essentially two lines of code that purely utilise a single library, and the doctests only test that external library.
This repository should not contain examples of how to use a certain library; that is what the library documentation is for:
https://docs.python.org/3/library/base64.html
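For contrast, an implementation that would qualify as an algorithm has to perform the 4-bytes-to-5-digits conversion itself. A minimal from-scratch sketch of Ascii85 encoding (illustrative only; it omits the Adobe `z` shortcut for all-zero groups):
```py
def ascii85_encode(data: bytes) -> bytes:
    # Each 4-byte group is read as a big-endian 32-bit integer and rewritten
    # as 5 base-85 digits, offset into printable ASCII starting at "!" (33).
    out = []
    for i in range(0, len(data), 4):
        chunk = data[i:i + 4]
        pad = 4 - len(chunk)
        n = int.from_bytes(chunk + b"\x00" * pad, "big")
        digits = []
        for _ in range(5):
            digits.append(chr(n % 85 + 33))
            n //= 85
        # Most-significant digit first; drop one digit per padding byte.
        out.extend(digits[::-1][:5 - pad])
    return "".join(out).encode("ascii")
```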
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Delete an algorithm
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ciphers/base85.py`
Content:
```
1 import base64
2
3
4 def base85_encode(string: str) -> bytes:
5 """
6 >>> base85_encode("")
7 b''
8 >>> base85_encode("12345")
9 b'0etOA2#'
10 >>> base85_encode("base 85")
11 b'@UX=h+?24'
12 """
13 # encoded the input to a bytes-like object and then a85encode that
14 return base64.a85encode(string.encode("utf-8"))
15
16
17 def base85_decode(a85encoded: bytes) -> str:
18 """
19 >>> base85_decode(b"")
20 ''
21 >>> base85_decode(b"0etOA2#")
22 '12345'
23 >>> base85_decode(b"@UX=h+?24")
24 'base 85'
25 """
26 # a85decode the input into bytes and decode that into a human readable string
27 return base64.a85decode(a85encoded).decode("utf-8")
28
29
30 if __name__ == "__main__":
31 import doctest
32
33 doctest.testmod()
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ciphers/base85.py b/ciphers/base85.py
--- a/ciphers/base85.py
+++ b/ciphers/base85.py
@@ -1,30 +1,55 @@
-import base64
+"""
+Base85 (Ascii85) encoding and decoding
+https://en.wikipedia.org/wiki/Ascii85
+"""
-def base85_encode(string: str) -> bytes:
+
+def _base10_to_85(d: int) -> str:
+ return "".join(chr(d % 85 + 33)) + _base10_to_85(d // 85) if d > 0 else ""
+
+
+def _base85_to_10(digits: list) -> int:
+ return sum(char * 85**i for i, char in enumerate(reversed(digits)))
+
+
+def ascii85_encode(data: bytes) -> bytes:
"""
- >>> base85_encode("")
+ >>> ascii85_encode(b"")
b''
- >>> base85_encode("12345")
+ >>> ascii85_encode(b"12345")
b'0etOA2#'
- >>> base85_encode("base 85")
+ >>> ascii85_encode(b"base 85")
b'@UX=h+?24'
"""
- # encoded the input to a bytes-like object and then a85encode that
- return base64.a85encode(string.encode("utf-8"))
+ binary_data = "".join(bin(ord(d))[2:].zfill(8) for d in data.decode("utf-8"))
+ null_values = (32 * ((len(binary_data) // 32) + 1) - len(binary_data)) // 8
+ binary_data = binary_data.ljust(32 * ((len(binary_data) // 32) + 1), "0")
+ b85_chunks = [int(_s, 2) for _s in map("".join, zip(*[iter(binary_data)] * 32))]
+ result = "".join(_base10_to_85(chunk)[::-1] for chunk in b85_chunks)
+ return bytes(result[:-null_values] if null_values % 4 != 0 else result, "utf-8")
-def base85_decode(a85encoded: bytes) -> str:
+def ascii85_decode(data: bytes) -> bytes:
"""
- >>> base85_decode(b"")
- ''
- >>> base85_decode(b"0etOA2#")
- '12345'
- >>> base85_decode(b"@UX=h+?24")
- 'base 85'
+ >>> ascii85_decode(b"")
+ b''
+ >>> ascii85_decode(b"0etOA2#")
+ b'12345'
+ >>> ascii85_decode(b"@UX=h+?24")
+ b'base 85'
"""
- # a85decode the input into bytes and decode that into a human readable string
- return base64.a85decode(a85encoded).decode("utf-8")
+ null_values = 5 * ((len(data) // 5) + 1) - len(data)
+ binary_data = data.decode("utf-8") + "u" * null_values
+ b85_chunks = map("".join, zip(*[iter(binary_data)] * 5))
+ b85_segments = [[ord(_s) - 33 for _s in chunk] for chunk in b85_chunks]
+ results = [bin(_base85_to_10(chunk))[2::].zfill(32) for chunk in b85_segments]
+ char_chunks = [
+ [chr(int(_s, 2)) for _s in map("".join, zip(*[iter(r)] * 8))] for r in results
+ ]
+ result = "".join("".join(char) for char in char_chunks)
+ offset = int(null_values % 5 == 0)
+ return bytes(result[: offset - null_values], "utf-8")
if __name__ == "__main__":
| {"golden_diff": "diff --git a/ciphers/base85.py b/ciphers/base85.py\n--- a/ciphers/base85.py\n+++ b/ciphers/base85.py\n@@ -1,30 +1,55 @@\n-import base64\n+\"\"\"\n+Base85 (Ascii85) encoding and decoding\n \n+https://en.wikipedia.org/wiki/Ascii85\n+\"\"\"\n \n-def base85_encode(string: str) -> bytes:\n+\n+def _base10_to_85(d: int) -> str:\n+ return \"\".join(chr(d % 85 + 33)) + _base10_to_85(d // 85) if d > 0 else \"\"\n+\n+\n+def _base85_to_10(digits: list) -> int:\n+ return sum(char * 85**i for i, char in enumerate(reversed(digits)))\n+\n+\n+def ascii85_encode(data: bytes) -> bytes:\n \"\"\"\n- >>> base85_encode(\"\")\n+ >>> ascii85_encode(b\"\")\n b''\n- >>> base85_encode(\"12345\")\n+ >>> ascii85_encode(b\"12345\")\n b'0etOA2#'\n- >>> base85_encode(\"base 85\")\n+ >>> ascii85_encode(b\"base 85\")\n b'@UX=h+?24'\n \"\"\"\n- # encoded the input to a bytes-like object and then a85encode that\n- return base64.a85encode(string.encode(\"utf-8\"))\n+ binary_data = \"\".join(bin(ord(d))[2:].zfill(8) for d in data.decode(\"utf-8\"))\n+ null_values = (32 * ((len(binary_data) // 32) + 1) - len(binary_data)) // 8\n+ binary_data = binary_data.ljust(32 * ((len(binary_data) // 32) + 1), \"0\")\n+ b85_chunks = [int(_s, 2) for _s in map(\"\".join, zip(*[iter(binary_data)] * 32))]\n+ result = \"\".join(_base10_to_85(chunk)[::-1] for chunk in b85_chunks)\n+ return bytes(result[:-null_values] if null_values % 4 != 0 else result, \"utf-8\")\n \n \n-def base85_decode(a85encoded: bytes) -> str:\n+def ascii85_decode(data: bytes) -> bytes:\n \"\"\"\n- >>> base85_decode(b\"\")\n- ''\n- >>> base85_decode(b\"0etOA2#\")\n- '12345'\n- >>> base85_decode(b\"@UX=h+?24\")\n- 'base 85'\n+ >>> ascii85_decode(b\"\")\n+ b''\n+ >>> ascii85_decode(b\"0etOA2#\")\n+ b'12345'\n+ >>> ascii85_decode(b\"@UX=h+?24\")\n+ b'base 85'\n \"\"\"\n- # a85decode the input into bytes and decode that into a human readable string\n- return base64.a85decode(a85encoded).decode(\"utf-8\")\n+ null_values = 5 * ((len(data) // 5) + 1) - len(data)\n+ binary_data = data.decode(\"utf-8\") + \"u\" * null_values\n+ b85_chunks = map(\"\".join, zip(*[iter(binary_data)] * 5))\n+ b85_segments = [[ord(_s) - 33 for _s in chunk] for chunk in b85_chunks]\n+ results = [bin(_base85_to_10(chunk))[2::].zfill(32) for chunk in b85_segments]\n+ char_chunks = [\n+ [chr(int(_s, 2)) for _s in map(\"\".join, zip(*[iter(r)] * 8))] for r in results\n+ ]\n+ result = \"\".join(\"\".join(char) for char in char_chunks)\n+ offset = int(null_values % 5 == 0)\n+ return bytes(result[: offset - null_values], \"utf-8\")\n \n \n if __name__ == \"__main__\":\n", "issue": "Delete base85 algorithm\n### Describe your change:\r\nRe #6216\r\n\r\nNormally, I'm not in favour of just deleting algorithms, but I would make the argument that this is not an algorithm, rather just a snippet of code that utilises another library.\r\n\r\nPer `CONTRIBTUING.md`\r\n> Algorithms in this repo should not be how-to examples for existing Python packages. Instead, they should perform internal calculations or manipulations to convert input values into different output values\r\nThis `base85` algorithm has essentially got two lines of code that purely utilise a singular library. 
The doctests only test an external library\r\n\r\nThis repository should not contains examples on how to use a certain library, that would be the library documentation here\r\nhttps://docs.python.org/3/library/base64.html\r\n\r\n\r\n* [ ] Add an algorithm?\r\n* [ ] Fix a bug or typo in an existing algorithm?\r\n* [ ] Documentation change?\r\n* [x] Delete an algorithm\r\n\r\n### Checklist:\r\n* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).\r\n* [x] This pull request is all my own work -- I have not plagiarized.\r\n* [x] I know that pull requests will not be merged if they fail the automated tests.\r\n* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.\r\n* [x] All new Python files are placed inside an existing directory.\r\n* [x] All filenames are in all lowercase characters with no spaces or dashes.\r\n* [x] All functions and variable names follow Python naming conventions.\r\n* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).\r\n* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.\r\n* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.\r\n* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): \"Fixes #ISSUE-NUMBER\".\r\n\n", "before_files": [{"content": "import base64\n\n\ndef base85_encode(string: str) -> bytes:\n \"\"\"\n >>> base85_encode(\"\")\n b''\n >>> base85_encode(\"12345\")\n b'0etOA2#'\n >>> base85_encode(\"base 85\")\n b'@UX=h+?24'\n \"\"\"\n # encoded the input to a bytes-like object and then a85encode that\n return base64.a85encode(string.encode(\"utf-8\"))\n\n\ndef base85_decode(a85encoded: bytes) -> str:\n \"\"\"\n >>> base85_decode(b\"\")\n ''\n >>> base85_decode(b\"0etOA2#\")\n '12345'\n >>> base85_decode(b\"@UX=h+?24\")\n 'base 85'\n \"\"\"\n # a85decode the input into bytes and decode that into a human readable string\n return base64.a85decode(a85encoded).decode(\"utf-8\")\n\n\nif __name__ == \"__main__\":\n import doctest\n\n doctest.testmod()\n", "path": "ciphers/base85.py"}], "after_files": [{"content": "\"\"\"\nBase85 (Ascii85) encoding and decoding\n\nhttps://en.wikipedia.org/wiki/Ascii85\n\"\"\"\n\n\ndef _base10_to_85(d: int) -> str:\n return \"\".join(chr(d % 85 + 33)) + _base10_to_85(d // 85) if d > 0 else \"\"\n\n\ndef _base85_to_10(digits: list) -> int:\n return sum(char * 85**i for i, char in enumerate(reversed(digits)))\n\n\ndef ascii85_encode(data: bytes) -> bytes:\n \"\"\"\n >>> ascii85_encode(b\"\")\n b''\n >>> ascii85_encode(b\"12345\")\n b'0etOA2#'\n >>> ascii85_encode(b\"base 85\")\n b'@UX=h+?24'\n \"\"\"\n binary_data = \"\".join(bin(ord(d))[2:].zfill(8) for d in data.decode(\"utf-8\"))\n null_values = (32 * ((len(binary_data) // 32) + 1) - len(binary_data)) // 8\n binary_data = binary_data.ljust(32 * ((len(binary_data) // 32) + 1), \"0\")\n b85_chunks = [int(_s, 2) for _s in map(\"\".join, zip(*[iter(binary_data)] * 32))]\n result = \"\".join(_base10_to_85(chunk)[::-1] for chunk in b85_chunks)\n return bytes(result[:-null_values] if null_values % 4 != 0 else result, \"utf-8\")\n\n\ndef ascii85_decode(data: bytes) -> bytes:\n \"\"\"\n >>> 
ascii85_decode(b\"\")\n b''\n >>> ascii85_decode(b\"0etOA2#\")\n b'12345'\n >>> ascii85_decode(b\"@UX=h+?24\")\n b'base 85'\n \"\"\"\n null_values = 5 * ((len(data) // 5) + 1) - len(data)\n binary_data = data.decode(\"utf-8\") + \"u\" * null_values\n b85_chunks = map(\"\".join, zip(*[iter(binary_data)] * 5))\n b85_segments = [[ord(_s) - 33 for _s in chunk] for chunk in b85_chunks]\n results = [bin(_base85_to_10(chunk))[2::].zfill(32) for chunk in b85_segments]\n char_chunks = [\n [chr(int(_s, 2)) for _s in map(\"\".join, zip(*[iter(r)] * 8))] for r in results\n ]\n result = \"\".join(\"\".join(char) for char in char_chunks)\n offset = int(null_values % 5 == 0)\n return bytes(result[: offset - null_values], \"utf-8\")\n\n\nif __name__ == \"__main__\":\n import doctest\n\n doctest.testmod()\n", "path": "ciphers/base85.py"}]} | 1,060 | 951 |
gh_patches_debug_28897 | rasdani/github-patches | git_diff | privacyidea__privacyidea-2663 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Database migration fails if the URI contains '%' signs
If the `SQLALCHEMY_DATABASE_URI` contains query parameters like `ssl_ca=/path/to/cert`, the path separators will be URL-encoded with `%` signs.
This fails when passing the URI to the alembic configuration (https://alembic.sqlalchemy.org/en/latest/api/config.html#alembic.config.Config.set_main_option).
The `%` signs should be escaped in the URI string before passing it to alembic.
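The fix can be sketched in one line; this assumes the failure comes from alembic's `configparser`-style interpolation, which treats `%` as special and therefore needs it doubled:
```py
# Escape "%" before handing the URI to alembic's config.
config.set_main_option('sqlalchemy.url', url.replace('%', '%%'))
```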
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `migrations/env.py`
Content:
```
1 from __future__ import with_statement
2 from alembic import context
3 from sqlalchemy import engine_from_config, pool
4 from sqlalchemy.engine.url import make_url
5 from logging.config import fileConfig
6
7 # this is the Alembic Config object, which provides
8 # access to the values within the .ini file in use.
9
10 config = context.config
11
12 # Interpret the config file for Python logging.
13 # This line sets up loggers basically.
14 fileConfig(config.config_file_name)
15
16 # add your model's MetaData object here
17 # for 'autogenerate' support
18 # from myapp import mymodel
19 # target_metadata = mymodel.Base.metadata
20 from flask import current_app
21
22
23 def set_database_url(config):
24 url = current_app.config.get('SQLALCHEMY_DATABASE_URI')
25 try:
26 # In case of MySQL, add ``charset=utf8`` to the parameters (if no charset is set),
27 # because this is what Flask-SQLAlchemy does
28 if url.startswith("mysql"):
29 parsed_url = make_url(url)
30 parsed_url.query.setdefault("charset", "utf8")
31 url = str(parsed_url)
32 except Exception as exx:
33 print(u"Attempted to set charset=utf8 on connection, but failed: {}".format(exx))
34 config.set_main_option('sqlalchemy.url', url)
35
36
37 set_database_url(config)
38 target_metadata = current_app.extensions['migrate'].db.metadata
39
40 # other values from the config, defined by the needs of env.py,
41 # can be acquired:
42 # my_important_option = config.get_main_option("my_important_option")
43 # ... etc.
44
45
46 def run_migrations_offline():
47 """Run migrations in 'offline' mode.
48
49 This configures the context with just a URL
50 and not an Engine, though an Engine is acceptable
51 here as well. By skipping the Engine creation
52 we don't even need a DBAPI to be available.
53
54 Calls to context.execute() here emit the given string to the
55 script output.
56
57 """
58 url = config.get_main_option("sqlalchemy.url")
59 context.configure(url=url)
60
61 with context.begin_transaction():
62 context.run_migrations()
63
64
65 def run_migrations_online():
66 """Run migrations in 'online' mode.
67
68 In this scenario we need to create an Engine
69 and associate a connection with the context.
70
71 """
72 # FIX for Postgres updates
73 url = config.get_section(config.config_ini_section).get("sqlalchemy.url")
74 driver = url.split(":")[0]
75
76 if driver == "postgresql+psycopg2":
77 engine = engine_from_config(
78 config.get_section(config.config_ini_section),
79 prefix='sqlalchemy.',
80 isolation_level="AUTOCOMMIT",
81 poolclass=pool.NullPool)
82 else:
83 engine = engine_from_config(
84 config.get_section(config.config_ini_section),
85 prefix='sqlalchemy.',
86 poolclass=pool.NullPool)
87
88 connection = engine.connect()
89 context.configure(
90 connection=connection,
91 target_metadata=target_metadata,
92 compare_type=True
93 )
94
95 try:
96 with context.begin_transaction():
97 context.run_migrations()
98 finally:
99 connection.close()
100
101 if context.is_offline_mode():
102 print("Running offline")
103 run_migrations_offline()
104 else:
105 print("Running online")
106 run_migrations_online()
107
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/migrations/env.py b/migrations/env.py
--- a/migrations/env.py
+++ b/migrations/env.py
@@ -3,6 +3,7 @@
from sqlalchemy import engine_from_config, pool
from sqlalchemy.engine.url import make_url
from logging.config import fileConfig
+from six.moves.urllib.parse import quote
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
@@ -28,10 +29,13 @@
if url.startswith("mysql"):
parsed_url = make_url(url)
parsed_url.query.setdefault("charset", "utf8")
+ # We need to quote the password in case it contains special chars
+ parsed_url.password = quote(parsed_url.password)
url = str(parsed_url)
except Exception as exx:
print(u"Attempted to set charset=utf8 on connection, but failed: {}".format(exx))
- config.set_main_option('sqlalchemy.url', url)
+ # set_main_option() requires escaped "%" signs in the string
+ config.set_main_option('sqlalchemy.url', url.replace('%', '%%'))
set_database_url(config)
@@ -98,10 +102,10 @@
finally:
connection.close()
+
if context.is_offline_mode():
print("Running offline")
run_migrations_offline()
else:
print("Running online")
run_migrations_online()
-
| {"golden_diff": "diff --git a/migrations/env.py b/migrations/env.py\n--- a/migrations/env.py\n+++ b/migrations/env.py\n@@ -3,6 +3,7 @@\n from sqlalchemy import engine_from_config, pool\n from sqlalchemy.engine.url import make_url\n from logging.config import fileConfig\n+from six.moves.urllib.parse import quote\n \n # this is the Alembic Config object, which provides\n # access to the values within the .ini file in use.\n@@ -28,10 +29,13 @@\n if url.startswith(\"mysql\"):\n parsed_url = make_url(url)\n parsed_url.query.setdefault(\"charset\", \"utf8\")\n+ # We need to quote the password in case it contains special chars\n+ parsed_url.password = quote(parsed_url.password)\n url = str(parsed_url)\n except Exception as exx:\n print(u\"Attempted to set charset=utf8 on connection, but failed: {}\".format(exx))\n- config.set_main_option('sqlalchemy.url', url)\n+ # set_main_option() requires escaped \"%\" signs in the string\n+ config.set_main_option('sqlalchemy.url', url.replace('%', '%%'))\n \n \n set_database_url(config)\n@@ -98,10 +102,10 @@\n finally:\n connection.close()\n \n+\n if context.is_offline_mode():\n print(\"Running offline\")\n run_migrations_offline()\n else:\n print(\"Running online\")\n run_migrations_online()\n-\n", "issue": "Database migration fails if the URI contains '%' signs\nIf the `SQLALCHEMY_DATABASE_URI` contains query parameters like `ssl_ca=/path/to/cert` the path separators will be url-encoded with `%` signs.\r\nThis fails when passing the URI to the alembic configuration (https://alembic.sqlalchemy.org/en/latest/api/config.html#alembic.config.Config.set_main_option).\r\nThe `%` signs should be escaped in the URI string before passing it to alembic.\n", "before_files": [{"content": "from __future__ import with_statement\nfrom alembic import context\nfrom sqlalchemy import engine_from_config, pool\nfrom sqlalchemy.engine.url import make_url\nfrom logging.config import fileConfig\n\n# this is the Alembic Config object, which provides\n# access to the values within the .ini file in use.\n\nconfig = context.config\n\n# Interpret the config file for Python logging.\n# This line sets up loggers basically.\nfileConfig(config.config_file_name)\n\n# add your model's MetaData object here\n# for 'autogenerate' support\n# from myapp import mymodel\n# target_metadata = mymodel.Base.metadata\nfrom flask import current_app\n\n\ndef set_database_url(config):\n url = current_app.config.get('SQLALCHEMY_DATABASE_URI')\n try:\n # In case of MySQL, add ``charset=utf8`` to the parameters (if no charset is set),\n # because this is what Flask-SQLAlchemy does\n if url.startswith(\"mysql\"):\n parsed_url = make_url(url)\n parsed_url.query.setdefault(\"charset\", \"utf8\")\n url = str(parsed_url)\n except Exception as exx:\n print(u\"Attempted to set charset=utf8 on connection, but failed: {}\".format(exx))\n config.set_main_option('sqlalchemy.url', url)\n\n\nset_database_url(config)\ntarget_metadata = current_app.extensions['migrate'].db.metadata\n\n# other values from the config, defined by the needs of env.py,\n# can be acquired:\n# my_important_option = config.get_main_option(\"my_important_option\")\n# ... etc.\n\n\ndef run_migrations_offline():\n \"\"\"Run migrations in 'offline' mode.\n\n This configures the context with just a URL\n and not an Engine, though an Engine is acceptable\n here as well. 
By skipping the Engine creation\n we don't even need a DBAPI to be available.\n\n Calls to context.execute() here emit the given string to the\n script output.\n\n \"\"\"\n url = config.get_main_option(\"sqlalchemy.url\")\n context.configure(url=url)\n\n with context.begin_transaction():\n context.run_migrations()\n\n\ndef run_migrations_online():\n \"\"\"Run migrations in 'online' mode.\n\n In this scenario we need to create an Engine\n and associate a connection with the context.\n\n \"\"\"\n # FIX for Postgres updates\n url = config.get_section(config.config_ini_section).get(\"sqlalchemy.url\")\n driver = url.split(\":\")[0]\n\n if driver == \"postgresql+psycopg2\":\n engine = engine_from_config(\n config.get_section(config.config_ini_section),\n prefix='sqlalchemy.',\n isolation_level=\"AUTOCOMMIT\",\n poolclass=pool.NullPool)\n else:\n engine = engine_from_config(\n config.get_section(config.config_ini_section),\n prefix='sqlalchemy.',\n poolclass=pool.NullPool)\n\n connection = engine.connect()\n context.configure(\n connection=connection,\n target_metadata=target_metadata,\n compare_type=True\n )\n\n try:\n with context.begin_transaction():\n context.run_migrations()\n finally:\n connection.close()\n\nif context.is_offline_mode():\n print(\"Running offline\")\n run_migrations_offline()\nelse:\n print(\"Running online\")\n run_migrations_online()\n\n", "path": "migrations/env.py"}], "after_files": [{"content": "from __future__ import with_statement\nfrom alembic import context\nfrom sqlalchemy import engine_from_config, pool\nfrom sqlalchemy.engine.url import make_url\nfrom logging.config import fileConfig\nfrom six.moves.urllib.parse import quote\n\n# this is the Alembic Config object, which provides\n# access to the values within the .ini file in use.\n\nconfig = context.config\n\n# Interpret the config file for Python logging.\n# This line sets up loggers basically.\nfileConfig(config.config_file_name)\n\n# add your model's MetaData object here\n# for 'autogenerate' support\n# from myapp import mymodel\n# target_metadata = mymodel.Base.metadata\nfrom flask import current_app\n\n\ndef set_database_url(config):\n url = current_app.config.get('SQLALCHEMY_DATABASE_URI')\n try:\n # In case of MySQL, add ``charset=utf8`` to the parameters (if no charset is set),\n # because this is what Flask-SQLAlchemy does\n if url.startswith(\"mysql\"):\n parsed_url = make_url(url)\n parsed_url.query.setdefault(\"charset\", \"utf8\")\n # We need to quote the password in case it contains special chars\n parsed_url.password = quote(parsed_url.password)\n url = str(parsed_url)\n except Exception as exx:\n print(u\"Attempted to set charset=utf8 on connection, but failed: {}\".format(exx))\n # set_main_option() requires escaped \"%\" signs in the string\n config.set_main_option('sqlalchemy.url', url.replace('%', '%%'))\n\n\nset_database_url(config)\ntarget_metadata = current_app.extensions['migrate'].db.metadata\n\n# other values from the config, defined by the needs of env.py,\n# can be acquired:\n# my_important_option = config.get_main_option(\"my_important_option\")\n# ... etc.\n\n\ndef run_migrations_offline():\n \"\"\"Run migrations in 'offline' mode.\n\n This configures the context with just a URL\n and not an Engine, though an Engine is acceptable\n here as well. 
By skipping the Engine creation\n we don't even need a DBAPI to be available.\n\n Calls to context.execute() here emit the given string to the\n script output.\n\n \"\"\"\n url = config.get_main_option(\"sqlalchemy.url\")\n context.configure(url=url)\n\n with context.begin_transaction():\n context.run_migrations()\n\n\ndef run_migrations_online():\n \"\"\"Run migrations in 'online' mode.\n\n In this scenario we need to create an Engine\n and associate a connection with the context.\n\n \"\"\"\n # FIX for Postgres updates\n url = config.get_section(config.config_ini_section).get(\"sqlalchemy.url\")\n driver = url.split(\":\")[0]\n\n if driver == \"postgresql+psycopg2\":\n engine = engine_from_config(\n config.get_section(config.config_ini_section),\n prefix='sqlalchemy.',\n isolation_level=\"AUTOCOMMIT\",\n poolclass=pool.NullPool)\n else:\n engine = engine_from_config(\n config.get_section(config.config_ini_section),\n prefix='sqlalchemy.',\n poolclass=pool.NullPool)\n\n connection = engine.connect()\n context.configure(\n connection=connection,\n target_metadata=target_metadata,\n compare_type=True\n )\n\n try:\n with context.begin_transaction():\n context.run_migrations()\n finally:\n connection.close()\n\n\nif context.is_offline_mode():\n print(\"Running offline\")\n run_migrations_offline()\nelse:\n print(\"Running online\")\n run_migrations_online()\n", "path": "migrations/env.py"}]} | 1,277 | 313 |
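Editor's note on the row above: the patch works because alembic's `config.set_main_option()` treats `%` as a ConfigParser interpolation character, so any literal percent signs in the database URI (for example URL-encoded characters in query parameters such as `ssl_ca=/path/to/cert`) must be doubled before being passed in. A minimal sketch of that escaping, assuming the mutable SQLAlchemy URL objects the patch itself relies on; the helper name is illustrative:

```python
from six.moves.urllib.parse import quote
from sqlalchemy.engine.url import make_url


def escape_db_url_for_alembic(url):
    """Sketch: prepare a database URI for config.set_main_option()."""
    parsed = make_url(url)
    if parsed.password is not None:
        # Quote the password in case it contains special characters.
        parsed.password = quote(parsed.password)
    # set_main_option() interpolates '%' signs, so escape them by doubling.
    return str(parsed).replace("%", "%%")


# config.set_main_option("sqlalchemy.url", escape_db_url_for_alembic(url))
```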
gh_patches_debug_11387 | rasdani/github-patches | git_diff | encode__uvicorn-592 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Uvicorn via gunicorn worker doesn't respect `--forwarded-allow-ips`
I use uvicorn in Docker as a Uvicorn worker for Gunicorn in my FastAPI app. My application needs to know the real client IP of each request, so I use a proxy server that sets the `X-Forwarded-For` header.

Gunicorn has a special option to replace the proxy IP with the real client IP, so I run gunicorn like this:
```
gunicorn \
ppm_telegram_bot.api:app \
 --forwarded-allow-ips="*" \
--worker-class=uvicorn.workers.UvicornWorker \
--bind=0.0.0.0:$PORT
```
Because I'm in a container, my WSGI/ASGI server receives requests not from localhost but from the Docker network.

But the Uvicorn worker doesn't respect Gunicorn's `forwarded-allow-ips`, so `ProxyHeadersMiddleware.trusted_hosts` keeps the default `127.0.0.1`, and I receive the proxy IP instead of the real IP.
https://github.com/encode/uvicorn/blob/9d9f8820a8155e36dcb5e4d4023f470e51aa4e03/uvicorn/middleware/proxy_headers.py#L14-L17
It looks like uvicorn-worker can forward this information to config via `config_kwargs`: https://github.com/encode/uvicorn/blob/9d9f8820a8155e36dcb5e4d4023f470e51aa4e03/uvicorn/workers.py#L28-L35
I could open a PR with this change, if required 🙌
--- END ISSUE ---
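Editor's sketch of the change the reporter proposes: Gunicorn already exposes the parsed `--forwarded-allow-ips` value as `self.cfg.forwarded_allow_ips` inside the worker, so it only needs to be copied into the kwargs that build uvicorn's `Config` (which is what feeds `ProxyHeadersMiddleware`). A minimal illustration; the helper name and the trimmed kwargs are mine rather than the project's:

```python
from uvicorn.config import Config


def build_worker_config(gunicorn_cfg, timeout, max_requests, callback_notify):
    """Sketch of the config construction done in UvicornWorker.__init__."""
    config_kwargs = {
        "app": None,
        "log_config": None,
        "timeout_keep_alive": gunicorn_cfg.keepalive,
        "timeout_notify": timeout,
        "callback_notify": callback_notify,
        "limit_max_requests": max_requests,
        # Proposed addition: honor gunicorn's --forwarded-allow-ips so the
        # proxy headers middleware trusts the right hosts.
        "forwarded_allow_ips": gunicorn_cfg.forwarded_allow_ips,
    }
    return Config(**config_kwargs)
```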
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/workers.py`
Content:
```
1 import asyncio
2 import logging
3
4 from gunicorn.workers.base import Worker
5 from uvicorn.config import Config
6 from uvicorn.main import Server
7
8
9 class UvicornWorker(Worker):
10 """
11 A worker class for Gunicorn that interfaces with an ASGI consumer callable,
12 rather than a WSGI callable.
13 """
14
15 CONFIG_KWARGS = {"loop": "uvloop", "http": "httptools"}
16
17 def __init__(self, *args, **kwargs):
18 super(UvicornWorker, self).__init__(*args, **kwargs)
19
20 logger = logging.getLogger("uvicorn.error")
21 logger.handlers = self.log.error_log.handlers
22 logger.setLevel(self.log.error_log.level)
23
24 logger = logging.getLogger("uvicorn.access")
25 logger.handlers = self.log.access_log.handlers
26 logger.setLevel(self.log.access_log.level)
27
28 config_kwargs = {
29 "app": None,
30 "log_config": None,
31 "timeout_keep_alive": self.cfg.keepalive,
32 "timeout_notify": self.timeout,
33 "callback_notify": self.callback_notify,
34 "limit_max_requests": self.max_requests,
35 }
36
37 if self.cfg.is_ssl:
38 ssl_kwargs = {
39 "ssl_keyfile": self.cfg.ssl_options.get("keyfile"),
40 "ssl_certfile": self.cfg.ssl_options.get("certfile"),
41 "ssl_version": self.cfg.ssl_options.get("ssl_version"),
42 "ssl_cert_reqs": self.cfg.ssl_options.get("cert_reqs"),
43 "ssl_ca_certs": self.cfg.ssl_options.get("ca_certs"),
44 "ssl_ciphers": self.cfg.ssl_options.get("ciphers"),
45 }
46 config_kwargs.update(ssl_kwargs)
47
48 if self.cfg.settings["backlog"].value:
49 config_kwargs["backlog"] = self.cfg.settings["backlog"].value
50
51 config_kwargs.update(self.CONFIG_KWARGS)
52
53 self.config = Config(**config_kwargs)
54
55 def init_process(self):
56 self.config.setup_event_loop()
57 super(UvicornWorker, self).init_process()
58
59 def init_signals(self):
60 pass
61
62 def run(self):
63 self.config.app = self.wsgi
64 server = Server(config=self.config)
65 loop = asyncio.get_event_loop()
66 loop.run_until_complete(server.serve(sockets=self.sockets))
67
68 async def callback_notify(self):
69 self.notify()
70
71
72 class UvicornH11Worker(UvicornWorker):
73 CONFIG_KWARGS = {"loop": "asyncio", "http": "h11"}
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/uvicorn/workers.py b/uvicorn/workers.py
--- a/uvicorn/workers.py
+++ b/uvicorn/workers.py
@@ -2,6 +2,7 @@
 import logging
 
 from gunicorn.workers.base import Worker
+
 from uvicorn.config import Config
 from uvicorn.main import Server
 
@@ -32,6 +33,7 @@
             "timeout_notify": self.timeout,
             "callback_notify": self.callback_notify,
             "limit_max_requests": self.max_requests,
+            "forwarded_allow_ips": self.cfg.forwarded_allow_ips,
         }
 
         if self.cfg.is_ssl:
| {"golden_diff": "diff --git a/uvicorn/workers.py b/uvicorn/workers.py\n--- a/uvicorn/workers.py\n+++ b/uvicorn/workers.py\n@@ -2,6 +2,7 @@\n import logging\n \n from gunicorn.workers.base import Worker\n+\n from uvicorn.config import Config\n from uvicorn.main import Server\n \n@@ -32,6 +33,7 @@\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n \"limit_max_requests\": self.max_requests,\n+ \"forwarded_allow_ips\": self.cfg.forwarded_allow_ips,\n }\n \n if self.cfg.is_ssl:\n", "issue": "Uvicorn via gunicorn worker doesn't respect `--forwarded-allow-ips`\nI use uvicorn in docker as uvicorn-worker for gunicorn for my fastapi app. My application needs to know the real client IP of each request, so I use proxy-server with the `X-Forwarded-For` header.\r\n\r\nGunicorn has a special option to change proxy-ip to real-ip, so I running gunicorn like this:\r\n```\r\ngunicorn \\\r\n ppm_telegram_bot.api:app \\\r\n --forwarded-allow-ips=\"*\" \r\n --worker-class=uvicorn.workers.UvicornWorker \\\r\n --bind=0.0.0.0:$PORT\r\n```\r\n\r\nBecause I'm in a container, my WSGI/ASGI server receives requests not from the localhost, but from the docker network.\r\n\r\nBut uvicorn-worker doesn't respect gunicorn's `forwarded-allow-ips`, so in `ProxyHeadersMiddleware.trusted_hosts` I receive default `127.0.0.1` and proxy-ip instead of real-ip.\r\nhttps://github.com/encode/uvicorn/blob/9d9f8820a8155e36dcb5e4d4023f470e51aa4e03/uvicorn/middleware/proxy_headers.py#L14-L17\r\n\r\nIt looks like uvicorn-worker can forward this information to config via `config_kwargs`: https://github.com/encode/uvicorn/blob/9d9f8820a8155e36dcb5e4d4023f470e51aa4e03/uvicorn/workers.py#L28-L35\r\n\r\nI could do PR with this change, if required \ud83d\ude4c \n", "before_files": [{"content": "import asyncio\nimport logging\n\nfrom gunicorn.workers.base import Worker\nfrom uvicorn.config import Config\nfrom uvicorn.main import Server\n\n\nclass UvicornWorker(Worker):\n \"\"\"\n A worker class for Gunicorn that interfaces with an ASGI consumer callable,\n rather than a WSGI callable.\n \"\"\"\n\n CONFIG_KWARGS = {\"loop\": \"uvloop\", \"http\": \"httptools\"}\n\n def __init__(self, *args, **kwargs):\n super(UvicornWorker, self).__init__(*args, **kwargs)\n\n logger = logging.getLogger(\"uvicorn.error\")\n logger.handlers = self.log.error_log.handlers\n logger.setLevel(self.log.error_log.level)\n\n logger = logging.getLogger(\"uvicorn.access\")\n logger.handlers = self.log.access_log.handlers\n logger.setLevel(self.log.access_log.level)\n\n config_kwargs = {\n \"app\": None,\n \"log_config\": None,\n \"timeout_keep_alive\": self.cfg.keepalive,\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n \"limit_max_requests\": self.max_requests,\n }\n\n if self.cfg.is_ssl:\n ssl_kwargs = {\n \"ssl_keyfile\": self.cfg.ssl_options.get(\"keyfile\"),\n \"ssl_certfile\": self.cfg.ssl_options.get(\"certfile\"),\n \"ssl_version\": self.cfg.ssl_options.get(\"ssl_version\"),\n \"ssl_cert_reqs\": self.cfg.ssl_options.get(\"cert_reqs\"),\n \"ssl_ca_certs\": self.cfg.ssl_options.get(\"ca_certs\"),\n \"ssl_ciphers\": self.cfg.ssl_options.get(\"ciphers\"),\n }\n config_kwargs.update(ssl_kwargs)\n\n if self.cfg.settings[\"backlog\"].value:\n config_kwargs[\"backlog\"] = self.cfg.settings[\"backlog\"].value\n\n config_kwargs.update(self.CONFIG_KWARGS)\n\n self.config = Config(**config_kwargs)\n\n def init_process(self):\n self.config.setup_event_loop()\n super(UvicornWorker, self).init_process()\n\n def 
init_signals(self):\n pass\n\n def run(self):\n self.config.app = self.wsgi\n server = Server(config=self.config)\n loop = asyncio.get_event_loop()\n loop.run_until_complete(server.serve(sockets=self.sockets))\n\n async def callback_notify(self):\n self.notify()\n\n\nclass UvicornH11Worker(UvicornWorker):\n CONFIG_KWARGS = {\"loop\": \"asyncio\", \"http\": \"h11\"}\n", "path": "uvicorn/workers.py"}], "after_files": [{"content": "import asyncio\nimport logging\n\nfrom gunicorn.workers.base import Worker\n\nfrom uvicorn.config import Config\nfrom uvicorn.main import Server\n\n\nclass UvicornWorker(Worker):\n \"\"\"\n A worker class for Gunicorn that interfaces with an ASGI consumer callable,\n rather than a WSGI callable.\n \"\"\"\n\n CONFIG_KWARGS = {\"loop\": \"uvloop\", \"http\": \"httptools\"}\n\n def __init__(self, *args, **kwargs):\n super(UvicornWorker, self).__init__(*args, **kwargs)\n\n logger = logging.getLogger(\"uvicorn.error\")\n logger.handlers = self.log.error_log.handlers\n logger.setLevel(self.log.error_log.level)\n\n logger = logging.getLogger(\"uvicorn.access\")\n logger.handlers = self.log.access_log.handlers\n logger.setLevel(self.log.access_log.level)\n\n config_kwargs = {\n \"app\": None,\n \"log_config\": None,\n \"timeout_keep_alive\": self.cfg.keepalive,\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n \"limit_max_requests\": self.max_requests,\n \"forwarded_allow_ips\": self.cfg.forwarded_allow_ips,\n }\n\n if self.cfg.is_ssl:\n ssl_kwargs = {\n \"ssl_keyfile\": self.cfg.ssl_options.get(\"keyfile\"),\n \"ssl_certfile\": self.cfg.ssl_options.get(\"certfile\"),\n \"ssl_version\": self.cfg.ssl_options.get(\"ssl_version\"),\n \"ssl_cert_reqs\": self.cfg.ssl_options.get(\"cert_reqs\"),\n \"ssl_ca_certs\": self.cfg.ssl_options.get(\"ca_certs\"),\n \"ssl_ciphers\": self.cfg.ssl_options.get(\"ciphers\"),\n }\n config_kwargs.update(ssl_kwargs)\n\n if self.cfg.settings[\"backlog\"].value:\n config_kwargs[\"backlog\"] = self.cfg.settings[\"backlog\"].value\n\n config_kwargs.update(self.CONFIG_KWARGS)\n\n self.config = Config(**config_kwargs)\n\n def init_process(self):\n self.config.setup_event_loop()\n super(UvicornWorker, self).init_process()\n\n def init_signals(self):\n pass\n\n def run(self):\n self.config.app = self.wsgi\n server = Server(config=self.config)\n loop = asyncio.get_event_loop()\n loop.run_until_complete(server.serve(sockets=self.sockets))\n\n async def callback_notify(self):\n self.notify()\n\n\nclass UvicornH11Worker(UvicornWorker):\n CONFIG_KWARGS = {\"loop\": \"asyncio\", \"http\": \"h11\"}\n", "path": "uvicorn/workers.py"}]} | 1,316 | 137 |
gh_patches_debug_1288 | rasdani/github-patches | git_diff | archlinux__archinstall-555 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Version Bump in conf.py?
https://github.com/archlinux/archinstall/blob/a4033a7d3a94916f2b4972d212f9d0069fca39cd/docs/conf.py#L44
--- END ISSUE ---
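Editor's note: the golden diff below simply bumps the hard-coded string, which is all the issue asks for. If one wanted the docs version to stop drifting in the first place, a hedged alternative sketch (not what the project does) is to derive `release` from the installed package at build time:

```python
# docs/conf.py sketch: derive the Sphinx release from package metadata.
# Assumes Python 3.8+ and that the package is installed in the docs
# build environment.
from importlib.metadata import PackageNotFoundError, version

try:
    release = "v" + version("archinstall")
except PackageNotFoundError:
    release = "v0.0.0.dev0"  # fallback for builds from a bare checkout
```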
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 import os
2 import re
3 import sys
4
5 sys.path.insert(0, os.path.abspath('..'))
6
7
8 def process_docstring(app, what, name, obj, options, lines):
9 spaces_pat = re.compile(r"( {8})")
10 ll = []
11 for line in lines:
12 ll.append(spaces_pat.sub(" ", line))
13 lines[:] = ll
14
15
16 def setup(app):
17 app.connect('autodoc-process-docstring', process_docstring)
18
19
20 # Configuration file for the Sphinx documentation builder.
21 #
22 # This file only contains a selection of the most common options. For a full
23 # list see the documentation:
24 # https://www.sphinx-doc.org/en/master/usage/configuration.html
25
26 # -- Path setup --------------------------------------------------------------
27
28 # If extensions (or modules to document with autodoc) are in another directory,
29 # add these directories to sys.path here. If the directory is relative to the
30 # documentation root, use os.path.abspath to make it absolute, like shown here.
31 #
32 # import os
33 # import sys
34 # sys.path.insert(0, os.path.abspath('.'))
35
36
37 # -- Project information -----------------------------------------------------
38
39 project = 'python-archinstall'
40 copyright = '2020, Anton Hvornum'
41 author = 'Anton Hvornum'
42
43 # The full version, including alpha/beta/rc tags
44 release = 'v2.1.0'
45
46 # -- General configuration ---------------------------------------------------
47
48 master_doc = 'index'
49 # Add any Sphinx extension module names here, as strings. They can be
50 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
51 # ones.
52 extensions = [
53 'sphinx.ext.autodoc',
54 'sphinx.ext.inheritance_diagram',
55 'sphinx.ext.todo'
56 ]
57
58 # Add any paths that contain templates here, relative to this directory.
59 templates_path = ['_templates']
60
61 # List of patterns, relative to source directory, that match files and
62 # directories to ignore when looking for source files.
63 # This pattern also affects html_static_path and html_extra_path.
64 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
65
66 # -- Options for HTML output -------------------------------------------------
67
68 # The theme to use for HTML and HTML Help pages. See the documentation for
69 # a list of builtin themes.
70 #
71 # html_theme = 'alabaster'
72 html_theme = 'sphinx_rtd_theme'
73
74 html_logo = "_static/logo.png"
75
76 # Add any paths that contain custom static files (such as style sheets) here,
77 # relative to this directory. They are copied after the builtin static files,
78 # so a file named "default.css" will overwrite the builtin "default.css".
79 html_static_path = ['_static']
80
81 # If false, no module index is generated.
82 html_domain_indices = True
83
84 # If false, no index is generated.
85 html_use_index = True
86
87 # If true, the index is split into individual pages for each letter.
88 html_split_index = True
89
90 # If true, links to the reST sources are added to the pages.
91 html_show_sourcelink = False
92
93 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
94 # html_show_sphinx = True
95
96 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
97 # html_show_copyright = True
98
99 # If true, an OpenSearch description file will be output, and all pages will
100 # contain a <link> tag referring to it. The value of this option must be the
101 # base URL from which the finished HTML is served.
102 # html_use_opensearch = ''
103
104 # This is the file name suffix for HTML files (e.g. ".xhtml").
105 # html_file_suffix = None
106
107 # Output file base name for HTML help builder.
108 htmlhelp_basename = 'archinstalldoc'
109
110 # -- Options for manual page output --------------------------------------------
111
112 # One entry per manual page. List of tuples
113 # (source start file, name, description, authors, manual section).
114 man_pages = [("index", "archinstall", u"archinstall Documentation", [u"Anton Hvornum"], 1)]
115
116 # If true, show URL addresses after external links.
117 # man_show_urls = False
118
119
120 # -- Options for Texinfo output ------------------------------------------------
121
122 # Grouping the document tree into Texinfo files. List of tuples
123 # (source start file, target name, title, author,
124 # dir menu entry, description, category)
125 texinfo_documents = [
126 ("index", "archinstall", u"archinstall Documentation", u"Anton Hvornum", "archinstall", "Simple and minimal HTTP server."),
127 ]
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -41,7 +41,7 @@
 author = 'Anton Hvornum'
 
 # The full version, including alpha/beta/rc tags
-release = 'v2.1.0'
+release = 'v2.3.0.dev0'
 
 # -- General configuration ---------------------------------------------------
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -41,7 +41,7 @@\n author = 'Anton Hvornum'\n \n # The full version, including alpha/beta/rc tags\n-release = 'v2.1.0'\n+release = 'v2.3.0.dev0'\n \n # -- General configuration ---------------------------------------------------\n", "issue": "Version Bump in conf.py?\nhttps://github.com/archlinux/archinstall/blob/a4033a7d3a94916f2b4972d212f9d0069fca39cd/docs/conf.py#L44\n", "before_files": [{"content": "import os\nimport re\nimport sys\n\nsys.path.insert(0, os.path.abspath('..'))\n\n\ndef process_docstring(app, what, name, obj, options, lines):\n\tspaces_pat = re.compile(r\"( {8})\")\n\tll = []\n\tfor line in lines:\n\t\tll.append(spaces_pat.sub(\" \", line))\n\tlines[:] = ll\n\n\ndef setup(app):\n\tapp.connect('autodoc-process-docstring', process_docstring)\n\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = 'python-archinstall'\ncopyright = '2020, Anton Hvornum'\nauthor = 'Anton Hvornum'\n\n# The full version, including alpha/beta/rc tags\nrelease = 'v2.1.0'\n\n# -- General configuration ---------------------------------------------------\n\nmaster_doc = 'index'\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.inheritance_diagram',\n\t'sphinx.ext.todo'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\nhtml_theme = 'sphinx_rtd_theme'\n\nhtml_logo = \"_static/logo.png\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If false, no module index is generated.\nhtml_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = True\n\n# If true, the index is split into individual pages for each letter.\nhtml_split_index = True\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. 
Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'archinstalldoc'\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(\"index\", \"archinstall\", u\"archinstall Documentation\", [u\"Anton Hvornum\"], 1)]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n\t(\"index\", \"archinstall\", u\"archinstall Documentation\", u\"Anton Hvornum\", \"archinstall\", \"Simple and minimal HTTP server.\"),\n]\n", "path": "docs/conf.py"}], "after_files": [{"content": "import os\nimport re\nimport sys\n\nsys.path.insert(0, os.path.abspath('..'))\n\n\ndef process_docstring(app, what, name, obj, options, lines):\n\tspaces_pat = re.compile(r\"( {8})\")\n\tll = []\n\tfor line in lines:\n\t\tll.append(spaces_pat.sub(\" \", line))\n\tlines[:] = ll\n\n\ndef setup(app):\n\tapp.connect('autodoc-process-docstring', process_docstring)\n\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = 'python-archinstall'\ncopyright = '2020, Anton Hvornum'\nauthor = 'Anton Hvornum'\n\n# The full version, including alpha/beta/rc tags\nrelease = 'v2.3.0.dev0'\n\n# -- General configuration ---------------------------------------------------\n\nmaster_doc = 'index'\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.inheritance_diagram',\n\t'sphinx.ext.todo'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\nhtml_theme = 'sphinx_rtd_theme'\n\nhtml_logo = \"_static/logo.png\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If false, no module index is generated.\nhtml_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = True\n\n# If true, the index is split into individual pages for each letter.\nhtml_split_index = True\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'archinstalldoc'\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(\"index\", \"archinstall\", u\"archinstall Documentation\", [u\"Anton Hvornum\"], 1)]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n\t(\"index\", \"archinstall\", u\"archinstall Documentation\", u\"Anton Hvornum\", \"archinstall\", \"Simple and minimal HTTP server.\"),\n]\n", "path": "docs/conf.py"}]} | 1,589 | 88 |
gh_patches_debug_27326 | rasdani/github-patches | git_diff | huggingface__dataset-viewer-207 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
regression: fallback if streaming fails is disabled
This causes, for example, https://github.com/huggingface/datasets/issues/3185: the fallback should have loaded the dataset in normal (non-streaming) mode.
--- END ISSUE ---
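Editor's sketch of the regression: `get_rows()` below accepts a `streaming` flag but hard-codes `streaming=True` in its `load_dataset()` call, so a caller's non-streaming retry can never take effect. Once the flag is actually forwarded (as the golden diff does), a caller-side fallback looks roughly like this; the helper name and the size guard are illustrative, with `max_size_fallback` mirroring the config value the patch introduces:

```python
from datasets_preview_backend.models.row import get_rows


def get_rows_with_fallback(dataset_name, config_name, split_name, hf_token,
                           rows_max_number, dataset_size, max_size_fallback):
    """Sketch: try streaming first, fall back to a normal download."""
    try:
        return get_rows(dataset_name, config_name, split_name,
                        hf_token=hf_token, streaming=True,
                        rows_max_number=rows_max_number)
    except Exception:
        if dataset_size > max_size_fallback:
            raise  # too large to download; surface the streaming error
        return get_rows(dataset_name, config_name, split_name,
                        hf_token=hf_token, streaming=False,
                        rows_max_number=rows_max_number)
```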
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/datasets_preview_backend/config.py`
Content:
```
1 import os
2
3 from dotenv import load_dotenv
4
5 from datasets_preview_backend.constants import (
6 DEFAULT_APP_HOSTNAME,
7 DEFAULT_APP_PORT,
8 DEFAULT_ASSETS_DIRECTORY,
9 DEFAULT_DATASETS_ENABLE_PRIVATE,
10 DEFAULT_DATASETS_REVISION,
11 DEFAULT_HF_TOKEN,
12 DEFAULT_LOG_LEVEL,
13 DEFAULT_MAX_AGE_LONG_SECONDS,
14 DEFAULT_MAX_AGE_SHORT_SECONDS,
15 DEFAULT_MONGO_CACHE_DATABASE,
16 DEFAULT_MONGO_QUEUE_DATABASE,
17 DEFAULT_MONGO_URL,
18 DEFAULT_ROWS_MAX_BYTES,
19 DEFAULT_ROWS_MAX_NUMBER,
20 DEFAULT_ROWS_MIN_NUMBER,
21 DEFAULT_WEB_CONCURRENCY,
22 )
23 from datasets_preview_backend.utils import (
24 get_bool_value,
25 get_int_value,
26 get_str_or_none_value,
27 get_str_value,
28 )
29
30 # Load environment variables defined in .env, if any
31 load_dotenv()
32
33 APP_HOSTNAME = get_str_value(d=os.environ, key="APP_HOSTNAME", default=DEFAULT_APP_HOSTNAME)
34 APP_PORT = get_int_value(d=os.environ, key="APP_PORT", default=DEFAULT_APP_PORT)
35 ASSETS_DIRECTORY = get_str_or_none_value(d=os.environ, key="ASSETS_DIRECTORY", default=DEFAULT_ASSETS_DIRECTORY)
36 DATASETS_ENABLE_PRIVATE = get_bool_value(
37 d=os.environ, key="DATASETS_ENABLE_PRIVATE", default=DEFAULT_DATASETS_ENABLE_PRIVATE
38 )
39 DATASETS_REVISION = get_str_value(d=os.environ, key="DATASETS_REVISION", default=DEFAULT_DATASETS_REVISION)
40 HF_TOKEN = get_str_or_none_value(d=os.environ, key="HF_TOKEN", default=DEFAULT_HF_TOKEN)
41 LOG_LEVEL = get_str_value(d=os.environ, key="LOG_LEVEL", default=DEFAULT_LOG_LEVEL)
42 MAX_AGE_LONG_SECONDS = get_int_value(d=os.environ, key="MAX_AGE_LONG_SECONDS", default=DEFAULT_MAX_AGE_LONG_SECONDS)
43 MAX_AGE_SHORT_SECONDS = get_int_value(d=os.environ, key="MAX_AGE_SHORT_SECONDS", default=DEFAULT_MAX_AGE_SHORT_SECONDS)
44 MONGO_CACHE_DATABASE = get_str_value(d=os.environ, key="MONGO_CACHE_DATABASE", default=DEFAULT_MONGO_CACHE_DATABASE)
45 MONGO_QUEUE_DATABASE = get_str_value(d=os.environ, key="MONGO_QUEUE_DATABASE", default=DEFAULT_MONGO_QUEUE_DATABASE)
46 MONGO_URL = get_str_value(d=os.environ, key="MONGO_URL", default=DEFAULT_MONGO_URL)
47 WEB_CONCURRENCY = get_int_value(d=os.environ, key="WEB_CONCURRENCY", default=DEFAULT_WEB_CONCURRENCY)
48
49 # Ensure datasets library uses the expected revision for canonical datasets
50 os.environ["HF_SCRIPTS_VERSION"] = DATASETS_REVISION
51
52 # for tests - to be removed
53 ROWS_MAX_BYTES = get_int_value(d=os.environ, key="ROWS_MAX_BYTES", default=DEFAULT_ROWS_MAX_BYTES)
54 ROWS_MAX_NUMBER = get_int_value(d=os.environ, key="ROWS_MAX_NUMBER", default=DEFAULT_ROWS_MAX_NUMBER)
55 ROWS_MIN_NUMBER = get_int_value(d=os.environ, key="ROWS_MIN_NUMBER", default=DEFAULT_ROWS_MIN_NUMBER)
56
```
Path: `src/datasets_preview_backend/models/row.py`
Content:
```
1 import itertools
2 import logging
3 from typing import Any, Dict, List, Optional
4
5 from datasets import Dataset, DownloadMode, IterableDataset, load_dataset
6
7 from datasets_preview_backend.constants import DEFAULT_ROWS_MAX_NUMBER
8 from datasets_preview_backend.utils import retry
9
10 logger = logging.getLogger(__name__)
11
12
13 Row = Dict[str, Any]
14
15
16 @retry(logger=logger)
17 def get_rows(
18 dataset_name: str,
19 config_name: str,
20 split_name: str,
21 hf_token: Optional[str] = None,
22 streaming: bool = True,
23 rows_max_number: Optional[int] = None,
24 ) -> List[Row]:
25 if rows_max_number is None:
26 rows_max_number = DEFAULT_ROWS_MAX_NUMBER
27 dataset = load_dataset(
28 dataset_name,
29 name=config_name,
30 split=split_name,
31 streaming=True,
32 download_mode=DownloadMode.FORCE_REDOWNLOAD,
33 use_auth_token=hf_token,
34 )
35 if streaming:
36 if not isinstance(dataset, IterableDataset):
37 raise TypeError("load_dataset should return an IterableDataset")
38 elif not isinstance(dataset, Dataset):
39 raise TypeError("load_dataset should return a Dataset")
40 rows_plus_one = list(itertools.islice(dataset, rows_max_number + 1))
41 # ^^ to be able to detect if a split has exactly ROWS_MAX_NUMBER rows
42 if len(rows_plus_one) <= rows_max_number:
43 logger.debug(f"all the rows in the split have been fetched ({len(rows_plus_one)})")
44 else:
45 logger.debug(f"the rows in the split have been truncated ({rows_max_number} rows)")
46 return rows_plus_one[:rows_max_number]
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/src/datasets_preview_backend/config.py b/src/datasets_preview_backend/config.py
--- a/src/datasets_preview_backend/config.py
+++ b/src/datasets_preview_backend/config.py
@@ -12,6 +12,7 @@
     DEFAULT_LOG_LEVEL,
     DEFAULT_MAX_AGE_LONG_SECONDS,
     DEFAULT_MAX_AGE_SHORT_SECONDS,
+    DEFAULT_MAX_SIZE_FALLBACK,
     DEFAULT_MONGO_CACHE_DATABASE,
     DEFAULT_MONGO_QUEUE_DATABASE,
     DEFAULT_MONGO_URL,
@@ -50,6 +51,7 @@
 os.environ["HF_SCRIPTS_VERSION"] = DATASETS_REVISION
 
 # for tests - to be removed
+MAX_SIZE_FALLBACK = get_int_value(os.environ, "MAX_SIZE_FALLBACK", DEFAULT_MAX_SIZE_FALLBACK)
 ROWS_MAX_BYTES = get_int_value(d=os.environ, key="ROWS_MAX_BYTES", default=DEFAULT_ROWS_MAX_BYTES)
 ROWS_MAX_NUMBER = get_int_value(d=os.environ, key="ROWS_MAX_NUMBER", default=DEFAULT_ROWS_MAX_NUMBER)
 ROWS_MIN_NUMBER = get_int_value(d=os.environ, key="ROWS_MIN_NUMBER", default=DEFAULT_ROWS_MIN_NUMBER)
diff --git a/src/datasets_preview_backend/models/row.py b/src/datasets_preview_backend/models/row.py
--- a/src/datasets_preview_backend/models/row.py
+++ b/src/datasets_preview_backend/models/row.py
@@ -28,7 +28,7 @@
         dataset_name,
         name=config_name,
         split=split_name,
-        streaming=True,
+        streaming=streaming,
        
 download_mode=DownloadMode.FORCE_REDOWNLOAD,
         use_auth_token=hf_token,
     )
| {"golden_diff": "diff --git a/src/datasets_preview_backend/config.py b/src/datasets_preview_backend/config.py\n--- a/src/datasets_preview_backend/config.py\n+++ b/src/datasets_preview_backend/config.py\n@@ -12,6 +12,7 @@\n DEFAULT_LOG_LEVEL,\n DEFAULT_MAX_AGE_LONG_SECONDS,\n DEFAULT_MAX_AGE_SHORT_SECONDS,\n+ DEFAULT_MAX_SIZE_FALLBACK,\n DEFAULT_MONGO_CACHE_DATABASE,\n DEFAULT_MONGO_QUEUE_DATABASE,\n DEFAULT_MONGO_URL,\n@@ -50,6 +51,7 @@\n os.environ[\"HF_SCRIPTS_VERSION\"] = DATASETS_REVISION\n \n # for tests - to be removed\n+MAX_SIZE_FALLBACK = get_int_value(os.environ, \"MAX_SIZE_FALLBACK\", DEFAULT_MAX_SIZE_FALLBACK)\n ROWS_MAX_BYTES = get_int_value(d=os.environ, key=\"ROWS_MAX_BYTES\", default=DEFAULT_ROWS_MAX_BYTES)\n ROWS_MAX_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MAX_NUMBER\", default=DEFAULT_ROWS_MAX_NUMBER)\n ROWS_MIN_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MIN_NUMBER\", default=DEFAULT_ROWS_MIN_NUMBER)\ndiff --git a/src/datasets_preview_backend/models/row.py b/src/datasets_preview_backend/models/row.py\n--- a/src/datasets_preview_backend/models/row.py\n+++ b/src/datasets_preview_backend/models/row.py\n@@ -28,7 +28,7 @@\n dataset_name,\n name=config_name,\n split=split_name,\n- streaming=True,\n+ streaming=streaming,\n download_mode=DownloadMode.FORCE_REDOWNLOAD,\n use_auth_token=hf_token,\n )\n", "issue": "regression: fallback if streaming fails is disabled\nCauses https://github.com/huggingface/datasets/issues/3185 for example: the fallback should have loaded the dataset in normal mode.\n", "before_files": [{"content": "import os\n\nfrom dotenv import load_dotenv\n\nfrom datasets_preview_backend.constants import (\n DEFAULT_APP_HOSTNAME,\n DEFAULT_APP_PORT,\n DEFAULT_ASSETS_DIRECTORY,\n DEFAULT_DATASETS_ENABLE_PRIVATE,\n DEFAULT_DATASETS_REVISION,\n DEFAULT_HF_TOKEN,\n DEFAULT_LOG_LEVEL,\n DEFAULT_MAX_AGE_LONG_SECONDS,\n DEFAULT_MAX_AGE_SHORT_SECONDS,\n DEFAULT_MONGO_CACHE_DATABASE,\n DEFAULT_MONGO_QUEUE_DATABASE,\n DEFAULT_MONGO_URL,\n DEFAULT_ROWS_MAX_BYTES,\n DEFAULT_ROWS_MAX_NUMBER,\n DEFAULT_ROWS_MIN_NUMBER,\n DEFAULT_WEB_CONCURRENCY,\n)\nfrom datasets_preview_backend.utils import (\n get_bool_value,\n get_int_value,\n get_str_or_none_value,\n get_str_value,\n)\n\n# Load environment variables defined in .env, if any\nload_dotenv()\n\nAPP_HOSTNAME = get_str_value(d=os.environ, key=\"APP_HOSTNAME\", default=DEFAULT_APP_HOSTNAME)\nAPP_PORT = get_int_value(d=os.environ, key=\"APP_PORT\", default=DEFAULT_APP_PORT)\nASSETS_DIRECTORY = get_str_or_none_value(d=os.environ, key=\"ASSETS_DIRECTORY\", default=DEFAULT_ASSETS_DIRECTORY)\nDATASETS_ENABLE_PRIVATE = get_bool_value(\n d=os.environ, key=\"DATASETS_ENABLE_PRIVATE\", default=DEFAULT_DATASETS_ENABLE_PRIVATE\n)\nDATASETS_REVISION = get_str_value(d=os.environ, key=\"DATASETS_REVISION\", default=DEFAULT_DATASETS_REVISION)\nHF_TOKEN = get_str_or_none_value(d=os.environ, key=\"HF_TOKEN\", default=DEFAULT_HF_TOKEN)\nLOG_LEVEL = get_str_value(d=os.environ, key=\"LOG_LEVEL\", default=DEFAULT_LOG_LEVEL)\nMAX_AGE_LONG_SECONDS = get_int_value(d=os.environ, key=\"MAX_AGE_LONG_SECONDS\", default=DEFAULT_MAX_AGE_LONG_SECONDS)\nMAX_AGE_SHORT_SECONDS = get_int_value(d=os.environ, key=\"MAX_AGE_SHORT_SECONDS\", default=DEFAULT_MAX_AGE_SHORT_SECONDS)\nMONGO_CACHE_DATABASE = get_str_value(d=os.environ, key=\"MONGO_CACHE_DATABASE\", default=DEFAULT_MONGO_CACHE_DATABASE)\nMONGO_QUEUE_DATABASE = get_str_value(d=os.environ, key=\"MONGO_QUEUE_DATABASE\", default=DEFAULT_MONGO_QUEUE_DATABASE)\nMONGO_URL = get_str_value(d=os.environ, 
key=\"MONGO_URL\", default=DEFAULT_MONGO_URL)\nWEB_CONCURRENCY = get_int_value(d=os.environ, key=\"WEB_CONCURRENCY\", default=DEFAULT_WEB_CONCURRENCY)\n\n# Ensure datasets library uses the expected revision for canonical datasets\nos.environ[\"HF_SCRIPTS_VERSION\"] = DATASETS_REVISION\n\n# for tests - to be removed\nROWS_MAX_BYTES = get_int_value(d=os.environ, key=\"ROWS_MAX_BYTES\", default=DEFAULT_ROWS_MAX_BYTES)\nROWS_MAX_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MAX_NUMBER\", default=DEFAULT_ROWS_MAX_NUMBER)\nROWS_MIN_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MIN_NUMBER\", default=DEFAULT_ROWS_MIN_NUMBER)\n", "path": "src/datasets_preview_backend/config.py"}, {"content": "import itertools\nimport logging\nfrom typing import Any, Dict, List, Optional\n\nfrom datasets import Dataset, DownloadMode, IterableDataset, load_dataset\n\nfrom datasets_preview_backend.constants import DEFAULT_ROWS_MAX_NUMBER\nfrom datasets_preview_backend.utils import retry\n\nlogger = logging.getLogger(__name__)\n\n\nRow = Dict[str, Any]\n\n\n@retry(logger=logger)\ndef get_rows(\n dataset_name: str,\n config_name: str,\n split_name: str,\n hf_token: Optional[str] = None,\n streaming: bool = True,\n rows_max_number: Optional[int] = None,\n) -> List[Row]:\n if rows_max_number is None:\n rows_max_number = DEFAULT_ROWS_MAX_NUMBER\n dataset = load_dataset(\n dataset_name,\n name=config_name,\n split=split_name,\n streaming=True,\n download_mode=DownloadMode.FORCE_REDOWNLOAD,\n use_auth_token=hf_token,\n )\n if streaming:\n if not isinstance(dataset, IterableDataset):\n raise TypeError(\"load_dataset should return an IterableDataset\")\n elif not isinstance(dataset, Dataset):\n raise TypeError(\"load_dataset should return a Dataset\")\n rows_plus_one = list(itertools.islice(dataset, rows_max_number + 1))\n # ^^ to be able to detect if a split has exactly ROWS_MAX_NUMBER rows\n if len(rows_plus_one) <= rows_max_number:\n logger.debug(f\"all the rows in the split have been fetched ({len(rows_plus_one)})\")\n else:\n logger.debug(f\"the rows in the split have been truncated ({rows_max_number} rows)\")\n return rows_plus_one[:rows_max_number]\n", "path": "src/datasets_preview_backend/models/row.py"}], "after_files": [{"content": "import os\n\nfrom dotenv import load_dotenv\n\nfrom datasets_preview_backend.constants import (\n DEFAULT_APP_HOSTNAME,\n DEFAULT_APP_PORT,\n DEFAULT_ASSETS_DIRECTORY,\n DEFAULT_DATASETS_ENABLE_PRIVATE,\n DEFAULT_DATASETS_REVISION,\n DEFAULT_HF_TOKEN,\n DEFAULT_LOG_LEVEL,\n DEFAULT_MAX_AGE_LONG_SECONDS,\n DEFAULT_MAX_AGE_SHORT_SECONDS,\n DEFAULT_MAX_SIZE_FALLBACK,\n DEFAULT_MONGO_CACHE_DATABASE,\n DEFAULT_MONGO_QUEUE_DATABASE,\n DEFAULT_MONGO_URL,\n DEFAULT_ROWS_MAX_BYTES,\n DEFAULT_ROWS_MAX_NUMBER,\n DEFAULT_ROWS_MIN_NUMBER,\n DEFAULT_WEB_CONCURRENCY,\n)\nfrom datasets_preview_backend.utils import (\n get_bool_value,\n get_int_value,\n get_str_or_none_value,\n get_str_value,\n)\n\n# Load environment variables defined in .env, if any\nload_dotenv()\n\nAPP_HOSTNAME = get_str_value(d=os.environ, key=\"APP_HOSTNAME\", default=DEFAULT_APP_HOSTNAME)\nAPP_PORT = get_int_value(d=os.environ, key=\"APP_PORT\", default=DEFAULT_APP_PORT)\nASSETS_DIRECTORY = get_str_or_none_value(d=os.environ, key=\"ASSETS_DIRECTORY\", default=DEFAULT_ASSETS_DIRECTORY)\nDATASETS_ENABLE_PRIVATE = get_bool_value(\n d=os.environ, key=\"DATASETS_ENABLE_PRIVATE\", default=DEFAULT_DATASETS_ENABLE_PRIVATE\n)\nDATASETS_REVISION = get_str_value(d=os.environ, key=\"DATASETS_REVISION\", 
default=DEFAULT_DATASETS_REVISION)\nHF_TOKEN = get_str_or_none_value(d=os.environ, key=\"HF_TOKEN\", default=DEFAULT_HF_TOKEN)\nLOG_LEVEL = get_str_value(d=os.environ, key=\"LOG_LEVEL\", default=DEFAULT_LOG_LEVEL)\nMAX_AGE_LONG_SECONDS = get_int_value(d=os.environ, key=\"MAX_AGE_LONG_SECONDS\", default=DEFAULT_MAX_AGE_LONG_SECONDS)\nMAX_AGE_SHORT_SECONDS = get_int_value(d=os.environ, key=\"MAX_AGE_SHORT_SECONDS\", default=DEFAULT_MAX_AGE_SHORT_SECONDS)\nMONGO_CACHE_DATABASE = get_str_value(d=os.environ, key=\"MONGO_CACHE_DATABASE\", default=DEFAULT_MONGO_CACHE_DATABASE)\nMONGO_QUEUE_DATABASE = get_str_value(d=os.environ, key=\"MONGO_QUEUE_DATABASE\", default=DEFAULT_MONGO_QUEUE_DATABASE)\nMONGO_URL = get_str_value(d=os.environ, key=\"MONGO_URL\", default=DEFAULT_MONGO_URL)\nWEB_CONCURRENCY = get_int_value(d=os.environ, key=\"WEB_CONCURRENCY\", default=DEFAULT_WEB_CONCURRENCY)\n\n# Ensure datasets library uses the expected revision for canonical datasets\nos.environ[\"HF_SCRIPTS_VERSION\"] = DATASETS_REVISION\n\n# for tests - to be removed\nMAX_SIZE_FALLBACK = get_int_value(os.environ, \"MAX_SIZE_FALLBACK\", DEFAULT_MAX_SIZE_FALLBACK)\nROWS_MAX_BYTES = get_int_value(d=os.environ, key=\"ROWS_MAX_BYTES\", default=DEFAULT_ROWS_MAX_BYTES)\nROWS_MAX_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MAX_NUMBER\", default=DEFAULT_ROWS_MAX_NUMBER)\nROWS_MIN_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MIN_NUMBER\", default=DEFAULT_ROWS_MIN_NUMBER)\n", "path": "src/datasets_preview_backend/config.py"}, {"content": "import itertools\nimport logging\nfrom typing import Any, Dict, List, Optional\n\nfrom datasets import Dataset, DownloadMode, IterableDataset, load_dataset\n\nfrom datasets_preview_backend.constants import DEFAULT_ROWS_MAX_NUMBER\nfrom datasets_preview_backend.utils import retry\n\nlogger = logging.getLogger(__name__)\n\n\nRow = Dict[str, Any]\n\n\n@retry(logger=logger)\ndef get_rows(\n dataset_name: str,\n config_name: str,\n split_name: str,\n hf_token: Optional[str] = None,\n streaming: bool = True,\n rows_max_number: Optional[int] = None,\n) -> List[Row]:\n if rows_max_number is None:\n rows_max_number = DEFAULT_ROWS_MAX_NUMBER\n dataset = load_dataset(\n dataset_name,\n name=config_name,\n split=split_name,\n streaming=streaming,\n download_mode=DownloadMode.FORCE_REDOWNLOAD,\n use_auth_token=hf_token,\n )\n if streaming:\n if not isinstance(dataset, IterableDataset):\n raise TypeError(\"load_dataset should return an IterableDataset\")\n elif not isinstance(dataset, Dataset):\n raise TypeError(\"load_dataset should return a Dataset\")\n rows_plus_one = list(itertools.islice(dataset, rows_max_number + 1))\n # ^^ to be able to detect if a split has exactly ROWS_MAX_NUMBER rows\n if len(rows_plus_one) <= rows_max_number:\n logger.debug(f\"all the rows in the split have been fetched ({len(rows_plus_one)})\")\n else:\n logger.debug(f\"the rows in the split have been truncated ({rows_max_number} rows)\")\n return rows_plus_one[:rows_max_number]\n", "path": "src/datasets_preview_backend/models/row.py"}]} | 1,485 | 344 |
gh_patches_debug_51276 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-3848 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
lint takes a long time
Fix that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from os import getpid
16 from socket import gethostname
17 from time import time
18
19 # pylint: disable=wrong-import-position
20 from google.protobuf.timestamp_pb2 import Timestamp
21 from opencensus.proto.agent.common.v1 import common_pb2
22 from opencensus.proto.trace.v1 import trace_pb2
23
24 from opentelemetry.exporter.opencensus.version import (
25 __version__ as opencensusexporter_exporter_version,
26 )
27 from opentelemetry.trace import SpanKind
28 from opentelemetry.util._importlib_metadata import version
29
30 OPENTELEMETRY_VERSION = version("opentelemetry-api")
31
32
33 def proto_timestamp_from_time_ns(time_ns):
34 """Converts datetime to protobuf timestamp.
35
36 Args:
37 time_ns: Time in nanoseconds
38
39 Returns:
40 Returns protobuf timestamp.
41 """
42 ts = Timestamp()
43 if time_ns is not None:
44 # pylint: disable=no-member
45 ts.FromNanoseconds(time_ns)
46 return ts
47
48
49 # pylint: disable=no-member
50 def get_collector_span_kind(kind: SpanKind):
51 if kind is SpanKind.SERVER:
52 return trace_pb2.Span.SpanKind.SERVER
53 if kind is SpanKind.CLIENT:
54 return trace_pb2.Span.SpanKind.CLIENT
55 return trace_pb2.Span.SpanKind.SPAN_KIND_UNSPECIFIED
56
57
58 def add_proto_attribute_value(pb_attributes, key, value):
59 """Sets string, int, boolean or float value on protobuf
60 span, link or annotation attributes.
61
62 Args:
63 pb_attributes: protobuf Span's attributes property.
64 key: attribute key to set.
65 value: attribute value
66 """
67
68 if isinstance(value, bool):
69 pb_attributes.attribute_map[key].bool_value = value
70 elif isinstance(value, int):
71 pb_attributes.attribute_map[key].int_value = value
72 elif isinstance(value, str):
73 pb_attributes.attribute_map[key].string_value.value = value
74 elif isinstance(value, float):
75 pb_attributes.attribute_map[key].double_value = value
76 else:
77 pb_attributes.attribute_map[key].string_value.value = str(value)
78
79
80 # pylint: disable=no-member
81 def get_node(service_name, host_name):
82 """Generates Node message from params and system information.
83
84 Args:
85 service_name: Name of Collector service.
86 host_name: Host name.
87 """
88 return common_pb2.Node(
89 identifier=common_pb2.ProcessIdentifier(
90 host_name=gethostname() if host_name is None else host_name,
91 pid=getpid(),
92 start_timestamp=proto_timestamp_from_time_ns(int(time() * 1e9)),
93 ),
94 library_info=common_pb2.LibraryInfo(
95 language=common_pb2.LibraryInfo.Language.Value("PYTHON"),
96 exporter_version=opencensusexporter_exporter_version,
97 core_library_version=OPENTELEMETRY_VERSION,
98 ),
99 service_info=common_pb2.ServiceInfo(name=service_name),
100 )
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py
--- a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py
+++ b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py
@@ -17,7 +17,9 @@
 from time import time
 
 # pylint: disable=wrong-import-position
-from google.protobuf.timestamp_pb2 import Timestamp
+from google.protobuf.timestamp_pb2 import (  # pylint: disable=no-name-in-module
+    Timestamp,
+)
 from opencensus.proto.agent.common.v1 import common_pb2
 from opencensus.proto.trace.v1 import trace_pb2
identifier=common_pb2.ProcessIdentifier(\n host_name=gethostname() if host_name is None else host_name,\n pid=getpid(),\n start_timestamp=proto_timestamp_from_time_ns(int(time() * 1e9)),\n ),\n library_info=common_pb2.LibraryInfo(\n language=common_pb2.LibraryInfo.Language.Value(\"PYTHON\"),\n exporter_version=opencensusexporter_exporter_version,\n core_library_version=OPENTELEMETRY_VERSION,\n ),\n service_info=common_pb2.ServiceInfo(name=service_name),\n )\n", "path": "exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom os import getpid\nfrom socket import gethostname\nfrom time import time\n\n# pylint: disable=wrong-import-position\nfrom google.protobuf.timestamp_pb2 import ( # pylint: disable=no-name-in-module\n Timestamp,\n)\nfrom opencensus.proto.agent.common.v1 import common_pb2\nfrom opencensus.proto.trace.v1 import trace_pb2\n\nfrom opentelemetry.exporter.opencensus.version import (\n __version__ as opencensusexporter_exporter_version,\n)\nfrom opentelemetry.trace import SpanKind\nfrom opentelemetry.util._importlib_metadata import version\n\nOPENTELEMETRY_VERSION = version(\"opentelemetry-api\")\n\n\ndef proto_timestamp_from_time_ns(time_ns):\n \"\"\"Converts datetime to protobuf timestamp.\n\n Args:\n time_ns: Time in nanoseconds\n\n Returns:\n Returns protobuf timestamp.\n \"\"\"\n ts = Timestamp()\n if time_ns is not None:\n # pylint: disable=no-member\n ts.FromNanoseconds(time_ns)\n return ts\n\n\n# pylint: disable=no-member\ndef get_collector_span_kind(kind: SpanKind):\n if kind is SpanKind.SERVER:\n return trace_pb2.Span.SpanKind.SERVER\n if kind is SpanKind.CLIENT:\n return trace_pb2.Span.SpanKind.CLIENT\n return trace_pb2.Span.SpanKind.SPAN_KIND_UNSPECIFIED\n\n\ndef add_proto_attribute_value(pb_attributes, key, value):\n \"\"\"Sets string, int, boolean or float value on protobuf\n span, link or annotation attributes.\n\n Args:\n pb_attributes: protobuf Span's attributes property.\n key: attribute key to set.\n value: attribute value\n \"\"\"\n\n if isinstance(value, bool):\n pb_attributes.attribute_map[key].bool_value = value\n elif isinstance(value, int):\n pb_attributes.attribute_map[key].int_value = value\n elif isinstance(value, str):\n pb_attributes.attribute_map[key].string_value.value = value\n elif isinstance(value, float):\n pb_attributes.attribute_map[key].double_value = value\n else:\n pb_attributes.attribute_map[key].string_value.value = str(value)\n\n\n# pylint: disable=no-member\ndef get_node(service_name, host_name):\n \"\"\"Generates Node message from params and system information.\n\n Args:\n service_name: Name of Collector service.\n host_name: Host name.\n \"\"\"\n return common_pb2.Node(\n identifier=common_pb2.ProcessIdentifier(\n host_name=gethostname() if host_name is None else host_name,\n pid=getpid(),\n start_timestamp=proto_timestamp_from_time_ns(int(time() * 1e9)),\n ),\n 
library_info=common_pb2.LibraryInfo(\n language=common_pb2.LibraryInfo.Language.Value(\"PYTHON\"),\n exporter_version=opencensusexporter_exporter_version,\n core_library_version=OPENTELEMETRY_VERSION,\n ),\n service_info=common_pb2.ServiceInfo(name=service_name),\n )\n", "path": "exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py"}]} | 1,220 | 183 |
gh_patches_debug_22398 | rasdani/github-patches | git_diff | fonttools__fonttools-1605 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Float yMin value: required argument is not an integer
If a font file has a float value in `yMin`—and I assume equally in `xMin`, `xMax` or `yMax`—it will fail to save with the error `required argument is not an integer` ([`fontTools/misc/sstruct.py in pack at line 75`](https://github.com/fonttools/fonttools/blob/3.40.0/Lib/fontTools/misc/sstruct.py#L75), fonttools v3.40.0).
Trace:
```
fontTools/misc/sstruct.py in pack at line 75
fontTools/ttLib/tables/_h_e_a_d.py in compile at line 69
fontTools/ttLib/ttFont.py in getTableData at line 651
fontTools/ttLib/ttFont.py in _writeTable at line 633
fontTools/ttLib/ttFont.py in _save at line 212
fontTools/ttLib/ttFont.py in save at line 173
```
Variables at point of error:
```python
formatstring = ">llIIHHQQhhhhHHhhh"
elements = [
65536,
65601,
1208942685,
1594834165,
3,
1000,
3551183604,
3640213847,
-132,
-170.009,
788,
835,
0,
3,
2,
0,
0
]
```
As you can see, the value `-170.009` would trigger the error. If integers are expected, then rounding should probably be applied.
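For what it's worth, the failure reproduces with the standard library alone. A minimal sketch using the value from the dump above (with `round()` standing in for whatever rounding policy fontTools picks):

```python
import struct

# packing a float into a big-endian signed short ("h") fails the same way
try:
    struct.pack(">h", -170.009)
except struct.error as exc:
    print(exc)  # required argument is not an integer

# rounding to the nearest integer first makes the value packable
print(struct.pack(">h", round(-170.009)))  # b'\xffV', i.e. -170
```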
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Lib/fontTools/ttLib/tables/_h_e_a_d.py`
Content:
```
1 from __future__ import print_function, division, absolute_import
2 from fontTools.misc.py23 import *
3 from fontTools.misc import sstruct
4 from fontTools.misc.textTools import safeEval, num2binary, binary2num
5 from fontTools.misc.timeTools import timestampFromString, timestampToString, timestampNow
6 from fontTools.misc.timeTools import epoch_diff as mac_epoch_diff # For backward compat
7 from . import DefaultTable
8 import logging
9
10
11 log = logging.getLogger(__name__)
12
13 headFormat = """
14 > # big endian
15 tableVersion: 16.16F
16 fontRevision: 16.16F
17 checkSumAdjustment: I
18 magicNumber: I
19 flags: H
20 unitsPerEm: H
21 created: Q
22 modified: Q
23 xMin: h
24 yMin: h
25 xMax: h
26 yMax: h
27 macStyle: H
28 lowestRecPPEM: H
29 fontDirectionHint: h
30 indexToLocFormat: h
31 glyphDataFormat: h
32 """
33
34 class table__h_e_a_d(DefaultTable.DefaultTable):
35
36 dependencies = ['maxp', 'loca', 'CFF ']
37
38 def decompile(self, data, ttFont):
39 dummy, rest = sstruct.unpack2(headFormat, data, self)
40 if rest:
41 # this is quite illegal, but there seem to be fonts out there that do this
42 log.warning("extra bytes at the end of 'head' table")
43 assert rest == "\0\0"
44
45 # For timestamp fields, ignore the top four bytes. Some fonts have
46 # bogus values there. Since till 2038 those bytes only can be zero,
47 # ignore them.
48 #
49 # https://github.com/fonttools/fonttools/issues/99#issuecomment-66776810
50 for stamp in 'created', 'modified':
51 value = getattr(self, stamp)
52 if value > 0xFFFFFFFF:
53 log.warning("'%s' timestamp out of range; ignoring top bytes", stamp)
54 value &= 0xFFFFFFFF
55 setattr(self, stamp, value)
56 if value < 0x7C259DC0: # January 1, 1970 00:00:00
57 log.warning("'%s' timestamp seems very low; regarding as unix timestamp", stamp)
58 value += 0x7C259DC0
59 setattr(self, stamp, value)
60
61 def compile(self, ttFont):
62 if ttFont.recalcBBoxes:
63 # For TT-flavored fonts, xMin, yMin, xMax and yMax are set in table__m_a_x_p.recalc().
64 if 'CFF ' in ttFont:
65 topDict = ttFont['CFF '].cff.topDictIndex[0]
66 self.xMin, self.yMin, self.xMax, self.yMax = topDict.FontBBox
67 if ttFont.recalcTimestamp:
68 self.modified = timestampNow()
69 data = sstruct.pack(headFormat, self)
70 return data
71
72 def toXML(self, writer, ttFont):
73 writer.comment("Most of this table will be recalculated by the compiler")
74 writer.newline()
75 formatstring, names, fixes = sstruct.getformat(headFormat)
76 for name in names:
77 value = getattr(self, name)
78 if name in ("created", "modified"):
79 value = timestampToString(value)
80 if name in ("magicNumber", "checkSumAdjustment"):
81 if value < 0:
82 value = value + 0x100000000
83 value = hex(value)
84 if value[-1:] == "L":
85 value = value[:-1]
86 elif name in ("macStyle", "flags"):
87 value = num2binary(value, 16)
88 writer.simpletag(name, value=value)
89 writer.newline()
90
91 def fromXML(self, name, attrs, content, ttFont):
92 value = attrs["value"]
93 if name in ("created", "modified"):
94 value = timestampFromString(value)
95 elif name in ("macStyle", "flags"):
96 value = binary2num(value)
97 else:
98 value = safeEval(value)
99 setattr(self, name, value)
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Lib/fontTools/ttLib/tables/_h_e_a_d.py b/Lib/fontTools/ttLib/tables/_h_e_a_d.py
--- a/Lib/fontTools/ttLib/tables/_h_e_a_d.py
+++ b/Lib/fontTools/ttLib/tables/_h_e_a_d.py
@@ -4,6 +4,7 @@
from fontTools.misc.textTools import safeEval, num2binary, binary2num
from fontTools.misc.timeTools import timestampFromString, timestampToString, timestampNow
from fontTools.misc.timeTools import epoch_diff as mac_epoch_diff # For backward compat
+from fontTools.misc.arrayTools import intRect
from . import DefaultTable
import logging
@@ -63,7 +64,7 @@
# For TT-flavored fonts, xMin, yMin, xMax and yMax are set in table__m_a_x_p.recalc().
if 'CFF ' in ttFont:
topDict = ttFont['CFF '].cff.topDictIndex[0]
- self.xMin, self.yMin, self.xMax, self.yMax = topDict.FontBBox
+ self.xMin, self.yMin, self.xMax, self.yMax = intRect(topDict.FontBBox)
if ttFont.recalcTimestamp:
self.modified = timestampNow()
data = sstruct.pack(headFormat, self)
| {"golden_diff": "diff --git a/Lib/fontTools/ttLib/tables/_h_e_a_d.py b/Lib/fontTools/ttLib/tables/_h_e_a_d.py\n--- a/Lib/fontTools/ttLib/tables/_h_e_a_d.py\n+++ b/Lib/fontTools/ttLib/tables/_h_e_a_d.py\n@@ -4,6 +4,7 @@\n from fontTools.misc.textTools import safeEval, num2binary, binary2num\n from fontTools.misc.timeTools import timestampFromString, timestampToString, timestampNow\n from fontTools.misc.timeTools import epoch_diff as mac_epoch_diff # For backward compat\n+from fontTools.misc.arrayTools import intRect\n from . import DefaultTable\n import logging\n \n@@ -63,7 +64,7 @@\n \t\t\t# For TT-flavored fonts, xMin, yMin, xMax and yMax are set in table__m_a_x_p.recalc().\n \t\t\tif 'CFF ' in ttFont:\n \t\t\t\ttopDict = ttFont['CFF '].cff.topDictIndex[0]\n-\t\t\t\tself.xMin, self.yMin, self.xMax, self.yMax = topDict.FontBBox\n+\t\t\t\tself.xMin, self.yMin, self.xMax, self.yMax = intRect(topDict.FontBBox)\n \t\tif ttFont.recalcTimestamp:\n \t\t\tself.modified = timestampNow()\n \t\tdata = sstruct.pack(headFormat, self)\n", "issue": "Float yMin value: required argument is not an integer\nIf a font file has a float value in `yMin`\u2014and I assume equally in `xMin`, `xMax` or `yMax`\u2014it will fail to save with the error `required argument is not an integer` ([`fontTools/misc/sstruct.py in pack at line 75`](https://github.com/fonttools/fonttools/blob/3.40.0/Lib/fontTools/misc/sstruct.py#L75), fonttools v3.40.0).\r\n\r\nTrace:\r\n```\r\nfontTools/misc/sstruct.py in pack at line 75\r\nfontTools/ttLib/tables/_h_e_a_d.py in compile at line 69\r\nfontTools/ttLib/ttFont.py in getTableData at line 651\r\nfontTools/ttLib/ttFont.py in _writeTable at line 633\r\nfontTools/ttLib/ttFont.py in _save at line 212\r\nfontTools/ttLib/ttFont.py in save at line 173\r\n```\r\n\r\nVariables at point of error:\r\n```python\r\nformatstring = \">llIIHHQQhhhhHHhhh\"\r\nelements = [\r\n 65536, \r\n 65601, \r\n 1208942685, \r\n 1594834165, \r\n 3, \r\n 1000, \r\n 3551183604, \r\n 3640213847, \r\n -132, \r\n -170.009, \r\n 788, \r\n 835, \r\n 0, \r\n 3, \r\n 2, \r\n 0, \r\n 0\r\n]\r\n```\r\n\r\nAs you can see the value `-170.009` would trigger the error. If integers are expected then rounding should probably be applied.\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\nfrom fontTools.misc.py23 import *\nfrom fontTools.misc import sstruct\nfrom fontTools.misc.textTools import safeEval, num2binary, binary2num\nfrom fontTools.misc.timeTools import timestampFromString, timestampToString, timestampNow\nfrom fontTools.misc.timeTools import epoch_diff as mac_epoch_diff # For backward compat\nfrom . 
import DefaultTable\nimport logging\n\n\nlog = logging.getLogger(__name__)\n\nheadFormat = \"\"\"\n\t\t>\t# big endian\n\t\ttableVersion: 16.16F\n\t\tfontRevision: 16.16F\n\t\tcheckSumAdjustment: I\n\t\tmagicNumber: I\n\t\tflags: H\n\t\tunitsPerEm: H\n\t\tcreated: Q\n\t\tmodified: Q\n\t\txMin: h\n\t\tyMin: h\n\t\txMax: h\n\t\tyMax: h\n\t\tmacStyle: H\n\t\tlowestRecPPEM: H\n\t\tfontDirectionHint: h\n\t\tindexToLocFormat: h\n\t\tglyphDataFormat: h\n\"\"\"\n\nclass table__h_e_a_d(DefaultTable.DefaultTable):\n\n\tdependencies = ['maxp', 'loca', 'CFF ']\n\n\tdef decompile(self, data, ttFont):\n\t\tdummy, rest = sstruct.unpack2(headFormat, data, self)\n\t\tif rest:\n\t\t\t# this is quite illegal, but there seem to be fonts out there that do this\n\t\t\tlog.warning(\"extra bytes at the end of 'head' table\")\n\t\t\tassert rest == \"\\0\\0\"\n\n\t\t# For timestamp fields, ignore the top four bytes. Some fonts have\n\t\t# bogus values there. Since till 2038 those bytes only can be zero,\n\t\t# ignore them.\n\t\t#\n\t\t# https://github.com/fonttools/fonttools/issues/99#issuecomment-66776810\n\t\tfor stamp in 'created', 'modified':\n\t\t\tvalue = getattr(self, stamp)\n\t\t\tif value > 0xFFFFFFFF:\n\t\t\t\tlog.warning(\"'%s' timestamp out of range; ignoring top bytes\", stamp)\n\t\t\t\tvalue &= 0xFFFFFFFF\n\t\t\t\tsetattr(self, stamp, value)\n\t\t\tif value < 0x7C259DC0: # January 1, 1970 00:00:00\n\t\t\t\tlog.warning(\"'%s' timestamp seems very low; regarding as unix timestamp\", stamp)\n\t\t\t\tvalue += 0x7C259DC0\n\t\t\t\tsetattr(self, stamp, value)\n\n\tdef compile(self, ttFont):\n\t\tif ttFont.recalcBBoxes:\n\t\t\t# For TT-flavored fonts, xMin, yMin, xMax and yMax are set in table__m_a_x_p.recalc().\n\t\t\tif 'CFF ' in ttFont:\n\t\t\t\ttopDict = ttFont['CFF '].cff.topDictIndex[0]\n\t\t\t\tself.xMin, self.yMin, self.xMax, self.yMax = topDict.FontBBox\n\t\tif ttFont.recalcTimestamp:\n\t\t\tself.modified = timestampNow()\n\t\tdata = sstruct.pack(headFormat, self)\n\t\treturn data\n\n\tdef toXML(self, writer, ttFont):\n\t\twriter.comment(\"Most of this table will be recalculated by the compiler\")\n\t\twriter.newline()\n\t\tformatstring, names, fixes = sstruct.getformat(headFormat)\n\t\tfor name in names:\n\t\t\tvalue = getattr(self, name)\n\t\t\tif name in (\"created\", \"modified\"):\n\t\t\t\tvalue = timestampToString(value)\n\t\t\tif name in (\"magicNumber\", \"checkSumAdjustment\"):\n\t\t\t\tif value < 0:\n\t\t\t\t\tvalue = value + 0x100000000\n\t\t\t\tvalue = hex(value)\n\t\t\t\tif value[-1:] == \"L\":\n\t\t\t\t\tvalue = value[:-1]\n\t\t\telif name in (\"macStyle\", \"flags\"):\n\t\t\t\tvalue = num2binary(value, 16)\n\t\t\twriter.simpletag(name, value=value)\n\t\t\twriter.newline()\n\n\tdef fromXML(self, name, attrs, content, ttFont):\n\t\tvalue = attrs[\"value\"]\n\t\tif name in (\"created\", \"modified\"):\n\t\t\tvalue = timestampFromString(value)\n\t\telif name in (\"macStyle\", \"flags\"):\n\t\t\tvalue = binary2num(value)\n\t\telse:\n\t\t\tvalue = safeEval(value)\n\t\tsetattr(self, name, value)\n", "path": "Lib/fontTools/ttLib/tables/_h_e_a_d.py"}], "after_files": [{"content": "from __future__ import print_function, division, absolute_import\nfrom fontTools.misc.py23 import *\nfrom fontTools.misc import sstruct\nfrom fontTools.misc.textTools import safeEval, num2binary, binary2num\nfrom fontTools.misc.timeTools import timestampFromString, timestampToString, timestampNow\nfrom fontTools.misc.timeTools import epoch_diff as mac_epoch_diff # For backward compat\nfrom fontTools.misc.arrayTools 
import intRect\nfrom . import DefaultTable\nimport logging\n\n\nlog = logging.getLogger(__name__)\n\nheadFormat = \"\"\"\n\t\t>\t# big endian\n\t\ttableVersion: 16.16F\n\t\tfontRevision: 16.16F\n\t\tcheckSumAdjustment: I\n\t\tmagicNumber: I\n\t\tflags: H\n\t\tunitsPerEm: H\n\t\tcreated: Q\n\t\tmodified: Q\n\t\txMin: h\n\t\tyMin: h\n\t\txMax: h\n\t\tyMax: h\n\t\tmacStyle: H\n\t\tlowestRecPPEM: H\n\t\tfontDirectionHint: h\n\t\tindexToLocFormat: h\n\t\tglyphDataFormat: h\n\"\"\"\n\nclass table__h_e_a_d(DefaultTable.DefaultTable):\n\n\tdependencies = ['maxp', 'loca', 'CFF ']\n\n\tdef decompile(self, data, ttFont):\n\t\tdummy, rest = sstruct.unpack2(headFormat, data, self)\n\t\tif rest:\n\t\t\t# this is quite illegal, but there seem to be fonts out there that do this\n\t\t\tlog.warning(\"extra bytes at the end of 'head' table\")\n\t\t\tassert rest == \"\\0\\0\"\n\n\t\t# For timestamp fields, ignore the top four bytes. Some fonts have\n\t\t# bogus values there. Since till 2038 those bytes only can be zero,\n\t\t# ignore them.\n\t\t#\n\t\t# https://github.com/fonttools/fonttools/issues/99#issuecomment-66776810\n\t\tfor stamp in 'created', 'modified':\n\t\t\tvalue = getattr(self, stamp)\n\t\t\tif value > 0xFFFFFFFF:\n\t\t\t\tlog.warning(\"'%s' timestamp out of range; ignoring top bytes\", stamp)\n\t\t\t\tvalue &= 0xFFFFFFFF\n\t\t\t\tsetattr(self, stamp, value)\n\t\t\tif value < 0x7C259DC0: # January 1, 1970 00:00:00\n\t\t\t\tlog.warning(\"'%s' timestamp seems very low; regarding as unix timestamp\", stamp)\n\t\t\t\tvalue += 0x7C259DC0\n\t\t\t\tsetattr(self, stamp, value)\n\n\tdef compile(self, ttFont):\n\t\tif ttFont.recalcBBoxes:\n\t\t\t# For TT-flavored fonts, xMin, yMin, xMax and yMax are set in table__m_a_x_p.recalc().\n\t\t\tif 'CFF ' in ttFont:\n\t\t\t\ttopDict = ttFont['CFF '].cff.topDictIndex[0]\n\t\t\t\tself.xMin, self.yMin, self.xMax, self.yMax = intRect(topDict.FontBBox)\n\t\tif ttFont.recalcTimestamp:\n\t\t\tself.modified = timestampNow()\n\t\tdata = sstruct.pack(headFormat, self)\n\t\treturn data\n\n\tdef toXML(self, writer, ttFont):\n\t\twriter.comment(\"Most of this table will be recalculated by the compiler\")\n\t\twriter.newline()\n\t\tformatstring, names, fixes = sstruct.getformat(headFormat)\n\t\tfor name in names:\n\t\t\tvalue = getattr(self, name)\n\t\t\tif name in (\"created\", \"modified\"):\n\t\t\t\tvalue = timestampToString(value)\n\t\t\tif name in (\"magicNumber\", \"checkSumAdjustment\"):\n\t\t\t\tif value < 0:\n\t\t\t\t\tvalue = value + 0x100000000\n\t\t\t\tvalue = hex(value)\n\t\t\t\tif value[-1:] == \"L\":\n\t\t\t\t\tvalue = value[:-1]\n\t\t\telif name in (\"macStyle\", \"flags\"):\n\t\t\t\tvalue = num2binary(value, 16)\n\t\t\twriter.simpletag(name, value=value)\n\t\t\twriter.newline()\n\n\tdef fromXML(self, name, attrs, content, ttFont):\n\t\tvalue = attrs[\"value\"]\n\t\tif name in (\"created\", \"modified\"):\n\t\t\tvalue = timestampFromString(value)\n\t\telif name in (\"macStyle\", \"flags\"):\n\t\t\tvalue = binary2num(value)\n\t\telse:\n\t\t\tvalue = safeEval(value)\n\t\tsetattr(self, name, value)\n", "path": "Lib/fontTools/ttLib/tables/_h_e_a_d.py"}]} | 1,866 | 302 |
gh_patches_debug_6860 | rasdani/github-patches | git_diff | scrapy__scrapy-5858 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TLS logging broken with new cryptography
https://github.com/pyca/cryptography/pull/8391 dropped `SSL_get_server_tmp_key()`, so we need to disable the code that uses it if it's not available.
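A minimal feature-detection sketch (assumes pyOpenSSL is installed; `OpenSSL._util` is a private module, so this mirrors what the existing code already relies on rather than any public API):

```python
import OpenSSL._util as pyOpenSSLutil

def server_tmp_key_available() -> bool:
    # cryptography >= 40.0.0 no longer exposes this binding, so probe
    # for it at runtime instead of assuming it exists
    return hasattr(pyOpenSSLutil.lib, "SSL_get_server_tmp_key")

print(server_tmp_key_available())
```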
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/ssl.py`
Content:
```
1 import OpenSSL._util as pyOpenSSLutil
2 import OpenSSL.SSL
3
4 from scrapy.utils.python import to_unicode
5
6
7 def ffi_buf_to_string(buf):
8 return to_unicode(pyOpenSSLutil.ffi.string(buf))
9
10
11 def x509name_to_string(x509name):
12 # from OpenSSL.crypto.X509Name.__repr__
13 result_buffer = pyOpenSSLutil.ffi.new("char[]", 512)
14 pyOpenSSLutil.lib.X509_NAME_oneline(
15 x509name._name, result_buffer, len(result_buffer)
16 )
17
18 return ffi_buf_to_string(result_buffer)
19
20
21 def get_temp_key_info(ssl_object):
22 # adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()
23 temp_key_p = pyOpenSSLutil.ffi.new("EVP_PKEY **")
24 if not pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):
25 return None
26 temp_key = temp_key_p[0]
27 if temp_key == pyOpenSSLutil.ffi.NULL:
28 return None
29 temp_key = pyOpenSSLutil.ffi.gc(temp_key, pyOpenSSLutil.lib.EVP_PKEY_free)
30 key_info = []
31 key_type = pyOpenSSLutil.lib.EVP_PKEY_id(temp_key)
32 if key_type == pyOpenSSLutil.lib.EVP_PKEY_RSA:
33 key_info.append("RSA")
34 elif key_type == pyOpenSSLutil.lib.EVP_PKEY_DH:
35 key_info.append("DH")
36 elif key_type == pyOpenSSLutil.lib.EVP_PKEY_EC:
37 key_info.append("ECDH")
38 ec_key = pyOpenSSLutil.lib.EVP_PKEY_get1_EC_KEY(temp_key)
39 ec_key = pyOpenSSLutil.ffi.gc(ec_key, pyOpenSSLutil.lib.EC_KEY_free)
40 nid = pyOpenSSLutil.lib.EC_GROUP_get_curve_name(
41 pyOpenSSLutil.lib.EC_KEY_get0_group(ec_key)
42 )
43 cname = pyOpenSSLutil.lib.EC_curve_nid2nist(nid)
44 if cname == pyOpenSSLutil.ffi.NULL:
45 cname = pyOpenSSLutil.lib.OBJ_nid2sn(nid)
46 key_info.append(ffi_buf_to_string(cname))
47 else:
48 key_info.append(ffi_buf_to_string(pyOpenSSLutil.lib.OBJ_nid2sn(key_type)))
49 key_info.append(f"{pyOpenSSLutil.lib.EVP_PKEY_bits(temp_key)} bits")
50 return ", ".join(key_info)
51
52
53 def get_openssl_version():
54 system_openssl = OpenSSL.SSL.SSLeay_version(OpenSSL.SSL.SSLEAY_VERSION).decode(
55 "ascii", errors="replace"
56 )
57 return f"{OpenSSL.version.__version__} ({system_openssl})"
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/utils/ssl.py b/scrapy/utils/ssl.py
--- a/scrapy/utils/ssl.py
+++ b/scrapy/utils/ssl.py
@@ -20,6 +20,9 @@
def get_temp_key_info(ssl_object):
# adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()
+ if not hasattr(pyOpenSSLutil.lib, "SSL_get_server_tmp_key"):
+ # removed in cryptography 40.0.0
+ return None
temp_key_p = pyOpenSSLutil.ffi.new("EVP_PKEY **")
if not pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):
return None
| {"golden_diff": "diff --git a/scrapy/utils/ssl.py b/scrapy/utils/ssl.py\n--- a/scrapy/utils/ssl.py\n+++ b/scrapy/utils/ssl.py\n@@ -20,6 +20,9 @@\n \n def get_temp_key_info(ssl_object):\n # adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()\n+ if not hasattr(pyOpenSSLutil.lib, \"SSL_get_server_tmp_key\"):\n+ # removed in cryptography 40.0.0\n+ return None\n temp_key_p = pyOpenSSLutil.ffi.new(\"EVP_PKEY **\")\n if not pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):\n return None\n", "issue": "TLS logging broken with new cryptography\nhttps://github.com/pyca/cryptography/pull/8391 dropped `SSL_get_server_tmp_key()` so we need to disable the code that uses it if it's not available.\n", "before_files": [{"content": "import OpenSSL._util as pyOpenSSLutil\nimport OpenSSL.SSL\n\nfrom scrapy.utils.python import to_unicode\n\n\ndef ffi_buf_to_string(buf):\n return to_unicode(pyOpenSSLutil.ffi.string(buf))\n\n\ndef x509name_to_string(x509name):\n # from OpenSSL.crypto.X509Name.__repr__\n result_buffer = pyOpenSSLutil.ffi.new(\"char[]\", 512)\n pyOpenSSLutil.lib.X509_NAME_oneline(\n x509name._name, result_buffer, len(result_buffer)\n )\n\n return ffi_buf_to_string(result_buffer)\n\n\ndef get_temp_key_info(ssl_object):\n # adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()\n temp_key_p = pyOpenSSLutil.ffi.new(\"EVP_PKEY **\")\n if not pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):\n return None\n temp_key = temp_key_p[0]\n if temp_key == pyOpenSSLutil.ffi.NULL:\n return None\n temp_key = pyOpenSSLutil.ffi.gc(temp_key, pyOpenSSLutil.lib.EVP_PKEY_free)\n key_info = []\n key_type = pyOpenSSLutil.lib.EVP_PKEY_id(temp_key)\n if key_type == pyOpenSSLutil.lib.EVP_PKEY_RSA:\n key_info.append(\"RSA\")\n elif key_type == pyOpenSSLutil.lib.EVP_PKEY_DH:\n key_info.append(\"DH\")\n elif key_type == pyOpenSSLutil.lib.EVP_PKEY_EC:\n key_info.append(\"ECDH\")\n ec_key = pyOpenSSLutil.lib.EVP_PKEY_get1_EC_KEY(temp_key)\n ec_key = pyOpenSSLutil.ffi.gc(ec_key, pyOpenSSLutil.lib.EC_KEY_free)\n nid = pyOpenSSLutil.lib.EC_GROUP_get_curve_name(\n pyOpenSSLutil.lib.EC_KEY_get0_group(ec_key)\n )\n cname = pyOpenSSLutil.lib.EC_curve_nid2nist(nid)\n if cname == pyOpenSSLutil.ffi.NULL:\n cname = pyOpenSSLutil.lib.OBJ_nid2sn(nid)\n key_info.append(ffi_buf_to_string(cname))\n else:\n key_info.append(ffi_buf_to_string(pyOpenSSLutil.lib.OBJ_nid2sn(key_type)))\n key_info.append(f\"{pyOpenSSLutil.lib.EVP_PKEY_bits(temp_key)} bits\")\n return \", \".join(key_info)\n\n\ndef get_openssl_version():\n system_openssl = OpenSSL.SSL.SSLeay_version(OpenSSL.SSL.SSLEAY_VERSION).decode(\n \"ascii\", errors=\"replace\"\n )\n return f\"{OpenSSL.version.__version__} ({system_openssl})\"\n", "path": "scrapy/utils/ssl.py"}], "after_files": [{"content": "import OpenSSL._util as pyOpenSSLutil\nimport OpenSSL.SSL\n\nfrom scrapy.utils.python import to_unicode\n\n\ndef ffi_buf_to_string(buf):\n return to_unicode(pyOpenSSLutil.ffi.string(buf))\n\n\ndef x509name_to_string(x509name):\n # from OpenSSL.crypto.X509Name.__repr__\n result_buffer = pyOpenSSLutil.ffi.new(\"char[]\", 512)\n pyOpenSSLutil.lib.X509_NAME_oneline(\n x509name._name, result_buffer, len(result_buffer)\n )\n\n return ffi_buf_to_string(result_buffer)\n\n\ndef get_temp_key_info(ssl_object):\n # adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()\n if not hasattr(pyOpenSSLutil.lib, \"SSL_get_server_tmp_key\"):\n # removed in cryptography 40.0.0\n return None\n temp_key_p = pyOpenSSLutil.ffi.new(\"EVP_PKEY **\")\n if not 
pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):\n return None\n temp_key = temp_key_p[0]\n if temp_key == pyOpenSSLutil.ffi.NULL:\n return None\n temp_key = pyOpenSSLutil.ffi.gc(temp_key, pyOpenSSLutil.lib.EVP_PKEY_free)\n key_info = []\n key_type = pyOpenSSLutil.lib.EVP_PKEY_id(temp_key)\n if key_type == pyOpenSSLutil.lib.EVP_PKEY_RSA:\n key_info.append(\"RSA\")\n elif key_type == pyOpenSSLutil.lib.EVP_PKEY_DH:\n key_info.append(\"DH\")\n elif key_type == pyOpenSSLutil.lib.EVP_PKEY_EC:\n key_info.append(\"ECDH\")\n ec_key = pyOpenSSLutil.lib.EVP_PKEY_get1_EC_KEY(temp_key)\n ec_key = pyOpenSSLutil.ffi.gc(ec_key, pyOpenSSLutil.lib.EC_KEY_free)\n nid = pyOpenSSLutil.lib.EC_GROUP_get_curve_name(\n pyOpenSSLutil.lib.EC_KEY_get0_group(ec_key)\n )\n cname = pyOpenSSLutil.lib.EC_curve_nid2nist(nid)\n if cname == pyOpenSSLutil.ffi.NULL:\n cname = pyOpenSSLutil.lib.OBJ_nid2sn(nid)\n key_info.append(ffi_buf_to_string(cname))\n else:\n key_info.append(ffi_buf_to_string(pyOpenSSLutil.lib.OBJ_nid2sn(key_type)))\n key_info.append(f\"{pyOpenSSLutil.lib.EVP_PKEY_bits(temp_key)} bits\")\n return \", \".join(key_info)\n\n\ndef get_openssl_version():\n system_openssl = OpenSSL.SSL.SSLeay_version(OpenSSL.SSL.SSLEAY_VERSION).decode(\n \"ascii\", errors=\"replace\"\n )\n return f\"{OpenSSL.version.__version__} ({system_openssl})\"\n", "path": "scrapy/utils/ssl.py"}]} | 1,019 | 155 |
gh_patches_debug_50223 | rasdani/github-patches | git_diff | pex-tool__pex-916 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.6
On the docket:
+ [x] Don't delete the root `__init__.py` when devendoring. #915
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '2.1.5'
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '2.1.5'
+__version__ = '2.1.6'
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '2.1.5'\n+__version__ = '2.1.6'\n", "issue": "Release 2.1.6\nOn the docket:\r\n+ [x] Don't delete the root `__init__.py` when devendoring. #915\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '2.1.5'\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '2.1.6'\n", "path": "pex/version.py"}]} | 345 | 94 |
gh_patches_debug_236 | rasdani/github-patches | git_diff | jazzband__pip-tools-2042 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken build due to failed `linkcheck` job
I've noticed that the Matrix badges are frequently inaccessible; see the README:
<img width="893" alt="image" src="https://github.com/jazzband/pip-tools/assets/7377671/94c2d45a-12ef-4237-8a85-434ee1bd7c05">
Sometimes, a certain issue even results in CI builds [breaking](https://github.com/jazzband/pip-tools/actions/runs/5920050370/job/16051009863#step:10:446) (caught in #1973):
```
broken https://img.shields.io/matrix/pip-tools:matrix.org?label=Discuss%20on%20Matrix%20at%20%23pip-tools%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat - 408 Client Error: Request Timeout for url: https://img.shields.io/matrix/pip-tools:matrix.org?label=Discuss%20on%20Matrix%20at%20%23pip-tools%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat
```
Perhaps we should consider [ignoring](https://github.com/jazzband/pip-tools/blob/04d2235716bc43cad3c10288081a4d2b7ee56944/docs/conf.py#L55-L57) `https://img.shields.io/matrix` as well?
/cc @webknjaz
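For illustration, the ignore list in `docs/conf.py` would then look something like this (the exact regex is an assumption; any pattern matching the badge URL prefix would do):

```python
# docs/conf.py: tell Sphinx's linkcheck builder to skip the flaky
# shields.io Matrix badge alongside the matrix.to link it already ignores
linkcheck_ignore = [
    r"^https://matrix\.to/#",
    r"^https://img.shields.io/matrix",
]
```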
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # https://www.sphinx-doc.org/en/master/usage/configuration.html
2 """Configuration file for the Sphinx documentation builder."""
3
4 from __future__ import annotations
5
6 from importlib.metadata import version as get_version
7 from pathlib import Path
8
9 from sphinx.util import logging
10 from sphinx.util.console import bold
11
12 logger = logging.getLogger(__name__)
13
14 # -- Path setup --------------------------------------------------------------
15
16 PROJECT_ROOT_DIR = Path(__file__).parents[1].resolve()
17
18
19 # -- Project information -----------------------------------------------------
20
21 project = "pip-tools"
22 author = f"{project} Contributors"
23 copyright = f"The {author}"
24
25 # The full version, including alpha/beta/rc tags
26 release = get_version(project)
27
28 # The short X.Y version
29 version = ".".join(release.split(".")[:3])
30
31 logger.info(bold("%s version: %s"), project, version)
32 logger.info(bold("%s release: %s"), project, release)
33
34 # -- General configuration ---------------------------------------------------
35
36 # Add any Sphinx extension module names here, as strings. They can be
37 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
38 # ones.
39 extensions = ["myst_parser", "sphinxcontrib.programoutput"]
40
41
42 # -- Options for HTML output -------------------------------------------------
43
44 # The theme to use for HTML and HTML Help pages. See the documentation for
45 # a list of builtin themes.
46 #
47 html_theme = "furo"
48 html_title = f"<nobr>{project}</nobr> documentation v{release}"
49
50
51 # -------------------------------------------------------------------------
52 default_role = "any"
53 nitpicky = True
54
55 linkcheck_ignore = [
56 r"^https://matrix\.to/#",
57 ]
58
59 suppress_warnings = ["myst.xref_missing"]
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -54,6 +54,7 @@
linkcheck_ignore = [
r"^https://matrix\.to/#",
+ r"^https://img.shields.io/matrix",
]
suppress_warnings = ["myst.xref_missing"]
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -54,6 +54,7 @@\n \n linkcheck_ignore = [\n r\"^https://matrix\\.to/#\",\n+ r\"^https://img.shields.io/matrix\",\n ]\n \n suppress_warnings = [\"myst.xref_missing\"]\n", "issue": "Broken build due to failed `linkcheck` job\nI've noticed that matrix badges are frequently inaccessible, see README:\r\n<img width=\"893\" alt=\"image\" src=\"https://github.com/jazzband/pip-tools/assets/7377671/94c2d45a-12ef-4237-8a85-434ee1bd7c05\">\r\n\r\nSometimes, a certain issue even results in CI builds [breaking](https://github.com/jazzband/pip-tools/actions/runs/5920050370/job/16051009863#step:10:446) (caught in #1973):\r\n\r\n```\r\nbroken https://img.shields.io/matrix/pip-tools:matrix.org?label=Discuss%20on%20Matrix%20at%20%23pip-tools%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat - 408 Client Error: Request Timeout for url: https://img.shields.io/matrix/pip-tools:matrix.org?label=Discuss%20on%20Matrix%20at%20%23pip-tools%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat\r\n```\r\n\r\nPerhaps we should consider [ignoring](https://github.com/jazzband/pip-tools/blob/04d2235716bc43cad3c10288081a4d2b7ee56944/docs/conf.py#L55-L57) `https://img.shields.io/matrix` as well?\r\n\r\n/cc @webknjaz \r\n\n", "before_files": [{"content": "# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"Configuration file for the Sphinx documentation builder.\"\"\"\n\nfrom __future__ import annotations\n\nfrom importlib.metadata import version as get_version\nfrom pathlib import Path\n\nfrom sphinx.util import logging\nfrom sphinx.util.console import bold\n\nlogger = logging.getLogger(__name__)\n\n# -- Path setup --------------------------------------------------------------\n\nPROJECT_ROOT_DIR = Path(__file__).parents[1].resolve()\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"pip-tools\"\nauthor = f\"{project} Contributors\"\ncopyright = f\"The {author}\"\n\n# The full version, including alpha/beta/rc tags\nrelease = get_version(project)\n\n# The short X.Y version\nversion = \".\".join(release.split(\".\")[:3])\n\nlogger.info(bold(\"%s version: %s\"), project, version)\nlogger.info(bold(\"%s release: %s\"), project, release)\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\"myst_parser\", \"sphinxcontrib.programoutput\"]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"furo\"\nhtml_title = f\"<nobr>{project}</nobr> documentation v{release}\"\n\n\n# -------------------------------------------------------------------------\ndefault_role = \"any\"\nnitpicky = True\n\nlinkcheck_ignore = [\n r\"^https://matrix\\.to/#\",\n]\n\nsuppress_warnings = [\"myst.xref_missing\"]\n", "path": "docs/conf.py"}], "after_files": [{"content": "# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"Configuration file for the Sphinx documentation builder.\"\"\"\n\nfrom __future__ import annotations\n\nfrom importlib.metadata import version as get_version\nfrom pathlib import Path\n\nfrom sphinx.util import logging\nfrom sphinx.util.console import bold\n\nlogger = logging.getLogger(__name__)\n\n# -- Path setup --------------------------------------------------------------\n\nPROJECT_ROOT_DIR = Path(__file__).parents[1].resolve()\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"pip-tools\"\nauthor = f\"{project} Contributors\"\ncopyright = f\"The {author}\"\n\n# The full version, including alpha/beta/rc tags\nrelease = get_version(project)\n\n# The short X.Y version\nversion = \".\".join(release.split(\".\")[:3])\n\nlogger.info(bold(\"%s version: %s\"), project, version)\nlogger.info(bold(\"%s release: %s\"), project, release)\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\"myst_parser\", \"sphinxcontrib.programoutput\"]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"furo\"\nhtml_title = f\"<nobr>{project}</nobr> documentation v{release}\"\n\n\n# -------------------------------------------------------------------------\ndefault_role = \"any\"\nnitpicky = True\n\nlinkcheck_ignore = [\n r\"^https://matrix\\.to/#\",\n r\"^https://img.shields.io/matrix\",\n]\n\nsuppress_warnings = [\"myst.xref_missing\"]\n", "path": "docs/conf.py"}]} | 1,114 | 77 |
gh_patches_debug_65929 | rasdani/github-patches | git_diff | iterative__dvc-985 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Trouble installing dvc with pip: No matching distribution found for futures>=3.2.0 (from dvc)
I'm on a fresh Ubuntu 18.04 and I want to install DVC. But I run into some dependency problems. Never had that problem before.
```
➤ virtualenv -p python3 .venv
➤ source .venv/bin/activate.fish
➤ pip install dvc
Collecting dvc
Using cached https://files.pythonhosted.org/packages/d2/2d/117b6e99f4e7f0760d99944919d9dcaaeabfb6c6182a9c890b7260eec697/dvc-0.15.2-py2.py3-none-any.whl
Collecting pyasn1>=0.4.1 (from dvc)
Using cached https://files.pythonhosted.org/packages/d1/a1/7790cc85db38daa874f6a2e6308131b9953feb1367f2ae2d1123bb93a9f5/pyasn1-0.4.4-py2.py3-none-any.whl
Collecting ply>=3.9 (from dvc)
Using cached https://files.pythonhosted.org/packages/a3/58/35da89ee790598a0700ea49b2a66594140f44dec458c07e8e3d4979137fc/ply-3.11-py2.py3-none-any.whl
Collecting futures>=3.2.0 (from dvc)
Could not find a version that satisfies the requirement futures>=3.2.0 (from dvc) (from versions: 0.2.python3, 0.1, 0.2, 1.0, 2.0, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.2.0, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.1.0, 3.1.1)
No matching distribution found for futures>=3.2.0 (from dvc)
```
Here are all the relevant versions:
```
➤ pip --version
pip 18.0 from /home/PATH/.venv/lib/python3.6/site-packages/pip (python 3.6)
➤ python --version
Python 3.6.5
➤ virtualenv --version
16.0.0
```
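For reference, a PEP 508 environment marker in `setup.py` would sidestep this entirely, since pip on Python 3 then never tries to resolve `futures` at all. A sketch:

```python
# setup.py sketch: replace the runtime sys.version_info check with a
# declarative marker that pip evaluates at install time
install_requires = [
    # ... the other dependencies ...
    'futures; python_version == "2.7"',
]
```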
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import sys
2 import platform
3 from setuptools import setup, find_packages
4 from distutils.errors import DistutilsPlatformError
5 from dvc import VERSION
6
7
8 install_requires = [
9 "ply>=3.9", # See https://github.com/pyinstaller/pyinstaller/issues/1945
10 "configparser>=3.5.0",
11 "zc.lockfile>=1.2.1",
12 "future>=0.16.0",
13 "colorama>=0.3.9",
14 "configobj>=5.0.6",
15 "networkx==2.1",
16 "pyyaml>=3.12",
17 "gitpython>=2.1.8",
18 "ntfsutils>=0.1.4",
19 "setuptools>=34.0.0",
20 "nanotime>=0.5.2",
21 "pyasn1>=0.4.1",
22 "schema>=0.6.7",
23 "jsonpath-rw==1.4.0",
24 "reflink==0.2.0",
25 "requests>=2.18.4",
26 ]
27
28 if sys.version_info[0] == 2:
29 install_requires.append("futures>=3.2.0")
30
31 # Extra dependencies for remote integrations
32 gs = [
33 "google-cloud==0.32.0",
34 ]
35 s3 = [
36 "boto3==1.7.4",
37 ]
38 azure = [
39 "azure-storage-blob==1.3.0"
40 ]
41 ssh = [
42 "paramiko>=2.4.1",
43 ]
44 all_remotes = gs + s3 + azure + ssh
45
46 setup(
47 name='dvc',
48 version=VERSION,
49 description='Git for data scientists - manage your code and data together',
50 long_description=open('README.rst', 'r').read(),
51 author='Dmitry Petrov',
52 author_email='[email protected]',
53 download_url='https://github.com/iterative/dvc',
54 license='Apache License 2.0',
55 install_requires=install_requires,
56 extras_require={
57 'all': all_remotes,
58 'gs': gs,
59 's3': s3,
60 'azure': azure,
61 'ssh': ssh,
62 },
63 keywords='data science, data version control, machine learning',
64 classifiers=[
65 'Development Status :: 4 - Beta',
66 'Programming Language :: Python :: 2',
67 'Programming Language :: Python :: 3',
68 ],
69 packages=find_packages(exclude=['bin', 'tests', 'functests']),
70 include_package_data=True,
71 url='http://dataversioncontrol.com',
72 entry_points={
73 'console_scripts': ['dvc = dvc.main:main']
74 },
75 zip_safe=False
76 )
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,11 +23,9 @@
"jsonpath-rw==1.4.0",
"reflink==0.2.0",
"requests>=2.18.4",
+ 'futures; python_version == "2.7"',
]
-if sys.version_info[0] == 2:
- install_requires.append("futures>=3.2.0")
-
# Extra dependencies for remote integrations
gs = [
"google-cloud==0.32.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,11 +23,9 @@\n \"jsonpath-rw==1.4.0\",\n \"reflink==0.2.0\",\n \"requests>=2.18.4\",\n+ 'futures; python_version == \"2.7\"',\n ]\n \n-if sys.version_info[0] == 2:\n- install_requires.append(\"futures>=3.2.0\")\n-\n # Extra dependencies for remote integrations\n gs = [\n \"google-cloud==0.32.0\",\n", "issue": "Trouble installing dvc with pip: No matching distribution found for futures>=3.2.0 (from dvc)\nI'm on a fresh ubuntu 18.04 and I want to install DVC. But I run into some dependency problems. Never had that problem before.\r\n```\r\n\u27a4 virtualenv -p python3 .venv\r\n\u27a4 source .venv/bin/activate.fish\r\n\u27a4 pip install dvc\r\nCollecting dvc\r\n Using cached https://files.pythonhosted.org/packages/d2/2d/117b6e99f4e7f0760d99944919d9dcaaeabfb6c6182a9c890b7260eec697/dvc-0.15.2-py2.py3-none-any.whl\r\nCollecting pyasn1>=0.4.1 (from dvc)\r\n Using cached https://files.pythonhosted.org/packages/d1/a1/7790cc85db38daa874f6a2e6308131b9953feb1367f2ae2d1123bb93a9f5/pyasn1-0.4.4-py2.py3-none-any.whl\r\nCollecting ply>=3.9 (from dvc)\r\n Using cached https://files.pythonhosted.org/packages/a3/58/35da89ee790598a0700ea49b2a66594140f44dec458c07e8e3d4979137fc/ply-3.11-py2.py3-none-any.whl\r\nCollecting futures>=3.2.0 (from dvc)\r\n Could not find a version that satisfies the requirement futures>=3.2.0 (from dvc) (from versions: 0.2.python3, 0.1, 0.2, 1.0, 2.0, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.2.0, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.1.0, 3.1.1)\r\nNo matching distribution found for futures>=3.2.0 (from dvc)\r\n```\r\nHere are all relevant version\r\n```\r\n\u27a4 pip --version\r\npip 18.0 from /home/PATH/.venv/lib/python3.6/site-packages/pip (python 3.6)\r\n\u27a4 python --version\r\nPython 3.6.5\r\n\u27a4 virtualenv --version\r\n16.0.0\r\n```\n", "before_files": [{"content": "import sys\nimport platform\nfrom setuptools import setup, find_packages\nfrom distutils.errors import DistutilsPlatformError\nfrom dvc import VERSION\n\n\ninstall_requires = [\n \"ply>=3.9\", # See https://github.com/pyinstaller/pyinstaller/issues/1945\n \"configparser>=3.5.0\",\n \"zc.lockfile>=1.2.1\",\n \"future>=0.16.0\",\n \"colorama>=0.3.9\",\n \"configobj>=5.0.6\",\n \"networkx==2.1\",\n \"pyyaml>=3.12\",\n \"gitpython>=2.1.8\",\n \"ntfsutils>=0.1.4\",\n \"setuptools>=34.0.0\",\n \"nanotime>=0.5.2\",\n \"pyasn1>=0.4.1\",\n \"schema>=0.6.7\",\n \"jsonpath-rw==1.4.0\",\n \"reflink==0.2.0\",\n \"requests>=2.18.4\",\n]\n\nif sys.version_info[0] == 2:\n install_requires.append(\"futures>=3.2.0\")\n\n# Extra dependencies for remote integrations\ngs = [\n \"google-cloud==0.32.0\",\n]\ns3 = [\n \"boto3==1.7.4\",\n]\nazure = [\n \"azure-storage-blob==1.3.0\"\n]\nssh = [\n \"paramiko>=2.4.1\",\n]\nall_remotes = gs + s3 + azure + ssh\n\nsetup(\n name='dvc',\n version=VERSION,\n description='Git for data scientists - manage your code and data together',\n long_description=open('README.rst', 'r').read(),\n author='Dmitry Petrov',\n author_email='[email protected]',\n download_url='https://github.com/iterative/dvc',\n license='Apache License 2.0',\n install_requires=install_requires,\n extras_require={\n 'all': all_remotes,\n 'gs': gs,\n 's3': s3,\n 'azure': azure,\n 'ssh': ssh,\n },\n keywords='data science, data version control, machine learning',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n ],\n 
packages=find_packages(exclude=['bin', 'tests', 'functests']),\n include_package_data=True,\n url='http://dataversioncontrol.com',\n entry_points={\n 'console_scripts': ['dvc = dvc.main:main']\n },\n zip_safe=False\n)\n", "path": "setup.py"}], "after_files": [{"content": "import sys\nimport platform\nfrom setuptools import setup, find_packages\nfrom distutils.errors import DistutilsPlatformError\nfrom dvc import VERSION\n\n\ninstall_requires = [\n \"ply>=3.9\", # See https://github.com/pyinstaller/pyinstaller/issues/1945\n \"configparser>=3.5.0\",\n \"zc.lockfile>=1.2.1\",\n \"future>=0.16.0\",\n \"colorama>=0.3.9\",\n \"configobj>=5.0.6\",\n \"networkx==2.1\",\n \"pyyaml>=3.12\",\n \"gitpython>=2.1.8\",\n \"ntfsutils>=0.1.4\",\n \"setuptools>=34.0.0\",\n \"nanotime>=0.5.2\",\n \"pyasn1>=0.4.1\",\n \"schema>=0.6.7\",\n \"jsonpath-rw==1.4.0\",\n \"reflink==0.2.0\",\n \"requests>=2.18.4\",\n 'futures; python_version == \"2.7\"',\n]\n\n# Extra dependencies for remote integrations\ngs = [\n \"google-cloud==0.32.0\",\n]\ns3 = [\n \"boto3==1.7.4\",\n]\nazure = [\n \"azure-storage-blob==1.3.0\"\n]\nssh = [\n \"paramiko>=2.4.1\",\n]\nall_remotes = gs + s3 + azure + ssh\n\nsetup(\n name='dvc',\n version=VERSION,\n description='Git for data scientists - manage your code and data together',\n long_description=open('README.rst', 'r').read(),\n author='Dmitry Petrov',\n author_email='[email protected]',\n download_url='https://github.com/iterative/dvc',\n license='Apache License 2.0',\n install_requires=install_requires,\n extras_require={\n 'all': all_remotes,\n 'gs': gs,\n 's3': s3,\n 'azure': azure,\n 'ssh': ssh,\n },\n keywords='data science, data version control, machine learning',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n ],\n packages=find_packages(exclude=['bin', 'tests', 'functests']),\n include_package_data=True,\n url='http://dataversioncontrol.com',\n entry_points={\n 'console_scripts': ['dvc = dvc.main:main']\n },\n zip_safe=False\n)\n", "path": "setup.py"}]} | 1,655 | 134 |
gh_patches_debug_8904 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-3852 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Azure - c7n-Mailer Errors
About 50% of the time the mailer runs, the following error occurs and messages aren't picked up or delivered:
```
Traceback (most recent call last):
File "/usr/local/bin/c7n-mailer", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/c7n_mailer/cli.py", line 227, in main
processor.run()
File "/usr/local/lib/python3.7/site-packages/c7n_mailer/azure/azure_queue_processor.py", line 62, in run
if (self.process_azure_queue_message(queue_message) or
File "/usr/local/lib/python3.7/site-packages/c7n_mailer/azure/azure_queue_processor.py", line 89, in process_azure_queue_message
SendGridDelivery(self.config, self.logger))
File "/usr/local/lib/python3.7/site-packages/c7n_mailer/azure/sendgrid_delivery.py", line 29, in __init__
sendgrid.SendGridAPIClient(apikey=self.config.get('sendgrid_api_key', ''))
TypeError: __init__() got an unexpected keyword argument 'apikey'
```
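It looks like the sendgrid v6 client dropped the `apikey=` keyword. A minimal sketch of the compatible call (the key value below is a placeholder):

```python
import sendgrid

# sendgrid >= 6 takes the key positionally (or as api_key=); apikey= is gone
client = sendgrid.SendGridAPIClient("SG.placeholder-key")
```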
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py`
Content:
```
1 # Copyright 2018 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import sendgrid
16 import six
17 from c7n_mailer.utils import (get_message_subject, get_rendered_jinja)
18 from c7n_mailer.utils_email import is_email
19 from python_http_client import exceptions
20 from sendgrid.helpers.mail import Email, Content, Mail
21
22
23 class SendGridDelivery(object):
24
25 def __init__(self, config, logger):
26 self.config = config
27 self.logger = logger
28 self.sendgrid_client = \
29 sendgrid.SendGridAPIClient(apikey=self.config.get('sendgrid_api_key', ''))
30
31 def get_to_addrs_sendgrid_messages_map(self, queue_message):
32 # eg: { ('[email protected]', '[email protected]'): [resource1, resource2, etc] }
33 to_addrs_to_resources_map = self.get_email_to_addrs_to_resources_map(queue_message)
34
35 to_addrs_to_content_map = {}
36 for to_addrs, resources in six.iteritems(to_addrs_to_resources_map):
37 to_addrs_to_content_map[to_addrs] = self.get_message_content(
38 queue_message,
39 resources,
40 list(to_addrs)
41 )
42 # eg: { ('[email protected]', '[email protected]'): message }
43 return to_addrs_to_content_map
44
45 # this function returns a dictionary with a tuple of emails as the key
46 # and the list of resources as the value. This helps ensure minimal emails
47 # are sent, while only ever sending emails to the respective parties.
48 def get_email_to_addrs_to_resources_map(self, queue_message):
49 email_to_addrs_to_resources_map = {}
50 targets = queue_message['action']['to']
51
52 for resource in queue_message['resources']:
53 # this is the list of emails that will be sent for this resource
54 resource_emails = []
55
56 for target in targets:
57 if target.startswith('tag:') and 'tags' in resource:
58 tag_name = target.split(':', 1)[1]
59 result = resource.get('tags', {}).get(tag_name, None)
60 if is_email(result):
61 resource_emails.append(result)
62 elif is_email(target):
63 resource_emails.append(target)
64
65 resource_emails = tuple(sorted(set(resource_emails)))
66
67 if resource_emails:
68 email_to_addrs_to_resources_map.setdefault(resource_emails, []).append(resource)
69
70 if email_to_addrs_to_resources_map == {}:
71 self.logger.debug('Found no email addresses, sending no emails.')
72 # eg: { ('[email protected]', '[email protected]'): [resource1, resource2, etc] }
73 return email_to_addrs_to_resources_map
74
75 def get_message_content(self, queue_message, resources, to_addrs):
76 return get_rendered_jinja(
77 to_addrs, queue_message, resources, self.logger,
78 'template', 'default', self.config['templates_folders'])
79
80 def sendgrid_handler(self, queue_message, to_addrs_to_email_messages_map):
81 self.logger.info("Sending account:%s policy:%s %s:%s email:%s to %s" % (
82 queue_message.get('account', ''),
83 queue_message['policy']['name'],
84 queue_message['policy']['resource'],
85 str(len(queue_message['resources'])),
86 queue_message['action'].get('template', 'default'),
87 to_addrs_to_email_messages_map))
88
89 from_email = Email(self.config.get('from_address', ''))
90 subject = get_message_subject(queue_message)
91 email_format = queue_message['action'].get('template_format', None)
92 if not email_format:
93 email_format = queue_message['action'].get(
94 'template', 'default').endswith('html') and 'html' or 'plain'
95
96 for email_to_addrs, email_content in six.iteritems(to_addrs_to_email_messages_map):
97 for to_address in email_to_addrs:
98 to_email = Email(to_address)
99 content = Content("text/" + email_format, email_content)
100 mail = Mail(from_email, subject, to_email, content)
101 try:
102 self.sendgrid_client.client.mail.send.post(request_body=mail.get())
103 except (exceptions.UnauthorizedError, exceptions.BadRequestsError) as e:
104 self.logger.warning(
105 "\n**Error \nPolicy:%s \nAccount:%s \nSending to:%s \n\nRequest body:"
106 "\n%s\n\nRequest headers:\n%s\n\n mailer.yml: %s" % (
107 queue_message['policy'],
108 queue_message.get('account', ''),
109 email_to_addrs,
110 e.body,
111 e.headers,
112 self.config
113 )
114 )
115 return False
116 return True
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py b/tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py
--- a/tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py
+++ b/tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py
@@ -26,7 +26,7 @@
self.config = config
self.logger = logger
self.sendgrid_client = \
- sendgrid.SendGridAPIClient(apikey=self.config.get('sendgrid_api_key', ''))
+ sendgrid.SendGridAPIClient(self.config.get('sendgrid_api_key', ''))
def get_to_addrs_sendgrid_messages_map(self, queue_message):
# eg: { ('[email protected]', '[email protected]'): [resource1, resource2, etc] }
| {"golden_diff": "diff --git a/tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py b/tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py\n--- a/tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py\n+++ b/tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py\n@@ -26,7 +26,7 @@\n self.config = config\n self.logger = logger\n self.sendgrid_client = \\\n- sendgrid.SendGridAPIClient(apikey=self.config.get('sendgrid_api_key', ''))\n+ sendgrid.SendGridAPIClient(self.config.get('sendgrid_api_key', ''))\n \n def get_to_addrs_sendgrid_messages_map(self, queue_message):\n # eg: { ('[email protected]', '[email protected]'): [resource1, resource2, etc] }\n", "issue": "Azure - c7n-Mailer Errors\nAbout 50% of the time mailer runs, the following error results and messages aren't picked up, delivered:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/c7n-mailer\", line 10, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.7/site-packages/c7n_mailer/cli.py\", line 227, in main\r\n processor.run()\r\n File \"/usr/local/lib/python3.7/site-packages/c7n_mailer/azure/azure_queue_processor.py\", line 62, in run\r\n if (self.process_azure_queue_message(queue_message) or\r\n File \"/usr/local/lib/python3.7/site-packages/c7n_mailer/azure/azure_queue_processor.py\", line 89, in process_azure_queue_message\r\n SendGridDelivery(self.config, self.logger))\r\n File \"/usr/local/lib/python3.7/site-packages/c7n_mailer/azure/sendgrid_delivery.py\", line 29, in __init__\r\n sendgrid.SendGridAPIClient(apikey=self.config.get('sendgrid_api_key', ''))\r\nTypeError: __init__() got an unexpected keyword argument 'apikey'\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sendgrid\nimport six\nfrom c7n_mailer.utils import (get_message_subject, get_rendered_jinja)\nfrom c7n_mailer.utils_email import is_email\nfrom python_http_client import exceptions\nfrom sendgrid.helpers.mail import Email, Content, Mail\n\n\nclass SendGridDelivery(object):\n\n def __init__(self, config, logger):\n self.config = config\n self.logger = logger\n self.sendgrid_client = \\\n sendgrid.SendGridAPIClient(apikey=self.config.get('sendgrid_api_key', ''))\n\n def get_to_addrs_sendgrid_messages_map(self, queue_message):\n # eg: { ('[email protected]', '[email protected]'): [resource1, resource2, etc] }\n to_addrs_to_resources_map = self.get_email_to_addrs_to_resources_map(queue_message)\n\n to_addrs_to_content_map = {}\n for to_addrs, resources in six.iteritems(to_addrs_to_resources_map):\n to_addrs_to_content_map[to_addrs] = self.get_message_content(\n queue_message,\n resources,\n list(to_addrs)\n )\n # eg: { ('[email protected]', '[email protected]'): message }\n return to_addrs_to_content_map\n\n # this function returns a dictionary with a tuple of emails as the key\n # and the list of resources as the value. 
This helps ensure minimal emails\n # are sent, while only ever sending emails to the respective parties.\n def get_email_to_addrs_to_resources_map(self, queue_message):\n email_to_addrs_to_resources_map = {}\n targets = queue_message['action']['to']\n\n for resource in queue_message['resources']:\n # this is the list of emails that will be sent for this resource\n resource_emails = []\n\n for target in targets:\n if target.startswith('tag:') and 'tags' in resource:\n tag_name = target.split(':', 1)[1]\n result = resource.get('tags', {}).get(tag_name, None)\n if is_email(result):\n resource_emails.append(result)\n elif is_email(target):\n resource_emails.append(target)\n\n resource_emails = tuple(sorted(set(resource_emails)))\n\n if resource_emails:\n email_to_addrs_to_resources_map.setdefault(resource_emails, []).append(resource)\n\n if email_to_addrs_to_resources_map == {}:\n self.logger.debug('Found no email addresses, sending no emails.')\n # eg: { ('[email protected]', '[email protected]'): [resource1, resource2, etc] }\n return email_to_addrs_to_resources_map\n\n def get_message_content(self, queue_message, resources, to_addrs):\n return get_rendered_jinja(\n to_addrs, queue_message, resources, self.logger,\n 'template', 'default', self.config['templates_folders'])\n\n def sendgrid_handler(self, queue_message, to_addrs_to_email_messages_map):\n self.logger.info(\"Sending account:%s policy:%s %s:%s email:%s to %s\" % (\n queue_message.get('account', ''),\n queue_message['policy']['name'],\n queue_message['policy']['resource'],\n str(len(queue_message['resources'])),\n queue_message['action'].get('template', 'default'),\n to_addrs_to_email_messages_map))\n\n from_email = Email(self.config.get('from_address', ''))\n subject = get_message_subject(queue_message)\n email_format = queue_message['action'].get('template_format', None)\n if not email_format:\n email_format = queue_message['action'].get(\n 'template', 'default').endswith('html') and 'html' or 'plain'\n\n for email_to_addrs, email_content in six.iteritems(to_addrs_to_email_messages_map):\n for to_address in email_to_addrs:\n to_email = Email(to_address)\n content = Content(\"text/\" + email_format, email_content)\n mail = Mail(from_email, subject, to_email, content)\n try:\n self.sendgrid_client.client.mail.send.post(request_body=mail.get())\n except (exceptions.UnauthorizedError, exceptions.BadRequestsError) as e:\n self.logger.warning(\n \"\\n**Error \\nPolicy:%s \\nAccount:%s \\nSending to:%s \\n\\nRequest body:\"\n \"\\n%s\\n\\nRequest headers:\\n%s\\n\\n mailer.yml: %s\" % (\n queue_message['policy'],\n queue_message.get('account', ''),\n email_to_addrs,\n e.body,\n e.headers,\n self.config\n )\n )\n return False\n return True\n", "path": "tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py"}], "after_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sendgrid\nimport six\nfrom c7n_mailer.utils import (get_message_subject, get_rendered_jinja)\nfrom 
c7n_mailer.utils_email import is_email\nfrom python_http_client import exceptions\nfrom sendgrid.helpers.mail import Email, Content, Mail\n\n\nclass SendGridDelivery(object):\n\n def __init__(self, config, logger):\n self.config = config\n self.logger = logger\n self.sendgrid_client = \\\n sendgrid.SendGridAPIClient(self.config.get('sendgrid_api_key', ''))\n\n def get_to_addrs_sendgrid_messages_map(self, queue_message):\n # eg: { ('[email protected]', '[email protected]'): [resource1, resource2, etc] }\n to_addrs_to_resources_map = self.get_email_to_addrs_to_resources_map(queue_message)\n\n to_addrs_to_content_map = {}\n for to_addrs, resources in six.iteritems(to_addrs_to_resources_map):\n to_addrs_to_content_map[to_addrs] = self.get_message_content(\n queue_message,\n resources,\n list(to_addrs)\n )\n # eg: { ('[email protected]', '[email protected]'): message }\n return to_addrs_to_content_map\n\n # this function returns a dictionary with a tuple of emails as the key\n # and the list of resources as the value. This helps ensure minimal emails\n # are sent, while only ever sending emails to the respective parties.\n def get_email_to_addrs_to_resources_map(self, queue_message):\n email_to_addrs_to_resources_map = {}\n targets = queue_message['action']['to']\n\n for resource in queue_message['resources']:\n # this is the list of emails that will be sent for this resource\n resource_emails = []\n\n for target in targets:\n if target.startswith('tag:') and 'tags' in resource:\n tag_name = target.split(':', 1)[1]\n result = resource.get('tags', {}).get(tag_name, None)\n if is_email(result):\n resource_emails.append(result)\n elif is_email(target):\n resource_emails.append(target)\n\n resource_emails = tuple(sorted(set(resource_emails)))\n\n if resource_emails:\n email_to_addrs_to_resources_map.setdefault(resource_emails, []).append(resource)\n\n if email_to_addrs_to_resources_map == {}:\n self.logger.debug('Found no email addresses, sending no emails.')\n # eg: { ('[email protected]', '[email protected]'): [resource1, resource2, etc] }\n return email_to_addrs_to_resources_map\n\n def get_message_content(self, queue_message, resources, to_addrs):\n return get_rendered_jinja(\n to_addrs, queue_message, resources, self.logger,\n 'template', 'default', self.config['templates_folders'])\n\n def sendgrid_handler(self, queue_message, to_addrs_to_email_messages_map):\n self.logger.info(\"Sending account:%s policy:%s %s:%s email:%s to %s\" % (\n queue_message.get('account', ''),\n queue_message['policy']['name'],\n queue_message['policy']['resource'],\n str(len(queue_message['resources'])),\n queue_message['action'].get('template', 'default'),\n to_addrs_to_email_messages_map))\n\n from_email = Email(self.config.get('from_address', ''))\n subject = get_message_subject(queue_message)\n email_format = queue_message['action'].get('template_format', None)\n if not email_format:\n email_format = queue_message['action'].get(\n 'template', 'default').endswith('html') and 'html' or 'plain'\n\n for email_to_addrs, email_content in six.iteritems(to_addrs_to_email_messages_map):\n for to_address in email_to_addrs:\n to_email = Email(to_address)\n content = Content(\"text/\" + email_format, email_content)\n mail = Mail(from_email, subject, to_email, content)\n try:\n self.sendgrid_client.client.mail.send.post(request_body=mail.get())\n except (exceptions.UnauthorizedError, exceptions.BadRequestsError) as e:\n self.logger.warning(\n \"\\n**Error \\nPolicy:%s \\nAccount:%s \\nSending to:%s \\n\\nRequest body:\"\n 
\"\\n%s\\n\\nRequest headers:\\n%s\\n\\n mailer.yml: %s\" % (\n queue_message['policy'],\n queue_message.get('account', ''),\n email_to_addrs,\n e.body,\n e.headers,\n self.config\n )\n )\n return False\n return True\n", "path": "tools/c7n_mailer/c7n_mailer/azure/sendgrid_delivery.py"}]} | 1,901 | 198 |
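For context on the row above: sendgrid v6 removed the old `apikey=` keyword from `SendGridAPIClient`, which is exactly the `TypeError` shown in the issue traceback. A minimal sketch of the before/after call (assumes `sendgrid>=6`; the config dict is a stand-in for c7n-mailer's real config object):

```python
import sendgrid

config = {"sendgrid_api_key": "SG.placeholder"}  # stand-in for the mailer config

# Old c7n-mailer call, which raises TypeError on sendgrid>=6:
#   sendgrid.SendGridAPIClient(apikey=config.get("sendgrid_api_key", ""))

# Patched call from the golden diff above -- the key is passed positionally:
client = sendgrid.SendGridAPIClient(config.get("sendgrid_api_key", ""))
```

Constructing the client only stores the key, so the sketch runs without network access.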
gh_patches_debug_23587 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-359 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
McDonald's
JSON endpoint: http://rl.mcdonalds.com/googleapps/GoogleSearchUSAction.do?method=searchLocation&searchTxtLatlng=(43.1272254%2C-87.9432837)&actionType=searchRestaurant&language=en&country=us
Search by lat/lon only? Looks like they geocode using Google Maps API and then call this endpoint with a lat/lon.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/mcdonalds_localizer.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4 from locations.items import GeojsonPointItem
5
6
7 class McLocalizer(scrapy.Spider):
8
9 name = "mclocalizer"
10 allowed_domains = ["www.mcdonalds.com", "www.mcdonalds.com.pr", "www.mcdonalds.co.cr", "www.mcdonalds.com.ar"]
11 start_urls = (
12 'http://www.mcdonalds.com.pr/api/restaurantsByCountry?country=PR',
13 'http://www.mcdonalds.co.cr/api/restaurantsByCountry?country=CR',
14 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR'
15 )
16
17 def parse(self, response):
18 data = response.body_as_unicode()
19 data.replace('" ', '"')
20 data.replace(' "', '"')
21 results = json.loads(data)
22 results = results["content"]["restaurants"]
23 for data in results:
24 properties = {
25 'ref': data['id'],
26 'lon': float(data['longitude']),
27 'lat': float(data['latitude']),
28
29 }
30
31 contact_info = data['name'][:data['name'].find("<br")]
32 name = contact_info[:contact_info.find("</br")]
33
34 properties["name"] = name
35 properties["addr_full"] = data['name'][data['name'].find("<small>"):-8][8:]
36 # = address[8:]
37
38 yield GeojsonPointItem(**properties)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/mcdonalds_localizer.py b/locations/spiders/mcdonalds_localizer.py
--- a/locations/spiders/mcdonalds_localizer.py
+++ b/locations/spiders/mcdonalds_localizer.py
@@ -7,11 +7,12 @@
class McLocalizer(scrapy.Spider):
name = "mclocalizer"
- allowed_domains = ["www.mcdonalds.com", "www.mcdonalds.com.pr", "www.mcdonalds.co.cr", "www.mcdonalds.com.ar"]
+ allowed_domains = ["www.mcdonalds.com", "www.mcdonalds.com.pr", "www.mcdonalds.co.cr", "www.mcdonalds.com.ar", "www.mcdonalds.com.pa"]
start_urls = (
'http://www.mcdonalds.com.pr/api/restaurantsByCountry?country=PR',
'http://www.mcdonalds.co.cr/api/restaurantsByCountry?country=CR',
- 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR'
+ 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR',
+ 'http://www.mcdonalds.com.pa/api/restaurantsByCountry?country=PA'
)
def parse(self, response):
@@ -33,6 +34,5 @@
properties["name"] = name
properties["addr_full"] = data['name'][data['name'].find("<small>"):-8][8:]
- # = address[8:]
yield GeojsonPointItem(**properties)
\ No newline at end of file
| {"golden_diff": "diff --git a/locations/spiders/mcdonalds_localizer.py b/locations/spiders/mcdonalds_localizer.py\n--- a/locations/spiders/mcdonalds_localizer.py\n+++ b/locations/spiders/mcdonalds_localizer.py\n@@ -7,11 +7,12 @@\n class McLocalizer(scrapy.Spider):\n \n name = \"mclocalizer\"\n- allowed_domains = [\"www.mcdonalds.com\", \"www.mcdonalds.com.pr\", \"www.mcdonalds.co.cr\", \"www.mcdonalds.com.ar\"]\n+ allowed_domains = [\"www.mcdonalds.com\", \"www.mcdonalds.com.pr\", \"www.mcdonalds.co.cr\", \"www.mcdonalds.com.ar\", \"www.mcdonalds.com.pa\"]\n start_urls = (\n 'http://www.mcdonalds.com.pr/api/restaurantsByCountry?country=PR',\n 'http://www.mcdonalds.co.cr/api/restaurantsByCountry?country=CR',\n- 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR'\n+ 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR',\n+ 'http://www.mcdonalds.com.pa/api/restaurantsByCountry?country=PA'\n )\n \n def parse(self, response):\n@@ -33,6 +34,5 @@\n \n properties[\"name\"] = name\n properties[\"addr_full\"] = data['name'][data['name'].find(\"<small>\"):-8][8:]\n- # = address[8:]\n \n yield GeojsonPointItem(**properties)\n\\ No newline at end of file\n", "issue": "McDonald's\nJSON endpoint: http://rl.mcdonalds.com/googleapps/GoogleSearchUSAction.do?method=searchLocation&searchTxtLatlng=(43.1272254%2C-87.9432837)&actionType=searchRestaurant&language=en&country=us\n\nSearch by lat/lon only? Looks like they geocode using Google Maps API and then call this endpoint with a lat/lon.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nfrom locations.items import GeojsonPointItem\n\n\nclass McLocalizer(scrapy.Spider):\n\n name = \"mclocalizer\"\n allowed_domains = [\"www.mcdonalds.com\", \"www.mcdonalds.com.pr\", \"www.mcdonalds.co.cr\", \"www.mcdonalds.com.ar\"]\n start_urls = (\n 'http://www.mcdonalds.com.pr/api/restaurantsByCountry?country=PR',\n 'http://www.mcdonalds.co.cr/api/restaurantsByCountry?country=CR',\n 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR'\n )\n\n def parse(self, response):\n data = response.body_as_unicode()\n data.replace('\" ', '\"')\n data.replace(' \"', '\"')\n results = json.loads(data)\n results = results[\"content\"][\"restaurants\"]\n for data in results:\n properties = {\n 'ref': data['id'],\n 'lon': float(data['longitude']),\n 'lat': float(data['latitude']),\n \n }\n\n contact_info = data['name'][:data['name'].find(\"<br\")]\n name = contact_info[:contact_info.find(\"</br\")]\n\n properties[\"name\"] = name\n properties[\"addr_full\"] = data['name'][data['name'].find(\"<small>\"):-8][8:]\n # = address[8:]\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/mcdonalds_localizer.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nfrom locations.items import GeojsonPointItem\n\n\nclass McLocalizer(scrapy.Spider):\n\n name = \"mclocalizer\"\n allowed_domains = [\"www.mcdonalds.com\", \"www.mcdonalds.com.pr\", \"www.mcdonalds.co.cr\", \"www.mcdonalds.com.ar\", \"www.mcdonalds.com.pa\"]\n start_urls = (\n 'http://www.mcdonalds.com.pr/api/restaurantsByCountry?country=PR',\n 'http://www.mcdonalds.co.cr/api/restaurantsByCountry?country=CR',\n 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR',\n 'http://www.mcdonalds.com.pa/api/restaurantsByCountry?country=PA'\n )\n\n def parse(self, response):\n data = response.body_as_unicode()\n data.replace('\" ', '\"')\n data.replace(' \"', '\"')\n results = json.loads(data)\n results = 
results[\"content\"][\"restaurants\"]\n for data in results:\n properties = {\n 'ref': data['id'],\n 'lon': float(data['longitude']),\n 'lat': float(data['latitude']),\n \n }\n\n contact_info = data['name'][:data['name'].find(\"<br\")]\n name = contact_info[:contact_info.find(\"</br\")]\n\n properties[\"name\"] = name\n properties[\"addr_full\"] = data['name'][data['name'].find(\"<small>\"):-8][8:]\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/mcdonalds_localizer.py"}]} | 745 | 370 |
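One detail worth flagging in the spider listing above: lines 19-20 of `mcdonalds_localizer.py` call `data.replace(...)` without assignment, and `str.replace` returns a new string rather than mutating in place, so those two calls are no-ops (the golden diff leaves them untouched). A tiny self-contained demonstration of the pitfall:

```python
data = '{"name" : "value"}'

data.replace('" ', '"')          # return value discarded; data is unchanged
assert data == '{"name" : "value"}'

data = data.replace('" ', '"')   # rebinding is required for the edit to stick
assert data == '{"name": "value"}'
```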
gh_patches_debug_8064 | rasdani/github-patches | git_diff | getsentry__sentry-37553 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot edit or delete alerts
### Environment
SaaS (https://sentry.io/)
### Version
_No response_
### Steps to Reproduce
1. Have alerts that were set up a while ago
2. Get a bunch of emails from one alert that is too touchy
3. Try to edit alert (fails)
4. Try to delete alert (fails)
### Expected Result
Can edit or delete alerts that I created on an account that I am the only user for
### Actual Result
Cannot edit or delete alerts


--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/incidents/endpoints/bases.py`
Content:
```
1 from rest_framework.exceptions import PermissionDenied
2 from rest_framework.request import Request
3
4 from sentry import features
5 from sentry.api.bases.organization import OrganizationAlertRulePermission, OrganizationEndpoint
6 from sentry.api.bases.project import ProjectAlertRulePermission, ProjectEndpoint
7 from sentry.api.exceptions import ResourceDoesNotExist
8 from sentry.incidents.models import AlertRule, AlertRuleTrigger, AlertRuleTriggerAction
9
10
11 class ProjectAlertRuleEndpoint(ProjectEndpoint):
12 permission_classes = (ProjectAlertRulePermission,)
13
14 def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):
15 args, kwargs = super().convert_args(request, *args, **kwargs)
16 project = kwargs["project"]
17
18 if not features.has("organizations:incidents", project.organization, actor=request.user):
19 raise ResourceDoesNotExist
20
21 if not request.access.has_project_access(project):
22 raise PermissionDenied
23
24 try:
25 kwargs["alert_rule"] = AlertRule.objects.get(
26 snuba_query__subscriptions__project=project, id=alert_rule_id
27 )
28 except AlertRule.DoesNotExist:
29 raise ResourceDoesNotExist
30
31 return args, kwargs
32
33
34 class OrganizationAlertRuleEndpoint(OrganizationEndpoint):
35 permission_classes = (OrganizationAlertRulePermission,)
36
37 def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):
38 args, kwargs = super().convert_args(request, *args, **kwargs)
39 organization = kwargs["organization"]
40
41 if not features.has("organizations:incidents", organization, actor=request.user):
42 raise ResourceDoesNotExist
43
44 try:
45 kwargs["alert_rule"] = AlertRule.objects.get(
46 organization=organization, id=alert_rule_id
47 )
48 except AlertRule.DoesNotExist:
49 raise ResourceDoesNotExist
50
51 return args, kwargs
52
53
54 class OrganizationAlertRuleTriggerEndpoint(OrganizationAlertRuleEndpoint):
55 def convert_args(self, request: Request, alert_rule_trigger_id, *args, **kwargs):
56 args, kwargs = super().convert_args(request, *args, **kwargs)
57 organization = kwargs["organization"]
58 alert_rule = kwargs["alert_rule"]
59
60 if not features.has("organizations:incidents", organization, actor=request.user):
61 raise ResourceDoesNotExist
62
63 try:
64 kwargs["alert_rule_trigger"] = AlertRuleTrigger.objects.get(
65 alert_rule=alert_rule, id=alert_rule_trigger_id
66 )
67 except AlertRuleTrigger.DoesNotExist:
68 raise ResourceDoesNotExist
69
70 return args, kwargs
71
72
73 class OrganizationAlertRuleTriggerActionEndpoint(OrganizationAlertRuleTriggerEndpoint):
74 def convert_args(self, request: Request, alert_rule_trigger_action_id, *args, **kwargs):
75 args, kwargs = super().convert_args(request, *args, **kwargs)
76 organization = kwargs["organization"]
77 trigger = kwargs["alert_rule_trigger"]
78
79 if not features.has("organizations:incidents", organization, actor=request.user):
80 raise ResourceDoesNotExist
81
82 try:
83 kwargs["alert_rule_trigger_action"] = AlertRuleTriggerAction.objects.get(
84 alert_rule_trigger=trigger, id=alert_rule_trigger_action_id
85 )
86 except AlertRuleTriggerAction.DoesNotExist:
87 raise ResourceDoesNotExist
88
89 return args, kwargs
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/incidents/endpoints/bases.py b/src/sentry/incidents/endpoints/bases.py
--- a/src/sentry/incidents/endpoints/bases.py
+++ b/src/sentry/incidents/endpoints/bases.py
@@ -38,7 +38,10 @@
args, kwargs = super().convert_args(request, *args, **kwargs)
organization = kwargs["organization"]
- if not features.has("organizations:incidents", organization, actor=request.user):
+ # Allow orgs that have downgraded plans to delete metric alerts
+ if request.method != "DELETE" and not features.has(
+ "organizations:incidents", organization, actor=request.user
+ ):
raise ResourceDoesNotExist
try:
| {"golden_diff": "diff --git a/src/sentry/incidents/endpoints/bases.py b/src/sentry/incidents/endpoints/bases.py\n--- a/src/sentry/incidents/endpoints/bases.py\n+++ b/src/sentry/incidents/endpoints/bases.py\n@@ -38,7 +38,10 @@\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n \n- if not features.has(\"organizations:incidents\", organization, actor=request.user):\n+ # Allow orgs that have downgraded plans to delete metric alerts\n+ if request.method != \"DELETE\" and not features.has(\n+ \"organizations:incidents\", organization, actor=request.user\n+ ):\n raise ResourceDoesNotExist\n \n try:\n", "issue": "Cannot edit or delete alerts\n### Environment\n\nSaaS (https://sentry.io/)\n\n### Version\n\n_No response_\n\n### Steps to Reproduce\n\n1. Have alerts that were set up a while ago\r\n2. Get a bunch of emails from one alert that is too touchy\r\n3. Try to edit alert (fails)\r\n4. Try to delete alert (fails)\n\n### Expected Result\n\nCan edit or delete alerts that I created on an account that I am the only user for\n\n### Actual Result\n\nCannot edit or delete alerts\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from rest_framework.exceptions import PermissionDenied\nfrom rest_framework.request import Request\n\nfrom sentry import features\nfrom sentry.api.bases.organization import OrganizationAlertRulePermission, OrganizationEndpoint\nfrom sentry.api.bases.project import ProjectAlertRulePermission, ProjectEndpoint\nfrom sentry.api.exceptions import ResourceDoesNotExist\nfrom sentry.incidents.models import AlertRule, AlertRuleTrigger, AlertRuleTriggerAction\n\n\nclass ProjectAlertRuleEndpoint(ProjectEndpoint):\n permission_classes = (ProjectAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n project = kwargs[\"project\"]\n\n if not features.has(\"organizations:incidents\", project.organization, actor=request.user):\n raise ResourceDoesNotExist\n\n if not request.access.has_project_access(project):\n raise PermissionDenied\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n snuba_query__subscriptions__project=project, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleEndpoint(OrganizationEndpoint):\n permission_classes = (OrganizationAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n organization=organization, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerEndpoint(OrganizationAlertRuleEndpoint):\n def convert_args(self, request: Request, alert_rule_trigger_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n alert_rule = kwargs[\"alert_rule\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger\"] = AlertRuleTrigger.objects.get(\n alert_rule=alert_rule, id=alert_rule_trigger_id\n )\n except 
AlertRuleTrigger.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerActionEndpoint(OrganizationAlertRuleTriggerEndpoint):\n def convert_args(self, request: Request, alert_rule_trigger_action_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n trigger = kwargs[\"alert_rule_trigger\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger_action\"] = AlertRuleTriggerAction.objects.get(\n alert_rule_trigger=trigger, id=alert_rule_trigger_action_id\n )\n except AlertRuleTriggerAction.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n", "path": "src/sentry/incidents/endpoints/bases.py"}], "after_files": [{"content": "from rest_framework.exceptions import PermissionDenied\nfrom rest_framework.request import Request\n\nfrom sentry import features\nfrom sentry.api.bases.organization import OrganizationAlertRulePermission, OrganizationEndpoint\nfrom sentry.api.bases.project import ProjectAlertRulePermission, ProjectEndpoint\nfrom sentry.api.exceptions import ResourceDoesNotExist\nfrom sentry.incidents.models import AlertRule, AlertRuleTrigger, AlertRuleTriggerAction\n\n\nclass ProjectAlertRuleEndpoint(ProjectEndpoint):\n permission_classes = (ProjectAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n project = kwargs[\"project\"]\n\n if not features.has(\"organizations:incidents\", project.organization, actor=request.user):\n raise ResourceDoesNotExist\n\n if not request.access.has_project_access(project):\n raise PermissionDenied\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n snuba_query__subscriptions__project=project, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleEndpoint(OrganizationEndpoint):\n permission_classes = (OrganizationAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n\n # Allow orgs that have downgraded plans to delete metric alerts\n if request.method != \"DELETE\" and not features.has(\n \"organizations:incidents\", organization, actor=request.user\n ):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n organization=organization, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerEndpoint(OrganizationAlertRuleEndpoint):\n def convert_args(self, request: Request, alert_rule_trigger_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n alert_rule = kwargs[\"alert_rule\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger\"] = AlertRuleTrigger.objects.get(\n alert_rule=alert_rule, id=alert_rule_trigger_id\n )\n except AlertRuleTrigger.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerActionEndpoint(OrganizationAlertRuleTriggerEndpoint):\n def convert_args(self, request: Request, 
alert_rule_trigger_action_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n trigger = kwargs[\"alert_rule_trigger\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger_action\"] = AlertRuleTriggerAction.objects.get(\n alert_rule_trigger=trigger, id=alert_rule_trigger_action_id\n )\n except AlertRuleTriggerAction.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n", "path": "src/sentry/incidents/endpoints/bases.py"}]} | 1,383 | 164 |
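The one-line intent of the patch above can get lost in the endpoint boilerplate, so here is the guard reduced to a pure function (a sketch only; `features.has` and the endpoint classes are Sentry internals taken on faith from the listing):

```python
def check_access(request_method: str, org_has_incidents: bool) -> bool:
    """Mirror of the patched guard: DELETE always passes the feature gate,
    so downgraded orgs can still remove alerts; other methods still
    require the 'organizations:incidents' feature."""
    if request_method != "DELETE" and not org_has_incidents:
        return False  # the endpoint raises ResourceDoesNotExist here
    return True

assert check_access("DELETE", org_has_incidents=False) is True
assert check_access("PUT", org_has_incidents=False) is False
assert check_access("GET", org_has_incidents=True) is True
```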
gh_patches_debug_15468 | rasdani/github-patches | git_diff | codespell-project__codespell-2477 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Build] QA warning about codespell_lib.tests being installed as data
While packaging v2.2.0 for Gentoo Linux, I got a QA notice about this:
```
* QA Notice: setuptools warnings detected:
*
* Installing 'codespell_lib.tests' as data is deprecated, please list it in `packages`.
```
The actual setuptools warning is as follows (shown here for Python 3.11, but the same for 3.10):
```
/usr/lib/python3.11/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Installing 'codespell_lib.tests' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'codespell_lib.tests' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'codespell_lib.tests' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'codespell_lib.tests' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
```
Find attached the full build log.
[codespell-2.2.0:20220818-083735.log](https://github.com/codespell-project/codespell/files/9371941/codespell-2.2.0.20220818-083735.log)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python
2
3 # adapted from mne-python
4
5 import os
6
7 from setuptools import setup
8
9 from codespell_lib import __version__
10
11 DISTNAME = 'codespell'
12 DESCRIPTION = """Codespell"""
13 MAINTAINER = 'Lucas De Marchi'
14 MAINTAINER_EMAIL = '[email protected]'
15 URL = 'https://github.com/codespell-project/codespell/'
16 LICENSE = 'GPL v2'
17 DOWNLOAD_URL = 'https://github.com/codespell-project/codespell/'
18 with open('README.rst', 'r') as f:
19 LONG_DESCRIPTION = f.read()
20
21 if __name__ == "__main__":
22 if os.path.exists('MANIFEST'):
23 os.remove('MANIFEST')
24
25 setup(name=DISTNAME,
26 maintainer=MAINTAINER,
27 include_package_data=True,
28 maintainer_email=MAINTAINER_EMAIL,
29 description=DESCRIPTION,
30 license=LICENSE,
31 url=URL,
32 version=__version__,
33 download_url=DOWNLOAD_URL,
34 long_description=LONG_DESCRIPTION,
35 long_description_content_type='text/x-rst',
36 zip_safe=False,
37 classifiers=['Intended Audience :: Developers',
38 'License :: OSI Approved',
39 'Programming Language :: Python',
40 'Topic :: Software Development',
41 'Operating System :: Microsoft :: Windows',
42 'Operating System :: POSIX',
43 'Operating System :: Unix',
44 'Operating System :: MacOS'],
45 platforms='any',
46 python_requires='>=3.6',
47 packages=[
48 'codespell_lib',
49 'codespell_lib.data',
50 ],
51 package_data={'codespell_lib': [
52 os.path.join('data', 'dictionary*.txt'),
53 os.path.join('data', 'linux-kernel.exclude'),
54 ]},
55 exclude_package_data={'codespell_lib': [
56 os.path.join('tests', '*'),
57 ]},
58 entry_points={
59 'console_scripts': [
60 'codespell = codespell_lib:_script_main'
61 ],
62 },
63 extras_require={
64 "dev": ["check-manifest", "flake8", "pytest", "pytest-cov",
65 "pytest-dependency"],
66 "hard-encoding-detection": ["chardet"],
67 }
68 )
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,15 +46,13 @@
python_requires='>=3.6',
packages=[
'codespell_lib',
+ 'codespell_lib.tests',
'codespell_lib.data',
],
package_data={'codespell_lib': [
os.path.join('data', 'dictionary*.txt'),
os.path.join('data', 'linux-kernel.exclude'),
]},
- exclude_package_data={'codespell_lib': [
- os.path.join('tests', '*'),
- ]},
entry_points={
'console_scripts': [
'codespell = codespell_lib:_script_main'
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -46,15 +46,13 @@\n python_requires='>=3.6',\n packages=[\n 'codespell_lib',\n+ 'codespell_lib.tests',\n 'codespell_lib.data',\n ],\n package_data={'codespell_lib': [\n os.path.join('data', 'dictionary*.txt'),\n os.path.join('data', 'linux-kernel.exclude'),\n ]},\n- exclude_package_data={'codespell_lib': [\n- os.path.join('tests', '*'),\n- ]},\n entry_points={\n 'console_scripts': [\n 'codespell = codespell_lib:_script_main'\n", "issue": "[Build] QA warning about codespell_lib.tests being installed as data\nWhile packaging v2.2.0 for Gentoo Linux, I got a QA notice about this:\r\n\r\n```\r\n* QA Notice: setuptools warnings detected:\r\n * \r\n * Installing 'codespell_lib.tests' as data is deprecated, please list it in `packages`.\r\n```\r\n\r\nThe actual setuptools warning is as (here shown for Python 3.11, but same for 3.10)\r\n\r\n```\r\n/usr/lib/python3.11/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Instal\r\nling 'codespell_lib.tests' as data is deprecated, please list it in `packages`.\r\n !!\r\n\r\n\r\n ############################\r\n # Package would be ignored #\r\n ############################\r\n Python recognizes 'codespell_lib.tests' as an importable package,\r\n but it is not listed in the `packages` configuration of setuptools.\r\n\r\n 'codespell_lib.tests' has been automatically added to the distribution only\r\n because it may contain data files, but this behavior is likely to change\r\n in future versions of setuptools (and therefore is considered deprecated).\r\n\r\n Please make sure that 'codespell_lib.tests' is included as a package by using\r\n the `packages` configuration field or the proper discovery methods\r\n (for example by using `find_namespace_packages(...)`/`find_namespace:`\r\n instead of `find_packages(...)`/`find:`).\r\n\r\n You can read more about \"package discovery\" and \"data files\" on setuptools\r\n documentation page.\r\n\r\n\r\n!!\r\n\r\n check.warn(importable)\r\n```\r\n\r\nFind attached the full build log.\r\n[codespell-2.2.0:20220818-083735.log](https://github.com/codespell-project/codespell/files/9371941/codespell-2.2.0.20220818-083735.log)\r\n\n", "before_files": [{"content": "#! 
/usr/bin/env python\n\n# adapted from mne-python\n\nimport os\n\nfrom setuptools import setup\n\nfrom codespell_lib import __version__\n\nDISTNAME = 'codespell'\nDESCRIPTION = \"\"\"Codespell\"\"\"\nMAINTAINER = 'Lucas De Marchi'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'https://github.com/codespell-project/codespell/'\nLICENSE = 'GPL v2'\nDOWNLOAD_URL = 'https://github.com/codespell-project/codespell/'\nwith open('README.rst', 'r') as f:\n LONG_DESCRIPTION = f.read()\n\nif __name__ == \"__main__\":\n if os.path.exists('MANIFEST'):\n os.remove('MANIFEST')\n\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n include_package_data=True,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=__version__,\n download_url=DOWNLOAD_URL,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/x-rst',\n zip_safe=False,\n classifiers=['Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS'],\n platforms='any',\n python_requires='>=3.6',\n packages=[\n 'codespell_lib',\n 'codespell_lib.data',\n ],\n package_data={'codespell_lib': [\n os.path.join('data', 'dictionary*.txt'),\n os.path.join('data', 'linux-kernel.exclude'),\n ]},\n exclude_package_data={'codespell_lib': [\n os.path.join('tests', '*'),\n ]},\n entry_points={\n 'console_scripts': [\n 'codespell = codespell_lib:_script_main'\n ],\n },\n extras_require={\n \"dev\": [\"check-manifest\", \"flake8\", \"pytest\", \"pytest-cov\",\n \"pytest-dependency\"],\n \"hard-encoding-detection\": [\"chardet\"],\n }\n )\n", "path": "setup.py"}], "after_files": [{"content": "#! 
/usr/bin/env python\n\n# adapted from mne-python\n\nimport os\n\nfrom setuptools import setup\n\nfrom codespell_lib import __version__\n\nDISTNAME = 'codespell'\nDESCRIPTION = \"\"\"Codespell\"\"\"\nMAINTAINER = 'Lucas De Marchi'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'https://github.com/codespell-project/codespell/'\nLICENSE = 'GPL v2'\nDOWNLOAD_URL = 'https://github.com/codespell-project/codespell/'\nwith open('README.rst', 'r') as f:\n LONG_DESCRIPTION = f.read()\n\nif __name__ == \"__main__\":\n if os.path.exists('MANIFEST'):\n os.remove('MANIFEST')\n\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n include_package_data=True,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=__version__,\n download_url=DOWNLOAD_URL,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/x-rst',\n zip_safe=False,\n classifiers=['Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS'],\n platforms='any',\n python_requires='>=3.6',\n packages=[\n 'codespell_lib',\n 'codespell_lib.tests',\n 'codespell_lib.data',\n ],\n package_data={'codespell_lib': [\n os.path.join('data', 'dictionary*.txt'),\n os.path.join('data', 'linux-kernel.exclude'),\n ]},\n entry_points={\n 'console_scripts': [\n 'codespell = codespell_lib:_script_main'\n ],\n },\n extras_require={\n \"dev\": [\"check-manifest\", \"flake8\", \"pytest\", \"pytest-cov\",\n \"pytest-dependency\"],\n \"hard-encoding-detection\": [\"chardet\"],\n }\n )\n", "path": "setup.py"}]} | 1,275 | 153 |
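Stripped of metadata, the fix above is "declare `codespell_lib.tests` as a real package instead of excluding its files". A pared-down `setup()` call showing just the fields the diff touches (illustrative only; it is meant to run inside a real source tree, not standalone):

```python
import os
from setuptools import setup

setup(
    name="codespell",
    packages=[
        "codespell_lib",
        "codespell_lib.tests",  # explicitly listed -> silences the QA warning
        "codespell_lib.data",
    ],
    package_data={"codespell_lib": [
        os.path.join("data", "dictionary*.txt"),
        os.path.join("data", "linux-kernel.exclude"),
    ]},
    # exclude_package_data for tests/* is gone; the tests now ship as package code
)
```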
gh_patches_debug_9172 | rasdani/github-patches | git_diff | Gallopsled__pwntools-201 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pwnlib loads very slowly
On my system it takes two thirds of a second to load pwnlib:
```
~> time python -c "import pwn"
real 0m0.641s
user 0m0.576s
sys 0m0.044s
```
I've tracked down the culprit: `pwnlib.util.web` imports the `requests` module, which takes forever to load (https://github.com/Gallopsled/pwntools/blob/master/pwnlib/util/web.py#L3).
I suggest we load `requests` lazily in `pwnlib.util.web.wget()`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwnlib/util/web.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import os, tempfile, logging
3 from requests import *
4 from .misc import size
5 log = logging.getLogger(__name__)
6
7 def wget(url, save=None, timeout=5, **kwargs):
8 """wget(url, save=None, timeout=5) -> str
9
10 Downloads a file via HTTP/HTTPS.
11
12 Args:
13 url (str): URL to download
14 save (str or bool): Name to save as. Any truthy value
15 will auto-generate a name based on the URL.
16 timeout (int): Timeout, in seconds
17
18 Example:
19
20 >>> url = 'http://httpbin.org/robots.txt'
21 >>> with context.local(log_level='ERROR'): result = wget(url)
22 >>> result
23 'User-agent: *\nDisallow: /deny\n'
24 >>> with context.local(log_level='ERROR'): wget(url, True)
25 >>> result == file('robots.txt').read()
26 True
27 """
28 with log.progress("Downloading '%s'" % url) as w:
29 w.status("Making request...")
30
31 response = get(url, stream=True, **kwargs)
32
33 if not response.ok:
34 w.failure("Got code %s" % response.status_code)
35 return
36
37 total_size = int(response.headers.get('content-length',0))
38
39 w.status('0 / %s' % size(total_size))
40
41 # Find out the next largest size we can represent as
42 chunk_size = 1
43 while chunk_size < (total_size/10):
44 chunk_size *= 1000
45
46 # Count chunks as they're received
47 total_data = ''
48
49 # Loop until we have all of the data
50 for chunk in response.iter_content(chunk_size = 2**10):
51 total_data += chunk
52 if total_size:
53 w.status('%s / %s' % (size(total_data), size(total_size)))
54 else:
55 w.status('%s' % size(total_data))
56
57 # Save to the target file if provided
58 if save:
59 if not isinstance(save, (str, unicode)):
60 save = os.path.basename(url)
61 save = save or NamedTemporaryFile(dir='.', delete=False).name
62 with file(save,'wb+') as f:
63 f.write(total_data)
64 w.success('Saved %r (%s)' % (f.name, size(total_data)))
65 else:
66 w.success('%s' % size(total_data))
67
68 return total_data
69
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwnlib/util/web.py b/pwnlib/util/web.py
--- a/pwnlib/util/web.py
+++ b/pwnlib/util/web.py
@@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
import os, tempfile, logging
-from requests import *
from .misc import size
log = logging.getLogger(__name__)
@@ -25,6 +24,8 @@
>>> result == file('robots.txt').read()
True
"""
+ from requests import *
+
with log.progress("Downloading '%s'" % url) as w:
w.status("Making request...")
| {"golden_diff": "diff --git a/pwnlib/util/web.py b/pwnlib/util/web.py\n--- a/pwnlib/util/web.py\n+++ b/pwnlib/util/web.py\n@@ -1,6 +1,5 @@\n # -*- coding: utf-8 -*-\n import os, tempfile, logging\n-from requests import *\n from .misc import size\n log = logging.getLogger(__name__)\n \n@@ -25,6 +24,8 @@\n >>> result == file('robots.txt').read()\n True\n \"\"\"\n+ from requests import *\n+\n with log.progress(\"Downloading '%s'\" % url) as w:\n w.status(\"Making request...\")\n", "issue": "Pwnlib loads very slowly\nOn my system it takes two thirds of a second to load pwnlib:\n\n```\n~> time python -c \"import pwn\"\n\nreal 0m0.641s\nuser 0m0.576s\nsys 0m0.044s\n```\n\nI've tracked down the culprit: `pwnlib.util.web` imports the `requests` module which takes forever (https://github.com/Gallopsled/pwntools/blob/master/pwnlib/util/web.py#L3).\n\nI suggest we load `requests` lazily in `pwnlib.util.web.wget()`.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os, tempfile, logging\nfrom requests import *\nfrom .misc import size\nlog = logging.getLogger(__name__)\n\ndef wget(url, save=None, timeout=5, **kwargs):\n \"\"\"wget(url, save=None, timeout=5) -> str\n\n Downloads a file via HTTP/HTTPS.\n\n Args:\n url (str): URL to download\n save (str or bool): Name to save as. Any truthy value\n will auto-generate a name based on the URL.\n timeout (int): Timeout, in seconds\n\n Example:\n\n >>> url = 'http://httpbin.org/robots.txt'\n >>> with context.local(log_level='ERROR'): result = wget(url)\n >>> result\n 'User-agent: *\\nDisallow: /deny\\n'\n >>> with context.local(log_level='ERROR'): wget(url, True)\n >>> result == file('robots.txt').read()\n True\n \"\"\"\n with log.progress(\"Downloading '%s'\" % url) as w:\n w.status(\"Making request...\")\n\n response = get(url, stream=True, **kwargs)\n\n if not response.ok:\n w.failure(\"Got code %s\" % response.status_code)\n return\n\n total_size = int(response.headers.get('content-length',0))\n\n w.status('0 / %s' % size(total_size))\n\n # Find out the next largest size we can represent as\n chunk_size = 1\n while chunk_size < (total_size/10):\n chunk_size *= 1000\n\n # Count chunks as they're received\n total_data = ''\n\n # Loop until we have all of the data\n for chunk in response.iter_content(chunk_size = 2**10):\n total_data += chunk\n if total_size:\n w.status('%s / %s' % (size(total_data), size(total_size)))\n else:\n w.status('%s' % size(total_data))\n\n # Save to the target file if provided\n if save:\n if not isinstance(save, (str, unicode)):\n save = os.path.basename(url)\n save = save or NamedTemporaryFile(dir='.', delete=False).name\n with file(save,'wb+') as f:\n f.write(total_data)\n w.success('Saved %r (%s)' % (f.name, size(total_data)))\n else:\n w.success('%s' % size(total_data))\n\n return total_data\n\n", "path": "pwnlib/util/web.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport os, tempfile, logging\nfrom .misc import size\nlog = logging.getLogger(__name__)\n\ndef wget(url, save=None, timeout=5, **kwargs):\n \"\"\"wget(url, save=None, timeout=5) -> str\n\n Downloads a file via HTTP/HTTPS.\n\n Args:\n url (str): URL to download\n save (str or bool): Name to save as. 
Any truthy value\n will auto-generate a name based on the URL.\n timeout (int): Timeout, in seconds\n\n Example:\n\n >>> url = 'http://httpbin.org/robots.txt'\n >>> with context.local(log_level='ERROR'): result = wget(url)\n >>> result\n 'User-agent: *\\nDisallow: /deny\\n'\n >>> with context.local(log_level='ERROR'): wget(url, True)\n >>> result == file('robots.txt').read()\n True\n \"\"\"\n from requests import *\n\n with log.progress(\"Downloading '%s'\" % url) as w:\n w.status(\"Making request...\")\n\n response = get(url, stream=True, **kwargs)\n\n if not response.ok:\n w.failure(\"Got code %s\" % response.status_code)\n return\n\n total_size = int(response.headers.get('content-length',0))\n\n w.status('0 / %s' % size(total_size))\n\n # Find out the next largest size we can represent as\n chunk_size = 1\n while chunk_size < (total_size/10):\n chunk_size *= 1000\n\n # Count chunks as they're received\n total_data = ''\n\n # Loop until we have all of the data\n for chunk in response.iter_content(chunk_size = 2**10):\n total_data += chunk\n if total_size:\n w.status('%s / %s' % (size(total_data), size(total_size)))\n else:\n w.status('%s' % size(total_data))\n\n # Save to the target file if provided\n if save:\n if not isinstance(save, (str, unicode)):\n save = os.path.basename(url)\n save = save or NamedTemporaryFile(dir='.', delete=False).name\n with file(save,'wb+') as f:\n f.write(total_data)\n w.success('Saved %r (%s)' % (f.name, size(total_data)))\n else:\n w.success('%s' % size(total_data))\n\n return total_data\n\n", "path": "pwnlib/util/web.py"}]} | 1,068 | 136 |
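The pattern applied by the diff above generalizes beyond pwntools: move an expensive import from module scope into the one function that needs it, and the `import`-time cost disappears for callers that never hit that path. A generic sketch (hypothetical module, not pwnlib code):

```python
# webutil.py -- hypothetical module demonstrating the deferred-import pattern

def wget(url, timeout=5):
    # Paid only on the first call; later calls hit sys.modules, so the
    # import is effectively free after that.
    import requests
    return requests.get(url, timeout=timeout).text
```

Importing `webutil` itself is now near-instant; the bulk of the load time the issue measured is deferred until `wget()` actually runs, and Python's module cache means it is paid at most once per process.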
gh_patches_debug_34055 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-1552 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add readthedocs documentation for kafka python instrumentation
Part of [1491](https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1491)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Instrument `kafka-python` to report instrumentation-kafka produced and consumed messages
17
18 Usage
19 -----
20
21 ..code:: python
22
23 from opentelemetry.instrumentation.kafka import KafkaInstrumentor
24 from kafka import KafkaProducer, KafkaConsumer
25
26 # Instrument kafka
27 KafkaInstrumentor().instrument()
28
29 # report a span of type producer with the default settings
30 producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
31 producer.send('my-topic', b'raw_bytes')
32
33
34 # report a span of type consumer with the default settings
35 consumer = KafkaConsumer('my-topic',
36 group_id='my-group',
37 bootstrap_servers=['localhost:9092'])
38 for message in consumer:
39 # process message
40
41 The `_instrument` method accepts the following keyword args:
42 tracer_provider (TracerProvider) - an optional tracer provider
43 produce_hook (Callable) - a function with extra user-defined logic to be performed before sending the message
44 this function signature is:
45 def produce_hook(span: Span, args, kwargs)
46 consume_hook (Callable) - a function with extra user-defined logic to be performed after consuming a message
47 this function signature is:
48 def consume
49 _hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)
50 for example:
51 .. code: python
52 from opentelemetry.instrumentation.kafka import KafkaInstrumentor
53 from kafka import KafkaProducer, KafkaConsumer
54
55 def produce_hook(span, args, kwargs):
56 if span and span.is_recording():
57 span.set_attribute("custom_user_attribute_from_produce_hook", "some-value")
58 def consume_hook(span, record, args, kwargs):
59 if span and span.is_recording():
60 span.set_attribute("custom_user_attribute_from_consume_hook", "some-value")
61
62 # instrument kafka with produce and consume hooks
63 KafkaInstrumentor().instrument(produce_hook=produce_hook, consume_hook=consume_hook)
64
65 # Using kafka as normal now will automatically generate spans,
66 # including user custom attributes added from the hooks
67 producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
68 producer.send('my-topic', b'raw_bytes')
69
70 API
71 ___
72 """
73 from typing import Collection
74
75 import kafka
76 from wrapt import wrap_function_wrapper
77
78 from opentelemetry import trace
79 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
80 from opentelemetry.instrumentation.kafka.package import _instruments
81 from opentelemetry.instrumentation.kafka.utils import _wrap_next, _wrap_send
82 from opentelemetry.instrumentation.kafka.version import __version__
83 from opentelemetry.instrumentation.utils import unwrap
84
85
86 class KafkaInstrumentor(BaseInstrumentor):
87 """An instrumentor for kafka module
88 See `BaseInstrumentor`
89 """
90
91 def instrumentation_dependencies(self) -> Collection[str]:
92 return _instruments
93
94 def _instrument(self, **kwargs):
95 """Instruments the kafka module
96
97 Args:
98 **kwargs: Optional arguments
99 ``tracer_provider``: a TracerProvider, defaults to global.
100 ``produce_hook``: a callable to be executed just before producing a message
101 ``consume_hook``: a callable to be executed just after consuming a message
102 """
103 tracer_provider = kwargs.get("tracer_provider")
104 produce_hook = kwargs.get("produce_hook")
105 consume_hook = kwargs.get("consume_hook")
106
107 tracer = trace.get_tracer(
108 __name__, __version__, tracer_provider=tracer_provider
109 )
110
111 wrap_function_wrapper(
112 kafka.KafkaProducer, "send", _wrap_send(tracer, produce_hook)
113 )
114 wrap_function_wrapper(
115 kafka.KafkaConsumer,
116 "__next__",
117 _wrap_next(tracer, consume_hook),
118 )
119
120 def _uninstrument(self, **kwargs):
121 unwrap(kafka.KafkaProducer, "send")
122 unwrap(kafka.KafkaConsumer, "__next__")
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py b/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py
@@ -13,7 +13,7 @@
# limitations under the License.
"""
-Instrument `kafka-python` to report instrumentation-kafka produced and consumed messages
+Instrument kafka-python to report instrumentation-kafka produced and consumed messages
Usage
-----
@@ -30,24 +30,21 @@
producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
producer.send('my-topic', b'raw_bytes')
-
# report a span of type consumer with the default settings
- consumer = KafkaConsumer('my-topic',
- group_id='my-group',
- bootstrap_servers=['localhost:9092'])
+ consumer = KafkaConsumer('my-topic', group_id='my-group', bootstrap_servers=['localhost:9092'])
for message in consumer:
- # process message
+ # process message
-The `_instrument` method accepts the following keyword args:
+The _instrument() method accepts the following keyword args:
tracer_provider (TracerProvider) - an optional tracer provider
produce_hook (Callable) - a function with extra user-defined logic to be performed before sending the message
- this function signature is:
- def produce_hook(span: Span, args, kwargs)
+this function signature is:
+def produce_hook(span: Span, args, kwargs)
consume_hook (Callable) - a function with extra user-defined logic to be performed after consuming a message
- this function signature is:
- def consume
- _hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)
+this function signature is:
+def consume_hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)
for example:
+
.. code: python
from opentelemetry.instrumentation.kafka import KafkaInstrumentor
from kafka import KafkaProducer, KafkaConsumer
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py b/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py\n@@ -13,7 +13,7 @@\n # limitations under the License.\n \n \"\"\"\n-Instrument `kafka-python` to report instrumentation-kafka produced and consumed messages\n+Instrument kafka-python to report instrumentation-kafka produced and consumed messages\n \n Usage\n -----\n@@ -30,24 +30,21 @@\n producer = KafkaProducer(bootstrap_servers=['localhost:9092'])\n producer.send('my-topic', b'raw_bytes')\n \n-\n # report a span of type consumer with the default settings\n- consumer = KafkaConsumer('my-topic',\n- group_id='my-group',\n- bootstrap_servers=['localhost:9092'])\n+ consumer = KafkaConsumer('my-topic', group_id='my-group', bootstrap_servers=['localhost:9092'])\n for message in consumer:\n- # process message\n+ # process message\n \n-The `_instrument` method accepts the following keyword args:\n+The _instrument() method accepts the following keyword args:\n tracer_provider (TracerProvider) - an optional tracer provider\n produce_hook (Callable) - a function with extra user-defined logic to be performed before sending the message\n- this function signature is:\n- def produce_hook(span: Span, args, kwargs)\n+this function signature is:\n+def produce_hook(span: Span, args, kwargs)\n consume_hook (Callable) - a function with extra user-defined logic to be performed after consuming a message\n- this function signature is:\n- def consume\n- _hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)\n+this function signature is:\n+def consume_hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)\n for example:\n+\n .. 
code: python\n from opentelemetry.instrumentation.kafka import KafkaInstrumentor\n from kafka import KafkaProducer, KafkaConsumer\n", "issue": "Add readthedocs documentation for kafka python instrumentation\nPart of [1491](https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1491)\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nInstrument `kafka-python` to report instrumentation-kafka produced and consumed messages\n\nUsage\n-----\n\n..code:: python\n\n from opentelemetry.instrumentation.kafka import KafkaInstrumentor\n from kafka import KafkaProducer, KafkaConsumer\n\n # Instrument kafka\n KafkaInstrumentor().instrument()\n\n # report a span of type producer with the default settings\n producer = KafkaProducer(bootstrap_servers=['localhost:9092'])\n producer.send('my-topic', b'raw_bytes')\n\n\n # report a span of type consumer with the default settings\n consumer = KafkaConsumer('my-topic',\n group_id='my-group',\n bootstrap_servers=['localhost:9092'])\n for message in consumer:\n # process message\n\nThe `_instrument` method accepts the following keyword args:\ntracer_provider (TracerProvider) - an optional tracer provider\nproduce_hook (Callable) - a function with extra user-defined logic to be performed before sending the message\n this function signature is:\n def produce_hook(span: Span, args, kwargs)\nconsume_hook (Callable) - a function with extra user-defined logic to be performed after consuming a message\n this function signature is:\n def consume\n _hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)\nfor example:\n.. 
code: python\n from opentelemetry.instrumentation.kafka import KafkaInstrumentor\n from kafka import KafkaProducer, KafkaConsumer\n\n def produce_hook(span, args, kwargs):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_produce_hook\", \"some-value\")\n def consume_hook(span, record, args, kwargs):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_consume_hook\", \"some-value\")\n\n # instrument kafka with produce and consume hooks\n KafkaInstrumentor().instrument(produce_hook=produce_hook, consume_hook=consume_hook)\n\n # Using kafka as normal now will automatically generate spans,\n # including user custom attributes added from the hooks\n producer = KafkaProducer(bootstrap_servers=['localhost:9092'])\n producer.send('my-topic', b'raw_bytes')\n\nAPI\n___\n\"\"\"\nfrom typing import Collection\n\nimport kafka\nfrom wrapt import wrap_function_wrapper\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.kafka.package import _instruments\nfrom opentelemetry.instrumentation.kafka.utils import _wrap_next, _wrap_send\nfrom opentelemetry.instrumentation.kafka.version import __version__\nfrom opentelemetry.instrumentation.utils import unwrap\n\n\nclass KafkaInstrumentor(BaseInstrumentor):\n \"\"\"An instrumentor for kafka module\n See `BaseInstrumentor`\n \"\"\"\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"Instruments the kafka module\n\n Args:\n **kwargs: Optional arguments\n ``tracer_provider``: a TracerProvider, defaults to global.\n ``produce_hook``: a callable to be executed just before producing a message\n ``consume_hook``: a callable to be executed just after consuming a message\n \"\"\"\n tracer_provider = kwargs.get(\"tracer_provider\")\n produce_hook = kwargs.get(\"produce_hook\")\n consume_hook = kwargs.get(\"consume_hook\")\n\n tracer = trace.get_tracer(\n __name__, __version__, tracer_provider=tracer_provider\n )\n\n wrap_function_wrapper(\n kafka.KafkaProducer, \"send\", _wrap_send(tracer, produce_hook)\n )\n wrap_function_wrapper(\n kafka.KafkaConsumer,\n \"__next__\",\n _wrap_next(tracer, consume_hook),\n )\n\n def _uninstrument(self, **kwargs):\n unwrap(kafka.KafkaProducer, \"send\")\n unwrap(kafka.KafkaConsumer, \"__next__\")\n", "path": "instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nInstrument kafka-python to report instrumentation-kafka produced and consumed messages\n\nUsage\n-----\n\n..code:: python\n\n from opentelemetry.instrumentation.kafka import KafkaInstrumentor\n from kafka import KafkaProducer, KafkaConsumer\n\n # Instrument kafka\n KafkaInstrumentor().instrument()\n\n # report a span of type producer with the default settings\n 
producer = KafkaProducer(bootstrap_servers=['localhost:9092'])\n producer.send('my-topic', b'raw_bytes')\n\n # report a span of type consumer with the default settings\n consumer = KafkaConsumer('my-topic', group_id='my-group', bootstrap_servers=['localhost:9092'])\n for message in consumer:\n # process message\n\nThe _instrument() method accepts the following keyword args:\ntracer_provider (TracerProvider) - an optional tracer provider\nproduce_hook (Callable) - a function with extra user-defined logic to be performed before sending the message\nthis function signature is:\ndef produce_hook(span: Span, args, kwargs)\nconsume_hook (Callable) - a function with extra user-defined logic to be performed after consuming a message\nthis function signature is:\ndef consume_hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)\nfor example:\n\n.. code: python\n from opentelemetry.instrumentation.kafka import KafkaInstrumentor\n from kafka import KafkaProducer, KafkaConsumer\n\n def produce_hook(span, args, kwargs):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_produce_hook\", \"some-value\")\n def consume_hook(span, record, args, kwargs):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_consume_hook\", \"some-value\")\n\n # instrument kafka with produce and consume hooks\n KafkaInstrumentor().instrument(produce_hook=produce_hook, consume_hook=consume_hook)\n\n # Using kafka as normal now will automatically generate spans,\n # including user custom attributes added from the hooks\n producer = KafkaProducer(bootstrap_servers=['localhost:9092'])\n producer.send('my-topic', b'raw_bytes')\n\nAPI\n___\n\"\"\"\nfrom typing import Collection\n\nimport kafka\nfrom wrapt import wrap_function_wrapper\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.kafka.package import _instruments\nfrom opentelemetry.instrumentation.kafka.utils import _wrap_next, _wrap_send\nfrom opentelemetry.instrumentation.kafka.version import __version__\nfrom opentelemetry.instrumentation.utils import unwrap\n\n\nclass KafkaInstrumentor(BaseInstrumentor):\n \"\"\"An instrumentor for kafka module\n See `BaseInstrumentor`\n \"\"\"\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"Instruments the kafka module\n\n Args:\n **kwargs: Optional arguments\n ``tracer_provider``: a TracerProvider, defaults to global.\n ``produce_hook``: a callable to be executed just before producing a message\n ``consume_hook``: a callable to be executed just after consuming a message\n \"\"\"\n tracer_provider = kwargs.get(\"tracer_provider\")\n produce_hook = kwargs.get(\"produce_hook\")\n consume_hook = kwargs.get(\"consume_hook\")\n\n tracer = trace.get_tracer(\n __name__, __version__, tracer_provider=tracer_provider\n )\n\n wrap_function_wrapper(\n kafka.KafkaProducer, \"send\", _wrap_send(tracer, produce_hook)\n )\n wrap_function_wrapper(\n kafka.KafkaConsumer,\n \"__next__\",\n _wrap_next(tracer, consume_hook),\n )\n\n def _uninstrument(self, **kwargs):\n unwrap(kafka.KafkaProducer, \"send\")\n unwrap(kafka.KafkaConsumer, \"__next__\")\n", "path": "instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py"}]} | 1,529 | 503 |
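For reference on the kafka-python entry above, a runnable sketch of wiring up the produce/consume hooks that the diff documents. The hook signatures follow the docstring; the span attribute keys are illustrative placeholders, not part of the instrumentation's API.

```python
from kafka import KafkaConsumer, KafkaProducer
from opentelemetry.instrumentation.kafka import KafkaInstrumentor

def produce_hook(span, args, kwargs):
    if span and span.is_recording():
        span.set_attribute("example.produce_hook_ran", True)  # placeholder key

def consume_hook(span, record, args, kwargs):
    if span and span.is_recording():
        span.set_attribute("example.consume_hook_ran", True)  # placeholder key

# One instrument() call patches KafkaProducer.send and KafkaConsumer.__next__
# at the class level, so it covers every producer/consumer in the process.
KafkaInstrumentor().instrument(produce_hook=produce_hook, consume_hook=consume_hook)

producer = KafkaProducer(bootstrap_servers=["localhost:9092"])
producer.send("my-topic", b"raw_bytes")
```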
gh_patches_debug_37763 | rasdani/github-patches | git_diff | kornia__kornia-1853 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LoFTR does not work with some image sizes (not a memory issue)
### Describe the bug
LoFTR's sinusoidal positional encoding is precomputed for a fixed `max_shape` of (256, 256) on the 1/8-scale feature map, so inputs whose feature maps exceed that shape fail with a size mismatch:
```
RuntimeError Traceback (most recent call last)
[<ipython-input-1-54d246337ab1>](https://9t3p2yszpxn-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220613-060046-RC00_454553376#) in <module>()
10 "image1": torch.rand(1,1, 1704, 2272).cuda()}
11 with torch.no_grad():
---> 12 correspondences = matcher(input_dict)
3 frames
[/usr/local/lib/python3.7/dist-packages/kornia/feature/loftr/utils/position_encoding.py](https://9t3p2yszpxn-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220613-060046-RC00_454553376#) in forward(self, x)
39 x: [N, C, H, W]
40 """
---> 41 return x + self.pe[:, :, :x.size(2), :x.size(3)]
RuntimeError: The size of tensor a (284) must match the size of tensor b (256) at non-singleton dimension 3
```
### Reproduction steps
```python
import kornia as K
import kornia.feature as KF
import numpy as np
import torch
matcher = KF.LoFTR(pretrained='outdoor').cuda()
input_dict = {"image0": torch.rand(1, 1, 1704, 2272).cuda(),
              "image1": torch.rand(1, 1, 1704, 2272).cuda()}
with torch.no_grad():
correspondences = matcher(input_dict)
```
### Expected behavior
Not an error
### Environment
```shell
not relevant
```
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/feature/loftr/utils/position_encoding.py`
Content:
```
1 import math
2
3 import torch
4 from torch import nn
5
6
7 class PositionEncodingSine(nn.Module):
8 """This is a sinusoidal position encoding that generalized to 2-dimensional images."""
9
10 def __init__(self, d_model, max_shape=(256, 256), temp_bug_fix=True):
11 """
12 Args:
13 max_shape (tuple): for 1/8 featmap, the max length of 256 corresponds to 2048 pixels
14 temp_bug_fix (bool): As noted in this [issue](https://github.com/zju3dv/LoFTR/issues/41),
15 the original implementation of LoFTR includes a bug in the pos-enc impl, which has little impact
16 on the final performance. For now, we keep both impls for backward compatibility.
17 We will remove the buggy impl after re-training all variants of our released models.
18 """
19 super().__init__()
20
21 pe = torch.zeros((d_model, *max_shape))
22 y_position = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)
23 x_position = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)
24 if temp_bug_fix:
25 div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / (d_model // 2)))
26 else: # a buggy implementation (for backward compatibility only)
27 div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / d_model // 2))
28 div_term = div_term[:, None, None] # [C//4, 1, 1]
29 pe[0::4, :, :] = torch.sin(x_position * div_term)
30 pe[1::4, :, :] = torch.cos(x_position * div_term)
31 pe[2::4, :, :] = torch.sin(y_position * div_term)
32 pe[3::4, :, :] = torch.cos(y_position * div_term)
33
34 self.register_buffer('pe', pe.unsqueeze(0), persistent=False) # [1, C, H, W]
35
36 def forward(self, x):
37 """
38 Args:
39 x: [N, C, H, W]
40 """
41 return x + self.pe[:, :, : x.size(2), : x.size(3)]
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/feature/loftr/utils/position_encoding.py b/kornia/feature/loftr/utils/position_encoding.py
--- a/kornia/feature/loftr/utils/position_encoding.py
+++ b/kornia/feature/loftr/utils/position_encoding.py
@@ -17,25 +17,51 @@
We will remove the buggy impl after re-training all variants of our released models.
"""
super().__init__()
+ self.d_model = d_model
+ self.temp_bug_fix = temp_bug_fix
- pe = torch.zeros((d_model, *max_shape))
+ pe = self._create_position_encoding(max_shape)
+ self.register_buffer('pe', pe, persistent=False) # [1, C, H, W]
+
+ def _create_position_encoding(self, max_shape):
+ """Creates a position encoding from scratch.
+
+ For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape
+ should be (H//8, W//8).
+ """
+ pe = torch.zeros((self.d_model, *max_shape))
y_position = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)
x_position = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)
- if temp_bug_fix:
- div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / (d_model // 2)))
+ if self.temp_bug_fix:
+ div_term = torch.exp(
+ torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / (self.d_model // 2))
+ )
else: # a buggy implementation (for backward compatibility only)
- div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / d_model // 2))
+ div_term = torch.exp(
+ torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / self.d_model // 2)
+ )
div_term = div_term[:, None, None] # [C//4, 1, 1]
pe[0::4, :, :] = torch.sin(x_position * div_term)
pe[1::4, :, :] = torch.cos(x_position * div_term)
pe[2::4, :, :] = torch.sin(y_position * div_term)
pe[3::4, :, :] = torch.cos(y_position * div_term)
+ return pe.unsqueeze(0)
- self.register_buffer('pe', pe.unsqueeze(0), persistent=False) # [1, C, H, W]
+ def update_position_encoding_size(self, max_shape):
+ """Updates position encoding to new max_shape.
+
+ For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape
+ should be (H//8, W//8).
+ """
+ self.pe = self._create_position_encoding(max_shape).to(self.pe.device)
def forward(self, x):
"""
Args:
x: [N, C, H, W]
"""
+ if x.size(2) > self.pe.size(2) or x.size(3) > self.pe.size(3):
+ max_shape = (max(x.size(2), self.pe.size(2)), max(x.size(3), self.pe.size(3)))
+ self.update_position_encoding_size(max_shape)
+
return x + self.pe[:, :, : x.size(2), : x.size(3)]
| {"golden_diff": "diff --git a/kornia/feature/loftr/utils/position_encoding.py b/kornia/feature/loftr/utils/position_encoding.py\n--- a/kornia/feature/loftr/utils/position_encoding.py\n+++ b/kornia/feature/loftr/utils/position_encoding.py\n@@ -17,25 +17,51 @@\n We will remove the buggy impl after re-training all variants of our released models.\n \"\"\"\n super().__init__()\n+ self.d_model = d_model\n+ self.temp_bug_fix = temp_bug_fix\n \n- pe = torch.zeros((d_model, *max_shape))\n+ pe = self._create_position_encoding(max_shape)\n+ self.register_buffer('pe', pe, persistent=False) # [1, C, H, W]\n+\n+ def _create_position_encoding(self, max_shape):\n+ \"\"\"Creates a position encoding from scratch.\n+\n+ For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape\n+ should be (H//8, W//8).\n+ \"\"\"\n+ pe = torch.zeros((self.d_model, *max_shape))\n y_position = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)\n x_position = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)\n- if temp_bug_fix:\n- div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / (d_model // 2)))\n+ if self.temp_bug_fix:\n+ div_term = torch.exp(\n+ torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / (self.d_model // 2))\n+ )\n else: # a buggy implementation (for backward compatibility only)\n- div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / d_model // 2))\n+ div_term = torch.exp(\n+ torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / self.d_model // 2)\n+ )\n div_term = div_term[:, None, None] # [C//4, 1, 1]\n pe[0::4, :, :] = torch.sin(x_position * div_term)\n pe[1::4, :, :] = torch.cos(x_position * div_term)\n pe[2::4, :, :] = torch.sin(y_position * div_term)\n pe[3::4, :, :] = torch.cos(y_position * div_term)\n+ return pe.unsqueeze(0)\n \n- self.register_buffer('pe', pe.unsqueeze(0), persistent=False) # [1, C, H, W]\n+ def update_position_encoding_size(self, max_shape):\n+ \"\"\"Updates position encoding to new max_shape.\n+\n+ For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape\n+ should be (H//8, W//8).\n+ \"\"\"\n+ self.pe = self._create_position_encoding(max_shape).to(self.pe.device)\n \n def forward(self, x):\n \"\"\"\n Args:\n x: [N, C, H, W]\n \"\"\"\n+ if x.size(2) > self.pe.size(2) or x.size(3) > self.pe.size(3):\n+ max_shape = (max(x.size(2), self.pe.size(2)), max(x.size(3), self.pe.size(3)))\n+ self.update_position_encoding_size(max_shape)\n+\n return x + self.pe[:, :, : x.size(2), : x.size(3)]\n", "issue": "Loftr does not work with some image size (not a memory issue)\n### Describe the bug\n\nLoFTR incorrectly does something with positional embeddings\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n[<ipython-input-1-54d246337ab1>](https://9t3p2yszpxn-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220613-060046-RC00_454553376#) in <module>()\r\n 10 \"image1\": torch.rand(1,1, 1704, 2272).cuda()}\r\n 11 with torch.no_grad():\r\n---> 12 correspondences = matcher(input_dict)\r\n\r\n3 frames\r\n[/usr/local/lib/python3.7/dist-packages/kornia/feature/loftr/utils/position_encoding.py](https://9t3p2yszpxn-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220613-060046-RC00_454553376#) in forward(self, x)\r\n 39 x: [N, C, H, W]\r\n 40 \"\"\"\r\n---> 41 return x + self.pe[:, :, :x.size(2), :x.size(3)]\r\n\r\nRuntimeError: The size of 
tensor a (284) must match the size of tensor b (256) at non-singleton dimension 3\r\n```\n\n### Reproduction steps\n\n```bash\nimport kornia as K\r\nimport kornia.feature as KF\r\nimport numpy as np\r\nimport torch\r\n\r\nmatcher = KF.LoFTR(pretrained='outdoor').cuda()\r\n\r\ninput_dict = {\"image0\": torch.rand(1,1, 1704, 2272),\r\n \"image1\": torch.rand(1,1, 1704, 2272)}\r\nwith torch.no_grad():\r\n correspondences = matcher(input_dict)\n```\n\n\n### Expected behavior\n\nNot an error \n\n### Environment\n\n```shell\nnot relevant\n```\n\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "import math\n\nimport torch\nfrom torch import nn\n\n\nclass PositionEncodingSine(nn.Module):\n \"\"\"This is a sinusoidal position encoding that generalized to 2-dimensional images.\"\"\"\n\n def __init__(self, d_model, max_shape=(256, 256), temp_bug_fix=True):\n \"\"\"\n Args:\n max_shape (tuple): for 1/8 featmap, the max length of 256 corresponds to 2048 pixels\n temp_bug_fix (bool): As noted in this [issue](https://github.com/zju3dv/LoFTR/issues/41),\n the original implementation of LoFTR includes a bug in the pos-enc impl, which has little impact\n on the final performance. For now, we keep both impls for backward compatibility.\n We will remove the buggy impl after re-training all variants of our released models.\n \"\"\"\n super().__init__()\n\n pe = torch.zeros((d_model, *max_shape))\n y_position = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)\n x_position = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)\n if temp_bug_fix:\n div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / (d_model // 2)))\n else: # a buggy implementation (for backward compatibility only)\n div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / d_model // 2))\n div_term = div_term[:, None, None] # [C//4, 1, 1]\n pe[0::4, :, :] = torch.sin(x_position * div_term)\n pe[1::4, :, :] = torch.cos(x_position * div_term)\n pe[2::4, :, :] = torch.sin(y_position * div_term)\n pe[3::4, :, :] = torch.cos(y_position * div_term)\n\n self.register_buffer('pe', pe.unsqueeze(0), persistent=False) # [1, C, H, W]\n\n def forward(self, x):\n \"\"\"\n Args:\n x: [N, C, H, W]\n \"\"\"\n return x + self.pe[:, :, : x.size(2), : x.size(3)]\n", "path": "kornia/feature/loftr/utils/position_encoding.py"}], "after_files": [{"content": "import math\n\nimport torch\nfrom torch import nn\n\n\nclass PositionEncodingSine(nn.Module):\n \"\"\"This is a sinusoidal position encoding that generalized to 2-dimensional images.\"\"\"\n\n def __init__(self, d_model, max_shape=(256, 256), temp_bug_fix=True):\n \"\"\"\n Args:\n max_shape (tuple): for 1/8 featmap, the max length of 256 corresponds to 2048 pixels\n temp_bug_fix (bool): As noted in this [issue](https://github.com/zju3dv/LoFTR/issues/41),\n the original implementation of LoFTR includes a bug in the pos-enc impl, which has little impact\n on the final performance. 
For now, we keep both impls for backward compatibility.\n We will remove the buggy impl after re-training all variants of our released models.\n \"\"\"\n super().__init__()\n self.d_model = d_model\n self.temp_bug_fix = temp_bug_fix\n\n pe = self._create_position_encoding(max_shape)\n self.register_buffer('pe', pe, persistent=False) # [1, C, H, W]\n\n def _create_position_encoding(self, max_shape):\n \"\"\"Creates a position encoding from scratch.\n\n For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape\n should be (H//8, W//8).\n \"\"\"\n pe = torch.zeros((self.d_model, *max_shape))\n y_position = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)\n x_position = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)\n if self.temp_bug_fix:\n div_term = torch.exp(\n torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / (self.d_model // 2))\n )\n else: # a buggy implementation (for backward compatibility only)\n div_term = torch.exp(\n torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / self.d_model // 2)\n )\n div_term = div_term[:, None, None] # [C//4, 1, 1]\n pe[0::4, :, :] = torch.sin(x_position * div_term)\n pe[1::4, :, :] = torch.cos(x_position * div_term)\n pe[2::4, :, :] = torch.sin(y_position * div_term)\n pe[3::4, :, :] = torch.cos(y_position * div_term)\n return pe.unsqueeze(0)\n\n def update_position_encoding_size(self, max_shape):\n \"\"\"Updates position encoding to new max_shape.\n\n For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape\n should be (H//8, W//8).\n \"\"\"\n self.pe = self._create_position_encoding(max_shape).to(self.pe.device)\n\n def forward(self, x):\n \"\"\"\n Args:\n x: [N, C, H, W]\n \"\"\"\n if x.size(2) > self.pe.size(2) or x.size(3) > self.pe.size(3):\n max_shape = (max(x.size(2), self.pe.size(2)), max(x.size(3), self.pe.size(3)))\n self.update_position_encoding_size(max_shape)\n\n return x + self.pe[:, :, : x.size(2), : x.size(3)]\n", "path": "kornia/feature/loftr/utils/position_encoding.py"}]} | 1,397 | 864 |
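A note on the kornia entry above: the mismatched dimension in the traceback is exactly the feature-map width, 2272 // 8 = 284, which overruns the precomputed `max_shape` of 256. Below is a condensed, self-contained sketch of the patched idea — regenerate the encoding table on demand — keeping only the `temp_bug_fix` branch and simplified names; it is not kornia's verbatim code, and `d_model` is assumed divisible by 4.

```python
import math
import torch
from torch import nn

class PositionEncodingSine(nn.Module):
    def __init__(self, d_model, max_shape=(256, 256)):
        super().__init__()
        self.d_model = d_model
        self.register_buffer('pe', self._make_pe(max_shape), persistent=False)

    def _make_pe(self, max_shape):
        pe = torch.zeros((self.d_model, *max_shape))
        y_pos = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)
        x_pos = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)
        div = torch.exp(torch.arange(0, self.d_model // 2, 2).float()
                        * (-math.log(10000.0) / (self.d_model // 2)))[:, None, None]
        pe[0::4] = torch.sin(x_pos * div)
        pe[1::4] = torch.cos(x_pos * div)
        pe[2::4] = torch.sin(y_pos * div)
        pe[3::4] = torch.cos(y_pos * div)
        return pe.unsqueeze(0)  # [1, C, H, W]

    def forward(self, x):
        # Regenerate the cached table whenever the input outgrows it, instead
        # of slicing past its edge and crashing.
        if x.size(2) > self.pe.size(2) or x.size(3) > self.pe.size(3):
            shape = (max(x.size(2), self.pe.size(2)), max(x.size(3), self.pe.size(3)))
            self.pe = self._make_pe(shape).to(x.device)
        return x + self.pe[:, :, :x.size(2), :x.size(3)]

pe = PositionEncodingSine(256)
feat = torch.rand(1, 256, 213, 284)  # a 1704 x 2272 image at 1/8 scale
out = pe(feat)                       # table grows to 256 x 284; no size error
```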
gh_patches_debug_8026 | rasdani/github-patches | git_diff | dmlc__dgl-3696 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] FileExistsError sometimes raised when importing dgl during multiprocess training
## 🐛 Bug
Sometimes, when I launch my Pytorch distributed trainer (which spawns multiple trainer processes, eg once for each GPU for multi-gpu model training), my training job fails with the following error:
```
# pardon the possibly out-of-order stack trace, multiple processes are interleaving the stdout
import dgl
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dgl/__init__.py", line 13, in <module>
from .backend import load_backend, backend_name
File "/usr/local/lib/python3.7/os.py", line 221, in makedirs
mkdir(name, mode)
File "trainer/utils/cli.py", line 137, in <module>
locals()["run_" + args.which](args, extra)
File "/usr/local/lib/python3.7/site-packages/dgl/backend/__init__.py", line 107, in <module>
load_backend(get_preferred_backend())
File "trainer/utils/cli.py", line 27, in run_local
trainer_class = locate(args.trainer)
FileExistsError: [Errno 17] File exists: '/root/.dgl'
File "/usr/local/lib/python3.7/site-packages/dgl/backend/__init__.py", line 103, in get_preferred_backend
set_default_backend(default_dir, 'pytorch')
FileExistsError: [Errno 17] File exists: '/root/.dgl'
```
I see this occur fairly often, say ~10-20% of the time. Usually, retrying the train command fixes things.
For what it's worth: I am running this within a Docker container, using a DGL nightly build from `2021-10-18`
## To Reproduce
Steps to reproduce the behavior:
I don't have a repro script. But, hopefully this stack trace can point out a diagnosis + fix.
## Expected behavior
Importing dgl shouldn't cause an error.
## Environment
- DGL Version (e.g., 1.0): >0.7 (Nightly build from 2021-10-18).
- Backend Library & Version (e.g., PyTorch 0.4.1, MXNet/Gluon 1.3):
- OS (e.g., Linux): Linux
- How you installed DGL (`conda`, `pip`, source): From nightly
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version (if applicable):
- GPU models and configuration (e.g. V100):
- Any other relevant information:
## Additional context
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/dgl/backend/set_default_backend.py`
Content:
```
1 import argparse
2 import os
3 import json
4
5 def set_default_backend(default_dir, backend_name):
6 if not os.path.exists(default_dir):
7 os.makedirs(default_dir)
8 config_path = os.path.join(default_dir, 'config.json')
9 with open(config_path, "w") as config_file:
10 json.dump({'backend': backend_name.lower()}, config_file)
11 print('Setting the default backend to "{}". You can change it in the '
12 '~/.dgl/config.json file or export the DGLBACKEND environment variable. '
13 'Valid options are: pytorch, mxnet, tensorflow (all lowercase)'.format(
14 backend_name))
15
16 if __name__ == "__main__":
17 parser = argparse.ArgumentParser()
18 parser.add_argument("default_dir", type=str, default=os.path.join(os.path.expanduser('~'), '.dgl'))
19 parser.add_argument("backend", nargs=1, type=str, choices=[
20 'pytorch', 'tensorflow', 'mxnet'], help="Set default backend")
21 args = parser.parse_args()
22 set_default_backend(args.default_dir, args.backend[0])
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/dgl/backend/set_default_backend.py b/python/dgl/backend/set_default_backend.py
--- a/python/dgl/backend/set_default_backend.py
+++ b/python/dgl/backend/set_default_backend.py
@@ -3,8 +3,8 @@
import json
def set_default_backend(default_dir, backend_name):
- if not os.path.exists(default_dir):
- os.makedirs(default_dir)
+ # the exists_ok requires python >= 3.2
+ os.makedirs(default_dir, exists_ok=True)
config_path = os.path.join(default_dir, 'config.json')
with open(config_path, "w") as config_file:
json.dump({'backend': backend_name.lower()}, config_file)
| {"golden_diff": "diff --git a/python/dgl/backend/set_default_backend.py b/python/dgl/backend/set_default_backend.py\n--- a/python/dgl/backend/set_default_backend.py\n+++ b/python/dgl/backend/set_default_backend.py\n@@ -3,8 +3,8 @@\n import json\n \n def set_default_backend(default_dir, backend_name):\n- if not os.path.exists(default_dir):\n- os.makedirs(default_dir)\n+ # the exists_ok requires python >= 3.2\n+ os.makedirs(default_dir, exists_ok=True)\n config_path = os.path.join(default_dir, 'config.json')\n with open(config_path, \"w\") as config_file: \n json.dump({'backend': backend_name.lower()}, config_file)\n", "issue": "[Bug] FileExistsError when sometimes importing dgl from multiprocess training\n## \ud83d\udc1b Bug\r\nSometimes, when I launch my Pytorch distributed trainer (which spawns multiple trainer processes, eg once for each GPU for multi-gpu model training), my training job fails with the following error:\r\n\r\n```\r\n# pardon the possibly out-of-order stack trace, multiple processes are interleaving the stdout\r\n import dgl\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/dgl/__init__.py\", line 13, in <module>\r\n from .backend import load_backend, backend_name\r\n File \"/usr/local/lib/python3.7/os.py\", line 221, in makedirs\r\n mkdir(name, mode)\r\n File \"trainer/utils/cli.py\", line 137, in <module>\r\n locals()[\"run_\" + args.which](args, extra)\r\n File \"/usr/local/lib/python3.7/site-packages/dgl/backend/__init__.py\", line 107, in <module>\r\n load_backend(get_preferred_backend())\r\n File \"trainer/utils/cli.py\", line 27, in run_local\r\n trainer_class = locate(args.trainer)\r\nFileExistsError: [Errno 17] File exists: '/root/.dgl'\r\n File \"/usr/local/lib/python3.7/site-packages/dgl/backend/__init__.py\", line 103, in get_preferred_backend\r\n set_default_backend(default_dir, 'pytorch')\r\nFileExistsError: [Errno 17] File exists: '/root/.dgl'\r\n```\r\n\r\nI see this occur fairly often, say ~10-20% of the time. Usually, retrying the train command fixes things.\r\n\r\nFor what it's worth: I am running this within a Docker container, using a DGL nightly build from `2021-10-18`\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\nI don't have a repro script. But, hopefully this stack trace can point out a diagnosis + fix.\r\n\r\n## Expected behavior\r\n\r\nImporting dgl shouldn't cause an error.\r\n\r\n## Environment\r\n\r\n - DGL Version (e.g., 1.0): >0.7 (Nightly build from 2021-10-18).\r\n - Backend Library & Version (e.g., PyTorch 0.4.1, MXNet/Gluon 1.3):\r\n - OS (e.g., Linux): Linux\r\n - How you installed DGL (`conda`, `pip`, source): From nightly\r\n - Build command you used (if compiling from source):\r\n - Python version: 3.7\r\n - CUDA/cuDNN version (if applicable):\r\n - GPU models and configuration (e.g. V100):\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\n", "before_files": [{"content": "import argparse\nimport os\nimport json\n\ndef set_default_backend(default_dir, backend_name):\n if not os.path.exists(default_dir):\n os.makedirs(default_dir)\n config_path = os.path.join(default_dir, 'config.json')\n with open(config_path, \"w\") as config_file: \n json.dump({'backend': backend_name.lower()}, config_file)\n print('Setting the default backend to \"{}\". You can change it in the '\n '~/.dgl/config.json file or export the DGLBACKEND environment variable. 
'\n 'Valid options are: pytorch, mxnet, tensorflow (all lowercase)'.format(\n backend_name))\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\"default_dir\", type=str, default=os.path.join(os.path.expanduser('~'), '.dgl'))\n parser.add_argument(\"backend\", nargs=1, type=str, choices=[\n 'pytorch', 'tensorflow', 'mxnet'], help=\"Set default backend\")\n args = parser.parse_args()\n set_default_backend(args.default_dir, args.backend[0])\n", "path": "python/dgl/backend/set_default_backend.py"}], "after_files": [{"content": "import argparse\nimport os\nimport json\n\ndef set_default_backend(default_dir, backend_name):\n # the exists_ok requires python >= 3.2\n os.makedirs(default_dir, exists_ok=True)\n config_path = os.path.join(default_dir, 'config.json')\n with open(config_path, \"w\") as config_file: \n json.dump({'backend': backend_name.lower()}, config_file)\n print('Setting the default backend to \"{}\". You can change it in the '\n '~/.dgl/config.json file or export the DGLBACKEND environment variable. '\n 'Valid options are: pytorch, mxnet, tensorflow (all lowercase)'.format(\n backend_name))\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\"default_dir\", type=str, default=os.path.join(os.path.expanduser('~'), '.dgl'))\n parser.add_argument(\"backend\", nargs=1, type=str, choices=[\n 'pytorch', 'tensorflow', 'mxnet'], help=\"Set default backend\")\n args = parser.parse_args()\n set_default_backend(args.default_dir, args.backend[0])\n", "path": "python/dgl/backend/set_default_backend.py"}]} | 1,130 | 150 |
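On the dgl entry above: the root cause is a check-then-act race — several spawned trainer processes all observe `~/.dgl` as missing and all call `mkdir`, and the losers crash. One caveat on the merged patch: it spells the keyword `exists_ok`, but the `os.makedirs` parameter in the standard library is `exist_ok` (added in Python 3.2), so the line as committed would raise a `TypeError` rather than fix the race. A minimal sketch of the intended fix:

```python
import os

def ensure_config_dir(default_dir):
    # exist_ok=True makes directory creation idempotent, so concurrent
    # processes can no longer race each other into a FileExistsError.
    os.makedirs(default_dir, exist_ok=True)
    return os.path.join(default_dir, 'config.json')

config_path = ensure_config_dir(os.path.join(os.path.expanduser('~'), '.dgl'))
```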
gh_patches_debug_43122 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1627 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False alarm from new W4002
*cfn-lint version: 0.34.0*
[Here](https://gist.github.com/schmiddy/44a779032a930995d22ee2722a18f163) is an example template which causes a false alarm like this:
```
$ cfn-lint /tmp/example.yml
W4002 As the resource "metadata" section contains reference to a "NoEcho" parameter DBUser, CloudFormation will display the parameter value in plaintext
/tmp/example.yml:21:7
W4002 As the resource "metadata" section contains reference to a "NoEcho" parameter DBPass, CloudFormation will display the parameter value in plaintext
/tmp/example.yml:21:7
```
The problem seems to be that the rule flags any mention of the parameter name, even a plain-text occurrence that does not actually reference the parameter.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/NoEcho.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from cfnlint.helpers import bool_compare
6 from cfnlint.rules import CloudFormationLintRule
7 from cfnlint.rules import RuleMatch
8
9
10 class NoEcho(CloudFormationLintRule):
11 id = 'W4002'
12 shortdesc = 'Check for NoEcho References'
13 description = 'Check if there is a NoEcho enabled parameter referenced within a resources Metadata section'
14 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties'
15 tags = ['resources', 'NoEcho']
16
17 def match(self, cfn):
18 matches = []
19 no_echo_params = []
20 parameters = cfn.get_parameters()
21 for parameter_name, parameter_value in parameters.items():
22 noecho = parameter_value.get('NoEcho', default=False)
23 if bool_compare(noecho, True):
24 no_echo_params.append(parameter_name)
25
26 if not no_echo_params:
27 return no_echo_params
28
29 resource_properties = cfn.get_resources()
30 resource_dict = {key: resource_properties[key] for key in resource_properties if
31 isinstance(resource_properties[key], dict)}
32 for resource_name, resource_values in resource_dict.items():
33 resource_values = {key: resource_values[key] for key in resource_values if
34 isinstance(resource_values[key], dict)}
35 metadata = resource_values.get('Metadata', {})
36 if metadata is not None:
37 for prop_name, properties in metadata.items():
38 if isinstance(properties, dict):
39 for property_value in properties.values():
40 for param in no_echo_params and no_echo_params:
41 if str(property_value).find(str(param)) > -1:
42 path = ['Resources', resource_name, 'Metadata', prop_name]
43 matches.append(RuleMatch(path, 'As the resource "metadata" section contains '
44 'reference to a "NoEcho" parameter ' + str(param)
45 + ', CloudFormation will display the parameter value in '
46 'plaintext'))
47 return matches
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/NoEcho.py b/src/cfnlint/rules/resources/NoEcho.py
--- a/src/cfnlint/rules/resources/NoEcho.py
+++ b/src/cfnlint/rules/resources/NoEcho.py
@@ -2,6 +2,7 @@
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: MIT-0
"""
+import six
from cfnlint.helpers import bool_compare
from cfnlint.rules import CloudFormationLintRule
from cfnlint.rules import RuleMatch
@@ -14,34 +15,58 @@
source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties'
tags = ['resources', 'NoEcho']
- def match(self, cfn):
- matches = []
+ def _get_no_echo_params(self, cfn):
+ """ Get no Echo Params"""
no_echo_params = []
- parameters = cfn.get_parameters()
- for parameter_name, parameter_value in parameters.items():
+ for parameter_name, parameter_value in cfn.get_parameters().items():
noecho = parameter_value.get('NoEcho', default=False)
if bool_compare(noecho, True):
no_echo_params.append(parameter_name)
+ return no_echo_params
+
+ def _check_ref(self, cfn, no_echo_params):
+ """ Check Refs """
+ matches = []
+ refs = cfn.search_deep_keys('Ref')
+ for ref in refs:
+ if ref[-1] in no_echo_params:
+ if len(ref) > 3:
+ if ref[0] == 'Resources' and ref[2] == 'Metadata':
+ matches.append(RuleMatch(ref, 'As the resource "metadata" section contains ' +
+ 'reference to a "NoEcho" parameter ' +
+ str(ref[-1]) +
+ ', CloudFormation will display the parameter value in ' +
+ 'plaintext'))
+
+ return matches
+
+ def _check_sub(self, cfn, no_echo_params):
+ """ Check Subs """
+ matches = []
+ subs = cfn.search_deep_keys('Fn::Sub')
+ for sub in subs:
+ if isinstance(sub[-1], six.string_types):
+ params = cfn.get_sub_parameters(sub[-1])
+ for param in params:
+ if param in no_echo_params:
+ if len(sub) > 2:
+ if sub[0] == 'Resources' and sub[2] == 'Metadata':
+
+ matches.append(RuleMatch(sub[:-1], 'As the resource "metadata" section contains ' +
+ 'reference to a "NoEcho" parameter ' +
+ str(param) +
+ ', CloudFormation will display the parameter value in ' +
+ 'plaintext'))
+
+ return matches
+
+ def match(self, cfn):
+ matches = []
+ no_echo_params = self._get_no_echo_params(cfn)
if not no_echo_params:
- return no_echo_params
-
- resource_properties = cfn.get_resources()
- resource_dict = {key: resource_properties[key] for key in resource_properties if
- isinstance(resource_properties[key], dict)}
- for resource_name, resource_values in resource_dict.items():
- resource_values = {key: resource_values[key] for key in resource_values if
- isinstance(resource_values[key], dict)}
- metadata = resource_values.get('Metadata', {})
- if metadata is not None:
- for prop_name, properties in metadata.items():
- if isinstance(properties, dict):
- for property_value in properties.values():
- for param in no_echo_params and no_echo_params:
- if str(property_value).find(str(param)) > -1:
- path = ['Resources', resource_name, 'Metadata', prop_name]
- matches.append(RuleMatch(path, 'As the resource "metadata" section contains '
- 'reference to a "NoEcho" parameter ' + str(param)
- + ', CloudFormation will display the parameter value in '
- 'plaintext'))
+ return matches
+ matches.extend(self._check_ref(cfn, no_echo_params))
+ matches.extend(self._check_sub(cfn, no_echo_params))
+
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/NoEcho.py b/src/cfnlint/rules/resources/NoEcho.py\n--- a/src/cfnlint/rules/resources/NoEcho.py\n+++ b/src/cfnlint/rules/resources/NoEcho.py\n@@ -2,6 +2,7 @@\n Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n SPDX-License-Identifier: MIT-0\n \"\"\"\n+import six\n from cfnlint.helpers import bool_compare\n from cfnlint.rules import CloudFormationLintRule\n from cfnlint.rules import RuleMatch\n@@ -14,34 +15,58 @@\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties'\n tags = ['resources', 'NoEcho']\n \n- def match(self, cfn):\n- matches = []\n+ def _get_no_echo_params(self, cfn):\n+ \"\"\" Get no Echo Params\"\"\"\n no_echo_params = []\n- parameters = cfn.get_parameters()\n- for parameter_name, parameter_value in parameters.items():\n+ for parameter_name, parameter_value in cfn.get_parameters().items():\n noecho = parameter_value.get('NoEcho', default=False)\n if bool_compare(noecho, True):\n no_echo_params.append(parameter_name)\n \n+ return no_echo_params\n+\n+ def _check_ref(self, cfn, no_echo_params):\n+ \"\"\" Check Refs \"\"\"\n+ matches = []\n+ refs = cfn.search_deep_keys('Ref')\n+ for ref in refs:\n+ if ref[-1] in no_echo_params:\n+ if len(ref) > 3:\n+ if ref[0] == 'Resources' and ref[2] == 'Metadata':\n+ matches.append(RuleMatch(ref, 'As the resource \"metadata\" section contains ' +\n+ 'reference to a \"NoEcho\" parameter ' +\n+ str(ref[-1]) +\n+ ', CloudFormation will display the parameter value in ' +\n+ 'plaintext'))\n+\n+ return matches\n+\n+ def _check_sub(self, cfn, no_echo_params):\n+ \"\"\" Check Subs \"\"\"\n+ matches = []\n+ subs = cfn.search_deep_keys('Fn::Sub')\n+ for sub in subs:\n+ if isinstance(sub[-1], six.string_types):\n+ params = cfn.get_sub_parameters(sub[-1])\n+ for param in params:\n+ if param in no_echo_params:\n+ if len(sub) > 2:\n+ if sub[0] == 'Resources' and sub[2] == 'Metadata':\n+\n+ matches.append(RuleMatch(sub[:-1], 'As the resource \"metadata\" section contains ' +\n+ 'reference to a \"NoEcho\" parameter ' +\n+ str(param) +\n+ ', CloudFormation will display the parameter value in ' +\n+ 'plaintext'))\n+\n+ return matches\n+\n+ def match(self, cfn):\n+ matches = []\n+ no_echo_params = self._get_no_echo_params(cfn)\n if not no_echo_params:\n- return no_echo_params\n-\n- resource_properties = cfn.get_resources()\n- resource_dict = {key: resource_properties[key] for key in resource_properties if\n- isinstance(resource_properties[key], dict)}\n- for resource_name, resource_values in resource_dict.items():\n- resource_values = {key: resource_values[key] for key in resource_values if\n- isinstance(resource_values[key], dict)}\n- metadata = resource_values.get('Metadata', {})\n- if metadata is not None:\n- for prop_name, properties in metadata.items():\n- if isinstance(properties, dict):\n- for property_value in properties.values():\n- for param in no_echo_params and no_echo_params:\n- if str(property_value).find(str(param)) > -1:\n- path = ['Resources', resource_name, 'Metadata', prop_name]\n- matches.append(RuleMatch(path, 'As the resource \"metadata\" section contains '\n- 'reference to a \"NoEcho\" parameter ' + str(param)\n- + ', CloudFormation will display the parameter value in '\n- 'plaintext'))\n+ return matches\n+ matches.extend(self._check_ref(cfn, no_echo_params))\n+ matches.extend(self._check_sub(cfn, no_echo_params))\n+\n return matches\n", "issue": "False alarm from 
new W4002\n*cfn-lint version: 0.34.0*\r\n\r\n[Here](https://gist.github.com/schmiddy/44a779032a930995d22ee2722a18f163) is an example template which causes a false alarm like this:\r\n\r\n```\r\n$ cfn-lint /tmp/example.yml \r\nW4002 As the resource \"metadata\" section contains reference to a \"NoEcho\" parameter DBUser, CloudFormation will display the parameter value in plaintext\r\n/tmp/example.yml:21:7\r\n\r\nW4002 As the resource \"metadata\" section contains reference to a \"NoEcho\" parameter DBPass, CloudFormation will display the parameter value in plaintext\r\n/tmp/example.yml:21:7\r\n```\r\n\r\nThe problem seems to be that the rule is looking for any mention of the parameter name, even as a text description that is not actually referencing the parameter.\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.helpers import bool_compare\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass NoEcho(CloudFormationLintRule):\n id = 'W4002'\n shortdesc = 'Check for NoEcho References'\n description = 'Check if there is a NoEcho enabled parameter referenced within a resources Metadata section'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties'\n tags = ['resources', 'NoEcho']\n\n def match(self, cfn):\n matches = []\n no_echo_params = []\n parameters = cfn.get_parameters()\n for parameter_name, parameter_value in parameters.items():\n noecho = parameter_value.get('NoEcho', default=False)\n if bool_compare(noecho, True):\n no_echo_params.append(parameter_name)\n\n if not no_echo_params:\n return no_echo_params\n\n resource_properties = cfn.get_resources()\n resource_dict = {key: resource_properties[key] for key in resource_properties if\n isinstance(resource_properties[key], dict)}\n for resource_name, resource_values in resource_dict.items():\n resource_values = {key: resource_values[key] for key in resource_values if\n isinstance(resource_values[key], dict)}\n metadata = resource_values.get('Metadata', {})\n if metadata is not None:\n for prop_name, properties in metadata.items():\n if isinstance(properties, dict):\n for property_value in properties.values():\n for param in no_echo_params and no_echo_params:\n if str(property_value).find(str(param)) > -1:\n path = ['Resources', resource_name, 'Metadata', prop_name]\n matches.append(RuleMatch(path, 'As the resource \"metadata\" section contains '\n 'reference to a \"NoEcho\" parameter ' + str(param)\n + ', CloudFormation will display the parameter value in '\n 'plaintext'))\n return matches\n", "path": "src/cfnlint/rules/resources/NoEcho.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport six\nfrom cfnlint.helpers import bool_compare\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass NoEcho(CloudFormationLintRule):\n id = 'W4002'\n shortdesc = 'Check for NoEcho References'\n description = 'Check if there is a NoEcho enabled parameter referenced within a resources Metadata section'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties'\n tags = ['resources', 'NoEcho']\n\n def _get_no_echo_params(self, cfn):\n \"\"\" Get no Echo Params\"\"\"\n no_echo_params = []\n for parameter_name, parameter_value in cfn.get_parameters().items():\n noecho = parameter_value.get('NoEcho', default=False)\n if bool_compare(noecho, True):\n no_echo_params.append(parameter_name)\n\n return no_echo_params\n\n def _check_ref(self, cfn, no_echo_params):\n \"\"\" Check Refs \"\"\"\n matches = []\n refs = cfn.search_deep_keys('Ref')\n for ref in refs:\n if ref[-1] in no_echo_params:\n if len(ref) > 3:\n if ref[0] == 'Resources' and ref[2] == 'Metadata':\n matches.append(RuleMatch(ref, 'As the resource \"metadata\" section contains ' +\n 'reference to a \"NoEcho\" parameter ' +\n str(ref[-1]) +\n ', CloudFormation will display the parameter value in ' +\n 'plaintext'))\n\n return matches\n\n def _check_sub(self, cfn, no_echo_params):\n \"\"\" Check Subs \"\"\"\n matches = []\n subs = cfn.search_deep_keys('Fn::Sub')\n for sub in subs:\n if isinstance(sub[-1], six.string_types):\n params = cfn.get_sub_parameters(sub[-1])\n for param in params:\n if param in no_echo_params:\n if len(sub) > 2:\n if sub[0] == 'Resources' and sub[2] == 'Metadata':\n\n matches.append(RuleMatch(sub[:-1], 'As the resource \"metadata\" section contains ' +\n 'reference to a \"NoEcho\" parameter ' +\n str(param) +\n ', CloudFormation will display the parameter value in ' +\n 'plaintext'))\n\n return matches\n\n def match(self, cfn):\n matches = []\n no_echo_params = self._get_no_echo_params(cfn)\n if not no_echo_params:\n return matches\n matches.extend(self._check_ref(cfn, no_echo_params))\n matches.extend(self._check_sub(cfn, no_echo_params))\n\n return matches\n", "path": "src/cfnlint/rules/resources/NoEcho.py"}]} | 1,005 | 939 |
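On the cfn-lint entry above: the essence of the fix is to stop substring-matching parameter names in stringified metadata and instead walk the template structurally, flagging only genuine `Ref` usages (plus `${...}` parameters inside `Fn::Sub` strings). Below is a toy analogue of that deep search — illustrative only; cfn-lint's real helper is `cfn.search_deep_keys`.

```python
def find_refs(node, path=()):
    # Yield (path, value) for every {'Ref': value} anywhere in a template tree.
    if isinstance(node, dict):
        for key, value in node.items():
            if key == 'Ref':
                yield path + (key,), value
            yield from find_refs(value, path + (key,))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            yield from find_refs(item, path + (i,))

metadata_ok = {'Comment': 'Rotate DBUser credentials quarterly'}  # mere mention
metadata_bad = {'Init': {'User': {'Ref': 'DBUser'}}}              # real reference

assert not any(v == 'DBUser' for _, v in find_refs(metadata_ok))
assert any(v == 'DBUser' for _, v in find_refs(metadata_bad))
```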
gh_patches_debug_9061 | rasdani/github-patches | git_diff | modin-project__modin-506 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Numpy 1.16 support for future read_hdf
Due to this issue (https://github.com/PyTables/PyTables/issues/717) it seems that, at least on my host machine, the latest version of numpy is needed to store and work with large datasets using hdf5. Naturally, I would love to use modin (ray) for these purposes, but realized that modin runs with numpy<=1.15.
I downloaded the Ray source from GitHub to test whether numpy 1.15+ was supported, and it seems that tests were failing for numpy 1.16.1. I was curious whether modin planned to support higher versions of numpy in the near term, as would be required to interoperate with PyTables.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 from setuptools import setup, find_packages
6
7 with open("README.md", "r", encoding="utf8") as fh:
8 long_description = fh.read()
9
10 setup(
11 name="modin",
12 version="0.4.0",
13 description="Modin: Make your pandas code run faster by changing one line of code.",
14 packages=find_packages(),
15 url="https://github.com/modin-project/modin",
16 long_description=long_description,
17 long_description_content_type="text/markdown",
18 install_requires=["pandas==0.24.1", "ray==0.6.2", "numpy<=1.15.0", "typing"],
19 extras_require={
20 # can be installed by pip install modin[dask]
21 "dask": ["dask==1.0.0", "distributed==1.25.0"],
22 },
23 )
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -15,9 +15,9 @@
url="https://github.com/modin-project/modin",
long_description=long_description,
long_description_content_type="text/markdown",
- install_requires=["pandas==0.24.1", "ray==0.6.2", "numpy<=1.15.0", "typing"],
+ install_requires=["pandas==0.24.1", "ray==0.6.2", "typing"],
extras_require={
# can be installed by pip install modin[dask]
- "dask": ["dask==1.0.0", "distributed==1.25.0"],
+ "dask": ["dask==1.1.0", "distributed==1.25.0"],
},
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -15,9 +15,9 @@\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n- install_requires=[\"pandas==0.24.1\", \"ray==0.6.2\", \"numpy<=1.15.0\", \"typing\"],\n+ install_requires=[\"pandas==0.24.1\", \"ray==0.6.2\", \"typing\"],\n extras_require={\n # can be installed by pip install modin[dask]\n- \"dask\": [\"dask==1.0.0\", \"distributed==1.25.0\"],\n+ \"dask\": [\"dask==1.1.0\", \"distributed==1.25.0\"],\n },\n )\n", "issue": "Numpy 1.16 support for future read_hdf\nDue to this issue (https://github.com/PyTables/PyTables/issues/717) it seems that, at least on my host machine, the latest version of numpy is needed to store and play with large datasets using hdf5. Naturally, I would love to use modin (ray) for these purposes and but realized that modin runs with numpy<=1.15.\r\n\r\nI downloaded the source of Ray from github to test to see if numpy 1.15+ was supported and it seems that tests were failing for numpy 1.16.1. I was curious if modin planned to support higher versions of numpy in the near term as would be required to interplay with py tables.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom setuptools import setup, find_packages\n\nwith open(\"README.md\", \"r\", encoding=\"utf8\") as fh:\n long_description = fh.read()\n\nsetup(\n name=\"modin\",\n version=\"0.4.0\",\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(),\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n install_requires=[\"pandas==0.24.1\", \"ray==0.6.2\", \"numpy<=1.15.0\", \"typing\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": [\"dask==1.0.0\", \"distributed==1.25.0\"],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom setuptools import setup, find_packages\n\nwith open(\"README.md\", \"r\", encoding=\"utf8\") as fh:\n long_description = fh.read()\n\nsetup(\n name=\"modin\",\n version=\"0.4.0\",\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(),\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n install_requires=[\"pandas==0.24.1\", \"ray==0.6.2\", \"typing\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": [\"dask==1.1.0\", \"distributed==1.25.0\"],\n },\n)\n", "path": "setup.py"}]} | 666 | 198 |
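On the modin entry above: the patch resolves the conflict by deleting the numpy ceiling pin outright rather than raising it, letting the resolver fall back to whatever numpy floor pandas itself declares (the dask extra is bumped alongside). A sketch of the specifier options involved — the commented lines are hypothetical alternatives, not part of the merged change:

```python
install_requires = [
    "pandas==0.24.1",
    "ray==0.6.2",
    # "numpy<=1.15.0",  # the old ceiling pin that blocked the numpy 1.16 fixes
    # "numpy>=1.16.0",  # an explicit floor would also satisfy PyTables users
    "typing",
]
```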
gh_patches_debug_55117 | rasdani/github-patches | git_diff | netbox-community__netbox-15725 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PROTECTION_RULES: Custom Validator does not show error message on object deletion
### Deployment Type
Self-hosted
### NetBox Version
v4.0-beta1 (commit c7f6c206cf5068f890b89da9ca04d4d3583f5107)
### Python Version
3.11
### Steps to Reproduce
1. Create a custom validator with the following code:
```python
from extras.validators import CustomValidator
from utilities.exceptions import AbortRequest


class IPAddressDeleteValidator(CustomValidator):

    def validate(self, instance, request):
raise AbortRequest("Do not delete IP addresses!")
```
and store as `/opt/netbox/validators/test.py`
2. Add the custom validator as a protect rule for `IPAddress` objects:
```python
PROTECTION_RULES = {
"ipam.ipaddress": [
"validators.test.IPAddressDeleteValidator",
]
}
```
3. Navigate to IPAM/IP Addresses
4. Create an arbitrary IP address
5. Click on "Delete" in the new address's detail view and confirm deletion
### Expected Behavior
The IP address is not deleted, an error message is shown saying "Do not delete IP addresses!"
### Observed Behavior
The IP address is not deleted, but there is no error message.
The error message is, however, displayed when one tries to delete an IP address using the bulk edit view:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/utilities/htmx.py`
Content:
```
1 __all__ = (
2 'htmx_partial',
3 )
4
5 PAGE_CONTAINER_ID = 'page-content'
6
7
8 def htmx_partial(request):
9 """
10 Determines whether to render partial (versus complete) HTML content
11 in response to an HTMX request, based on the target element.
12 """
13 return request.htmx and request.htmx.target and request.htmx.target != PAGE_CONTAINER_ID
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/utilities/htmx.py b/netbox/utilities/htmx.py
--- a/netbox/utilities/htmx.py
+++ b/netbox/utilities/htmx.py
@@ -2,12 +2,10 @@
'htmx_partial',
)
-PAGE_CONTAINER_ID = 'page-content'
-
def htmx_partial(request):
"""
Determines whether to render partial (versus complete) HTML content
in response to an HTMX request, based on the target element.
"""
- return request.htmx and request.htmx.target and request.htmx.target != PAGE_CONTAINER_ID
+ return request.htmx and not request.htmx.boosted
| {"golden_diff": "diff --git a/netbox/utilities/htmx.py b/netbox/utilities/htmx.py\n--- a/netbox/utilities/htmx.py\n+++ b/netbox/utilities/htmx.py\n@@ -2,12 +2,10 @@\n 'htmx_partial',\n )\n \n-PAGE_CONTAINER_ID = 'page-content'\n-\n \n def htmx_partial(request):\n \"\"\"\n Determines whether to render partial (versus complete) HTML content\n in response to an HTMX request, based on the target element.\n \"\"\"\n- return request.htmx and request.htmx.target and request.htmx.target != PAGE_CONTAINER_ID\n+ return request.htmx and not request.htmx.boosted\n", "issue": "PROTECTION_RULES: Custom Validator does not show error message on object deletion\n### Deployment Type\r\n\r\nSelf-hosted\r\n\r\n### NetBox Version\r\n\r\nv4.0-beta1 (commit c7f6c206cf5068f890b89da9ca04d4d3583f5107)\r\n\r\n### Python Version\r\n\r\n3.11\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create a custom validator with the following code:\r\n```python\r\nfrom extras.validators import CustomValidator\r\nfrom utilities.exceptions import AbortRequest\r\n\r\n\r\nclass IPAddressDeleteValidator(CustomValidator):\r\n\r\n def validate(self, instance, request):\r\n raise AbortRequest(\"Do not delete IP addresses!\")\r\n```\r\nand store as `/opt/netbox/validators/test.py`\r\n\r\n2. Add the custom validator as a protect rule for `IPAddress` objects:\r\n```python\r\nPROTECTION_RULES = {\r\n \"ipam.ipaddress\": [\r\n \"validators.test.IPAddressDeleteValidator\",\r\n ]\r\n}\r\n```\r\n3. Navigate to IPAM/IP Addresses\r\n4. Create an arbitrary IP address\r\n5. Click on \"Delete\" in the new address's detail view and confirm deletion\r\n\r\n### Expected Behavior\r\n\r\nThe IP address is not deleted, an error message is shown saying \"Do not delete IP addresses!\"\r\n\r\n### Observed Behavior\r\n\r\nThe IP address is not deleted, but there is no error message. \r\n\r\nThe error message is, however, displayed when one tries to delete an IP address using the bulk edit view:\r\n\r\n\n", "before_files": [{"content": "__all__ = (\n 'htmx_partial',\n)\n\nPAGE_CONTAINER_ID = 'page-content'\n\n\ndef htmx_partial(request):\n \"\"\"\n Determines whether to render partial (versus complete) HTML content\n in response to an HTMX request, based on the target element.\n \"\"\"\n return request.htmx and request.htmx.target and request.htmx.target != PAGE_CONTAINER_ID\n", "path": "netbox/utilities/htmx.py"}], "after_files": [{"content": "__all__ = (\n 'htmx_partial',\n)\n\n\ndef htmx_partial(request):\n \"\"\"\n Determines whether to render partial (versus complete) HTML content\n in response to an HTMX request, based on the target element.\n \"\"\"\n return request.htmx and not request.htmx.boosted\n", "path": "netbox/utilities/htmx.py"}]} | 728 | 150 |
gh_patches_debug_20687 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-3208 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CI: jupyter-server build fails since late september
The `test_singleuser_auth` step fails with the following error ([example failure](https://travis-ci.org/github/jupyterhub/jupyterhub/jobs/729518444))
```
404 Client Error: Not Found for url: http://127.0.0.1:59471/@/space%20word/user/nandy/api/spec.yaml?redirects=2
```
Has something changed with regard to `@` symbols or spaces in words like `space word`? Yes it has, in `jupyter-server` it seems, because there have been releases in this time span.

## References
- [jupyter-server changelog](https://github.com/jupyter/jupyter_server/blob/master/CHANGELOG.md)
- [The only PR that I saw in the changelog with clear potential to cause our CI error](https://github.com/jupyter/jupyter_server/pull/304)
- [A seemingly related PR by, @minrk](https://github.com/jupyterhub/jupyterhub/pull/3168)
- [Another seemingly related PR, by @danlester](https://github.com/jupyterhub/jupyterhub/pull/3167)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/traitlets.py`
Content:
```
1 """
2 Traitlets that are used in JupyterHub
3 """
4 # Copyright (c) Jupyter Development Team.
5 # Distributed under the terms of the Modified BSD License.
6 import entrypoints
7 from traitlets import Integer
8 from traitlets import List
9 from traitlets import TraitError
10 from traitlets import TraitType
11 from traitlets import Type
12 from traitlets import Unicode
13
14
15 class URLPrefix(Unicode):
16 def validate(self, obj, value):
17 u = super().validate(obj, value)
18 if not u.startswith('/'):
19 u = '/' + u
20 if not u.endswith('/'):
21 u = u + '/'
22 return u
23
24
25 class Command(List):
26 """Traitlet for a command that should be a list of strings,
27 but allows it to be specified as a single string.
28 """
29
30 def __init__(self, default_value=None, **kwargs):
31 kwargs.setdefault('minlen', 1)
32 if isinstance(default_value, str):
33 default_value = [default_value]
34 super().__init__(Unicode(), default_value, **kwargs)
35
36 def validate(self, obj, value):
37 if isinstance(value, str):
38 value = [value]
39 return super().validate(obj, value)
40
41
42 class ByteSpecification(Integer):
43 """
44 Allow easily specifying bytes in units of 1024 with suffixes
45
46 Suffixes allowed are:
47 - K -> Kilobyte
48 - M -> Megabyte
49 - G -> Gigabyte
50 - T -> Terabyte
51 """
52
53 UNIT_SUFFIXES = {
54 'K': 1024,
55 'M': 1024 * 1024,
56 'G': 1024 * 1024 * 1024,
57 'T': 1024 * 1024 * 1024 * 1024,
58 }
59
60 # Default to allowing None as a value
61 allow_none = True
62
63 def validate(self, obj, value):
64 """
65 Validate that the passed in value is a valid memory specification
66
67 It could either be a pure int, when it is taken as a byte value.
68 If it has one of the suffixes, it is converted into the appropriate
69 pure byte value.
70 """
71 if isinstance(value, (int, float)):
72 return int(value)
73
74 try:
75 num = float(value[:-1])
76 except ValueError:
77 raise TraitError(
78 '{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(
79 val=value
80 )
81 )
82 suffix = value[-1]
83 if suffix not in self.UNIT_SUFFIXES:
84 raise TraitError(
85 '{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(
86 val=value
87 )
88 )
89 else:
90 return int(float(num) * self.UNIT_SUFFIXES[suffix])
91
92
93 class Callable(TraitType):
94 """
95 A trait which is callable.
96
97 Classes are callable, as are instances
98 with a __call__() method.
99 """
100
101 info_text = 'a callable'
102
103 def validate(self, obj, value):
104 if callable(value):
105 return value
106 else:
107 self.error(obj, value)
108
109
110 class EntryPointType(Type):
111 """Entry point-extended Type
112
113 classes can be registered via entry points
114 in addition to standard 'mypackage.MyClass' strings
115 """
116
117 _original_help = ''
118
119 def __init__(self, *args, entry_point_group, **kwargs):
120 self.entry_point_group = entry_point_group
121 super().__init__(*args, **kwargs)
122
123 @property
124 def help(self):
125 """Extend help by listing currently installed choices"""
126 chunks = [self._original_help]
127 chunks.append("Currently installed: ")
128 for key, entry_point in self.load_entry_points().items():
129 chunks.append(
130 " - {}: {}.{}".format(
131 key, entry_point.module_name, entry_point.object_name
132 )
133 )
134 return '\n'.join(chunks)
135
136 @help.setter
137 def help(self, value):
138 self._original_help = value
139
140 def load_entry_points(self):
141 """Load my entry point group"""
142 # load the group
143 group = entrypoints.get_group_named(self.entry_point_group)
144 # make it case-insensitive
145 return {key.lower(): value for key, value in group.items()}
146
147 def validate(self, obj, value):
148 if isinstance(value, str):
149 # first, look up in entry point registry
150 registry = self.load_entry_points()
151 key = value.lower()
152 if key in registry:
153 value = registry[key].load()
154 return super().validate(obj, value)
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jupyterhub/traitlets.py b/jupyterhub/traitlets.py
--- a/jupyterhub/traitlets.py
+++ b/jupyterhub/traitlets.py
@@ -9,6 +9,7 @@
from traitlets import TraitError
from traitlets import TraitType
from traitlets import Type
+from traitlets import Undefined
from traitlets import Unicode
@@ -27,11 +28,15 @@
but allows it to be specified as a single string.
"""
- def __init__(self, default_value=None, **kwargs):
+ def __init__(self, default_value=Undefined, **kwargs):
kwargs.setdefault('minlen', 1)
if isinstance(default_value, str):
default_value = [default_value]
- super().__init__(Unicode(), default_value, **kwargs)
+ if default_value is not Undefined and (
+ not (default_value is None and not kwargs.get("allow_none", False))
+ ):
+ kwargs["default_value"] = default_value
+ super().__init__(Unicode(), **kwargs)
def validate(self, obj, value):
if isinstance(value, str):
| {"golden_diff": "diff --git a/jupyterhub/traitlets.py b/jupyterhub/traitlets.py\n--- a/jupyterhub/traitlets.py\n+++ b/jupyterhub/traitlets.py\n@@ -9,6 +9,7 @@\n from traitlets import TraitError\n from traitlets import TraitType\n from traitlets import Type\n+from traitlets import Undefined\n from traitlets import Unicode\n \n \n@@ -27,11 +28,15 @@\n but allows it to be specified as a single string.\n \"\"\"\n \n- def __init__(self, default_value=None, **kwargs):\n+ def __init__(self, default_value=Undefined, **kwargs):\n kwargs.setdefault('minlen', 1)\n if isinstance(default_value, str):\n default_value = [default_value]\n- super().__init__(Unicode(), default_value, **kwargs)\n+ if default_value is not Undefined and (\n+ not (default_value is None and not kwargs.get(\"allow_none\", False))\n+ ):\n+ kwargs[\"default_value\"] = default_value\n+ super().__init__(Unicode(), **kwargs)\n \n def validate(self, obj, value):\n if isinstance(value, str):\n", "issue": "CI: jupyter-server build fails since late september\nThe `test_singleuser_auth` step fails with the following error ([example failure](https://travis-ci.org/github/jupyterhub/jupyterhub/jobs/729518444))\r\n\r\n```\r\n404 Client Error: Not Found for url: http://127.0.0.1:59471/@/space%20word/user/nandy/api/spec.yaml?redirects=2\r\n```\r\n\r\nHas something change with regards to `@` symbols or spaces in words like `space word`? Yes it has, in `jupyter-server` it seems, because there have been releases in this time span.\r\n\r\n\r\n\r\n## References\r\n- [jupyter-server changelog](https://github.com/jupyter/jupyter_server/blob/master/CHANGELOG.md)\r\n- [The only PR that I saw in the changelog with clear potential to cause our CI error](https://github.com/jupyter/jupyter_server/pull/304)\r\n- [A seemingly related PR by, @minrk](https://github.com/jupyterhub/jupyterhub/pull/3168)\r\n- [Another seemingly related PR, by @danlester](https://github.com/jupyterhub/jupyterhub/pull/3167)\n", "before_files": [{"content": "\"\"\"\nTraitlets that are used in JupyterHub\n\"\"\"\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport entrypoints\nfrom traitlets import Integer\nfrom traitlets import List\nfrom traitlets import TraitError\nfrom traitlets import TraitType\nfrom traitlets import Type\nfrom traitlets import Unicode\n\n\nclass URLPrefix(Unicode):\n def validate(self, obj, value):\n u = super().validate(obj, value)\n if not u.startswith('/'):\n u = '/' + u\n if not u.endswith('/'):\n u = u + '/'\n return u\n\n\nclass Command(List):\n \"\"\"Traitlet for a command that should be a list of strings,\n but allows it to be specified as a single string.\n \"\"\"\n\n def __init__(self, default_value=None, **kwargs):\n kwargs.setdefault('minlen', 1)\n if isinstance(default_value, str):\n default_value = [default_value]\n super().__init__(Unicode(), default_value, **kwargs)\n\n def validate(self, obj, value):\n if isinstance(value, str):\n value = [value]\n return super().validate(obj, value)\n\n\nclass ByteSpecification(Integer):\n \"\"\"\n Allow easily specifying bytes in units of 1024 with suffixes\n\n Suffixes allowed are:\n - K -> Kilobyte\n - M -> Megabyte\n - G -> Gigabyte\n - T -> Terabyte\n \"\"\"\n\n UNIT_SUFFIXES = {\n 'K': 1024,\n 'M': 1024 * 1024,\n 'G': 1024 * 1024 * 1024,\n 'T': 1024 * 1024 * 1024 * 1024,\n }\n\n # Default to allowing None as a value\n allow_none = True\n\n def validate(self, obj, value):\n \"\"\"\n Validate that the passed in value is a valid memory 
specification\n\n It could either be a pure int, when it is taken as a byte value.\n If it has one of the suffixes, it is converted into the appropriate\n pure byte value.\n \"\"\"\n if isinstance(value, (int, float)):\n return int(value)\n\n try:\n num = float(value[:-1])\n except ValueError:\n raise TraitError(\n '{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(\n val=value\n )\n )\n suffix = value[-1]\n if suffix not in self.UNIT_SUFFIXES:\n raise TraitError(\n '{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(\n val=value\n )\n )\n else:\n return int(float(num) * self.UNIT_SUFFIXES[suffix])\n\n\nclass Callable(TraitType):\n \"\"\"\n A trait which is callable.\n\n Classes are callable, as are instances\n with a __call__() method.\n \"\"\"\n\n info_text = 'a callable'\n\n def validate(self, obj, value):\n if callable(value):\n return value\n else:\n self.error(obj, value)\n\n\nclass EntryPointType(Type):\n \"\"\"Entry point-extended Type\n\n classes can be registered via entry points\n in addition to standard 'mypackage.MyClass' strings\n \"\"\"\n\n _original_help = ''\n\n def __init__(self, *args, entry_point_group, **kwargs):\n self.entry_point_group = entry_point_group\n super().__init__(*args, **kwargs)\n\n @property\n def help(self):\n \"\"\"Extend help by listing currently installed choices\"\"\"\n chunks = [self._original_help]\n chunks.append(\"Currently installed: \")\n for key, entry_point in self.load_entry_points().items():\n chunks.append(\n \" - {}: {}.{}\".format(\n key, entry_point.module_name, entry_point.object_name\n )\n )\n return '\\n'.join(chunks)\n\n @help.setter\n def help(self, value):\n self._original_help = value\n\n def load_entry_points(self):\n \"\"\"Load my entry point group\"\"\"\n # load the group\n group = entrypoints.get_group_named(self.entry_point_group)\n # make it case-insensitive\n return {key.lower(): value for key, value in group.items()}\n\n def validate(self, obj, value):\n if isinstance(value, str):\n # first, look up in entry point registry\n registry = self.load_entry_points()\n key = value.lower()\n if key in registry:\n value = registry[key].load()\n return super().validate(obj, value)\n", "path": "jupyterhub/traitlets.py"}], "after_files": [{"content": "\"\"\"\nTraitlets that are used in JupyterHub\n\"\"\"\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport entrypoints\nfrom traitlets import Integer\nfrom traitlets import List\nfrom traitlets import TraitError\nfrom traitlets import TraitType\nfrom traitlets import Type\nfrom traitlets import Undefined\nfrom traitlets import Unicode\n\n\nclass URLPrefix(Unicode):\n def validate(self, obj, value):\n u = super().validate(obj, value)\n if not u.startswith('/'):\n u = '/' + u\n if not u.endswith('/'):\n u = u + '/'\n return u\n\n\nclass Command(List):\n \"\"\"Traitlet for a command that should be a list of strings,\n but allows it to be specified as a single string.\n \"\"\"\n\n def __init__(self, default_value=Undefined, **kwargs):\n kwargs.setdefault('minlen', 1)\n if isinstance(default_value, str):\n default_value = [default_value]\n if default_value is not Undefined and (\n not (default_value is None and not kwargs.get(\"allow_none\", False))\n ):\n kwargs[\"default_value\"] = default_value\n super().__init__(Unicode(), **kwargs)\n\n def validate(self, obj, value):\n if isinstance(value, str):\n value = [value]\n return 
super().validate(obj, value)\n\n\nclass ByteSpecification(Integer):\n \"\"\"\n Allow easily specifying bytes in units of 1024 with suffixes\n\n Suffixes allowed are:\n - K -> Kilobyte\n - M -> Megabyte\n - G -> Gigabyte\n - T -> Terabyte\n \"\"\"\n\n UNIT_SUFFIXES = {\n 'K': 1024,\n 'M': 1024 * 1024,\n 'G': 1024 * 1024 * 1024,\n 'T': 1024 * 1024 * 1024 * 1024,\n }\n\n # Default to allowing None as a value\n allow_none = True\n\n def validate(self, obj, value):\n \"\"\"\n Validate that the passed in value is a valid memory specification\n\n It could either be a pure int, when it is taken as a byte value.\n If it has one of the suffixes, it is converted into the appropriate\n pure byte value.\n \"\"\"\n if isinstance(value, (int, float)):\n return int(value)\n\n try:\n num = float(value[:-1])\n except ValueError:\n raise TraitError(\n '{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(\n val=value\n )\n )\n suffix = value[-1]\n if suffix not in self.UNIT_SUFFIXES:\n raise TraitError(\n '{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(\n val=value\n )\n )\n else:\n return int(float(num) * self.UNIT_SUFFIXES[suffix])\n\n\nclass Callable(TraitType):\n \"\"\"\n A trait which is callable.\n\n Classes are callable, as are instances\n with a __call__() method.\n \"\"\"\n\n info_text = 'a callable'\n\n def validate(self, obj, value):\n if callable(value):\n return value\n else:\n self.error(obj, value)\n\n\nclass EntryPointType(Type):\n \"\"\"Entry point-extended Type\n\n classes can be registered via entry points\n in addition to standard 'mypackage.MyClass' strings\n \"\"\"\n\n _original_help = ''\n\n def __init__(self, *args, entry_point_group, **kwargs):\n self.entry_point_group = entry_point_group\n super().__init__(*args, **kwargs)\n\n @property\n def help(self):\n \"\"\"Extend help by listing currently installed choices\"\"\"\n chunks = [self._original_help]\n chunks.append(\"Currently installed: \")\n for key, entry_point in self.load_entry_points().items():\n chunks.append(\n \" - {}: {}.{}\".format(\n key, entry_point.module_name, entry_point.object_name\n )\n )\n return '\\n'.join(chunks)\n\n @help.setter\n def help(self, value):\n self._original_help = value\n\n def load_entry_points(self):\n \"\"\"Load my entry point group\"\"\"\n # load the group\n group = entrypoints.get_group_named(self.entry_point_group)\n # make it case-insensitive\n return {key.lower(): value for key, value in group.items()}\n\n def validate(self, obj, value):\n if isinstance(value, str):\n # first, look up in entry point registry\n registry = self.load_entry_points()\n key = value.lower()\n if key in registry:\n value = registry[key].load()\n return super().validate(obj, value)\n", "path": "jupyterhub/traitlets.py"}]} | 2,000 | 263 |
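The essence of the `Command` fix is a sentinel: `None` is a legitimate default when `allow_none=True`, so "no default supplied" needs a distinct marker (`traitlets.Undefined`). Below is a minimal sketch of the same pattern in plain Python, independent of traitlets, to show which inputs end up forwarding a `default_value` and which do not.

```python
# Sentinel pattern behind the patch; this is not the traitlets source.
_UNDEFINED = object()  # unique marker, distinguishable from None


def make_command_kwargs(default=_UNDEFINED, allow_none=False):
    if isinstance(default, str):
        default = [default]  # promote a single string to a one-item list
    kwargs = {}
    if default is not _UNDEFINED and not (default is None and not allow_none):
        kwargs["default_value"] = default
    return kwargs  # what would be forwarded to the parent trait


print(make_command_kwargs())                         # {} -> parent's own default
print(make_command_kwargs("jupyterhub-singleuser"))  # {'default_value': [...]}
print(make_command_kwargs(None))                     # {} -> None ignored
print(make_command_kwargs(None, allow_none=True))    # {'default_value': None}
```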
gh_patches_debug_16018 | rasdani/github-patches | git_diff | fonttools__fonttools-1839 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Importing a TTFont from XML fails when LC_TIME is set
Importing a font from XML while LC_TIME locale is set to non-English, causes an error.
### How to reproduce?
This might be easy when a non-English locale is available in the system. I came across this, while using a package on top. The corresponding issue in their package is amueller/word_cloud#530. There is a script to reproduce, which only throws an error, when a non-English locale like 'de_DE' is set with e. g. `locale.setlocale(locale.LC_TIME, 'de_DE.UTF-8')` or just by opening Spyder-IDE.
**A simplified test is:**
```python
import locale
locale.setlocale(locale.LC_TIME, 'de_DE.UTF-8') # works if de_DE is available
from fontTools.misc.timeTools import timestampFromString,timestampToString,timestampNow
ts_now = timestampNow()
str_now = timestampToString(ts_now)
timestampFromString(str_now) # ValueError
```
Let's go into the cause of the error.
### Basics
The locale for LC_TIME can be checked with
```python
import locale
print(locale.getlocale(locale.LC_TIME))
```
This outputs `('de_DE', 'UTF-8')` in my case.
With this locale the following fails:
```python
import time
time.strptime('Mon', '%a')
# ValueError: unconverted data remains: n
```
`'Mo'` is the localized abbreviation in de_DE for Monday.
### TTFont
The method [`importXML`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/ttLib/ttFont.py#L318) in `TTFont` receives the font object as XML. This can contain created and modified dates. The XML is parsed by the `XMLReader`, which somehow uses the [`fromXML`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/ttLib/tables/_h_e_a_d.py#L107) method in `table__h_e_a_d`. There the created and modified dates are parsed using [`timestampFromString`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/misc/timeTools.py#L46) from timeTools. This helper function uses `time.strptime(value)`.
In my test case `value` is initialized from the 'created' attribute of a font as `'Mon Jan 8 12:28:04 2007'`, which throws the following error:
```
ValueError: time data 'Mon Jan 8 12:28:04 2007' does not match format '%a %b %d %H:%M:%S %Y'
```
### How to resolve?
I think the parsing should be done independently of the locale, since the XML attribute is likely not localized. In the opposite function [`timestampToString`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/misc/timeTools.py#L43), `asctime` is used, which relies on a fixed list of abbreviated weekday and month names, so it is not localized. Hence [`timestampFromString`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/misc/timeTools.py#L46) shouldn't be localized either.
A simple solution could be
```python
def timestampFromString(value):
import locale
l = locale.getlocale(locale.LC_TIME)
locale.setlocale(locale.LC_TIME, 'C')
try:
t = time.strptime(value)
finally:
locale.setlocale(locale.LC_TIME, l)
return calendar.timegm(t) - epoch_diff
```
However, changing the locale is not recommended. It's better to use a function that can parse a date with a specified locale without changing it. You could use [dateparser](https://dateparser.readthedocs.io/en/latest/) for example, but I don't know about your dependencies and how you handle it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Lib/fontTools/misc/timeTools.py`
Content:
```
1 """fontTools.misc.timeTools.py -- tools for working with OpenType timestamps.
2 """
3
4 from fontTools.misc.py23 import *
5 import os
6 import time
7 import calendar
8
9
10 epoch_diff = calendar.timegm((1904, 1, 1, 0, 0, 0, 0, 0, 0))
11
12 DAYNAMES = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
13 MONTHNAMES = [None, "Jan", "Feb", "Mar", "Apr", "May", "Jun",
14 "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
15
16
17 def asctime(t=None):
18 """
19 Convert a tuple or struct_time representing a time as returned by gmtime()
20 or localtime() to a 24-character string of the following form:
21
22 >>> asctime(time.gmtime(0))
23 'Thu Jan 1 00:00:00 1970'
24
25 If t is not provided, the current time as returned by localtime() is used.
26 Locale information is not used by asctime().
27
28 This is meant to normalise the output of the built-in time.asctime() across
29 different platforms and Python versions.
30 In Python 3.x, the day of the month is right-justified, whereas on Windows
31 Python 2.7 it is padded with zeros.
32
33 See https://github.com/fonttools/fonttools/issues/455
34 """
35 if t is None:
36 t = time.localtime()
37 s = "%s %s %2s %s" % (
38 DAYNAMES[t.tm_wday], MONTHNAMES[t.tm_mon], t.tm_mday,
39 time.strftime("%H:%M:%S %Y", t))
40 return s
41
42
43 def timestampToString(value):
44 return asctime(time.gmtime(max(0, value + epoch_diff)))
45
46 def timestampFromString(value):
47 return calendar.timegm(time.strptime(value)) - epoch_diff
48
49 def timestampNow():
50 # https://reproducible-builds.org/specs/source-date-epoch/
51 source_date_epoch = os.environ.get("SOURCE_DATE_EPOCH")
52 if source_date_epoch is not None:
53 return int(source_date_epoch) - epoch_diff
54 return int(time.time() - epoch_diff)
55
56 def timestampSinceEpoch(value):
57 return int(value - epoch_diff)
58
59
60 if __name__ == "__main__":
61 import sys
62 import doctest
63 sys.exit(doctest.testmod().failed)
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Lib/fontTools/misc/timeTools.py b/Lib/fontTools/misc/timeTools.py
--- a/Lib/fontTools/misc/timeTools.py
+++ b/Lib/fontTools/misc/timeTools.py
@@ -4,6 +4,7 @@
from fontTools.misc.py23 import *
import os
import time
+from datetime import datetime, timezone
import calendar
@@ -44,7 +45,12 @@
return asctime(time.gmtime(max(0, value + epoch_diff)))
def timestampFromString(value):
- return calendar.timegm(time.strptime(value)) - epoch_diff
+ wkday, mnth = value[:7].split()
+ t = datetime.strptime(value[7:], ' %d %H:%M:%S %Y')
+ t = t.replace(month=MONTHNAMES.index(mnth), tzinfo=timezone.utc)
+ wkday_idx = DAYNAMES.index(wkday)
+ assert t.weekday() == wkday_idx, '"' + value + '" has inconsistent weekday'
+ return int(t.timestamp()) - epoch_diff
def timestampNow():
# https://reproducible-builds.org/specs/source-date-epoch/
| {"golden_diff": "diff --git a/Lib/fontTools/misc/timeTools.py b/Lib/fontTools/misc/timeTools.py\n--- a/Lib/fontTools/misc/timeTools.py\n+++ b/Lib/fontTools/misc/timeTools.py\n@@ -4,6 +4,7 @@\n from fontTools.misc.py23 import *\n import os\n import time\n+from datetime import datetime, timezone\n import calendar\n \n \n@@ -44,7 +45,12 @@\n \treturn asctime(time.gmtime(max(0, value + epoch_diff)))\n \n def timestampFromString(value):\n-\treturn calendar.timegm(time.strptime(value)) - epoch_diff\n+\twkday, mnth = value[:7].split()\n+\tt = datetime.strptime(value[7:], ' %d %H:%M:%S %Y')\n+\tt = t.replace(month=MONTHNAMES.index(mnth), tzinfo=timezone.utc)\n+\twkday_idx = DAYNAMES.index(wkday)\n+\tassert t.weekday() == wkday_idx, '\"' + value + '\" has inconsistent weekday'\n+\treturn int(t.timestamp()) - epoch_diff\n \n def timestampNow():\n \t# https://reproducible-builds.org/specs/source-date-epoch/\n", "issue": "Importing a TTFont from XML fails when LC_TIME is set\nImporting a font from XML while LC_TIME locale is set to non-English, causes an error.\r\n\r\n### How to reproduce?\r\n\r\nThis might be easy when a non-English locale is available in the system. I came across this, while using a package on top. The corresponding issue in their package is amueller/word_cloud#530. There is a script to reproduce, which only throws an error, when a non-English locale like 'de_DE' is set with e. g. `locale.setlocale(locale.LC_TIME, 'de_DE.UTF-8')` or just by opening Spyder-IDE.\r\n\r\n**A simplified test is:**\r\n```python\r\nimport locale\r\nlocale.setlocale(locale.LC_TIME, 'de_DE.UTF-8') # works if de_DE is available\r\n\r\nfrom fontTools.misc.timeTools import timestampFromString,timestampToString,timestampNow\r\nts_now = timestampNow()\r\nstr_now = timestampToString(ts_now)\r\ntimestampFromString(str_now) # ValueError\r\n```\r\n\r\nLet's go into the cause of the error.\r\n\r\n### Basics\r\n\r\nThe locale for LC_TIME can be checked with\r\n```python\r\nimport locale\r\nprint(locale.getlocale(locale.LC_TIME))\r\n```\r\nThis outputs `('de_DE', 'UTF-8')` in my case.\r\n\r\nWith this locale the following fails:\r\n```python\r\nimport time\r\ntime.strptime('Mon', '%a')\r\n# ValueError: unconverted data remains: n\r\n```\r\n`'Mo'` is the localized abbreviation in de_DE for Monday.\r\n\r\n### TTFont\r\n\r\nThe method [`importXML`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/ttLib/ttFont.py#L318) in `TTFont` receives the font object as XML. This can contain created and modified dates. The XML is parsed by the `XMLReader`, which somehow uses the [`fromXML`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/ttLib/tables/_h_e_a_d.py#L107) method in `table__h_e_a_d`. There the created and modified dates are parsed using [`timestampFromString`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/misc/timeTools.py#L46) from timeTools. This helper function uses `time.strptime(value)`.\r\n\r\nIn my test case `value` is initialized from the 'created' attribute of a font as `'Mon Jan 8 12:28:04 2007'`, which throws the following error:\r\n```\r\nValueError: time data 'Mon Jan 8 12:28:04 2007' does not match format '%a %b %d %H:%M:%S %Y'\r\n```\r\n\r\n\r\n### How to resolve?\r\n\r\nI think the parsing should be done without locale, since the XML attribute is likely to be non-local. 
In the opposite function [`timestampToString`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/misc/timeTools.py#L43) `asctime` is used, which uses a fixed list of abbreviated week days and months. So that is not localized. Hence [`timestampFromString`](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/misc/timeTools.py#L46) shouldn't be localized as well.\r\n\r\nA simple solution could be\r\n```python\r\ndef timestampFromString(value):\r\n\timport locale\r\n\tl = locale.getlocale(locale.LC_TIME)\r\n\tlocale.setlocale(locale.LC_TIME, 'C')\r\n\ttry:\r\n\t\tt = time.strptime(value)\r\n\tfinally:\r\n\t\tlocale.setlocale(locale.LC_TIME, l)\r\n\treturn calendar.timegm(t) - epoch_diff\r\n```\r\n\r\nHowever, changing the locale is not recommended. It's better to use a function that can parse a date with specified locale without changing it. You could use [dateparser](https://dateparser.readthedocs.io/en/latest/) for example, but I don't know about your dependencies and how you handle it.\n", "before_files": [{"content": "\"\"\"fontTools.misc.timeTools.py -- tools for working with OpenType timestamps.\n\"\"\"\n\nfrom fontTools.misc.py23 import *\nimport os\nimport time\nimport calendar\n\n\nepoch_diff = calendar.timegm((1904, 1, 1, 0, 0, 0, 0, 0, 0))\n\nDAYNAMES = [\"Mon\", \"Tue\", \"Wed\", \"Thu\", \"Fri\", \"Sat\", \"Sun\"]\nMONTHNAMES = [None, \"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\",\n\t\t\t \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\"]\n\n\ndef asctime(t=None):\n\t\"\"\"\n\tConvert a tuple or struct_time representing a time as returned by gmtime()\n\tor localtime() to a 24-character string of the following form:\n\n\t>>> asctime(time.gmtime(0))\n\t'Thu Jan 1 00:00:00 1970'\n\n\tIf t is not provided, the current time as returned by localtime() is used.\n\tLocale information is not used by asctime().\n\n\tThis is meant to normalise the output of the built-in time.asctime() across\n\tdifferent platforms and Python versions.\n\tIn Python 3.x, the day of the month is right-justified, whereas on Windows\n\tPython 2.7 it is padded with zeros.\n\n\tSee https://github.com/fonttools/fonttools/issues/455\n\t\"\"\"\n\tif t is None:\n\t\tt = time.localtime()\n\ts = \"%s %s %2s %s\" % (\n\t\tDAYNAMES[t.tm_wday], MONTHNAMES[t.tm_mon], t.tm_mday,\n\t\ttime.strftime(\"%H:%M:%S %Y\", t))\n\treturn s\n\n\ndef timestampToString(value):\n\treturn asctime(time.gmtime(max(0, value + epoch_diff)))\n\ndef timestampFromString(value):\n\treturn calendar.timegm(time.strptime(value)) - epoch_diff\n\ndef timestampNow():\n\t# https://reproducible-builds.org/specs/source-date-epoch/\n\tsource_date_epoch = os.environ.get(\"SOURCE_DATE_EPOCH\")\n\tif source_date_epoch is not None:\n\t\treturn int(source_date_epoch) - epoch_diff\n\treturn int(time.time() - epoch_diff)\n\ndef timestampSinceEpoch(value):\n\treturn int(value - epoch_diff)\n\n\nif __name__ == \"__main__\":\n\timport sys\n\timport doctest\n\tsys.exit(doctest.testmod().failed)\n", "path": "Lib/fontTools/misc/timeTools.py"}], "after_files": [{"content": "\"\"\"fontTools.misc.timeTools.py -- tools for working with OpenType timestamps.\n\"\"\"\n\nfrom fontTools.misc.py23 import *\nimport os\nimport time\nfrom datetime import datetime, timezone\nimport calendar\n\n\nepoch_diff = calendar.timegm((1904, 1, 1, 0, 0, 0, 0, 0, 0))\n\nDAYNAMES = [\"Mon\", \"Tue\", \"Wed\", \"Thu\", \"Fri\", \"Sat\", \"Sun\"]\nMONTHNAMES = [None, \"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\",\n\t\t\t \"Jul\", \"Aug\", \"Sep\", 
\"Oct\", \"Nov\", \"Dec\"]\n\n\ndef asctime(t=None):\n\t\"\"\"\n\tConvert a tuple or struct_time representing a time as returned by gmtime()\n\tor localtime() to a 24-character string of the following form:\n\n\t>>> asctime(time.gmtime(0))\n\t'Thu Jan 1 00:00:00 1970'\n\n\tIf t is not provided, the current time as returned by localtime() is used.\n\tLocale information is not used by asctime().\n\n\tThis is meant to normalise the output of the built-in time.asctime() across\n\tdifferent platforms and Python versions.\n\tIn Python 3.x, the day of the month is right-justified, whereas on Windows\n\tPython 2.7 it is padded with zeros.\n\n\tSee https://github.com/fonttools/fonttools/issues/455\n\t\"\"\"\n\tif t is None:\n\t\tt = time.localtime()\n\ts = \"%s %s %2s %s\" % (\n\t\tDAYNAMES[t.tm_wday], MONTHNAMES[t.tm_mon], t.tm_mday,\n\t\ttime.strftime(\"%H:%M:%S %Y\", t))\n\treturn s\n\n\ndef timestampToString(value):\n\treturn asctime(time.gmtime(max(0, value + epoch_diff)))\n\ndef timestampFromString(value):\n\twkday, mnth = value[:7].split()\n\tt = datetime.strptime(value[7:], ' %d %H:%M:%S %Y')\n\tt = t.replace(month=MONTHNAMES.index(mnth), tzinfo=timezone.utc)\n\twkday_idx = DAYNAMES.index(wkday)\n\tassert t.weekday() == wkday_idx, '\"' + value + '\" has inconsistent weekday'\n\treturn int(t.timestamp()) - epoch_diff\n\ndef timestampNow():\n\t# https://reproducible-builds.org/specs/source-date-epoch/\n\tsource_date_epoch = os.environ.get(\"SOURCE_DATE_EPOCH\")\n\tif source_date_epoch is not None:\n\t\treturn int(source_date_epoch) - epoch_diff\n\treturn int(time.time() - epoch_diff)\n\ndef timestampSinceEpoch(value):\n\treturn int(value - epoch_diff)\n\n\nif __name__ == \"__main__\":\n\timport sys\n\timport doctest\n\tsys.exit(doctest.testmod().failed)\n", "path": "Lib/fontTools/misc/timeTools.py"}]} | 1,779 | 250 |
gh_patches_debug_3077 | rasdani/github-patches | git_diff | SeldonIO__MLServer-1168 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expected XGBoost model file "model.bst" extension is undocumented?
On https://github.com/SeldonIO/MLServer/blob/master/runtimes/xgboost/mlserver_xgboost/xgboost.py#L21 you can see that MLServer is looking for an XGBoost model file called "model.bst". However, I cannot find any reference to that file extension in the XGBoost documentation. As far as I can see, XGBoost's documented file extensions are:
- ".json" added in 1.0.0, an "open format that can be easily reused"
- ".ubj" for Universal Binary JSON format, available in 1.6.0
- ".model" for the "old binary internal format" prior to 1.0.0, as shown in examples
Where does MLServer get the ".bst" extension from, and what model format does it use? Shouldn't it use one of the extensions mentioned in the XGBoost documentation instead, to avoid ambiguity?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `runtimes/xgboost/mlserver_xgboost/xgboost.py`
Content:
```
1 import xgboost as xgb
2
3 from typing import List
4 from xgboost.sklearn import XGBModel
5
6 from mlserver.errors import InferenceError
7 from mlserver.model import MLModel
8 from mlserver.utils import get_model_uri
9 from mlserver.codecs import NumpyRequestCodec, NumpyCodec
10 from mlserver.types import (
11 InferenceRequest,
12 InferenceResponse,
13 RequestOutput,
14 ResponseOutput,
15 )
16
17 PREDICT_OUTPUT = "predict"
18 PREDICT_PROBA_OUTPUT = "predict_proba"
19 VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]
20
21 WELLKNOWN_MODEL_FILENAMES = ["model.bst", "model.json"]
22
23
24 def _load_sklearn_interface(model_uri: str) -> XGBModel:
25 try:
26 regressor = xgb.XGBRegressor()
27 regressor.load_model(model_uri)
28 return regressor
29 except TypeError:
30 # If there was an error, it's likely due to the model being a
31 # classifier
32 classifier = xgb.XGBClassifier()
33 classifier.load_model(model_uri)
34 return classifier
35
36
37 class XGBoostModel(MLModel):
38 """
39 Implementationof the MLModel interface to load and serve `xgboost` models.
40 """
41
42 async def load(self) -> bool:
43 model_uri = await get_model_uri(
44 self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES
45 )
46
47 self._model = _load_sklearn_interface(model_uri)
48
49 return True
50
51 def _check_request(self, payload: InferenceRequest) -> InferenceRequest:
52 if not payload.outputs:
53 # By default, only return the result of `predict()`
54 payload.outputs = [RequestOutput(name=PREDICT_OUTPUT)]
55 else:
56 for request_output in payload.outputs:
57 if request_output.name not in VALID_OUTPUTS:
58 raise InferenceError(
59 f"XGBoostModel only supports '{PREDICT_OUTPUT}' and "
60 f"'{PREDICT_PROBA_OUTPUT}' as outputs "
61 f"({request_output.name} was received)"
62 )
63
64 # Regression models do not support `predict_proba`
65 if PREDICT_PROBA_OUTPUT in [o.name for o in payload.outputs]:
66 if isinstance(self._model, xgb.XGBRegressor):
67 raise InferenceError(
68 f"XGBRegressor models do not support '{PREDICT_PROBA_OUTPUT}"
69 )
70
71 return payload
72
73 def _get_model_outputs(self, payload: InferenceRequest) -> List[ResponseOutput]:
74 decoded_request = self.decode_request(payload, default_codec=NumpyRequestCodec)
75
76 outputs = []
77 for request_output in payload.outputs: # type: ignore
78 predict_fn = getattr(self._model, request_output.name)
79 y = predict_fn(decoded_request)
80
81 output = self.encode(y, request_output, default_codec=NumpyCodec)
82 outputs.append(output)
83
84 return outputs
85
86 async def predict(self, payload: InferenceRequest) -> InferenceResponse:
87 payload = self._check_request(payload)
88 outputs = self._get_model_outputs(payload)
89
90 return InferenceResponse(
91 model_name=self.name,
92 model_version=self.version,
93 outputs=outputs,
94 )
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/runtimes/xgboost/mlserver_xgboost/xgboost.py b/runtimes/xgboost/mlserver_xgboost/xgboost.py
--- a/runtimes/xgboost/mlserver_xgboost/xgboost.py
+++ b/runtimes/xgboost/mlserver_xgboost/xgboost.py
@@ -18,7 +18,7 @@
PREDICT_PROBA_OUTPUT = "predict_proba"
VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]
-WELLKNOWN_MODEL_FILENAMES = ["model.bst", "model.json"]
+WELLKNOWN_MODEL_FILENAMES = ["model.bst", "model.json", "model.ubj"]
def _load_sklearn_interface(model_uri: str) -> XGBModel:
| {"golden_diff": "diff --git a/runtimes/xgboost/mlserver_xgboost/xgboost.py b/runtimes/xgboost/mlserver_xgboost/xgboost.py\n--- a/runtimes/xgboost/mlserver_xgboost/xgboost.py\n+++ b/runtimes/xgboost/mlserver_xgboost/xgboost.py\n@@ -18,7 +18,7 @@\n PREDICT_PROBA_OUTPUT = \"predict_proba\"\n VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n \n-WELLKNOWN_MODEL_FILENAMES = [\"model.bst\", \"model.json\"]\n+WELLKNOWN_MODEL_FILENAMES = [\"model.bst\", \"model.json\", \"model.ubj\"]\n \n \n def _load_sklearn_interface(model_uri: str) -> XGBModel:\n", "issue": "Expected XGBoost model file \"model.bst\" extension is undocumented? \nOn https://github.com/SeldonIO/MLServer/blob/master/runtimes/xgboost/mlserver_xgboost/xgboost.py#L21 you can see that MLServer is looking for an XGBoost model file called \"model.bst\". However, I cannot find any reference to that file extension in the XGBoost documentation. As far as I can see, XGBoost's documented file extensions are:\r\n\r\n- \".json\" added in 1.0.0, an \"open format that can be easily reused\"\r\n- \".ubj\" for Universal Binary JSON format, available in 1.6.0\r\n- \".model\" for the \"old binary internal format\" prior to 1.0.0, as shown in examples\r\n\r\nWhere does MLServer get the \".bst\" extension from, and what model format does it use? Shouldn't it use one of the extensions mentioned in the XGBoost documentation instead, to avoid ambiguity?\n", "before_files": [{"content": "import xgboost as xgb\n\nfrom typing import List\nfrom xgboost.sklearn import XGBModel\n\nfrom mlserver.errors import InferenceError\nfrom mlserver.model import MLModel\nfrom mlserver.utils import get_model_uri\nfrom mlserver.codecs import NumpyRequestCodec, NumpyCodec\nfrom mlserver.types import (\n InferenceRequest,\n InferenceResponse,\n RequestOutput,\n ResponseOutput,\n)\n\nPREDICT_OUTPUT = \"predict\"\nPREDICT_PROBA_OUTPUT = \"predict_proba\"\nVALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n\nWELLKNOWN_MODEL_FILENAMES = [\"model.bst\", \"model.json\"]\n\n\ndef _load_sklearn_interface(model_uri: str) -> XGBModel:\n try:\n regressor = xgb.XGBRegressor()\n regressor.load_model(model_uri)\n return regressor\n except TypeError:\n # If there was an error, it's likely due to the model being a\n # classifier\n classifier = xgb.XGBClassifier()\n classifier.load_model(model_uri)\n return classifier\n\n\nclass XGBoostModel(MLModel):\n \"\"\"\n Implementationof the MLModel interface to load and serve `xgboost` models.\n \"\"\"\n\n async def load(self) -> bool:\n model_uri = await get_model_uri(\n self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES\n )\n\n self._model = _load_sklearn_interface(model_uri)\n\n return True\n\n def _check_request(self, payload: InferenceRequest) -> InferenceRequest:\n if not payload.outputs:\n # By default, only return the result of `predict()`\n payload.outputs = [RequestOutput(name=PREDICT_OUTPUT)]\n else:\n for request_output in payload.outputs:\n if request_output.name not in VALID_OUTPUTS:\n raise InferenceError(\n f\"XGBoostModel only supports '{PREDICT_OUTPUT}' and \"\n f\"'{PREDICT_PROBA_OUTPUT}' as outputs \"\n f\"({request_output.name} was received)\"\n )\n\n # Regression models do not support `predict_proba`\n if PREDICT_PROBA_OUTPUT in [o.name for o in payload.outputs]:\n if isinstance(self._model, xgb.XGBRegressor):\n raise InferenceError(\n f\"XGBRegressor models do not support '{PREDICT_PROBA_OUTPUT}\"\n )\n\n return payload\n\n def _get_model_outputs(self, payload: InferenceRequest) -> List[ResponseOutput]:\n 
decoded_request = self.decode_request(payload, default_codec=NumpyRequestCodec)\n\n outputs = []\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(decoded_request)\n\n output = self.encode(y, request_output, default_codec=NumpyCodec)\n outputs.append(output)\n\n return outputs\n\n async def predict(self, payload: InferenceRequest) -> InferenceResponse:\n payload = self._check_request(payload)\n outputs = self._get_model_outputs(payload)\n\n return InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=outputs,\n )\n", "path": "runtimes/xgboost/mlserver_xgboost/xgboost.py"}], "after_files": [{"content": "import xgboost as xgb\n\nfrom typing import List\nfrom xgboost.sklearn import XGBModel\n\nfrom mlserver.errors import InferenceError\nfrom mlserver.model import MLModel\nfrom mlserver.utils import get_model_uri\nfrom mlserver.codecs import NumpyRequestCodec, NumpyCodec\nfrom mlserver.types import (\n InferenceRequest,\n InferenceResponse,\n RequestOutput,\n ResponseOutput,\n)\n\nPREDICT_OUTPUT = \"predict\"\nPREDICT_PROBA_OUTPUT = \"predict_proba\"\nVALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n\nWELLKNOWN_MODEL_FILENAMES = [\"model.bst\", \"model.json\", \"model.ubj\"]\n\n\ndef _load_sklearn_interface(model_uri: str) -> XGBModel:\n try:\n regressor = xgb.XGBRegressor()\n regressor.load_model(model_uri)\n return regressor\n except TypeError:\n # If there was an error, it's likely due to the model being a\n # classifier\n classifier = xgb.XGBClassifier()\n classifier.load_model(model_uri)\n return classifier\n\n\nclass XGBoostModel(MLModel):\n \"\"\"\n Implementationof the MLModel interface to load and serve `xgboost` models.\n \"\"\"\n\n async def load(self) -> bool:\n model_uri = await get_model_uri(\n self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES\n )\n\n self._model = _load_sklearn_interface(model_uri)\n\n return True\n\n def _check_request(self, payload: InferenceRequest) -> InferenceRequest:\n if not payload.outputs:\n # By default, only return the result of `predict()`\n payload.outputs = [RequestOutput(name=PREDICT_OUTPUT)]\n else:\n for request_output in payload.outputs:\n if request_output.name not in VALID_OUTPUTS:\n raise InferenceError(\n f\"XGBoostModel only supports '{PREDICT_OUTPUT}' and \"\n f\"'{PREDICT_PROBA_OUTPUT}' as outputs \"\n f\"({request_output.name} was received)\"\n )\n\n # Regression models do not support `predict_proba`\n if PREDICT_PROBA_OUTPUT in [o.name for o in payload.outputs]:\n if isinstance(self._model, xgb.XGBRegressor):\n raise InferenceError(\n f\"XGBRegressor models do not support '{PREDICT_PROBA_OUTPUT}\"\n )\n\n return payload\n\n def _get_model_outputs(self, payload: InferenceRequest) -> List[ResponseOutput]:\n decoded_request = self.decode_request(payload, default_codec=NumpyRequestCodec)\n\n outputs = []\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(decoded_request)\n\n output = self.encode(y, request_output, default_codec=NumpyCodec)\n outputs.append(output)\n\n return outputs\n\n async def predict(self, payload: InferenceRequest) -> InferenceResponse:\n payload = self._check_request(payload)\n outputs = self._get_model_outputs(payload)\n\n return InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=outputs,\n )\n", "path": "runtimes/xgboost/mlserver_xgboost/xgboost.py"}]} | 1,354 | 170 |
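For context on the filenames involved: XGBoost's scikit-learn wrappers select the serialization format from the extension handed to `save_model()`, which is why adding `model.ubj` to the discovery list matters. The sketch below is not MLServer code, and the tiny random dataset exists only to make it runnable.

```python
# Produce models under each filename the runtime will now discover.
import numpy as np
import xgboost as xgb

X = np.random.rand(20, 4)
y = np.random.randint(2, size=20)
clf = xgb.XGBClassifier(n_estimators=2).fit(X, y)

clf.save_model("model.json")  # open JSON format (xgboost >= 1.0)
clf.save_model("model.ubj")   # Universal Binary JSON (xgboost >= 1.6)
# "model.bst" carries no special meaning to xgboost itself: an unknown
# extension falls back to the legacy binary format, which the project
# has deprecated in favour of the two formats above.
```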
gh_patches_debug_18799 | rasdani/github-patches | git_diff | mindee__doctr-30 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[documents] Add basic document reader
For documents to be analyzed, we first need to add a utility for document reading (PDF mostly). The following specs would be nice to have:
- inherit from a shared reader class ("DocumentReader" for instance)
- to be located in the `doctr.documents.reader` module
The following formats should be handled:
- [x] PDF (#8, #25): this resource would be nice to check: https://github.com/pymupdf/PyMuPDF
- [x] PNG (#30)
- [x] JPG (#30)
cc @charlesmindee
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doctr/documents/reader.py`
Content:
```
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 import fitz
7 import numpy as np
8 import cv2
9 from typing import List, Tuple, Optional, Any
10
11 __all__ = ['read_pdf']
12
13
14 def read_pdf(file_path: str, **kwargs: Any) -> List[np.ndarray]:
15 """Read a PDF file and convert it into an image in numpy format
16
17 Example::
18 >>> from doctr.documents import read_pdf
19 >>> doc = read_pdf("path/to/your/doc.pdf")
20
21 Args:
22 file_path: the path to the PDF file
23 Returns:
24 the list of pages decoded as numpy ndarray of shape H x W x 3
25 """
26
27 # Read pages with fitz and convert them to numpy ndarrays
28 return [convert_page_to_numpy(page, **kwargs) for page in fitz.open(file_path)]
29
30
31 def convert_page_to_numpy(
32 page: fitz.fitz.Page,
33 output_size: Optional[Tuple[int, int]] = None,
34 rgb_output: bool = True,
35 ) -> np.ndarray:
36 """Convert a fitz page to a numpy-formatted image
37
38 Args:
39 page: the page of a file read with PyMuPDF
40 output_size: the expected output size of each page in format H x W
41 rgb_output: whether the output ndarray channel order should be RGB instead of BGR.
42
43 Returns:
44 the rendered image in numpy format
45 """
46
47 transform_matrix = None
48
49 # If no output size is specified, keep the origin one
50 if output_size is not None:
51 scales = (output_size[1] / page.MediaBox[2], output_size[0] / page.MediaBox[3])
52 transform_matrix = fitz.Matrix(*scales)
53
54 # Generate the pixel map using the transformation matrix
55 stream = page.getPixmap(matrix=transform_matrix).getImageData()
56 # Decode it into a numpy
57 img = cv2.imdecode(np.frombuffer(stream, dtype=np.uint8), cv2.IMREAD_UNCHANGED)
58
59 # Switch the channel order
60 if rgb_output:
61 img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
62
63 return img
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doctr/documents/reader.py b/doctr/documents/reader.py
--- a/doctr/documents/reader.py
+++ b/doctr/documents/reader.py
@@ -8,7 +8,36 @@
import cv2
from typing import List, Tuple, Optional, Any
-__all__ = ['read_pdf']
+__all__ = ['read_pdf', 'read_img']
+
+
+def read_img(
+ file_path: str,
+ output_size: Optional[Tuple[int, int]] = None,
+ rgb_output: bool = True,
+) -> np.ndarray:
+ """Read an image file into numpy format
+
+ Example::
+ >>> from doctr.documents import read_img
+ >>> page = read_img("path/to/your/doc.jpg")
+
+ Args:
+ file_path: the path to the image file
+ output_size: the expected output size of each page in format H x W
+ rgb_output: whether the output ndarray channel order should be RGB instead of BGR.
+ Returns:
+ the page decoded as numpy ndarray of shape H x W x 3
+ """
+
+ img = cv2.imread(file_path, cv2.IMREAD_COLOR)
+ # Resizing
+ if isinstance(output_size, tuple):
+ img = cv2.resize(img, output_size[::-1], interpolation=cv2.INTER_LINEAR)
+ # Switch the channel order
+ if rgb_output:
+ img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+ return img
def read_pdf(file_path: str, **kwargs: Any) -> List[np.ndarray]:
| {"golden_diff": "diff --git a/doctr/documents/reader.py b/doctr/documents/reader.py\n--- a/doctr/documents/reader.py\n+++ b/doctr/documents/reader.py\n@@ -8,7 +8,36 @@\n import cv2\n from typing import List, Tuple, Optional, Any\n \n-__all__ = ['read_pdf']\n+__all__ = ['read_pdf', 'read_img']\n+\n+\n+def read_img(\n+ file_path: str,\n+ output_size: Optional[Tuple[int, int]] = None,\n+ rgb_output: bool = True,\n+) -> np.ndarray:\n+ \"\"\"Read an image file into numpy format\n+\n+ Example::\n+ >>> from doctr.documents import read_img\n+ >>> page = read_img(\"path/to/your/doc.jpg\")\n+\n+ Args:\n+ file_path: the path to the image file\n+ output_size: the expected output size of each page in format H x W\n+ rgb_output: whether the output ndarray channel order should be RGB instead of BGR.\n+ Returns:\n+ the page decoded as numpy ndarray of shape H x W x 3\n+ \"\"\"\n+\n+ img = cv2.imread(file_path, cv2.IMREAD_COLOR)\n+ # Resizing\n+ if isinstance(output_size, tuple):\n+ img = cv2.resize(img, output_size[::-1], interpolation=cv2.INTER_LINEAR)\n+ # Switch the channel order\n+ if rgb_output:\n+ img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n+ return img\n \n \n def read_pdf(file_path: str, **kwargs: Any) -> List[np.ndarray]:\n", "issue": "[documents] Add basic document reader\nFor documents to be analyzed, we first need to add a utility for document reading (PDF mostly). The following specs would be nice to have:\r\n- inherit for a shared reader class (\"DocumentReader\" for instance)\r\n- to be located in the `doctr.documents.reader` module\r\n\r\nThe following formats should be handled:\r\n- [x] PDF (#8, #25): this resource would be nice to check: https://github.com/pymupdf/PyMuPDF\r\n- [x] PNG (#30)\r\n- [x] JPG (#30)\r\n\r\n\r\ncc @charlesmindee \n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport fitz\nimport numpy as np\nimport cv2\nfrom typing import List, Tuple, Optional, Any\n\n__all__ = ['read_pdf']\n\n\ndef read_pdf(file_path: str, **kwargs: Any) -> List[np.ndarray]:\n \"\"\"Read a PDF file and convert it into an image in numpy format\n\n Example::\n >>> from doctr.documents import read_pdf\n >>> doc = read_pdf(\"path/to/your/doc.pdf\")\n\n Args:\n file_path: the path to the PDF file\n Returns:\n the list of pages decoded as numpy ndarray of shape H x W x 3\n \"\"\"\n\n # Read pages with fitz and convert them to numpy ndarrays\n return [convert_page_to_numpy(page, **kwargs) for page in fitz.open(file_path)]\n\n\ndef convert_page_to_numpy(\n page: fitz.fitz.Page,\n output_size: Optional[Tuple[int, int]] = None,\n rgb_output: bool = True,\n) -> np.ndarray:\n \"\"\"Convert a fitz page to a numpy-formatted image\n\n Args:\n page: the page of a file read with PyMuPDF\n output_size: the expected output size of each page in format H x W\n rgb_output: whether the output ndarray channel order should be RGB instead of BGR.\n\n Returns:\n the rendered image in numpy format\n \"\"\"\n\n transform_matrix = None\n\n # If no output size is specified, keep the origin one\n if output_size is not None:\n scales = (output_size[1] / page.MediaBox[2], output_size[0] / page.MediaBox[3])\n transform_matrix = fitz.Matrix(*scales)\n\n # Generate the pixel map using the transformation matrix\n stream = page.getPixmap(matrix=transform_matrix).getImageData()\n # Decode it into a numpy\n img = cv2.imdecode(np.frombuffer(stream, 
dtype=np.uint8), cv2.IMREAD_UNCHANGED)\n\n # Switch the channel order\n if rgb_output:\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n return img\n", "path": "doctr/documents/reader.py"}], "after_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport fitz\nimport numpy as np\nimport cv2\nfrom typing import List, Tuple, Optional, Any\n\n__all__ = ['read_pdf', 'read_img']\n\n\ndef read_img(\n file_path: str,\n output_size: Optional[Tuple[int, int]] = None,\n rgb_output: bool = True,\n) -> np.ndarray:\n \"\"\"Read an image file into numpy format\n\n Example::\n >>> from doctr.documents import read_img\n >>> page = read_img(\"path/to/your/doc.jpg\")\n\n Args:\n file_path: the path to the image file\n output_size: the expected output size of each page in format H x W\n rgb_output: whether the output ndarray channel order should be RGB instead of BGR.\n Returns:\n the page decoded as numpy ndarray of shape H x W x 3\n \"\"\"\n\n img = cv2.imread(file_path, cv2.IMREAD_COLOR)\n # Resizing\n if isinstance(output_size, tuple):\n img = cv2.resize(img, output_size[::-1], interpolation=cv2.INTER_LINEAR)\n # Switch the channel order\n if rgb_output:\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n return img\n\n\ndef read_pdf(file_path: str, **kwargs: Any) -> List[np.ndarray]:\n \"\"\"Read a PDF file and convert it into an image in numpy format\n\n Example::\n >>> from doctr.documents import read_pdf\n >>> doc = read_pdf(\"path/to/your/doc.pdf\")\n\n Args:\n file_path: the path to the PDF file\n Returns:\n the list of pages decoded as numpy ndarray of shape H x W x 3\n \"\"\"\n\n # Read pages with fitz and convert them to numpy ndarrays\n return [convert_page_to_numpy(page, **kwargs) for page in fitz.open(file_path)]\n\n\ndef convert_page_to_numpy(\n page: fitz.fitz.Page,\n output_size: Optional[Tuple[int, int]] = None,\n rgb_output: bool = True,\n) -> np.ndarray:\n \"\"\"Convert a fitz page to a numpy-formatted image\n\n Args:\n page: the page of a file read with PyMuPDF\n output_size: the expected output size of each page in format H x W\n rgb_output: whether the output ndarray channel order should be RGB instead of BGR.\n\n Returns:\n the rendered image in numpy format\n \"\"\"\n\n transform_matrix = None\n\n # If no output size is specified, keep the origin one\n if output_size is not None:\n scales = (output_size[1] / page.MediaBox[2], output_size[0] / page.MediaBox[3])\n transform_matrix = fitz.Matrix(*scales)\n\n # Generate the pixel map using the transformation matrix\n stream = page.getPixmap(matrix=transform_matrix).getImageData()\n # Decode it into a numpy\n img = cv2.imdecode(np.frombuffer(stream, dtype=np.uint8), cv2.IMREAD_UNCHANGED)\n\n # Switch the channel order\n if rgb_output:\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n return img\n", "path": "doctr/documents/reader.py"}]} | 1,021 | 352 |
gh_patches_debug_11528 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-590 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PyTorch security vulnerability
See https://github.com/advisories/GHSA-47fc-vmwq-366v
Need to upgrade to PyTorch 1.13.1
--- END ISSUE ---
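Before looking at the packaging code, it may help to see what the mitigation means at runtime. The sketch below is illustrative only — the guard and the `packaging` dependency are assumptions for the example, not part of GaNDLF:

```python
# Illustrative only: refuse to start on torch builds older than the patched
# 1.13.1 release referenced by GHSA-47fc-vmwq-366v.
from packaging.version import Version

import torch

MIN_SAFE_TORCH = Version("1.13.1")

if Version(torch.__version__) < MIN_SAFE_TORCH:
    raise RuntimeError(
        f"torch {torch.__version__} is affected by GHSA-47fc-vmwq-366v; "
        f"upgrade to >= {MIN_SAFE_TORCH}"
    )
```

The real fix, of course, is pinning the dependency in `setup.py`, which is what the patch below does.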
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import sys, re
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 try:
13 with open("README.md") as readme_file:
14 readme = readme_file.read()
15 except Exception as error:
16 readme = "No README information found."
17 sys.stderr.write("Warning: Could not open '%s' due %s\n" % ("README.md", error))
18
19
20 class CustomInstallCommand(install):
21 def run(self):
22 install.run(self)
23
24
25 class CustomDevelopCommand(develop):
26 def run(self):
27 develop.run(self)
28
29
30 class CustomEggInfoCommand(egg_info):
31 def run(self):
32 egg_info.run(self)
33
34
35 try:
36 filepath = "GANDLF/version.py"
37 version_file = open(filepath)
38 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
39
40 except Exception as error:
41 __version__ = "0.0.1"
42 sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
43
44 requirements = [
45 "black",
46 "numpy==1.22.0",
47 "scipy",
48 "SimpleITK!=2.0.*",
49 "SimpleITK!=2.2.1", # https://github.com/mlcommons/GaNDLF/issues/536
50 "torchvision",
51 "tqdm",
52 "torchio==0.18.75",
53 "pandas",
54 "scikit-learn>=0.23.2",
55 "scikit-image>=0.19.1",
56 "setuptools",
57 "seaborn",
58 "pyyaml",
59 "tiffslide",
60 "matplotlib",
61 "requests>=2.25.0",
62 "pytest",
63 "coverage",
64 "pytest-cov",
65 "psutil",
66 "medcam",
67 "opencv-python",
68 "torchmetrics==0.5.1", # newer versions have changed api for f1 invocation
69 "OpenPatchMiner==0.1.8",
70 "zarr==2.10.3",
71 "pydicom",
72 "onnx",
73 "torchinfo==1.7.0",
74 "segmentation-models-pytorch==0.3.0",
75 "ACSConv==0.1.1",
76 "docker",
77 "dicom-anonymizer",
78 "twine",
79 "zarr",
80 "keyring",
81 ]
82
83 # pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389
84 if sys.platform == "darwin":
85 requirements.append("torch==1.11.0")
86 else:
87 requirements.append("torch==1.11.0")
88
89 if __name__ == "__main__":
90 setup(
91 name="GANDLF",
92 version=__version__,
93 author="MLCommons",
94 author_email="[email protected]",
95 python_requires=">=3.8",
96 packages=find_packages(),
97 cmdclass={
98 "install": CustomInstallCommand,
99 "develop": CustomDevelopCommand,
100 "egg_info": CustomEggInfoCommand,
101 },
102 scripts=[
103 "gandlf_run",
104 "gandlf_constructCSV",
105 "gandlf_collectStats",
106 "gandlf_patchMiner",
107 "gandlf_preprocess",
108 "gandlf_anonymizer",
109 "gandlf_verifyInstall",
110 "gandlf_configGenerator",
111 "gandlf_recoverConfig",
112 "gandlf_deploy",
113 ],
114 classifiers=[
115 "Development Status :: 3 - Alpha",
116 "Intended Audience :: Science/Research",
117 "License :: OSI Approved :: Apache Software License",
118 "Natural Language :: English",
119 "Operating System :: OS Independent",
120 "Programming Language :: Python :: 3.8",
121 "Programming Language :: Python :: 3.9",
122 "Programming Language :: Python :: 3.10",
123 "Topic :: Scientific/Engineering :: Medical Science Apps",
124 ],
125 description=(
126 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
127 ),
128 install_requires=requirements,
129 license="Apache-2.0",
130 long_description=readme,
131 long_description_content_type="text/markdown",
132 include_package_data=True,
133 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch",
134 zip_safe=False,
135 )
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -42,6 +42,7 @@
sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
requirements = [
+ "torch==1.13.1",
"black",
"numpy==1.22.0",
"scipy",
@@ -80,12 +81,6 @@
"keyring",
]
-# pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389
-if sys.platform == "darwin":
- requirements.append("torch==1.11.0")
-else:
- requirements.append("torch==1.11.0")
-
if __name__ == "__main__":
setup(
name="GANDLF",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -42,6 +42,7 @@\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n \n requirements = [\n+ \"torch==1.13.1\",\n \"black\",\n \"numpy==1.22.0\",\n \"scipy\",\n@@ -80,12 +81,6 @@\n \"keyring\",\n ]\n \n-# pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389\n-if sys.platform == \"darwin\":\n- requirements.append(\"torch==1.11.0\")\n-else:\n- requirements.append(\"torch==1.11.0\")\n-\n if __name__ == \"__main__\":\n setup(\n name=\"GANDLF\",\n", "issue": "PyTorch security vulnerability\nSee https://github.com/advisories/GHSA-47fc-vmwq-366v\r\n\r\nNeed to upgrade to PyTorch 1.13.1\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport sys, re\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\ntry:\n with open(\"README.md\") as readme_file:\n readme = readme_file.read()\nexcept Exception as error:\n readme = \"No README information found.\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (\"README.md\", error))\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.22.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"SimpleITK!=2.2.1\", # https://github.com/mlcommons/GaNDLF/issues/536\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.75\",\n \"pandas\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics==0.5.1\", # newer versions have changed api for f1 invocation\n \"OpenPatchMiner==0.1.8\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n \"segmentation-models-pytorch==0.3.0\",\n \"ACSConv==0.1.1\",\n \"docker\",\n \"dicom-anonymizer\",\n \"twine\",\n \"zarr\",\n \"keyring\",\n]\n\n# pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389\nif sys.platform == \"darwin\":\n requirements.append(\"torch==1.11.0\")\nelse:\n requirements.append(\"torch==1.11.0\")\n\nif __name__ == \"__main__\":\n setup(\n name=\"GANDLF\",\n version=__version__,\n author=\"MLCommons\",\n author_email=\"[email protected]\",\n python_requires=\">=3.8\",\n packages=find_packages(),\n cmdclass={\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n \"gandlf_configGenerator\",\n \"gandlf_recoverConfig\",\n \"gandlf_deploy\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n 
\"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"Apache-2.0\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch\",\n zip_safe=False,\n )\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport sys, re\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\ntry:\n with open(\"README.md\") as readme_file:\n readme = readme_file.read()\nexcept Exception as error:\n readme = \"No README information found.\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (\"README.md\", error))\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"torch==1.13.1\",\n \"black\",\n \"numpy==1.22.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"SimpleITK!=2.2.1\", # https://github.com/mlcommons/GaNDLF/issues/536\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.75\",\n \"pandas\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics==0.5.1\", # newer versions have changed api for f1 invocation\n \"OpenPatchMiner==0.1.8\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n \"segmentation-models-pytorch==0.3.0\",\n \"ACSConv==0.1.1\",\n \"docker\",\n \"dicom-anonymizer\",\n \"twine\",\n \"zarr\",\n \"keyring\",\n]\n\nif __name__ == \"__main__\":\n setup(\n name=\"GANDLF\",\n version=__version__,\n author=\"MLCommons\",\n author_email=\"[email protected]\",\n python_requires=\">=3.8\",\n packages=find_packages(),\n cmdclass={\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n \"gandlf_configGenerator\",\n \"gandlf_recoverConfig\",\n \"gandlf_deploy\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: 
English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"Apache-2.0\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch\",\n zip_safe=False,\n )\n", "path": "setup.py"}]} | 1,610 | 197 |
gh_patches_debug_16711 | rasdani/github-patches | git_diff | google__TensorNetwork-489 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tn.set_default_backend should raise an exception
`tn.set_default_backend(backend_name)` should raise a `ValueError` if `backend_name` is not a valid backend.
--- END ISSUE ---
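A minimal sketch of the requested validation — assuming the factory module exposes a `_BACKENDS` name-to-class registry, which is an assumption about the codebase rather than a documented API — would be:

```python
# Sketch: reject unknown backend names up front instead of failing later.
from typing import Text, Union

from tensornetwork.backends import backend_factory
from tensornetwork.backends.base_backend import BaseBackend


def set_default_backend(backend: Union[Text, BaseBackend]) -> None:
    if isinstance(backend, Text) and backend not in backend_factory._BACKENDS:
        raise ValueError(f"Backend '{backend}' was not found.")
    # ... fall through to the existing stack update shown in the file below.
```

With that in place, a typo such as `tn.set_default_backend("numpyy")` fails immediately with a clear message instead of deferring the error to the first node construction.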
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensornetwork/backend_contextmanager.py`
Content:
```
1 from typing import Text, Union
2 from tensornetwork.backends.base_backend import BaseBackend
3
4 class DefaultBackend():
5 """Context manager for setting up backend for nodes"""
6
7 def __init__(self, backend: Union[Text, BaseBackend]) -> None:
8 if not isinstance(backend, (Text, BaseBackend)):
9 raise ValueError("Item passed to DefaultBackend "
10 "must be Text or BaseBackend")
11 self.backend = backend
12
13 def __enter__(self):
14 _default_backend_stack.stack.append(self)
15
16 def __exit__(self, exc_type, exc_val, exc_tb):
17 _default_backend_stack.stack.pop()
18
19 class _DefaultBackendStack():
20 """A stack to keep track default backends context manager"""
21
22 def __init__(self):
23 self.stack = []
24 self.default_backend = "numpy"
25
26 def get_current_backend(self):
27 return self.stack[-1].backend if self.stack else self.default_backend
28
29 _default_backend_stack = _DefaultBackendStack()
30
31 def get_default_backend():
32 return _default_backend_stack.get_current_backend()
33
34 def set_default_backend(backend: Union[Text, BaseBackend]) -> None:
35 if _default_backend_stack.stack:
36 raise AssertionError("The default backend should not be changed "
37 "inside the backend context manager")
38 if not isinstance(backend, (Text, BaseBackend)):
39 raise ValueError("Item passed to set_default_backend "
40 "must be Text or BaseBackend")
41 _default_backend_stack.default_backend = backend
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensornetwork/backend_contextmanager.py b/tensornetwork/backend_contextmanager.py
--- a/tensornetwork/backend_contextmanager.py
+++ b/tensornetwork/backend_contextmanager.py
@@ -1,5 +1,6 @@
from typing import Text, Union
from tensornetwork.backends.base_backend import BaseBackend
+from tensornetwork.backends import backend_factory
class DefaultBackend():
"""Context manager for setting up backend for nodes"""
@@ -38,4 +39,6 @@
if not isinstance(backend, (Text, BaseBackend)):
raise ValueError("Item passed to set_default_backend "
"must be Text or BaseBackend")
+ if isinstance(backend, Text) and backend not in backend_factory._BACKENDS:
+ raise ValueError(f"Backend '{backend}' was not found.")
_default_backend_stack.default_backend = backend
| {"golden_diff": "diff --git a/tensornetwork/backend_contextmanager.py b/tensornetwork/backend_contextmanager.py\n--- a/tensornetwork/backend_contextmanager.py\n+++ b/tensornetwork/backend_contextmanager.py\n@@ -1,5 +1,6 @@\n from typing import Text, Union\n from tensornetwork.backends.base_backend import BaseBackend\n+from tensornetwork.backends import backend_factory\n \n class DefaultBackend():\n \"\"\"Context manager for setting up backend for nodes\"\"\"\n@@ -38,4 +39,6 @@\n if not isinstance(backend, (Text, BaseBackend)):\n raise ValueError(\"Item passed to set_default_backend \"\n \"must be Text or BaseBackend\")\n+ if isinstance(backend, Text) and backend not in backend_factory._BACKENDS:\n+ raise ValueError(f\"Backend '{backend}' was not found.\")\n _default_backend_stack.default_backend = backend\n", "issue": "tn.set_default_backend should raise exception\n`tn.set_default_backend(backend_name)` should raise if `backend_name` is not a valid backend.\n", "before_files": [{"content": "from typing import Text, Union\nfrom tensornetwork.backends.base_backend import BaseBackend\n\nclass DefaultBackend():\n \"\"\"Context manager for setting up backend for nodes\"\"\"\n\n def __init__(self, backend: Union[Text, BaseBackend]) -> None:\n if not isinstance(backend, (Text, BaseBackend)):\n raise ValueError(\"Item passed to DefaultBackend \"\n \"must be Text or BaseBackend\")\n self.backend = backend\n\n def __enter__(self):\n _default_backend_stack.stack.append(self)\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n _default_backend_stack.stack.pop()\n\nclass _DefaultBackendStack():\n \"\"\"A stack to keep track default backends context manager\"\"\"\n\n def __init__(self):\n self.stack = []\n self.default_backend = \"numpy\"\n\n def get_current_backend(self):\n return self.stack[-1].backend if self.stack else self.default_backend\n\n_default_backend_stack = _DefaultBackendStack()\n\ndef get_default_backend():\n return _default_backend_stack.get_current_backend()\n\ndef set_default_backend(backend: Union[Text, BaseBackend]) -> None:\n if _default_backend_stack.stack:\n raise AssertionError(\"The default backend should not be changed \"\n \"inside the backend context manager\")\n if not isinstance(backend, (Text, BaseBackend)):\n raise ValueError(\"Item passed to set_default_backend \"\n \"must be Text or BaseBackend\")\n _default_backend_stack.default_backend = backend\n", "path": "tensornetwork/backend_contextmanager.py"}], "after_files": [{"content": "from typing import Text, Union\nfrom tensornetwork.backends.base_backend import BaseBackend\nfrom tensornetwork.backends import backend_factory\n\nclass DefaultBackend():\n \"\"\"Context manager for setting up backend for nodes\"\"\"\n\n def __init__(self, backend: Union[Text, BaseBackend]) -> None:\n if not isinstance(backend, (Text, BaseBackend)):\n raise ValueError(\"Item passed to DefaultBackend \"\n \"must be Text or BaseBackend\")\n self.backend = backend\n\n def __enter__(self):\n _default_backend_stack.stack.append(self)\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n _default_backend_stack.stack.pop()\n\nclass _DefaultBackendStack():\n \"\"\"A stack to keep track default backends context manager\"\"\"\n\n def __init__(self):\n self.stack = []\n self.default_backend = \"numpy\"\n\n def get_current_backend(self):\n return self.stack[-1].backend if self.stack else self.default_backend\n\n_default_backend_stack = _DefaultBackendStack()\n\ndef get_default_backend():\n return _default_backend_stack.get_current_backend()\n\ndef 
set_default_backend(backend: Union[Text, BaseBackend]) -> None:\n if _default_backend_stack.stack:\n raise AssertionError(\"The default backend should not be changed \"\n \"inside the backend context manager\")\n if not isinstance(backend, (Text, BaseBackend)):\n raise ValueError(\"Item passed to set_default_backend \"\n \"must be Text or BaseBackend\")\n if isinstance(backend, Text) and backend not in backend_factory._BACKENDS:\n raise ValueError(f\"Backend '{backend}' was not found.\")\n _default_backend_stack.default_backend = backend\n", "path": "tensornetwork/backend_contextmanager.py"}]} | 682 | 186 |
gh_patches_debug_9249 | rasdani/github-patches | git_diff | sublimelsp__LSP-490 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] CamelCase instead of snace_case
The `documentChanges` argument on the left in https://github.com/tomv564/LSP/blob/5a472ba6f23d70f6f8f1ebaabb83015c066ce198/plugin/rename.py#L69
should be `document_changes`, like `LspApplyWorkspaceEditCommand` expects:
https://github.com/tomv564/LSP/blob/5a472ba6f23d70f6f8f1ebaabb83015c066ce198/plugin/core/edit.py#L19
When doing a rename, this popped up in the console
```
LSP: --> textDocument/rename
Traceback (most recent call last):
File "/opt/sublime_text/sublime_plugin.py", line 1034, in run_
return self.run(**args)
TypeError: run() got an unexpected keyword argument 'documentChanges'
```
--- END ISSUE ---
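The root cause is how Sublime dispatches command arguments: the args dict is splatted into `run(**args)`, so every key must match a parameter name exactly. A minimal sketch of both sides (class and variable names are illustrative, not the plugin's real code):

```python
import sublime_plugin


class LspApplyWorkspaceEditCommand(sublime_plugin.WindowCommand):
    # Sublime effectively calls self.run(**args), so the dict keys used by
    # callers must match these snake_case parameter names.
    def run(self, changes=None, document_changes=None):
        pass


# Correct caller side: the *key* is snake_case even though the LSP response
# field it is read from stays camelCase.
def apply_edit(window, response):
    window.run_command(
        "lsp_apply_workspace_edit",
        {"changes": response.get("changes"),
         "document_changes": response.get("documentChanges")},
    )
```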
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/rename.py`
Content:
```
1 import sublime_plugin
2 from .core.registry import client_for_view, LspTextCommand
3 from .core.protocol import Request
4 from .core.documents import get_document_position, get_position, is_at_word
5 try:
6 from typing import List, Dict, Optional
7 assert List and Dict and Optional
8 except ImportError:
9 pass
10
11
12 class RenameSymbolInputHandler(sublime_plugin.TextInputHandler):
13 def __init__(self, view):
14 self.view = view
15
16 def name(self):
17 return "new_name"
18
19 def placeholder(self):
20 return self.get_current_symbol_name()
21
22 def initial_text(self):
23 return self.get_current_symbol_name()
24
25 def validate(self, name):
26 return len(name) > 0
27
28 def get_current_symbol_name(self):
29 pos = get_position(self.view)
30 current_name = self.view.substr(self.view.word(pos))
31 # Is this check necessary?
32 if not current_name:
33 current_name = ""
34 return current_name
35
36
37 class LspSymbolRenameCommand(LspTextCommand):
38 def __init__(self, view):
39 super().__init__(view)
40
41 def is_enabled(self, event=None):
42 # TODO: check what kind of scope we're in.
43 if self.has_client_with_capability('renameProvider'):
44 return is_at_word(self.view, event)
45 return False
46
47 def input(self, args):
48 if "new_name" not in args:
49 return RenameSymbolInputHandler(self.view)
50 else:
51 return None
52
53 def run(self, edit, new_name, event=None):
54 pos = get_position(self.view, event)
55 params = get_document_position(self.view, pos)
56
57 self.request_rename(params, new_name)
58
59 def request_rename(self, params, new_name) -> None:
60 client = client_for_view(self.view)
61 if client:
62 params["newName"] = new_name
63 client.send_request(Request.rename(params), self.handle_response)
64
65 def handle_response(self, response: 'Optional[Dict]') -> None:
66 if response:
67 self.view.window().run_command('lsp_apply_workspace_edit',
68 {'changes': response.get('changes'),
69 'documentChanges': response.get('documentChanges')})
70 else:
71 self.view.window().status_message('No rename edits returned')
72
73 def want_event(self):
74 return True
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugin/rename.py b/plugin/rename.py
--- a/plugin/rename.py
+++ b/plugin/rename.py
@@ -66,7 +66,7 @@
if response:
self.view.window().run_command('lsp_apply_workspace_edit',
{'changes': response.get('changes'),
- 'documentChanges': response.get('documentChanges')})
+ 'document_changes': response.get('documentChanges')})
else:
self.view.window().status_message('No rename edits returned')
| {"golden_diff": "diff --git a/plugin/rename.py b/plugin/rename.py\n--- a/plugin/rename.py\n+++ b/plugin/rename.py\n@@ -66,7 +66,7 @@\n if response:\n self.view.window().run_command('lsp_apply_workspace_edit',\n {'changes': response.get('changes'),\n- 'documentChanges': response.get('documentChanges')})\n+ 'document_changes': response.get('documentChanges')})\n else:\n self.view.window().status_message('No rename edits returned')\n", "issue": "[bug] CamelCase instead of snace_case \n`documentChanges` argument on the left https://github.com/tomv564/LSP/blob/5a472ba6f23d70f6f8f1ebaabb83015c066ce198/plugin/rename.py#L69\r\nshould be `document_changes`, like `LspApplyWorkspaceEditCommand` expects:\r\nhttps://github.com/tomv564/LSP/blob/5a472ba6f23d70f6f8f1ebaabb83015c066ce198/plugin/core/edit.py#L19\r\n\r\nWhen doing a rename, this popped up in the console\r\n```\r\nLSP: --> textDocument/rename\r\nTraceback (most recent call last):\r\n File \"/opt/sublime_text/sublime_plugin.py\", line 1034, in run_\r\n return self.run(**args)\r\nTypeError: run() got an unexpected keyword argument 'documentChanges'\r\n```\n", "before_files": [{"content": "import sublime_plugin\nfrom .core.registry import client_for_view, LspTextCommand\nfrom .core.protocol import Request\nfrom .core.documents import get_document_position, get_position, is_at_word\ntry:\n from typing import List, Dict, Optional\n assert List and Dict and Optional\nexcept ImportError:\n pass\n\n\nclass RenameSymbolInputHandler(sublime_plugin.TextInputHandler):\n def __init__(self, view):\n self.view = view\n\n def name(self):\n return \"new_name\"\n\n def placeholder(self):\n return self.get_current_symbol_name()\n\n def initial_text(self):\n return self.get_current_symbol_name()\n\n def validate(self, name):\n return len(name) > 0\n\n def get_current_symbol_name(self):\n pos = get_position(self.view)\n current_name = self.view.substr(self.view.word(pos))\n # Is this check necessary?\n if not current_name:\n current_name = \"\"\n return current_name\n\n\nclass LspSymbolRenameCommand(LspTextCommand):\n def __init__(self, view):\n super().__init__(view)\n\n def is_enabled(self, event=None):\n # TODO: check what kind of scope we're in.\n if self.has_client_with_capability('renameProvider'):\n return is_at_word(self.view, event)\n return False\n\n def input(self, args):\n if \"new_name\" not in args:\n return RenameSymbolInputHandler(self.view)\n else:\n return None\n\n def run(self, edit, new_name, event=None):\n pos = get_position(self.view, event)\n params = get_document_position(self.view, pos)\n\n self.request_rename(params, new_name)\n\n def request_rename(self, params, new_name) -> None:\n client = client_for_view(self.view)\n if client:\n params[\"newName\"] = new_name\n client.send_request(Request.rename(params), self.handle_response)\n\n def handle_response(self, response: 'Optional[Dict]') -> None:\n if response:\n self.view.window().run_command('lsp_apply_workspace_edit',\n {'changes': response.get('changes'),\n 'documentChanges': response.get('documentChanges')})\n else:\n self.view.window().status_message('No rename edits returned')\n\n def want_event(self):\n return True\n", "path": "plugin/rename.py"}], "after_files": [{"content": "import sublime_plugin\nfrom .core.registry import client_for_view, LspTextCommand\nfrom .core.protocol import Request\nfrom .core.documents import get_document_position, get_position, is_at_word\ntry:\n from typing import List, Dict, Optional\n assert List and Dict and Optional\nexcept ImportError:\n 
pass\n\n\nclass RenameSymbolInputHandler(sublime_plugin.TextInputHandler):\n def __init__(self, view):\n self.view = view\n\n def name(self):\n return \"new_name\"\n\n def placeholder(self):\n return self.get_current_symbol_name()\n\n def initial_text(self):\n return self.get_current_symbol_name()\n\n def validate(self, name):\n return len(name) > 0\n\n def get_current_symbol_name(self):\n pos = get_position(self.view)\n current_name = self.view.substr(self.view.word(pos))\n # Is this check necessary?\n if not current_name:\n current_name = \"\"\n return current_name\n\n\nclass LspSymbolRenameCommand(LspTextCommand):\n def __init__(self, view):\n super().__init__(view)\n\n def is_enabled(self, event=None):\n # TODO: check what kind of scope we're in.\n if self.has_client_with_capability('renameProvider'):\n return is_at_word(self.view, event)\n return False\n\n def input(self, args):\n if \"new_name\" not in args:\n return RenameSymbolInputHandler(self.view)\n else:\n return None\n\n def run(self, edit, new_name, event=None):\n pos = get_position(self.view, event)\n params = get_document_position(self.view, pos)\n\n self.request_rename(params, new_name)\n\n def request_rename(self, params, new_name) -> None:\n client = client_for_view(self.view)\n if client:\n params[\"newName\"] = new_name\n client.send_request(Request.rename(params), self.handle_response)\n\n def handle_response(self, response: 'Optional[Dict]') -> None:\n if response:\n self.view.window().run_command('lsp_apply_workspace_edit',\n {'changes': response.get('changes'),\n 'document_changes': response.get('documentChanges')})\n else:\n self.view.window().status_message('No rename edits returned')\n\n def want_event(self):\n return True\n", "path": "plugin/rename.py"}]} | 1,120 | 109 |
gh_patches_debug_9490 | rasdani/github-patches | git_diff | Mailu__Mailu-2255 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Provide a "slow" transport for Postfix
## Environment & Versions
### Environment
- [x] docker-compose
- [ ] kubernetes
- [ ] docker swarm
### Versions
1.9
## Description
Orange, a mainstream French ISP, and a few others have a rate limit: without a slow transport, I get deferred messages with "Too many connections, slow down." It is a known issue: https://blog.network-studio.fr/2011/06/30/too-many-connections-slow-down/
I managed to get it done with the overrides/ files:
overrides/postfix.cf:
```
transport_maps = socketmap:unix:/tmp/podop.socket:transport lmdb:/etc/postfix/transport.map
slow_destination_concurrency_limit = 1
slow_destination_recipient_limit = 20
slow_destination_rate_delay = 5s
slow_destination_concurrency_failed_cohort_limit=10
```
overrides/postfix.master:
```
slow/unix= slow unix - - n - 5 smtp -o syslog_name=postfix-slow
```
overrides/transport.map:
```
wanadoo.com slow:
wanadoo.fr slow:
orange.com slow:
orange.fr slow:
laposte.net slow:
free.fr slow:
hotmail.fr slow:
outlook.fr slow:
yahoo.fr slow:
```
I did not have time to fully test it, but it seems to work. Configuration values may need some fine-tuning...
It would be nice to have such a "slow" transport built into Mailu, with an override possibility to edit the domain list.
--- END ISSUE ---
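If this were baked into Mailu's postfix start-up script rather than hand-maintained overrides, the transport map could be generated at container start. The domain list below is taken from the report; the helper itself is an assumption for illustration, not existing Mailu code:

```python
# Illustrative start-up step: write and compile a transport map that routes
# rate-limited destinations through a "slow" postfix transport.
import os

SLOW_DOMAINS = [
    "wanadoo.com", "wanadoo.fr", "orange.com", "orange.fr",
    "laposte.net", "free.fr", "hotmail.fr", "outlook.fr", "yahoo.fr",
]

with open("/etc/postfix/transport.map", "w") as handle:
    for domain in SLOW_DOMAINS:
        handle.write("{} slow:\n".format(domain))

os.system("postmap /etc/postfix/transport.map")
```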
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/postfix/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import glob
5 import shutil
6 import multiprocessing
7 import logging as log
8 import sys
9
10 from podop import run_server
11 from pwd import getpwnam
12 from socrate import system, conf
13
14 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
15
16 def start_podop():
17 os.setuid(getpwnam('postfix').pw_uid)
18 os.mkdir('/dev/shm/postfix',mode=0o700)
19 url = "http://" + os.environ["ADMIN_ADDRESS"] + "/internal/postfix/"
20 # TODO: Remove verbosity setting from Podop?
21 run_server(0, "postfix", "/tmp/podop.socket", [
22 ("transport", "url", url + "transport/§"),
23 ("alias", "url", url + "alias/§"),
24 ("dane", "url", url + "dane/§"),
25 ("domain", "url", url + "domain/§"),
26 ("mailbox", "url", url + "mailbox/§"),
27 ("recipientmap", "url", url + "recipient/map/§"),
28 ("sendermap", "url", url + "sender/map/§"),
29 ("senderaccess", "url", url + "sender/access/§"),
30 ("senderlogin", "url", url + "sender/login/§"),
31 ("senderrate", "url", url + "sender/rate/§")
32 ])
33
34 def start_mta_sts_daemon():
35 os.chmod("/root/", 0o755) # read access to /root/.netrc required
36 os.setuid(getpwnam('postfix').pw_uid)
37 from postfix_mta_sts_resolver import daemon
38 daemon.main()
39
40 def is_valid_postconf_line(line):
41 return not line.startswith("#") \
42 and not line == ''
43
44 # Actual startup script
45 os.environ['DEFER_ON_TLS_ERROR'] = os.environ['DEFER_ON_TLS_ERROR'] if 'DEFER_ON_TLS_ERROR' in os.environ else 'True'
46 os.environ["FRONT_ADDRESS"] = system.get_host_address_from_environment("FRONT", "front")
47 os.environ["ADMIN_ADDRESS"] = system.get_host_address_from_environment("ADMIN", "admin")
48 os.environ["ANTISPAM_MILTER_ADDRESS"] = system.get_host_address_from_environment("ANTISPAM_MILTER", "antispam:11332")
49 os.environ["LMTP_ADDRESS"] = system.get_host_address_from_environment("LMTP", "imap:2525")
50 os.environ["POSTFIX_LOG_SYSLOG"] = os.environ.get("POSTFIX_LOG_SYSLOG","local")
51 os.environ["POSTFIX_LOG_FILE"] = os.environ.get("POSTFIX_LOG_FILE", "")
52
53 for postfix_file in glob.glob("/conf/*.cf"):
54 conf.jinja(postfix_file, os.environ, os.path.join("/etc/postfix", os.path.basename(postfix_file)))
55
56 if os.path.exists("/overrides/postfix.cf"):
57 for line in open("/overrides/postfix.cf").read().strip().split("\n"):
58 if is_valid_postconf_line(line):
59 os.system('postconf -e "{}"'.format(line))
60
61 if os.path.exists("/overrides/postfix.master"):
62 for line in open("/overrides/postfix.master").read().strip().split("\n"):
63 if is_valid_postconf_line(line):
64 os.system('postconf -Me "{}"'.format(line))
65
66 for map_file in glob.glob("/overrides/*.map"):
67 destination = os.path.join("/etc/postfix", os.path.basename(map_file))
68 shutil.copyfile(map_file, destination)
69 os.system("postmap {}".format(destination))
70 os.remove(destination)
71
72 if os.path.exists("/overrides/mta-sts-daemon.yml"):
73 shutil.copyfile("/overrides/mta-sts-daemon.yml", "/etc/mta-sts-daemon.yml")
74 else:
75 conf.jinja("/conf/mta-sts-daemon.yml", os.environ, "/etc/mta-sts-daemon.yml")
76
77 if not os.path.exists("/etc/postfix/tls_policy.map.lmdb"):
78 open("/etc/postfix/tls_policy.map", "a").close()
79 os.system("postmap /etc/postfix/tls_policy.map")
80
81 if "RELAYUSER" in os.environ:
82 path = "/etc/postfix/sasl_passwd"
83 conf.jinja("/conf/sasl_passwd", os.environ, path)
84 os.system("postmap {}".format(path))
85
86 # Configure and start local rsyslog server
87 conf.jinja("/conf/rsyslog.conf", os.environ, "/etc/rsyslog.conf")
88 os.system("/usr/sbin/rsyslogd -niNONE &")
89 # Configure logrotate and start crond
90 if os.environ["POSTFIX_LOG_FILE"] != "":
91 conf.jinja("/conf/logrotate.conf", os.environ, "/etc/logrotate.d/postfix.conf")
92 os.system("/usr/sbin/crond")
93 if os.path.exists("/overrides/logrotate.conf"):
94 shutil.copyfile("/overrides/logrotate.conf", "/etc/logrotate.d/postfix.conf")
95
96 # Run Podop and Postfix
97 multiprocessing.Process(target=start_podop).start()
98 multiprocessing.Process(target=start_mta_sts_daemon).start()
99 os.system("/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing")
100 # Before starting postfix, we need to check permissions on /queue
101 # in the event that postfix,postdrop id have changed
102 os.system("postfix set-permissions")
103 os.system("postfix start-fg")
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/postfix/start.py b/core/postfix/start.py
--- a/core/postfix/start.py
+++ b/core/postfix/start.py
@@ -74,9 +74,10 @@
else:
conf.jinja("/conf/mta-sts-daemon.yml", os.environ, "/etc/mta-sts-daemon.yml")
-if not os.path.exists("/etc/postfix/tls_policy.map.lmdb"):
- open("/etc/postfix/tls_policy.map", "a").close()
- os.system("postmap /etc/postfix/tls_policy.map")
+for policy in ['tls_policy', 'transport']:
+ if not os.path.exists(f'/etc/postfix/{policy}.map.lmdb'):
+ open(f'/etc/postfix/{policy}.map', 'a').close()
+ os.system(f'postmap /etc/postfix/{policy}.map')
if "RELAYUSER" in os.environ:
path = "/etc/postfix/sasl_passwd"
| {"golden_diff": "diff --git a/core/postfix/start.py b/core/postfix/start.py\n--- a/core/postfix/start.py\n+++ b/core/postfix/start.py\n@@ -74,9 +74,10 @@\n else:\n conf.jinja(\"/conf/mta-sts-daemon.yml\", os.environ, \"/etc/mta-sts-daemon.yml\")\n \n-if not os.path.exists(\"/etc/postfix/tls_policy.map.lmdb\"):\n- open(\"/etc/postfix/tls_policy.map\", \"a\").close()\n- os.system(\"postmap /etc/postfix/tls_policy.map\")\n+for policy in ['tls_policy', 'transport']:\n+ if not os.path.exists(f'/etc/postfix/{policy}.map.lmdb'):\n+ open(f'/etc/postfix/{policy}.map', 'a').close()\n+ os.system(f'postmap /etc/postfix/{policy}.map')\n \n if \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n", "issue": "Provide a \"slow\" transport for Postfix\n## Environment & Versions\r\n### Environment\r\n - [x] docker-compose\r\n - [ ] kubernetes\r\n - [ ] docker swarm\r\n\r\n### Versions\r\n1.9\r\n\r\n## Description\r\nOrange, a mainstream french ISP, and a few others, have a rate limit : without a slow transport, I get deferred messages with this : \"Too many connections, slow down.\" It is a known issue https://blog.network-studio.fr/2011/06/30/too-many-connections-slow-down/\r\n\r\nI managed to get it done with the overrides/ files :\r\n\r\noverrides/postfix.cf :\r\n\r\n```\r\ntransport_maps = socketmap:unix:/tmp/podop.socket:transport lmdb:/etc/postfix/transport.map\r\n\r\nslow_destination_concurrency_limit = 1\r\nslow_destination_recipient_limit = 20\r\nslow_destination_rate_delay = 5s\r\nslow_destination_concurrency_failed_cohort_limit=10\r\n\r\n```\r\noverrides/postfix.master :\r\n\r\n```\r\nslow/unix= slow unix - - n - 5 smtp -o syslog_name=postfix-slow\r\n```\r\n\r\noverrides/transport.map :\r\n\r\n```\r\nwanadoo.com slow:\r\nwanadoo.fr slow:\r\norange.com slow:\r\norange.fr slow:\r\nlaposte.net slow:\r\nfree.fr slow:\r\nhotmail.fr slow:\r\noutlook.fr slow:\r\nyahoo.fr slow:\r\n```\r\nI did not have time to fully test it, but it seems to work. 
Configuration values may need a fine tuning...\r\n\r\nIt would be nice to have such \"slow\" transport built in in Mailu, with an override possibility to edit the domain list.\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\n\nfrom podop import run_server\nfrom pwd import getpwnam\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(getpwnam('postfix').pw_uid)\n os.mkdir('/dev/shm/postfix',mode=0o700)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n (\"transport\", \"url\", url + \"transport/\u00a7\"),\n (\"alias\", \"url\", url + \"alias/\u00a7\"),\n (\"dane\", \"url\", url + \"dane/\u00a7\"),\n (\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"recipientmap\", \"url\", url + \"recipient/map/\u00a7\"),\n (\"sendermap\", \"url\", url + \"sender/map/\u00a7\"),\n (\"senderaccess\", \"url\", url + \"sender/access/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\"),\n (\"senderrate\", \"url\", url + \"sender/rate/\u00a7\")\n ])\n\ndef start_mta_sts_daemon():\n os.chmod(\"/root/\", 0o755) # read access to /root/.netrc required\n os.setuid(getpwnam('postfix').pw_uid)\n from postfix_mta_sts_resolver import daemon\n daemon.main()\n\ndef is_valid_postconf_line(line):\n return not line.startswith(\"#\") \\\n and not line == ''\n\n# Actual startup script\nos.environ['DEFER_ON_TLS_ERROR'] = os.environ['DEFER_ON_TLS_ERROR'] if 'DEFER_ON_TLS_ERROR' in os.environ else 'True'\nos.environ[\"FRONT_ADDRESS\"] = system.get_host_address_from_environment(\"FRONT\", \"front\")\nos.environ[\"ADMIN_ADDRESS\"] = system.get_host_address_from_environment(\"ADMIN\", \"admin\")\nos.environ[\"ANTISPAM_MILTER_ADDRESS\"] = system.get_host_address_from_environment(\"ANTISPAM_MILTER\", \"antispam:11332\")\nos.environ[\"LMTP_ADDRESS\"] = system.get_host_address_from_environment(\"LMTP\", \"imap:2525\")\nos.environ[\"POSTFIX_LOG_SYSLOG\"] = os.environ.get(\"POSTFIX_LOG_SYSLOG\",\"local\")\nos.environ[\"POSTFIX_LOG_FILE\"] = os.environ.get(\"POSTFIX_LOG_FILE\", \"\")\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n conf.jinja(postfix_file, os.environ, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif os.path.exists(\"/overrides/mta-sts-daemon.yml\"):\n shutil.copyfile(\"/overrides/mta-sts-daemon.yml\", \"/etc/mta-sts-daemon.yml\")\nelse:\n conf.jinja(\"/conf/mta-sts-daemon.yml\", os.environ, \"/etc/mta-sts-daemon.yml\")\n\nif not os.path.exists(\"/etc/postfix/tls_policy.map.lmdb\"):\n open(\"/etc/postfix/tls_policy.map\", \"a\").close()\n 
os.system(\"postmap /etc/postfix/tls_policy.map\")\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n conf.jinja(\"/conf/sasl_passwd\", os.environ, path)\n os.system(\"postmap {}\".format(path))\n\n# Configure and start local rsyslog server\nconf.jinja(\"/conf/rsyslog.conf\", os.environ, \"/etc/rsyslog.conf\")\nos.system(\"/usr/sbin/rsyslogd -niNONE &\")\n# Configure logrotate and start crond\nif os.environ[\"POSTFIX_LOG_FILE\"] != \"\":\n conf.jinja(\"/conf/logrotate.conf\", os.environ, \"/etc/logrotate.d/postfix.conf\")\n os.system(\"/usr/sbin/crond\")\n if os.path.exists(\"/overrides/logrotate.conf\"):\n shutil.copyfile(\"/overrides/logrotate.conf\", \"/etc/logrotate.d/postfix.conf\")\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nmultiprocessing.Process(target=start_mta_sts_daemon).start()\nos.system(\"/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing\")\n# Before starting postfix, we need to check permissions on /queue\n# in the event that postfix,postdrop id have changed\nos.system(\"postfix set-permissions\")\nos.system(\"postfix start-fg\")\n", "path": "core/postfix/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\n\nfrom podop import run_server\nfrom pwd import getpwnam\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(getpwnam('postfix').pw_uid)\n os.mkdir('/dev/shm/postfix',mode=0o700)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n (\"transport\", \"url\", url + \"transport/\u00a7\"),\n (\"alias\", \"url\", url + \"alias/\u00a7\"),\n (\"dane\", \"url\", url + \"dane/\u00a7\"),\n (\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"recipientmap\", \"url\", url + \"recipient/map/\u00a7\"),\n (\"sendermap\", \"url\", url + \"sender/map/\u00a7\"),\n (\"senderaccess\", \"url\", url + \"sender/access/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\"),\n (\"senderrate\", \"url\", url + \"sender/rate/\u00a7\")\n ])\n\ndef start_mta_sts_daemon():\n os.chmod(\"/root/\", 0o755) # read access to /root/.netrc required\n os.setuid(getpwnam('postfix').pw_uid)\n from postfix_mta_sts_resolver import daemon\n daemon.main()\n\ndef is_valid_postconf_line(line):\n return not line.startswith(\"#\") \\\n and not line == ''\n\n# Actual startup script\nos.environ['DEFER_ON_TLS_ERROR'] = os.environ['DEFER_ON_TLS_ERROR'] if 'DEFER_ON_TLS_ERROR' in os.environ else 'True'\nos.environ[\"FRONT_ADDRESS\"] = system.get_host_address_from_environment(\"FRONT\", \"front\")\nos.environ[\"ADMIN_ADDRESS\"] = system.get_host_address_from_environment(\"ADMIN\", \"admin\")\nos.environ[\"ANTISPAM_MILTER_ADDRESS\"] = system.get_host_address_from_environment(\"ANTISPAM_MILTER\", \"antispam:11332\")\nos.environ[\"LMTP_ADDRESS\"] = system.get_host_address_from_environment(\"LMTP\", \"imap:2525\")\nos.environ[\"POSTFIX_LOG_SYSLOG\"] = os.environ.get(\"POSTFIX_LOG_SYSLOG\",\"local\")\nos.environ[\"POSTFIX_LOG_FILE\"] = os.environ.get(\"POSTFIX_LOG_FILE\", \"\")\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n conf.jinja(postfix_file, os.environ, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif 
os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif os.path.exists(\"/overrides/mta-sts-daemon.yml\"):\n shutil.copyfile(\"/overrides/mta-sts-daemon.yml\", \"/etc/mta-sts-daemon.yml\")\nelse:\n conf.jinja(\"/conf/mta-sts-daemon.yml\", os.environ, \"/etc/mta-sts-daemon.yml\")\n\nfor policy in ['tls_policy', 'transport']:\n if not os.path.exists(f'/etc/postfix/{policy}.map.lmdb'):\n open(f'/etc/postfix/{policy}.map', 'a').close()\n os.system(f'postmap /etc/postfix/{policy}.map')\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n conf.jinja(\"/conf/sasl_passwd\", os.environ, path)\n os.system(\"postmap {}\".format(path))\n\n# Configure and start local rsyslog server\nconf.jinja(\"/conf/rsyslog.conf\", os.environ, \"/etc/rsyslog.conf\")\nos.system(\"/usr/sbin/rsyslogd -niNONE &\")\n# Configure logrotate and start crond\nif os.environ[\"POSTFIX_LOG_FILE\"] != \"\":\n conf.jinja(\"/conf/logrotate.conf\", os.environ, \"/etc/logrotate.d/postfix.conf\")\n os.system(\"/usr/sbin/crond\")\n if os.path.exists(\"/overrides/logrotate.conf\"):\n shutil.copyfile(\"/overrides/logrotate.conf\", \"/etc/logrotate.d/postfix.conf\")\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nmultiprocessing.Process(target=start_mta_sts_daemon).start()\nos.system(\"/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing\")\n# Before starting postfix, we need to check permissions on /queue\n# in the event that postfix,postdrop id have changed\nos.system(\"postfix set-permissions\")\nos.system(\"postfix start-fg\")\n", "path": "core/postfix/start.py"}]} | 1,974 | 212 |
gh_patches_debug_41023 | rasdani/github-patches | git_diff | pyload__pyload-180 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implemented StreamcloudEu plugin based on XFileSharingPro
Resolves #128
--- END ISSUE ---
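One subtlety worth spelling out before the code: in Python, `super(C, self)` starts the method-resolution-order search *after* class `C`, so calling `super(XFileSharingPro, self).setup()` from inside `StreamcloudEu` skips `XFileSharingPro.setup()` altogether — which is the core of the fix below. A stand-alone Python 3 illustration (class names are stand-ins, not the plugin hierarchy):

```python
class Base:
    def setup(self):
        print("Base.setup")


class Middle(Base):
    def setup(self):
        print("Middle.setup")
        super(Middle, self).setup()


class Child(Middle):
    def setup(self):
        # Wrong: search starts *after* Middle, so Middle.setup never runs.
        super(Middle, self).setup()   # prints: Base.setup
        # Right: search starts after Child, so Middle.setup runs first.
        super(Child, self).setup()    # prints: Middle.setup, Base.setup


Child().setup()
```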
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `module/plugins/hoster/StreamcloudEu.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from module.plugins.hoster.XFileSharingPro import XFileSharingPro, create_getInfo
3 import re
4
5 class StreamcloudEu(XFileSharingPro):
6 __name__ = "StreamcloudEu"
7 __type__ = "hoster"
8 __pattern__ = r"http://(www\.)?streamcloud\.eu/\S+"
9 __version__ = "0.01"
10 __description__ = """Streamcloud.eu hoster plugin"""
11 __author_name__ = ("seoester")
12 __author_mail__ = ("[email protected]")
13
14 HOSTER_NAME = "streamcloud.eu"
15 DIRECT_LINK_PATTERN = r'file: "(http://(stor|cdn)\d+\.streamcloud.eu:?\d*/.*/video\.mp4)",'
16
17 def setup(self):
18 super(XFileSharingPro, self).setup()
19 self.multiDL = True
20
21 def getDownloadLink(self):
22 found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)
23 if found:
24 return found.group(1)
25
26 return super(XFileSharingPro, self).getDownloadLink()
27
28 getInfo = create_getInfo(StreamcloudEu)
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/module/plugins/hoster/StreamcloudEu.py b/module/plugins/hoster/StreamcloudEu.py
--- a/module/plugins/hoster/StreamcloudEu.py
+++ b/module/plugins/hoster/StreamcloudEu.py
@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
from module.plugins.hoster.XFileSharingPro import XFileSharingPro, create_getInfo
+from module.network.HTTPRequest import HTTPRequest
+from time import sleep
import re
class StreamcloudEu(XFileSharingPro):
@@ -15,7 +17,7 @@
DIRECT_LINK_PATTERN = r'file: "(http://(stor|cdn)\d+\.streamcloud.eu:?\d*/.*/video\.mp4)",'
def setup(self):
- super(XFileSharingPro, self).setup()
+ super(StreamcloudEu, self).setup()
self.multiDL = True
def getDownloadLink(self):
@@ -23,6 +25,87 @@
if found:
return found.group(1)
- return super(XFileSharingPro, self).getDownloadLink()
+ for i in range(5):
+ self.logDebug("Getting download link: #%d" % i)
+ data = self.getPostParameters()
+ httpRequest = HTTPRequest(options=self.req.options)
+ httpRequest.cj = self.req.cj
+ sleep(10)
+ self.html = httpRequest.load(self.pyfile.url, post = data, referer=False, cookies=True, decode = True)
+ self.header = httpRequest.header
+
+ found = re.search("Location\s*:\s*(.*)", self.header, re.I)
+ if found:
+ break
+
+ found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)
+ if found:
+ break
+
+ else:
+ if self.errmsg and 'captcha' in self.errmsg:
+ self.fail("No valid captcha code entered")
+ else:
+ self.fail("Download link not found")
+
+ return found.group(1)
+
+ def getPostParameters(self):
+ for i in range(3):
+ if not self.errmsg: self.checkErrors()
+
+ if hasattr(self,"FORM_PATTERN"):
+ action, inputs = self.parseHtmlForm(self.FORM_PATTERN)
+ else:
+ action, inputs = self.parseHtmlForm(input_names={"op": re.compile("^download")})
+
+ if not inputs:
+ action, inputs = self.parseHtmlForm('F1')
+ if not inputs:
+ if self.errmsg:
+ self.retry()
+ else:
+ self.parseError("Form not found")
+
+ self.logDebug(self.HOSTER_NAME, inputs)
+
+ if 'op' in inputs and inputs['op'] in ('download1', 'download2', 'download3'):
+ if "password" in inputs:
+ if self.passwords:
+ inputs['password'] = self.passwords.pop(0)
+ else:
+ self.fail("No or invalid passport")
+
+ if not self.premium:
+ found = re.search(self.WAIT_PATTERN, self.html)
+ if found:
+ wait_time = int(found.group(1)) + 1
+ self.setWait(wait_time, False)
+ else:
+ wait_time = 0
+
+ self.captcha = self.handleCaptcha(inputs)
+
+ if wait_time: self.wait()
+
+ self.errmsg = None
+ self.logDebug("getPostParameters {0}".format(i))
+ return inputs
+
+ else:
+ inputs['referer'] = self.pyfile.url
+
+ if self.premium:
+ inputs['method_premium'] = "Premium Download"
+ if 'method_free' in inputs: del inputs['method_free']
+ else:
+ inputs['method_free'] = "Free Download"
+ if 'method_premium' in inputs: del inputs['method_premium']
+
+ self.html = self.load(self.pyfile.url, post = inputs, ref = False)
+ self.errmsg = None
+
+ else: self.parseError('FORM: %s' % (inputs['op'] if 'op' in inputs else 'UNKNOWN'))
+
getInfo = create_getInfo(StreamcloudEu)
| {"golden_diff": "diff --git a/module/plugins/hoster/StreamcloudEu.py b/module/plugins/hoster/StreamcloudEu.py\n--- a/module/plugins/hoster/StreamcloudEu.py\n+++ b/module/plugins/hoster/StreamcloudEu.py\n@@ -1,5 +1,7 @@\n # -*- coding: utf-8 -*-\n from module.plugins.hoster.XFileSharingPro import XFileSharingPro, create_getInfo\n+from module.network.HTTPRequest import HTTPRequest\n+from time import sleep\n import re\n \n class StreamcloudEu(XFileSharingPro):\n@@ -15,7 +17,7 @@\n DIRECT_LINK_PATTERN = r'file: \"(http://(stor|cdn)\\d+\\.streamcloud.eu:?\\d*/.*/video\\.mp4)\",'\n \n def setup(self):\n- super(XFileSharingPro, self).setup()\n+ super(StreamcloudEu, self).setup()\n self.multiDL = True\n \n def getDownloadLink(self):\n@@ -23,6 +25,87 @@\n if found:\n return found.group(1)\n \n- return super(XFileSharingPro, self).getDownloadLink()\n+ for i in range(5):\n+ self.logDebug(\"Getting download link: #%d\" % i)\n+ data = self.getPostParameters()\n+ httpRequest = HTTPRequest(options=self.req.options)\n+ httpRequest.cj = self.req.cj\n+ sleep(10)\n+ self.html = httpRequest.load(self.pyfile.url, post = data, referer=False, cookies=True, decode = True)\n+ self.header = httpRequest.header\n+\n+ found = re.search(\"Location\\s*:\\s*(.*)\", self.header, re.I)\n+ if found:\n+ break\n+\n+ found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)\n+ if found:\n+ break\n+\n+ else:\n+ if self.errmsg and 'captcha' in self.errmsg:\n+ self.fail(\"No valid captcha code entered\")\n+ else:\n+ self.fail(\"Download link not found\")\n+\n+ return found.group(1)\n+\n+ def getPostParameters(self):\n+ for i in range(3):\n+ if not self.errmsg: self.checkErrors()\n+\n+ if hasattr(self,\"FORM_PATTERN\"):\n+ action, inputs = self.parseHtmlForm(self.FORM_PATTERN)\n+ else:\n+ action, inputs = self.parseHtmlForm(input_names={\"op\": re.compile(\"^download\")})\n+\n+ if not inputs:\n+ action, inputs = self.parseHtmlForm('F1')\n+ if not inputs:\n+ if self.errmsg:\n+ self.retry()\n+ else:\n+ self.parseError(\"Form not found\")\n+\n+ self.logDebug(self.HOSTER_NAME, inputs)\n+\n+ if 'op' in inputs and inputs['op'] in ('download1', 'download2', 'download3'):\n+ if \"password\" in inputs:\n+ if self.passwords:\n+ inputs['password'] = self.passwords.pop(0)\n+ else:\n+ self.fail(\"No or invalid passport\")\n+\n+ if not self.premium:\n+ found = re.search(self.WAIT_PATTERN, self.html)\n+ if found:\n+ wait_time = int(found.group(1)) + 1\n+ self.setWait(wait_time, False)\n+ else:\n+ wait_time = 0\n+\n+ self.captcha = self.handleCaptcha(inputs)\n+\n+ if wait_time: self.wait()\n+\n+ self.errmsg = None\n+ self.logDebug(\"getPostParameters {0}\".format(i))\n+ return inputs\n+\n+ else:\n+ inputs['referer'] = self.pyfile.url\n+\n+ if self.premium:\n+ inputs['method_premium'] = \"Premium Download\"\n+ if 'method_free' in inputs: del inputs['method_free']\n+ else:\n+ inputs['method_free'] = \"Free Download\"\n+ if 'method_premium' in inputs: del inputs['method_premium']\n+\n+ self.html = self.load(self.pyfile.url, post = inputs, ref = False)\n+ self.errmsg = None\n+\n+ else: self.parseError('FORM: %s' % (inputs['op'] if 'op' in inputs else 'UNKNOWN'))\n+\n \n getInfo = create_getInfo(StreamcloudEu)\n", "issue": "Implemented StreamcloudEu plugin based on XFileSharingPro\nResolves #128\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom module.plugins.hoster.XFileSharingPro import XFileSharingPro, create_getInfo\nimport re\n\nclass StreamcloudEu(XFileSharingPro):\n __name__ = \"StreamcloudEu\"\n __type__ = \"hoster\"\n 
__pattern__ = r\"http://(www\\.)?streamcloud\\.eu/\\S+\"\n __version__ = \"0.01\"\n __description__ = \"\"\"Streamcloud.eu hoster plugin\"\"\"\n __author_name__ = (\"seoester\")\n __author_mail__ = (\"[email protected]\")\n\n HOSTER_NAME = \"streamcloud.eu\"\n DIRECT_LINK_PATTERN = r'file: \"(http://(stor|cdn)\\d+\\.streamcloud.eu:?\\d*/.*/video\\.mp4)\",'\n\n def setup(self):\n super(XFileSharingPro, self).setup()\n self.multiDL = True\n\n def getDownloadLink(self):\n found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)\n if found:\n return found.group(1)\n\n return super(XFileSharingPro, self).getDownloadLink()\n\ngetInfo = create_getInfo(StreamcloudEu)\n", "path": "module/plugins/hoster/StreamcloudEu.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom module.plugins.hoster.XFileSharingPro import XFileSharingPro, create_getInfo\nfrom module.network.HTTPRequest import HTTPRequest\nfrom time import sleep\nimport re\n\nclass StreamcloudEu(XFileSharingPro):\n __name__ = \"StreamcloudEu\"\n __type__ = \"hoster\"\n __pattern__ = r\"http://(www\\.)?streamcloud\\.eu/\\S+\"\n __version__ = \"0.01\"\n __description__ = \"\"\"Streamcloud.eu hoster plugin\"\"\"\n __author_name__ = (\"seoester\")\n __author_mail__ = (\"[email protected]\")\n\n HOSTER_NAME = \"streamcloud.eu\"\n DIRECT_LINK_PATTERN = r'file: \"(http://(stor|cdn)\\d+\\.streamcloud.eu:?\\d*/.*/video\\.mp4)\",'\n\n def setup(self):\n super(StreamcloudEu, self).setup()\n self.multiDL = True\n\n def getDownloadLink(self):\n found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)\n if found:\n return found.group(1)\n\n for i in range(5):\n self.logDebug(\"Getting download link: #%d\" % i)\n data = self.getPostParameters()\n httpRequest = HTTPRequest(options=self.req.options)\n httpRequest.cj = self.req.cj\n sleep(10)\n self.html = httpRequest.load(self.pyfile.url, post = data, referer=False, cookies=True, decode = True)\n self.header = httpRequest.header\n\n found = re.search(\"Location\\s*:\\s*(.*)\", self.header, re.I)\n if found:\n break\n\n found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)\n if found:\n break\n\n else:\n if self.errmsg and 'captcha' in self.errmsg:\n self.fail(\"No valid captcha code entered\")\n else:\n self.fail(\"Download link not found\")\n\n return found.group(1)\n\n def getPostParameters(self):\n for i in range(3):\n if not self.errmsg: self.checkErrors()\n\n if hasattr(self,\"FORM_PATTERN\"):\n action, inputs = self.parseHtmlForm(self.FORM_PATTERN)\n else:\n action, inputs = self.parseHtmlForm(input_names={\"op\": re.compile(\"^download\")})\n\n if not inputs:\n action, inputs = self.parseHtmlForm('F1')\n if not inputs:\n if self.errmsg:\n self.retry()\n else:\n self.parseError(\"Form not found\")\n\n self.logDebug(self.HOSTER_NAME, inputs)\n\n if 'op' in inputs and inputs['op'] in ('download1', 'download2', 'download3'):\n if \"password\" in inputs:\n if self.passwords:\n inputs['password'] = self.passwords.pop(0)\n else:\n self.fail(\"No or invalid passport\")\n\n if not self.premium:\n found = re.search(self.WAIT_PATTERN, self.html)\n if found:\n wait_time = int(found.group(1)) + 1\n self.setWait(wait_time, False)\n else:\n wait_time = 0\n\n self.captcha = self.handleCaptcha(inputs)\n\n if wait_time: self.wait()\n\n self.errmsg = None\n self.logDebug(\"getPostParameters {0}\".format(i))\n return inputs\n\n else:\n inputs['referer'] = self.pyfile.url\n\n if self.premium:\n inputs['method_premium'] = \"Premium Download\"\n if 'method_free' in inputs: del 
inputs['method_free']\n else:\n inputs['method_free'] = \"Free Download\"\n if 'method_premium' in inputs: del inputs['method_premium']\n\n self.html = self.load(self.pyfile.url, post = inputs, ref = False)\n self.errmsg = None\n\n else: self.parseError('FORM: %s' % (inputs['op'] if 'op' in inputs else 'UNKNOWN'))\n\n\ngetInfo = create_getInfo(StreamcloudEu)\n", "path": "module/plugins/hoster/StreamcloudEu.py"}]} | 590 | 946 |
gh_patches_debug_36759 | rasdani/github-patches | git_diff | tensorflow__addons-206 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Generate API docs
As our repository matures it's important to have api docs to improve user experience. As discussed in #38 we will also be able to remove the table of contents off the main README.
Should we host on https://readthedocs.org/ or is there something else recommended @ewilderj @dynamicwebpaige @karmel ?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/docs/build_docs.py`
Content:
```
1 # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """ Modified from the tfdocs example api reference docs generation script.
16
17 This script generates API reference docs.
18
19 Install pre-requisites:
20 $> pip install -U git+https://github.com/tensorflow/docs
21 $> pip install artifacts/tensorflow_addons-*.whl
22
23 Generate Docs:
24 $> from the repo root run: python tools/docs/build_docs.py
25 """
26
27 from __future__ import absolute_import
28 from __future__ import division
29 from __future__ import print_function
30
31 from absl import app
32 from absl import flags
33
34 import tensorflow_addons
35 from tensorflow_docs.api_generator import generate_lib
36 from tensorflow_docs.api_generator import public_api
37
38 PROJECT_SHORT_NAME = 'tfaddons'
39 PROJECT_FULL_NAME = 'TensorFlow Addons'
40 CODE_URL_PREFIX = 'https://github.com/tensorflow/addons/tree/master/tensorflow_addons'
41
42 FLAGS = flags.FLAGS
43
44 flags.DEFINE_string(
45 'output_dir',
46 default='/addons/docs/api_docs/python/',
47 help='Where to write the resulting docs to.')
48
49
50 def main(argv):
51 if argv[1:]:
52 raise ValueError('Unrecognized arguments: {}'.format(argv[1:]))
53
54 doc_generator = generate_lib.DocGenerator(
55 root_title=PROJECT_FULL_NAME,
56 # Replace `tensorflow_docs` with your module, here.
57 py_modules=[(PROJECT_SHORT_NAME, tensorflow_addons)],
58 code_url_prefix=CODE_URL_PREFIX,
59 # This callback cleans up a lot of aliases caused by internal imports.
60 callbacks=[public_api.local_definitions_filter])
61
62 doc_generator.build(FLAGS.output_dir)
63
64 print('Output docs to: ', FLAGS.output_dir)
65
66
67 if __name__ == '__main__':
68 app.run(main)
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/docs/build_docs.py b/tools/docs/build_docs.py
--- a/tools/docs/build_docs.py
+++ b/tools/docs/build_docs.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
-""" Modified from the tfdocs example api reference docs generation script.
+"""Modified from the tfdocs example api reference docs generation script.
This script generates API reference docs.
@@ -31,19 +31,30 @@
from absl import app
from absl import flags
-import tensorflow_addons
+import tensorflow_addons as tfa
+
from tensorflow_docs.api_generator import generate_lib
+from tensorflow_docs.api_generator import parser
from tensorflow_docs.api_generator import public_api
-PROJECT_SHORT_NAME = 'tfaddons'
+from tensorflow.python.util import tf_inspect
+
+# Use tensorflow's `tf_inspect`, which is aware of `tf_decorator`.
+parser.tf_inspect = tf_inspect
+
+PROJECT_SHORT_NAME = 'tfa'
PROJECT_FULL_NAME = 'TensorFlow Addons'
-CODE_URL_PREFIX = 'https://github.com/tensorflow/addons/tree/master/tensorflow_addons'
FLAGS = flags.FLAGS
+flags.DEFINE_string(
+ 'git_branch',
+ default='master',
+ help='The name of the corresponding branch on github.')
+
flags.DEFINE_string(
'output_dir',
- default='/addons/docs/api_docs/python/',
+ default='docs/api_docs/python/',
help='Where to write the resulting docs to.')
@@ -51,11 +62,16 @@
if argv[1:]:
raise ValueError('Unrecognized arguments: {}'.format(argv[1:]))
+ code_url_prefix = ('https://github.com/tensorflow/addons/tree/'
+ '{git_branch}/tensorflow_addons'.format(
+ git_branch=FLAGS.git_branch))
+
doc_generator = generate_lib.DocGenerator(
root_title=PROJECT_FULL_NAME,
# Replace `tensorflow_docs` with your module, here.
- py_modules=[(PROJECT_SHORT_NAME, tensorflow_addons)],
- code_url_prefix=CODE_URL_PREFIX,
+ py_modules=[(PROJECT_SHORT_NAME, tfa)],
+ code_url_prefix=code_url_prefix,
+ private_map={'tfa': ['__version__', 'utils', 'version']},
# This callback cleans up a lot of aliases caused by internal imports.
callbacks=[public_api.local_definitions_filter])
| {"golden_diff": "diff --git a/tools/docs/build_docs.py b/tools/docs/build_docs.py\n--- a/tools/docs/build_docs.py\n+++ b/tools/docs/build_docs.py\n@@ -12,7 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n # ==============================================================================\n-\"\"\" Modified from the tfdocs example api reference docs generation script.\n+\"\"\"Modified from the tfdocs example api reference docs generation script.\n \n This script generates API reference docs.\n \n@@ -31,19 +31,30 @@\n from absl import app\n from absl import flags\n \n-import tensorflow_addons\n+import tensorflow_addons as tfa\n+\n from tensorflow_docs.api_generator import generate_lib\n+from tensorflow_docs.api_generator import parser\n from tensorflow_docs.api_generator import public_api\n \n-PROJECT_SHORT_NAME = 'tfaddons'\n+from tensorflow.python.util import tf_inspect\n+\n+# Use tensorflow's `tf_inspect`, which is aware of `tf_decorator`.\n+parser.tf_inspect = tf_inspect\n+\n+PROJECT_SHORT_NAME = 'tfa'\n PROJECT_FULL_NAME = 'TensorFlow Addons'\n-CODE_URL_PREFIX = 'https://github.com/tensorflow/addons/tree/master/tensorflow_addons'\n \n FLAGS = flags.FLAGS\n \n+flags.DEFINE_string(\n+ 'git_branch',\n+ default='master',\n+ help='The name of the corresponding branch on github.')\n+\n flags.DEFINE_string(\n 'output_dir',\n- default='/addons/docs/api_docs/python/',\n+ default='docs/api_docs/python/',\n help='Where to write the resulting docs to.')\n \n \n@@ -51,11 +62,16 @@\n if argv[1:]:\n raise ValueError('Unrecognized arguments: {}'.format(argv[1:]))\n \n+ code_url_prefix = ('https://github.com/tensorflow/addons/tree/'\n+ '{git_branch}/tensorflow_addons'.format(\n+ git_branch=FLAGS.git_branch))\n+\n doc_generator = generate_lib.DocGenerator(\n root_title=PROJECT_FULL_NAME,\n # Replace `tensorflow_docs` with your module, here.\n- py_modules=[(PROJECT_SHORT_NAME, tensorflow_addons)],\n- code_url_prefix=CODE_URL_PREFIX,\n+ py_modules=[(PROJECT_SHORT_NAME, tfa)],\n+ code_url_prefix=code_url_prefix,\n+ private_map={'tfa': ['__version__', 'utils', 'version']},\n # This callback cleans up a lot of aliases caused by internal imports.\n callbacks=[public_api.local_definitions_filter])\n", "issue": "Generate API docs\nAs our repository matures it's important to have api docs to improve user experience. As discussed in #38 we will also be able to remove the table of contents off the main README.\r\n\r\nShould we host on https://readthedocs.org/ or is there something else recommended @ewilderj @dynamicwebpaige @karmel ?\n", "before_files": [{"content": "# Copyright 2015 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\" Modified from the tfdocs example api reference docs generation script.\n\nThis script generates API reference docs.\n\nInstall pre-requisites:\n$> pip install -U git+https://github.com/tensorflow/docs\n$> pip install artifacts/tensorflow_addons-*.whl\n\nGenerate Docs:\n$> from the repo root run: python tools/docs/build_docs.py\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import app\nfrom absl import flags\n\nimport tensorflow_addons\nfrom tensorflow_docs.api_generator import generate_lib\nfrom tensorflow_docs.api_generator import public_api\n\nPROJECT_SHORT_NAME = 'tfaddons'\nPROJECT_FULL_NAME = 'TensorFlow Addons'\nCODE_URL_PREFIX = 'https://github.com/tensorflow/addons/tree/master/tensorflow_addons'\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n 'output_dir',\n default='/addons/docs/api_docs/python/',\n help='Where to write the resulting docs to.')\n\n\ndef main(argv):\n if argv[1:]:\n raise ValueError('Unrecognized arguments: {}'.format(argv[1:]))\n\n doc_generator = generate_lib.DocGenerator(\n root_title=PROJECT_FULL_NAME,\n # Replace `tensorflow_docs` with your module, here.\n py_modules=[(PROJECT_SHORT_NAME, tensorflow_addons)],\n code_url_prefix=CODE_URL_PREFIX,\n # This callback cleans up a lot of aliases caused by internal imports.\n callbacks=[public_api.local_definitions_filter])\n\n doc_generator.build(FLAGS.output_dir)\n\n print('Output docs to: ', FLAGS.output_dir)\n\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "tools/docs/build_docs.py"}], "after_files": [{"content": "# Copyright 2015 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Modified from the tfdocs example api reference docs generation script.\n\nThis script generates API reference docs.\n\nInstall pre-requisites:\n$> pip install -U git+https://github.com/tensorflow/docs\n$> pip install artifacts/tensorflow_addons-*.whl\n\nGenerate Docs:\n$> from the repo root run: python tools/docs/build_docs.py\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import app\nfrom absl import flags\n\nimport tensorflow_addons as tfa\n\nfrom tensorflow_docs.api_generator import generate_lib\nfrom tensorflow_docs.api_generator import parser\nfrom tensorflow_docs.api_generator import public_api\n\nfrom tensorflow.python.util import tf_inspect\n\n# Use tensorflow's `tf_inspect`, which is aware of `tf_decorator`.\nparser.tf_inspect = tf_inspect\n\nPROJECT_SHORT_NAME = 'tfa'\nPROJECT_FULL_NAME = 'TensorFlow Addons'\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n 'git_branch',\n default='master',\n help='The name of the corresponding branch on github.')\n\nflags.DEFINE_string(\n 'output_dir',\n default='docs/api_docs/python/',\n help='Where to write the resulting docs to.')\n\n\ndef main(argv):\n if argv[1:]:\n raise ValueError('Unrecognized arguments: {}'.format(argv[1:]))\n\n code_url_prefix = ('https://github.com/tensorflow/addons/tree/'\n '{git_branch}/tensorflow_addons'.format(\n git_branch=FLAGS.git_branch))\n\n doc_generator = generate_lib.DocGenerator(\n root_title=PROJECT_FULL_NAME,\n # Replace `tensorflow_docs` with your module, here.\n py_modules=[(PROJECT_SHORT_NAME, tfa)],\n code_url_prefix=code_url_prefix,\n private_map={'tfa': ['__version__', 'utils', 'version']},\n # This callback cleans up a lot of aliases caused by internal imports.\n callbacks=[public_api.local_definitions_filter])\n\n doc_generator.build(FLAGS.output_dir)\n\n print('Output docs to: ', FLAGS.output_dir)\n\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "tools/docs/build_docs.py"}]} | 956 | 531 |
gh_patches_debug_13568 | rasdani/github-patches | git_diff | facebookresearch__xformers-40 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Logo doesn't appear on documentation sub-pages
# 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
Currently, the `xFormers` logo only appears on the main docs page and the `what_is_xformers` page which is present in the same directory as it, but not on the other sub-pages. I was wondering whether setting the Sphinx option `html_logo` in the `conf.py` file would fix this.
Would be happy to make a PR for this, let me know what you think.
--- END ISSUE ---
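For illustration, the change the reporter suggests would be a one-line addition to `docs/source/conf.py`. This is a hedged sketch of the proposed `html_logo` option with an assumed logo path, not the patch that was actually merged; the accepted diff further below instead corrects the stale `fairinternal` repository URLs:
```python
# Hypothetical addition to docs/source/conf.py -- the logo path is an assumption.
# Sphinx's html_logo option places the given image at the top of the sidebar on
# every generated page, so the logo no longer depends on per-page theme behaviour.
html_logo = "_static/logo.png"
```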
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 # type: ignore
8 # Configuration file for the Sphinx documentation builder.
9 #
10 # This file only contains a selection of the most common options. For a full
11 # list see the documentation:
12 # https://www.sphinx-doc.org/en/master/usage/configuration.html
13
14 # -- Path setup --------------------------------------------------------------
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 import os
21 import sys
22 from typing import Any, List
23
24 # The theme to use for HTML and HTML Help pages. See the documentation for
25 # a list of builtin themes.
26 #
27 from recommonmark.transform import AutoStructify
28
29 sys.path.insert(0, os.path.abspath("../.."))
30
31 # -- Project information -----------------------------------------------------
32
33 project = "xFormers"
34 copyright = "2021, Facebook AI Research"
35 author = "Facebook AI Research"
36
37 # The full version, including alpha/beta/rc tags
38 release = "0.0.1"
39
40
41 # -- General configuration ---------------------------------------------------
42
43 # Add any Sphinx extension module names here, as strings. They can be
44 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
45 # ones.
46 extensions = [
47 "sphinx.ext.autodoc",
48 "sphinx.ext.autosectionlabel",
49 "sphinx.ext.napoleon", # support NumPy and Google style docstrings
50 "recommonmark",
51 "sphinx.ext.intersphinx",
52 "sphinx.ext.todo",
53 "sphinx.ext.coverage",
54 "sphinx.ext.mathjax",
55 "sphinx.ext.viewcode",
56 "sphinx.ext.githubpages",
57 "sphinx.ext.doctest",
58 "sphinx.ext.ifconfig",
59 ]
60
61 # autosectionlabel throws warnings if section names are duplicated.
62 # The following tells autosectionlabel to not throw a warning for
63 # duplicated section names that are in different documents.
64 autosectionlabel_prefix_document = True
65
66 # -- Configurations for plugins ------------
67 napoleon_google_docstring = True
68 napoleon_include_init_with_doc = True
69 napoleon_include_special_with_doc = True
70 napoleon_numpy_docstring = False
71 napoleon_use_rtype = False
72 autodoc_inherit_docstrings = False
73 autodoc_member_order = "bysource"
74
75 intersphinx_mapping = {
76 "python": ("https://docs.python.org/3.6", None),
77 "numpy": ("https://docs.scipy.org/doc/numpy/", None),
78 "torch": ("https://pytorch.org/docs/master/", None),
79 }
80 # -------------------------
81
82 # Add any paths that contain templates here, relative to this directory.
83 templates_path = ["_templates"]
84
85 # List of patterns, relative to source directory, that match files and
86 # directories to ignore when looking for source files.
87 # This pattern also affects html_static_path and html_extra_path.
88 exclude_patterns: List[Any] = []
89
90 # The suffix(es) of source filenames.
91 # You can specify multiple suffix as a list of string:
92 #
93 source_suffix = [".rst", ".md"]
94
95 # The master toctree document.
96 master_doc = "index"
97
98 # If true, `todo` and `todoList` produce output, else they produce nothing.
99 todo_include_todos = True
100
101 # -- Options for HTML output -------------------------------------------------
102
103
104 html_theme = "pytorch_sphinx_theme"
105 templates_path = ["_templates"]
106
107
108 # Add any paths that contain custom static files (such as style sheets) here,
109 # Theme options are theme-specific and customize the look and feel of a theme
110 # further. For a list of options available for each theme, see the
111 # documentation.
112 #
113 html_theme_options = {
114 "includehidden": True,
115 "canonical_url": "https://fairinternal.github.io/xformers",
116 "pytorch_project": "docs",
117 "logo_only": True, # default = False
118 }
119
120 # relative to this directory. They are copied after the builtin static files,
121 # so a file named "default.css" will overwrite the builtin "default.css".
122 html_static_path = ["_static"]
123
124 # setting custom stylesheets https://stackoverflow.com/a/34420612
125 html_context = {"css_files": ["_static/css/customize.css"]}
126
127 # -- Options for HTMLHelp output ------------------------------------------
128
129 # Output file base name for HTML help builder.
130 htmlhelp_basename = "xformersdocs"
131 github_doc_root = "https://github.com/fairinternal/xformers/blob/v0.1/"
132
133
134 # Over-ride PyTorch Sphinx css
135 def setup(app):
136 app.add_config_value(
137 "recommonmark_config",
138 {
139 "url_resolver": lambda url: github_doc_root + url,
140 "auto_toc_tree_section": "Contents",
141 "enable_math": True,
142 "enable_inline_math": True,
143 "enable_eval_rst": True,
144 "enable_auto_toc_tree": True,
145 },
146 True,
147 )
148 app.add_transform(AutoStructify)
149 app.add_css_file("css/customize.css")
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -112,7 +112,7 @@
#
html_theme_options = {
"includehidden": True,
- "canonical_url": "https://fairinternal.github.io/xformers",
+ "canonical_url": "https://facebookresearch.github.io/xformers",
"pytorch_project": "docs",
"logo_only": True, # default = False
}
@@ -128,7 +128,7 @@
# Output file base name for HTML help builder.
htmlhelp_basename = "xformersdocs"
-github_doc_root = "https://github.com/fairinternal/xformers/blob/v0.1/"
+github_doc_root = "https://github.com/facebookresearch/xformers/tree/main/docs/"
# Over-ride PyTorch Sphinx css
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -112,7 +112,7 @@\n #\n html_theme_options = {\n \"includehidden\": True,\n- \"canonical_url\": \"https://fairinternal.github.io/xformers\",\n+ \"canonical_url\": \"https://facebookresearch.github.io/xformers\",\n \"pytorch_project\": \"docs\",\n \"logo_only\": True, # default = False\n }\n@@ -128,7 +128,7 @@\n \n # Output file base name for HTML help builder.\n htmlhelp_basename = \"xformersdocs\"\n-github_doc_root = \"https://github.com/fairinternal/xformers/blob/v0.1/\"\n+github_doc_root = \"https://github.com/facebookresearch/xformers/tree/main/docs/\"\n \n \n # Over-ride PyTorch Sphinx css\n", "issue": "Logo doesn't appear on documentation sub-pages\n# \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nCurrently, the `xFormers` logo only appears on the main docs page and the `what_is_xformers` page which is present in the same directory as it, but not on the other sub-pages. I was wondering whether setting the Sphinx option `html_logo` in the `conf.py` file would fix this.\r\n\r\nWould be happy to make a PR for this, let me know what you think.\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# type: ignore\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nfrom typing import Any, List\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nfrom recommonmark.transform import AutoStructify\n\nsys.path.insert(0, os.path.abspath(\"../..\"))\n\n# -- Project information -----------------------------------------------------\n\nproject = \"xFormers\"\ncopyright = \"2021, Facebook AI Research\"\nauthor = \"Facebook AI Research\"\n\n# The full version, including alpha/beta/rc tags\nrelease = \"0.0.1\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.napoleon\", # support NumPy and Google style docstrings\n \"recommonmark\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.ifconfig\",\n]\n\n# autosectionlabel throws warnings if section names are duplicated.\n# The following tells autosectionlabel to not throw a warning for\n# duplicated section names that are in different documents.\nautosectionlabel_prefix_document = True\n\n# -- Configurations for plugins ------------\nnapoleon_google_docstring = True\nnapoleon_include_init_with_doc = True\nnapoleon_include_special_with_doc = True\nnapoleon_numpy_docstring = False\nnapoleon_use_rtype = False\nautodoc_inherit_docstrings = False\nautodoc_member_order = \"bysource\"\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3.6\", None),\n \"numpy\": (\"https://docs.scipy.org/doc/numpy/\", None),\n \"torch\": (\"https://pytorch.org/docs/master/\", None),\n}\n# -------------------------\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns: List[Any] = []\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n# -- Options for HTML output -------------------------------------------------\n\n\nhtml_theme = \"pytorch_sphinx_theme\"\ntemplates_path = [\"_templates\"]\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"includehidden\": True,\n \"canonical_url\": \"https://fairinternal.github.io/xformers\",\n \"pytorch_project\": \"docs\",\n \"logo_only\": True, # default = False\n}\n\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# setting custom stylesheets https://stackoverflow.com/a/34420612\nhtml_context = {\"css_files\": [\"_static/css/customize.css\"]}\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"xformersdocs\"\ngithub_doc_root = \"https://github.com/fairinternal/xformers/blob/v0.1/\"\n\n\n# Over-ride PyTorch Sphinx css\ndef setup(app):\n app.add_config_value(\n \"recommonmark_config\",\n {\n \"url_resolver\": lambda url: github_doc_root + url,\n \"auto_toc_tree_section\": \"Contents\",\n \"enable_math\": True,\n \"enable_inline_math\": True,\n \"enable_eval_rst\": True,\n \"enable_auto_toc_tree\": True,\n },\n True,\n )\n app.add_transform(AutoStructify)\n app.add_css_file(\"css/customize.css\")\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# type: ignore\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nfrom typing import Any, List\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nfrom recommonmark.transform import AutoStructify\n\nsys.path.insert(0, os.path.abspath(\"../..\"))\n\n# -- Project information -----------------------------------------------------\n\nproject = \"xFormers\"\ncopyright = \"2021, Facebook AI Research\"\nauthor = \"Facebook AI Research\"\n\n# The full version, including alpha/beta/rc tags\nrelease = \"0.0.1\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.napoleon\", # support NumPy and Google style docstrings\n \"recommonmark\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.ifconfig\",\n]\n\n# autosectionlabel throws warnings if section names are duplicated.\n# The following tells autosectionlabel to not throw a warning for\n# duplicated section names that are in different documents.\nautosectionlabel_prefix_document = True\n\n# -- Configurations for plugins ------------\nnapoleon_google_docstring = True\nnapoleon_include_init_with_doc = True\nnapoleon_include_special_with_doc = True\nnapoleon_numpy_docstring = False\nnapoleon_use_rtype = False\nautodoc_inherit_docstrings = False\nautodoc_member_order = \"bysource\"\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3.6\", None),\n \"numpy\": (\"https://docs.scipy.org/doc/numpy/\", None),\n \"torch\": (\"https://pytorch.org/docs/master/\", None),\n}\n# -------------------------\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns: List[Any] = []\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n# -- Options for HTML output -------------------------------------------------\n\n\nhtml_theme = \"pytorch_sphinx_theme\"\ntemplates_path = [\"_templates\"]\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"includehidden\": True,\n \"canonical_url\": \"https://facebookresearch.github.io/xformers\",\n \"pytorch_project\": \"docs\",\n \"logo_only\": True, # default = False\n}\n\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# setting custom stylesheets https://stackoverflow.com/a/34420612\nhtml_context = {\"css_files\": [\"_static/css/customize.css\"]}\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"xformersdocs\"\ngithub_doc_root = \"https://github.com/facebookresearch/xformers/tree/main/docs/\"\n\n\n# Over-ride PyTorch Sphinx css\ndef setup(app):\n app.add_config_value(\n \"recommonmark_config\",\n {\n \"url_resolver\": lambda url: github_doc_root + url,\n \"auto_toc_tree_section\": \"Contents\",\n \"enable_math\": True,\n \"enable_inline_math\": True,\n \"enable_eval_rst\": True,\n \"enable_auto_toc_tree\": True,\n },\n True,\n )\n app.add_transform(AutoStructify)\n app.add_css_file(\"css/customize.css\")\n", "path": "docs/source/conf.py"}]} | 1,856 | 199 |
gh_patches_debug_35287 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-6529 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pkgutil.iter_modules with arbitrary path
## Description of the issue
The iter_modules patch implemented in #5959 has a bug where the path must start with the _MEIPASS or it will throw an assertion error.
The normal iter_modules function can take any valid path. Your code first calls that:
https://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py#L37
and later asserts it starts with _MEIPASS
https://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py#L59
which means that a path outside of the executable will throw the assertion error.
I think when implementing it was overlooked that this function could be used to look at a path outside the executable path.
### Context information (for bug reports)
* PyInstaller Version 4.8
* All OS and python versions
I will have a look into creating a pull request to fix this issue.
I think the solution is to change the assertion to an if statement to only run the code below that if it starts with _MEIPASS and thus could be bundled in the executable.
--- END ISSUE ---
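The guard the reporter proposes can be sketched as a small helper. This is only an illustration of the described assert-to-if change, written against the names used in the run-time hook shown below; it is not the merged patch:
```python
import os
import sys

def _iter_bundled_paths(paths):
    """Sketch of the proposed fix: skip, rather than assert on, search paths
    that do not live under sys._MEIPASS. Only bundled paths can refer to
    PYZ-embedded packages; everything else was already covered by the call
    to the original pkgutil.iter_modules implementation."""
    sys_prefix = sys._MEIPASS + os.path.sep  # sys._MEIPASS exists only in frozen apps
    for pkg_path in paths:
        pkg_path = os.path.normpath(pkg_path)
        if not pkg_path.startswith(sys_prefix):
            continue  # outside the bundle: not an error, just not ours to list
        yield pkg_path
```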
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2021, PyInstaller Development Team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: Apache-2.0
10 #-----------------------------------------------------------------------------
11 #
12 # This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's FrozenImporter to list
13 # sub-modules embedded in the PYZ archive. The non-embedded modules (binary extensions, or .pyc modules in noarchive
14 # build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).
15 #
16 # The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to
17 # FrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while
18 # PyInstaller's FrozenImporter is registered as meta path finders (for use with sys.meta_path). Turning FrozenImporter
19 # into path entry finder, would seemingly require the latter to support on-filesystem resources (e.g., extension
20 # modules) in addition to PYZ-embedded ones.
21 #
22 # Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of
23 # original implementation with contents of PYZ archive from FrozenImporter's TOC.
24
25 import os
26 import pkgutil
27 import sys
28
29 from pyimod03_importers import FrozenImporter
30
31 _orig_pkgutil_iter_modules = pkgutil.iter_modules
32
33
34 def _pyi_pkgutil_iter_modules(path=None, prefix=''):
35 # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both binary
36 # extensions and compiled pyc modules in noarchive debug builds).
37 yield from _orig_pkgutil_iter_modules(path, prefix)
38
39 # Find the instance of PyInstaller's FrozenImporter.
40 for importer in pkgutil.iter_importers():
41 if isinstance(importer, FrozenImporter):
42 break
43 else:
44 return
45
46 if not path:
47 # Search for all top-level packages/modules. These will have no dots in their entry names.
48 for entry in importer.toc:
49 if entry.count('.') != 0:
50 continue
51 is_pkg = importer.is_package(entry)
52 yield pkgutil.ModuleInfo(importer, prefix + entry, is_pkg)
53 else:
54 # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.
55 SYS_PREFIX = sys._MEIPASS + os.path.sep
56 SYS_PREFIXLEN = len(SYS_PREFIX)
57 # Only single path is supported, and it must start with sys._MEIPASS.
58 pkg_path = os.path.normpath(path[0])
59 assert pkg_path.startswith(SYS_PREFIX)
60 # Construct package prefix from path...
61 pkg_prefix = pkg_path[SYS_PREFIXLEN:]
62 pkg_prefix = pkg_prefix.replace(os.path.sep, '.')
63 # ... and ensure it ends with a dot (so we can directly filter out the package itself).
64 if not pkg_prefix.endswith('.'):
65 pkg_prefix += '.'
66 pkg_prefix_len = len(pkg_prefix)
67
68 for entry in importer.toc:
69 if not entry.startswith(pkg_prefix):
70 continue
71 name = entry[pkg_prefix_len:]
72 if name.count('.') != 0:
73 continue
74 is_pkg = importer.is_package(entry)
75 yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)
76
77
78 pkgutil.iter_modules = _pyi_pkgutil_iter_modules
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
--- a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
+++ b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
@@ -43,7 +43,7 @@
else:
return
- if not path:
+ if path is None:
# Search for all top-level packages/modules. These will have no dots in their entry names.
for entry in importer.toc:
if entry.count('.') != 0:
@@ -54,25 +54,28 @@
# Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.
SYS_PREFIX = sys._MEIPASS + os.path.sep
SYS_PREFIXLEN = len(SYS_PREFIX)
- # Only single path is supported, and it must start with sys._MEIPASS.
- pkg_path = os.path.normpath(path[0])
- assert pkg_path.startswith(SYS_PREFIX)
- # Construct package prefix from path...
- pkg_prefix = pkg_path[SYS_PREFIXLEN:]
- pkg_prefix = pkg_prefix.replace(os.path.sep, '.')
- # ... and ensure it ends with a dot (so we can directly filter out the package itself).
- if not pkg_prefix.endswith('.'):
- pkg_prefix += '.'
- pkg_prefix_len = len(pkg_prefix)
- for entry in importer.toc:
- if not entry.startswith(pkg_prefix):
- continue
- name = entry[pkg_prefix_len:]
- if name.count('.') != 0:
+ for pkg_path in path:
+ pkg_path = os.path.normpath(pkg_path)
+ if not pkg_path.startswith(SYS_PREFIX):
+ # if the path does not start with sys._MEIPASS then it cannot be a bundled package.
continue
- is_pkg = importer.is_package(entry)
- yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)
+ # Construct package prefix from path...
+ pkg_prefix = pkg_path[SYS_PREFIXLEN:]
+ pkg_prefix = pkg_prefix.replace(os.path.sep, '.')
+ # ... and ensure it ends with a dot (so we can directly filter out the package itself).
+ if not pkg_prefix.endswith('.'):
+ pkg_prefix += '.'
+ pkg_prefix_len = len(pkg_prefix)
+
+ for entry in importer.toc:
+ if not entry.startswith(pkg_prefix):
+ continue
+ name = entry[pkg_prefix_len:]
+ if name.count('.') != 0:
+ continue
+ is_pkg = importer.is_package(entry)
+ yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)
pkgutil.iter_modules = _pyi_pkgutil_iter_modules
| {"golden_diff": "diff --git a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n--- a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n+++ b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n@@ -43,7 +43,7 @@\n else:\n return\n \n- if not path:\n+ if path is None:\n # Search for all top-level packages/modules. These will have no dots in their entry names.\n for entry in importer.toc:\n if entry.count('.') != 0:\n@@ -54,25 +54,28 @@\n # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.\n SYS_PREFIX = sys._MEIPASS + os.path.sep\n SYS_PREFIXLEN = len(SYS_PREFIX)\n- # Only single path is supported, and it must start with sys._MEIPASS.\n- pkg_path = os.path.normpath(path[0])\n- assert pkg_path.startswith(SYS_PREFIX)\n- # Construct package prefix from path...\n- pkg_prefix = pkg_path[SYS_PREFIXLEN:]\n- pkg_prefix = pkg_prefix.replace(os.path.sep, '.')\n- # ... and ensure it ends with a dot (so we can directly filter out the package itself).\n- if not pkg_prefix.endswith('.'):\n- pkg_prefix += '.'\n- pkg_prefix_len = len(pkg_prefix)\n \n- for entry in importer.toc:\n- if not entry.startswith(pkg_prefix):\n- continue\n- name = entry[pkg_prefix_len:]\n- if name.count('.') != 0:\n+ for pkg_path in path:\n+ pkg_path = os.path.normpath(pkg_path)\n+ if not pkg_path.startswith(SYS_PREFIX):\n+ # if the path does not start with sys._MEIPASS then it cannot be a bundled package.\n continue\n- is_pkg = importer.is_package(entry)\n- yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)\n+ # Construct package prefix from path...\n+ pkg_prefix = pkg_path[SYS_PREFIXLEN:]\n+ pkg_prefix = pkg_prefix.replace(os.path.sep, '.')\n+ # ... and ensure it ends with a dot (so we can directly filter out the package itself).\n+ if not pkg_prefix.endswith('.'):\n+ pkg_prefix += '.'\n+ pkg_prefix_len = len(pkg_prefix)\n+\n+ for entry in importer.toc:\n+ if not entry.startswith(pkg_prefix):\n+ continue\n+ name = entry[pkg_prefix_len:]\n+ if name.count('.') != 0:\n+ continue\n+ is_pkg = importer.is_package(entry)\n+ yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)\n \n \n pkgutil.iter_modules = _pyi_pkgutil_iter_modules\n", "issue": "pkgutil.iter_modules with arbitrary path\n## Description of the issue\r\nThe iter_modules patch implemented in #5959 has a bug where the path must start with the _MEIPASS or it will throw an assertion error.\r\n\r\nThe normal iter_modules function can take any valid path. 
Your code first calls that:\r\nhttps://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py#L37\r\n\r\nand later asserts it starts with _MEIPASS\r\nhttps://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py#L59\r\n\r\nwhich means that a path outside of the executable will throw the assertion error.\r\n\r\nI think when implementing it was overlooked that this function could be used to look at a path outside the executable path.\r\n\r\n### Context information (for bug reports)\r\n\r\n* PyInstaller Version 4.8\r\n* All OS and python versions\r\n\r\nI will have a look into creating a pull request to fix this issue.\r\nI think the solution is to change the assertion to an if statement to only run the code below that if it starts with _MEIPASS and thus could be bundled in the executable.\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2021, PyInstaller Development Team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: Apache-2.0\n#-----------------------------------------------------------------------------\n#\n# This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's FrozenImporter to list\n# sub-modules embedded in the PYZ archive. The non-embedded modules (binary extensions, or .pyc modules in noarchive\n# build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).\n#\n# The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to\n# FrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while\n# PyInstaller's FrozenImporter is registered as meta path finders (for use with sys.meta_path). Turning FrozenImporter\n# into path entry finder, would seemingly require the latter to support on-filesystem resources (e.g., extension\n# modules) in addition to PYZ-embedded ones.\n#\n# Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of\n# original implementation with contents of PYZ archive from FrozenImporter's TOC.\n\nimport os\nimport pkgutil\nimport sys\n\nfrom pyimod03_importers import FrozenImporter\n\n_orig_pkgutil_iter_modules = pkgutil.iter_modules\n\n\ndef _pyi_pkgutil_iter_modules(path=None, prefix=''):\n # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both binary\n # extensions and compiled pyc modules in noarchive debug builds).\n yield from _orig_pkgutil_iter_modules(path, prefix)\n\n # Find the instance of PyInstaller's FrozenImporter.\n for importer in pkgutil.iter_importers():\n if isinstance(importer, FrozenImporter):\n break\n else:\n return\n\n if not path:\n # Search for all top-level packages/modules. 
These will have no dots in their entry names.\n for entry in importer.toc:\n if entry.count('.') != 0:\n continue\n is_pkg = importer.is_package(entry)\n yield pkgutil.ModuleInfo(importer, prefix + entry, is_pkg)\n else:\n # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.\n SYS_PREFIX = sys._MEIPASS + os.path.sep\n SYS_PREFIXLEN = len(SYS_PREFIX)\n # Only single path is supported, and it must start with sys._MEIPASS.\n pkg_path = os.path.normpath(path[0])\n assert pkg_path.startswith(SYS_PREFIX)\n # Construct package prefix from path...\n pkg_prefix = pkg_path[SYS_PREFIXLEN:]\n pkg_prefix = pkg_prefix.replace(os.path.sep, '.')\n # ... and ensure it ends with a dot (so we can directly filter out the package itself).\n if not pkg_prefix.endswith('.'):\n pkg_prefix += '.'\n pkg_prefix_len = len(pkg_prefix)\n\n for entry in importer.toc:\n if not entry.startswith(pkg_prefix):\n continue\n name = entry[pkg_prefix_len:]\n if name.count('.') != 0:\n continue\n is_pkg = importer.is_package(entry)\n yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)\n\n\npkgutil.iter_modules = _pyi_pkgutil_iter_modules\n", "path": "PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2021, PyInstaller Development Team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: Apache-2.0\n#-----------------------------------------------------------------------------\n#\n# This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's FrozenImporter to list\n# sub-modules embedded in the PYZ archive. The non-embedded modules (binary extensions, or .pyc modules in noarchive\n# build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).\n#\n# The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to\n# FrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while\n# PyInstaller's FrozenImporter is registered as meta path finders (for use with sys.meta_path). Turning FrozenImporter\n# into path entry finder, would seemingly require the latter to support on-filesystem resources (e.g., extension\n# modules) in addition to PYZ-embedded ones.\n#\n# Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of\n# original implementation with contents of PYZ archive from FrozenImporter's TOC.\n\nimport os\nimport pkgutil\nimport sys\n\nfrom pyimod03_importers import FrozenImporter\n\n_orig_pkgutil_iter_modules = pkgutil.iter_modules\n\n\ndef _pyi_pkgutil_iter_modules(path=None, prefix=''):\n # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both binary\n # extensions and compiled pyc modules in noarchive debug builds).\n yield from _orig_pkgutil_iter_modules(path, prefix)\n\n # Find the instance of PyInstaller's FrozenImporter.\n for importer in pkgutil.iter_importers():\n if isinstance(importer, FrozenImporter):\n break\n else:\n return\n\n if path is None:\n # Search for all top-level packages/modules. 
These will have no dots in their entry names.\n for entry in importer.toc:\n if entry.count('.') != 0:\n continue\n is_pkg = importer.is_package(entry)\n yield pkgutil.ModuleInfo(importer, prefix + entry, is_pkg)\n else:\n # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.\n SYS_PREFIX = sys._MEIPASS + os.path.sep\n SYS_PREFIXLEN = len(SYS_PREFIX)\n\n for pkg_path in path:\n pkg_path = os.path.normpath(pkg_path)\n if not pkg_path.startswith(SYS_PREFIX):\n # if the path does not start with sys._MEIPASS then it cannot be a bundled package.\n continue\n # Construct package prefix from path...\n pkg_prefix = pkg_path[SYS_PREFIXLEN:]\n pkg_prefix = pkg_prefix.replace(os.path.sep, '.')\n # ... and ensure it ends with a dot (so we can directly filter out the package itself).\n if not pkg_prefix.endswith('.'):\n pkg_prefix += '.'\n pkg_prefix_len = len(pkg_prefix)\n\n for entry in importer.toc:\n if not entry.startswith(pkg_prefix):\n continue\n name = entry[pkg_prefix_len:]\n if name.count('.') != 0:\n continue\n is_pkg = importer.is_package(entry)\n yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)\n\n\npkgutil.iter_modules = _pyi_pkgutil_iter_modules\n", "path": "PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py"}]} | 1,429 | 620 |
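A note on the PyInstaller fix captured in the record above: the patch replaces a hard `assert pkg_path.startswith(SYS_PREFIX)` with a guard that simply skips any search path outside the frozen bundle, and generalizes from a single path to a loop over all given paths. A minimal sketch of that pattern follows; `paths`, `sys_prefix`, and `toc` are stand-ins for the hook's `path`, `SYS_PREFIX`, and `importer.toc`, not part of the actual patch.

```python
import os

def iter_bundled_entries(paths, sys_prefix, toc):
    """Yield TOC entries under each path, skipping paths outside the bundle.

    Sketch only, assuming the structures from the hook shown in the record.
    """
    for pkg_path in paths:
        pkg_path = os.path.normpath(pkg_path)
        # Before the fix this was `assert pkg_path.startswith(sys_prefix)`,
        # which raised for paths outside the executable; now they are skipped.
        if not pkg_path.startswith(sys_prefix):
            continue
        pkg_prefix = pkg_path[len(sys_prefix):].replace(os.path.sep, ".")
        if not pkg_prefix.endswith("."):
            pkg_prefix += "."
        yield from (entry for entry in toc if entry.startswith(pkg_prefix))
```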
gh_patches_debug_1650 | rasdani/github-patches | git_diff | ivy-llc__ivy-13273 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
unravel_index
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/jax/numpy/indexing.py`
Content:
```
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import (
4 to_ivy_arrays_and_back,
5 )
6
7
8 @to_ivy_arrays_and_back
9 def diagonal(a, offset=0, axis1=0, axis2=1):
10 return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)
11
12
13 @to_ivy_arrays_and_back
14 def diag(v, k=0):
15 return ivy.diag(v, k=k)
16
17
18 @to_ivy_arrays_and_back
19 def diag_indices(n, ndim=2):
20 idx = ivy.arange(n, dtype=int)
21 return (idx,) * ndim
22
23
24 # take_along_axis
25 @to_ivy_arrays_and_back
26 def take_along_axis(arr, indices, axis, mode="fill"):
27 return ivy.take_along_axis(arr, indices, axis, mode=mode)
28
29
30 @to_ivy_arrays_and_back
31 def tril_indices(n_rows, n_cols=None, k=0):
32 return ivy.tril_indices(n_rows, n_cols, k)
33
34
35 @to_ivy_arrays_and_back
36 def triu_indices(n, k=0, m=None):
37 return ivy.triu_indices(n, m, k)
38
39
40 @to_ivy_arrays_and_back
41 def triu_indices_from(arr, k=0):
42 return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)
43
44
45 def tril_indices_from(arr, k=0):
46 return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/jax/numpy/indexing.py b/ivy/functional/frontends/jax/numpy/indexing.py
--- a/ivy/functional/frontends/jax/numpy/indexing.py
+++ b/ivy/functional/frontends/jax/numpy/indexing.py
@@ -44,3 +44,10 @@
def tril_indices_from(arr, k=0):
return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)
+
+
+# unravel_index
+@to_ivy_arrays_and_back
+def unravel_index(indices, shape):
+ ret = [x.astype("int64") for x in ivy.unravel_index(indices, shape)]
+ return tuple(ret)
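For context, the function added by this diff mirrors `jax.numpy.unravel_index`: it maps flat indices into per-dimension coordinate arrays. A small usage sketch, assuming ivy is installed and importing the helper directly from the module the patch touches (the values are illustrative):

```python
import ivy
from ivy.functional.frontends.jax.numpy.indexing import unravel_index

ivy.set_backend("numpy")  # sketch: any backend ivy supports should work

# Convert flat indices into per-dimension coordinates for a (3, 4) array.
rows, cols = unravel_index([1, 5, 11], (3, 4))
print(rows, cols)  # expected: [0 1 2] and [1 1 3], both cast to int64
```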
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/indexing.py b/ivy/functional/frontends/jax/numpy/indexing.py\n--- a/ivy/functional/frontends/jax/numpy/indexing.py\n+++ b/ivy/functional/frontends/jax/numpy/indexing.py\n@@ -44,3 +44,10 @@\n \n def tril_indices_from(arr, k=0):\n return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)\n+\n+\n+# unravel_index\n+@to_ivy_arrays_and_back\n+def unravel_index(indices, shape):\n+ ret = [x.astype(\"int64\") for x in ivy.unravel_index(indices, shape)]\n+ return tuple(ret)\n", "issue": "unravel_index\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@to_ivy_arrays_and_back\ndef diagonal(a, offset=0, axis1=0, axis2=1):\n return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)\n\n\n@to_ivy_arrays_and_back\ndef diag(v, k=0):\n return ivy.diag(v, k=k)\n\n\n@to_ivy_arrays_and_back\ndef diag_indices(n, ndim=2):\n idx = ivy.arange(n, dtype=int)\n return (idx,) * ndim\n\n\n# take_along_axis\n@to_ivy_arrays_and_back\ndef take_along_axis(arr, indices, axis, mode=\"fill\"):\n return ivy.take_along_axis(arr, indices, axis, mode=mode)\n\n\n@to_ivy_arrays_and_back\ndef tril_indices(n_rows, n_cols=None, k=0):\n return ivy.tril_indices(n_rows, n_cols, k)\n\n\n@to_ivy_arrays_and_back\ndef triu_indices(n, k=0, m=None):\n return ivy.triu_indices(n, m, k)\n\n\n@to_ivy_arrays_and_back\ndef triu_indices_from(arr, k=0):\n return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)\n\n\ndef tril_indices_from(arr, k=0):\n return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)\n", "path": "ivy/functional/frontends/jax/numpy/indexing.py"}], "after_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@to_ivy_arrays_and_back\ndef diagonal(a, offset=0, axis1=0, axis2=1):\n return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)\n\n\n@to_ivy_arrays_and_back\ndef diag(v, k=0):\n return ivy.diag(v, k=k)\n\n\n@to_ivy_arrays_and_back\ndef diag_indices(n, ndim=2):\n idx = ivy.arange(n, dtype=int)\n return (idx,) * ndim\n\n\n# take_along_axis\n@to_ivy_arrays_and_back\ndef take_along_axis(arr, indices, axis, mode=\"fill\"):\n return ivy.take_along_axis(arr, indices, axis, mode=mode)\n\n\n@to_ivy_arrays_and_back\ndef tril_indices(n_rows, n_cols=None, k=0):\n return ivy.tril_indices(n_rows, n_cols, k)\n\n\n@to_ivy_arrays_and_back\ndef triu_indices(n, k=0, m=None):\n return ivy.triu_indices(n, m, k)\n\n\n@to_ivy_arrays_and_back\ndef triu_indices_from(arr, k=0):\n return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)\n\n\ndef tril_indices_from(arr, k=0):\n return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)\n\n\n# unravel_index\n@to_ivy_arrays_and_back\ndef unravel_index(indices, shape):\n ret = [x.astype(\"int64\") for x in ivy.unravel_index(indices, shape)]\n return tuple(ret)\n", "path": "ivy/functional/frontends/jax/numpy/indexing.py"}]} | 701 | 162 |
gh_patches_debug_4175 | rasdani/github-patches | git_diff | cleanlab__cleanlab-965 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Revert #961 before release
Tensorflow version temporarily has an upper bound (`tensorflow<2.16.0`) in requirements-dev.txt.
scikit-learn version temporarily has an upper bound (`scikit-learn>=1.0,<1.4.0`) in setup.py
This needs to be reverted before releasing v2.6.0.
_Originally posted by @elisno in https://github.com/cleanlab/cleanlab/issues/961#issuecomment-1898968097_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 from setuptools.command.egg_info import egg_info
3
4 # To use a consistent encoding
5 from codecs import open
6 from os import path
7
8
9 class egg_info_ex(egg_info):
10 """Includes license file into `.egg-info` folder."""
11
12 def run(self):
13 # don't duplicate license into `.egg-info` when building a distribution
14 if not self.distribution.have_run.get("install", True):
15 # `install` command is in progress, copy license
16 self.mkpath(self.egg_info)
17 self.copy_file("LICENSE", self.egg_info)
18
19 egg_info.run(self)
20
21
22 here = path.abspath(path.dirname(__file__))
23
24 # Get the long description from the README file
25 with open(path.join(here, "README.md"), encoding="utf-8") as f:
26 long_description = f.read()
27
28 # Get version number and store it in __version__
29 exec(open("cleanlab/version.py").read())
30
31 DATALAB_REQUIRE = [
32 # Mainly for Datalab's data storage class.
33 # Still some type hints that require datasets
34 "datasets>=2.7.0",
35 ]
36
37 IMAGE_REQUIRE = DATALAB_REQUIRE + ["cleanvision>=0.3.2"]
38
39 EXTRAS_REQUIRE = {
40 "datalab": DATALAB_REQUIRE,
41 "image": IMAGE_REQUIRE,
42 "all": ["matplotlib>=3.5.1"],
43 }
44 EXTRAS_REQUIRE["all"] = list(set(sum(EXTRAS_REQUIRE.values(), [])))
45
46 setup(
47 name="cleanlab",
48 version=__version__,
49 license="AGPLv3+",
50 long_description=long_description,
51 long_description_content_type="text/markdown",
52 description="The standard package for data-centric AI, machine learning with label errors, "
53 "and automatically finding and fixing dataset issues in Python.",
54 url="https://cleanlab.ai",
55 project_urls={
56 "Documentation": "https://docs.cleanlab.ai",
57 "Bug Tracker": "https://github.com/cleanlab/cleanlab/issues",
58 "Source Code": "https://github.com/cleanlab/cleanlab",
59 },
60 author="Cleanlab Inc.",
61 author_email="[email protected]",
62 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
63 classifiers=[
64 "Development Status :: 4 - Beta",
65 "Intended Audience :: Developers",
66 "Intended Audience :: Education",
67 "Intended Audience :: Science/Research",
68 "Intended Audience :: Information Technology",
69 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
70 "Natural Language :: English",
71 # We believe this package works will these versions, but we do not guarantee it!
72 "Programming Language :: Python :: 3",
73 "Programming Language :: Python :: 3.7",
74 "Programming Language :: Python :: 3.8",
75 "Programming Language :: Python :: 3.9",
76 "Programming Language :: Python :: 3.10",
77 "Programming Language :: Python",
78 "Topic :: Software Development",
79 "Topic :: Scientific/Engineering",
80 "Topic :: Scientific/Engineering :: Mathematics",
81 "Topic :: Scientific/Engineering :: Artificial Intelligence",
82 "Topic :: Software Development :: Libraries",
83 "Topic :: Software Development :: Libraries :: Python Modules",
84 ],
85 python_requires=">=3.7",
86 # What does your project relate to?
87 keywords="machine_learning data_cleaning confident_learning classification weak_supervision "
88 "learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric",
89 # You can just specify the packages manually here if your project is
90 # simple. Or you can use find_packages().
91 packages=find_packages(exclude=[]),
92 # Include cleanlab license file.
93 include_package_data=True,
94 package_data={
95 "": ["LICENSE"],
96 },
97 license_files=("LICENSE",),
98 cmdclass={"egg_info": egg_info_ex},
99 # List run-time dependencies here. These will be installed by pip when
100 # your project is installed. For an analysis of "install_requires" vs pip's
101 # requirements files see:
102 # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/
103 install_requires=[
104 "numpy>=1.20.0",
105 "scikit-learn>=1.0,<1.4.0",
106 "tqdm>=4.53.0",
107 "pandas>=1.1.5",
108 "termcolor>=2.0.0,<2.4.0",
109 ],
110 extras_require=EXTRAS_REQUIRE,
111 )
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -102,7 +102,7 @@
# https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/
install_requires=[
"numpy>=1.20.0",
- "scikit-learn>=1.0,<1.4.0",
+ "scikit-learn>=1.0",
"tqdm>=4.53.0",
"pandas>=1.1.5",
"termcolor>=2.0.0,<2.4.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -102,7 +102,7 @@\n # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/\n install_requires=[\n \"numpy>=1.20.0\",\n- \"scikit-learn>=1.0,<1.4.0\",\n+ \"scikit-learn>=1.0\",\n \"tqdm>=4.53.0\",\n \"pandas>=1.1.5\",\n \"termcolor>=2.0.0,<2.4.0\",\n", "issue": "Revert #961 before release\nTensorflow version temporarily has an upper bound (`tensorflow<2.16.0`) in requirements-dev.txt.\r\nscikit-learn version temporarily has an upper bound (`scikit-learn>=1.0,<1.4.0`) in setup.py\r\n\r\nThis needs to be reverted before releasing v2.6.0.\r\n\r\n\r\n _Originally posted by @elisno in https://github.com/cleanlab/cleanlab/issues/961#issuecomment-1898968097_\r\n \n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom setuptools.command.egg_info import egg_info\n\n# To use a consistent encoding\nfrom codecs import open\nfrom os import path\n\n\nclass egg_info_ex(egg_info):\n \"\"\"Includes license file into `.egg-info` folder.\"\"\"\n\n def run(self):\n # don't duplicate license into `.egg-info` when building a distribution\n if not self.distribution.have_run.get(\"install\", True):\n # `install` command is in progress, copy license\n self.mkpath(self.egg_info)\n self.copy_file(\"LICENSE\", self.egg_info)\n\n egg_info.run(self)\n\n\nhere = path.abspath(path.dirname(__file__))\n\n# Get the long description from the README file\nwith open(path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\n# Get version number and store it in __version__\nexec(open(\"cleanlab/version.py\").read())\n\nDATALAB_REQUIRE = [\n # Mainly for Datalab's data storage class.\n # Still some type hints that require datasets\n \"datasets>=2.7.0\",\n]\n\nIMAGE_REQUIRE = DATALAB_REQUIRE + [\"cleanvision>=0.3.2\"]\n\nEXTRAS_REQUIRE = {\n \"datalab\": DATALAB_REQUIRE,\n \"image\": IMAGE_REQUIRE,\n \"all\": [\"matplotlib>=3.5.1\"],\n}\nEXTRAS_REQUIRE[\"all\"] = list(set(sum(EXTRAS_REQUIRE.values(), [])))\n\nsetup(\n name=\"cleanlab\",\n version=__version__,\n license=\"AGPLv3+\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n description=\"The standard package for data-centric AI, machine learning with label errors, \"\n \"and automatically finding and fixing dataset issues in Python.\",\n url=\"https://cleanlab.ai\",\n project_urls={\n \"Documentation\": \"https://docs.cleanlab.ai\",\n \"Bug Tracker\": \"https://github.com/cleanlab/cleanlab/issues\",\n \"Source Code\": \"https://github.com/cleanlab/cleanlab\",\n },\n author=\"Cleanlab Inc.\",\n author_email=\"[email protected]\",\n # See https://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Natural Language :: English\",\n # We believe this package works will these versions, but we do not guarantee it!\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n \"Topic :: Scientific/Engineering\",\n 
\"Topic :: Scientific/Engineering :: Mathematics\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n python_requires=\">=3.7\",\n # What does your project relate to?\n keywords=\"machine_learning data_cleaning confident_learning classification weak_supervision \"\n \"learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric\",\n # You can just specify the packages manually here if your project is\n # simple. Or you can use find_packages().\n packages=find_packages(exclude=[]),\n # Include cleanlab license file.\n include_package_data=True,\n package_data={\n \"\": [\"LICENSE\"],\n },\n license_files=(\"LICENSE\",),\n cmdclass={\"egg_info\": egg_info_ex},\n # List run-time dependencies here. These will be installed by pip when\n # your project is installed. For an analysis of \"install_requires\" vs pip's\n # requirements files see:\n # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/\n install_requires=[\n \"numpy>=1.20.0\",\n \"scikit-learn>=1.0,<1.4.0\",\n \"tqdm>=4.53.0\",\n \"pandas>=1.1.5\",\n \"termcolor>=2.0.0,<2.4.0\",\n ],\n extras_require=EXTRAS_REQUIRE,\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nfrom setuptools.command.egg_info import egg_info\n\n# To use a consistent encoding\nfrom codecs import open\nfrom os import path\n\n\nclass egg_info_ex(egg_info):\n \"\"\"Includes license file into `.egg-info` folder.\"\"\"\n\n def run(self):\n # don't duplicate license into `.egg-info` when building a distribution\n if not self.distribution.have_run.get(\"install\", True):\n # `install` command is in progress, copy license\n self.mkpath(self.egg_info)\n self.copy_file(\"LICENSE\", self.egg_info)\n\n egg_info.run(self)\n\n\nhere = path.abspath(path.dirname(__file__))\n\n# Get the long description from the README file\nwith open(path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\n# Get version number and store it in __version__\nexec(open(\"cleanlab/version.py\").read())\n\nDATALAB_REQUIRE = [\n # Mainly for Datalab's data storage class.\n # Still some type hints that require datasets\n \"datasets>=2.7.0\",\n]\n\nIMAGE_REQUIRE = DATALAB_REQUIRE + [\"cleanvision>=0.3.2\"]\n\nEXTRAS_REQUIRE = {\n \"datalab\": DATALAB_REQUIRE,\n \"image\": IMAGE_REQUIRE,\n \"all\": [\"matplotlib>=3.5.1\"],\n}\nEXTRAS_REQUIRE[\"all\"] = list(set(sum(EXTRAS_REQUIRE.values(), [])))\n\nsetup(\n name=\"cleanlab\",\n version=__version__,\n license=\"AGPLv3+\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n description=\"The standard package for data-centric AI, machine learning with label errors, \"\n \"and automatically finding and fixing dataset issues in Python.\",\n url=\"https://cleanlab.ai\",\n project_urls={\n \"Documentation\": \"https://docs.cleanlab.ai\",\n \"Bug Tracker\": \"https://github.com/cleanlab/cleanlab/issues\",\n \"Source Code\": \"https://github.com/cleanlab/cleanlab\",\n },\n author=\"Cleanlab Inc.\",\n author_email=\"[email protected]\",\n # See https://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: GNU 
Affero General Public License v3 or later (AGPLv3+)\",\n \"Natural Language :: English\",\n # We believe this package works will these versions, but we do not guarantee it!\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n python_requires=\">=3.7\",\n # What does your project relate to?\n keywords=\"machine_learning data_cleaning confident_learning classification weak_supervision \"\n \"learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric\",\n # You can just specify the packages manually here if your project is\n # simple. Or you can use find_packages().\n packages=find_packages(exclude=[]),\n # Include cleanlab license file.\n include_package_data=True,\n package_data={\n \"\": [\"LICENSE\"],\n },\n license_files=(\"LICENSE\",),\n cmdclass={\"egg_info\": egg_info_ex},\n # List run-time dependencies here. These will be installed by pip when\n # your project is installed. For an analysis of \"install_requires\" vs pip's\n # requirements files see:\n # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/\n install_requires=[\n \"numpy>=1.20.0\",\n \"scikit-learn>=1.0\",\n \"tqdm>=4.53.0\",\n \"pandas>=1.1.5\",\n \"termcolor>=2.0.0,<2.4.0\",\n ],\n extras_require=EXTRAS_REQUIRE,\n)\n", "path": "setup.py"}]} | 1,592 | 139 |
gh_patches_debug_31014 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3608 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement `tables.delete` RPC method
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/rpc/tables.py`
Content:
```
1 from typing import Optional, TypedDict
2
3 from modernrpc.core import rpc_method, REQUEST_KEY
4 from modernrpc.auth.basic import http_basic_auth_login_required
5
6 from db.tables.operations.select import get_table_info
7 from mathesar.rpc.exceptions.handlers import handle_rpc_exceptions
8 from mathesar.rpc.utils import connect
9
10
11 class TableInfo(TypedDict):
12 """
13 Information about a table.
14
15 Attributes:
16 oid: The `oid` of the table in the schema.
17 name: The name of the table.
18 schema: The `oid` of the schema where the table lives.
19 description: The description of the table.
20 """
21 oid: int
22 name: str
23 schema: int
24 description: Optional[str]
25
26
27 @rpc_method(name="tables.list")
28 @http_basic_auth_login_required
29 @handle_rpc_exceptions
30 def list_(*, schema_oid: int, database_id: int, **kwargs) -> list[TableInfo]:
31 """
32 List information about tables for a schema. Exposed as `list`.
33
34 Args:
35 schema_oid: Identity of the schema in the user's database.
36 database_id: The Django id of the database containing the table.
37
38 Returns:
39 A list of table details.
40 """
41 user = kwargs.get(REQUEST_KEY).user
42 with connect(database_id, user) as conn:
43 raw_table_info = get_table_info(schema_oid, conn)
44 return [
45 TableInfo(tab) for tab in raw_table_info
46 ]
47
```
Path: `db/tables/operations/drop.py`
Content:
```
1 from db.connection import execute_msar_func_with_engine
2
3
4 def drop_table(name, schema, engine, cascade=False, if_exists=False):
5 execute_msar_func_with_engine(engine, 'drop_table', schema, name, cascade, if_exists)
6
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/db/tables/operations/drop.py b/db/tables/operations/drop.py
--- a/db/tables/operations/drop.py
+++ b/db/tables/operations/drop.py
@@ -1,5 +1,21 @@
-from db.connection import execute_msar_func_with_engine
+from db.connection import execute_msar_func_with_engine, exec_msar_func
def drop_table(name, schema, engine, cascade=False, if_exists=False):
execute_msar_func_with_engine(engine, 'drop_table', schema, name, cascade, if_exists)
+
+
+def drop_table_from_database(table_oid, conn, cascade=False):
+ """
+ Drop a table.
+
+ Args:
+ table_oid: OID of the table to drop.
+ cascade: Whether to drop the dependent objects.
+
+ Returns:
+ Returns the fully qualified name of the dropped table.
+ """
+ return exec_msar_func(
+ conn, 'drop_table', table_oid, cascade
+ ).fetchone()[0]
diff --git a/mathesar/rpc/tables.py b/mathesar/rpc/tables.py
--- a/mathesar/rpc/tables.py
+++ b/mathesar/rpc/tables.py
@@ -4,6 +4,7 @@
from modernrpc.auth.basic import http_basic_auth_login_required
from db.tables.operations.select import get_table_info
+from db.tables.operations.drop import drop_table_from_database
from mathesar.rpc.exceptions.handlers import handle_rpc_exceptions
from mathesar.rpc.utils import connect
@@ -44,3 +45,25 @@
return [
TableInfo(tab) for tab in raw_table_info
]
+
+
+@rpc_method(name="tables.delete")
+@http_basic_auth_login_required
+@handle_rpc_exceptions
+def delete(
+ *, table_oid: int, database_id: int, cascade: bool = False, **kwargs
+) -> str:
+ """
+ Delete a table from a schema.
+
+ Args:
+ table_oid: Identity of the table in the user's database.
+ database_id: The Django id of the database containing the table.
+ cascade: Whether to drop the dependent objects.
+
+ Returns:
+ The name of the dropped table.
+ """
+ user = kwargs.get(REQUEST_KEY).user
+ with connect(database_id, user) as conn:
+ return drop_table_from_database(table_oid, conn, cascade)
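Once this patch is applied, the new method is reachable over the modern-rpc endpoint like any other `tables.*` call, authenticated via HTTP basic auth per the decorator. A hypothetical JSON-RPC client call is sketched below; the endpoint URL, credentials, and OIDs are placeholders, not values from the patch:

```python
import requests

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tables.delete",
    "params": {"table_oid": 12345, "database_id": 1, "cascade": False},
}
resp = requests.post(
    "http://localhost:8000/api/rpc/v0/",  # placeholder endpoint path
    json=payload,
    auth=("admin", "password"),  # the method requires HTTP basic auth
)
print(resp.json())  # e.g. {"jsonrpc": "2.0", "id": 1, "result": "<schema>.<table>"}
```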
| {"golden_diff": "diff --git a/db/tables/operations/drop.py b/db/tables/operations/drop.py\n--- a/db/tables/operations/drop.py\n+++ b/db/tables/operations/drop.py\n@@ -1,5 +1,21 @@\n-from db.connection import execute_msar_func_with_engine\n+from db.connection import execute_msar_func_with_engine, exec_msar_func\n \n \n def drop_table(name, schema, engine, cascade=False, if_exists=False):\n execute_msar_func_with_engine(engine, 'drop_table', schema, name, cascade, if_exists)\n+\n+\n+def drop_table_from_database(table_oid, conn, cascade=False):\n+ \"\"\"\n+ Drop a table.\n+\n+ Args:\n+ table_oid: OID of the table to drop.\n+ cascade: Whether to drop the dependent objects.\n+\n+ Returns:\n+ Returns the fully qualified name of the dropped table.\n+ \"\"\"\n+ return exec_msar_func(\n+ conn, 'drop_table', table_oid, cascade\n+ ).fetchone()[0]\ndiff --git a/mathesar/rpc/tables.py b/mathesar/rpc/tables.py\n--- a/mathesar/rpc/tables.py\n+++ b/mathesar/rpc/tables.py\n@@ -4,6 +4,7 @@\n from modernrpc.auth.basic import http_basic_auth_login_required\n \n from db.tables.operations.select import get_table_info\n+from db.tables.operations.drop import drop_table_from_database\n from mathesar.rpc.exceptions.handlers import handle_rpc_exceptions\n from mathesar.rpc.utils import connect\n \n@@ -44,3 +45,25 @@\n return [\n TableInfo(tab) for tab in raw_table_info\n ]\n+\n+\n+@rpc_method(name=\"tables.delete\")\n+@http_basic_auth_login_required\n+@handle_rpc_exceptions\n+def delete(\n+ *, table_oid: int, database_id: int, cascade: bool = False, **kwargs\n+) -> str:\n+ \"\"\"\n+ Delete a table from a schema.\n+\n+ Args:\n+ table_oid: Identity of the table in the user's database.\n+ database_id: The Django id of the database containing the table.\n+ cascade: Whether to drop the dependent objects.\n+\n+ Returns:\n+ The name of the dropped table.\n+ \"\"\"\n+ user = kwargs.get(REQUEST_KEY).user\n+ with connect(database_id, user) as conn:\n+ return drop_table_from_database(table_oid, conn, cascade)\n", "issue": "Implement `tables.delete` RPC method\n\n", "before_files": [{"content": "from typing import Optional, TypedDict\n\nfrom modernrpc.core import rpc_method, REQUEST_KEY\nfrom modernrpc.auth.basic import http_basic_auth_login_required\n\nfrom db.tables.operations.select import get_table_info\nfrom mathesar.rpc.exceptions.handlers import handle_rpc_exceptions\nfrom mathesar.rpc.utils import connect\n\n\nclass TableInfo(TypedDict):\n \"\"\"\n Information about a table.\n\n Attributes:\n oid: The `oid` of the table in the schema.\n name: The name of the table.\n schema: The `oid` of the schema where the table lives.\n description: The description of the table.\n \"\"\"\n oid: int\n name: str\n schema: int\n description: Optional[str]\n\n\n@rpc_method(name=\"tables.list\")\n@http_basic_auth_login_required\n@handle_rpc_exceptions\ndef list_(*, schema_oid: int, database_id: int, **kwargs) -> list[TableInfo]:\n \"\"\"\n List information about tables for a schema. 
Exposed as `list`.\n\n Args:\n schema_oid: Identity of the schema in the user's database.\n database_id: The Django id of the database containing the table.\n\n Returns:\n A list of table details.\n \"\"\"\n user = kwargs.get(REQUEST_KEY).user\n with connect(database_id, user) as conn:\n raw_table_info = get_table_info(schema_oid, conn)\n return [\n TableInfo(tab) for tab in raw_table_info\n ]\n", "path": "mathesar/rpc/tables.py"}, {"content": "from db.connection import execute_msar_func_with_engine\n\n\ndef drop_table(name, schema, engine, cascade=False, if_exists=False):\n execute_msar_func_with_engine(engine, 'drop_table', schema, name, cascade, if_exists)\n", "path": "db/tables/operations/drop.py"}], "after_files": [{"content": "from typing import Optional, TypedDict\n\nfrom modernrpc.core import rpc_method, REQUEST_KEY\nfrom modernrpc.auth.basic import http_basic_auth_login_required\n\nfrom db.tables.operations.select import get_table_info\nfrom db.tables.operations.drop import drop_table_from_database\nfrom mathesar.rpc.exceptions.handlers import handle_rpc_exceptions\nfrom mathesar.rpc.utils import connect\n\n\nclass TableInfo(TypedDict):\n \"\"\"\n Information about a table.\n\n Attributes:\n oid: The `oid` of the table in the schema.\n name: The name of the table.\n schema: The `oid` of the schema where the table lives.\n description: The description of the table.\n \"\"\"\n oid: int\n name: str\n schema: int\n description: Optional[str]\n\n\n@rpc_method(name=\"tables.list\")\n@http_basic_auth_login_required\n@handle_rpc_exceptions\ndef list_(*, schema_oid: int, database_id: int, **kwargs) -> list[TableInfo]:\n \"\"\"\n List information about tables for a schema. Exposed as `list`.\n\n Args:\n schema_oid: Identity of the schema in the user's database.\n database_id: The Django id of the database containing the table.\n\n Returns:\n A list of table details.\n \"\"\"\n user = kwargs.get(REQUEST_KEY).user\n with connect(database_id, user) as conn:\n raw_table_info = get_table_info(schema_oid, conn)\n return [\n TableInfo(tab) for tab in raw_table_info\n ]\n\n\n@rpc_method(name=\"tables.delete\")\n@http_basic_auth_login_required\n@handle_rpc_exceptions\ndef delete(\n *, table_oid: int, database_id: int, cascade: bool = False, **kwargs\n) -> str:\n \"\"\"\n Delete a table from a schema.\n\n Args:\n table_oid: Identity of the table in the user's database.\n database_id: The Django id of the database containing the table.\n cascade: Whether to drop the dependent objects.\n\n Returns:\n The name of the dropped table.\n \"\"\"\n user = kwargs.get(REQUEST_KEY).user\n with connect(database_id, user) as conn:\n return drop_table_from_database(table_oid, conn, cascade)\n", "path": "mathesar/rpc/tables.py"}, {"content": "from db.connection import execute_msar_func_with_engine, exec_msar_func\n\n\ndef drop_table(name, schema, engine, cascade=False, if_exists=False):\n execute_msar_func_with_engine(engine, 'drop_table', schema, name, cascade, if_exists)\n\n\ndef drop_table_from_database(table_oid, conn, cascade=False):\n \"\"\"\n Drop a table.\n\n Args:\n table_oid: OID of the table to drop.\n cascade: Whether to drop the dependent objects.\n\n Returns:\n Returns the fully qualified name of the dropped table.\n \"\"\"\n return exec_msar_func(\n conn, 'drop_table', table_oid, cascade\n ).fetchone()[0]\n", "path": "db/tables/operations/drop.py"}]} | 743 | 525 |
gh_patches_debug_2811 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-4707 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rules/participate in project
As you can see in the test, the participate_project rule behaves a bit weirdly for project group members. I think they should also be allowed to participate. The question is what it is used for.
Cool! The participate_project rule is a bit unexpected, so we should check that out. Like where it is used and what for. But anyway, will merge for now and add an issue.
_Originally posted by @fuzzylogic2000 in https://github.com/liqd/a4-meinberlin/pull/4077#pullrequestreview-837466549_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/projects/rules.py`
Content:
```
1 import rules
2 from rules.predicates import is_superuser
3
4 from adhocracy4.organisations.predicates import is_initiator
5 from adhocracy4.projects.predicates import is_live
6 from adhocracy4.projects.predicates import is_moderator
7 from adhocracy4.projects.predicates import is_prj_group_member
8 from adhocracy4.projects.predicates import is_project_member
9 from adhocracy4.projects.predicates import is_public
10 from adhocracy4.projects.predicates import is_semipublic
11
12 rules.remove_perm('a4projects.view_project')
13 rules.add_perm('a4projects.view_project',
14 is_superuser | is_initiator |
15 is_moderator | is_prj_group_member |
16 ((is_public | is_semipublic | is_project_member)
17 & is_live))
18
19 rules.set_perm('a4projects.participate_in_project',
20 is_superuser | is_initiator | is_moderator |
21 ((is_public | is_project_member) & is_live))
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/projects/rules.py b/meinberlin/apps/projects/rules.py
--- a/meinberlin/apps/projects/rules.py
+++ b/meinberlin/apps/projects/rules.py
@@ -17,5 +17,6 @@
& is_live))
rules.set_perm('a4projects.participate_in_project',
- is_superuser | is_initiator | is_moderator |
+ is_superuser | is_initiator |
+ is_moderator | is_prj_group_member |
((is_public | is_project_member) & is_live))
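With the rule updated, project group members pass the permission check the same way moderators and initiators do. Checking the permission from application code follows the usual `django-rules` pattern, as in this short sketch (`user` and `project` are assumed objects):

```python
# `user` is a Django user in the project's group, `project` the project object.
if user.has_perm("a4projects.participate_in_project", project):
    ...  # allow participation views/actions for this user
```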
| {"golden_diff": "diff --git a/meinberlin/apps/projects/rules.py b/meinberlin/apps/projects/rules.py\n--- a/meinberlin/apps/projects/rules.py\n+++ b/meinberlin/apps/projects/rules.py\n@@ -17,5 +17,6 @@\n & is_live))\n \n rules.set_perm('a4projects.participate_in_project',\n- is_superuser | is_initiator | is_moderator |\n+ is_superuser | is_initiator |\n+ is_moderator | is_prj_group_member |\n ((is_public | is_project_member) & is_live))\n", "issue": "rules/participate in project\nAs you can see in the test, the paricipate_project rule behaves a bit weird for project group members. I think, they should also be allowed to participate. The question is what it is used for.\r\n\r\nCool! The participate_project rule is a bit unexpected, so we should check that out. Like where it is used and what for. But anyway, will merge for now and add an issue.\r\n\r\n_Originally posted by @fuzzylogic2000 in https://github.com/liqd/a4-meinberlin/pull/4077#pullrequestreview-837466549_\n", "before_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom adhocracy4.organisations.predicates import is_initiator\nfrom adhocracy4.projects.predicates import is_live\nfrom adhocracy4.projects.predicates import is_moderator\nfrom adhocracy4.projects.predicates import is_prj_group_member\nfrom adhocracy4.projects.predicates import is_project_member\nfrom adhocracy4.projects.predicates import is_public\nfrom adhocracy4.projects.predicates import is_semipublic\n\nrules.remove_perm('a4projects.view_project')\nrules.add_perm('a4projects.view_project',\n is_superuser | is_initiator |\n is_moderator | is_prj_group_member |\n ((is_public | is_semipublic | is_project_member)\n & is_live))\n\nrules.set_perm('a4projects.participate_in_project',\n is_superuser | is_initiator | is_moderator |\n ((is_public | is_project_member) & is_live))\n", "path": "meinberlin/apps/projects/rules.py"}], "after_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom adhocracy4.organisations.predicates import is_initiator\nfrom adhocracy4.projects.predicates import is_live\nfrom adhocracy4.projects.predicates import is_moderator\nfrom adhocracy4.projects.predicates import is_prj_group_member\nfrom adhocracy4.projects.predicates import is_project_member\nfrom adhocracy4.projects.predicates import is_public\nfrom adhocracy4.projects.predicates import is_semipublic\n\nrules.remove_perm('a4projects.view_project')\nrules.add_perm('a4projects.view_project',\n is_superuser | is_initiator |\n is_moderator | is_prj_group_member |\n ((is_public | is_semipublic | is_project_member)\n & is_live))\n\nrules.set_perm('a4projects.participate_in_project',\n is_superuser | is_initiator |\n is_moderator | is_prj_group_member |\n ((is_public | is_project_member) & is_live))\n", "path": "meinberlin/apps/projects/rules.py"}]} | 639 | 124 |
gh_patches_debug_28703 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2540 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement InMemoryMetricExporter
See [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk_exporters/in-memory.md). This will be great for testing.
IMO this should be a "pull exporter" (metric reader atm) that has a method `get_metrics()` or similar to return metrics from the SDK.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16 import os
17 from abc import ABC, abstractmethod
18 from enum import Enum
19 from os import environ, linesep
20 from sys import stdout
21 from threading import Event, Thread
22 from typing import IO, Callable, Iterable, Optional, Sequence
23
24 from opentelemetry.context import (
25 _SUPPRESS_INSTRUMENTATION_KEY,
26 attach,
27 detach,
28 set_value,
29 )
30 from opentelemetry.sdk._metrics.metric_reader import MetricReader
31 from opentelemetry.sdk._metrics.point import AggregationTemporality, Metric
32 from opentelemetry.util._once import Once
33
34 _logger = logging.getLogger(__name__)
35
36
37 class MetricExportResult(Enum):
38 SUCCESS = 0
39 FAILURE = 1
40
41
42 class MetricExporter(ABC):
43 """Interface for exporting metrics.
44
45 Interface to be implemented by services that want to export metrics received
46 in their own format.
47 """
48
49 @property
50 def preferred_temporality(self) -> AggregationTemporality:
51 return AggregationTemporality.CUMULATIVE
52
53 @abstractmethod
54 def export(self, metrics: Sequence[Metric]) -> "MetricExportResult":
55 """Exports a batch of telemetry data.
56
57 Args:
58 metrics: The list of `opentelemetry.sdk._metrics.data.MetricData` objects to be exported
59
60 Returns:
61 The result of the export
62 """
63
64 @abstractmethod
65 def shutdown(self) -> None:
66 """Shuts down the exporter.
67
68 Called when the SDK is shut down.
69 """
70
71
72 class ConsoleMetricExporter(MetricExporter):
73 """Implementation of :class:`MetricExporter` that prints metrics to the
74 console.
75
76 This class can be used for diagnostic purposes. It prints the exported
77 metrics to the console STDOUT.
78 """
79
80 def __init__(
81 self,
82 out: IO = stdout,
83 formatter: Callable[[Metric], str] = lambda metric: metric.to_json()
84 + linesep,
85 ):
86 self.out = out
87 self.formatter = formatter
88
89 def export(self, metrics: Sequence[Metric]) -> MetricExportResult:
90 for metric in metrics:
91 self.out.write(self.formatter(metric))
92 self.out.flush()
93 return MetricExportResult.SUCCESS
94
95 def shutdown(self) -> None:
96 pass
97
98
99 class PeriodicExportingMetricReader(MetricReader):
100 """`PeriodicExportingMetricReader` is an implementation of `MetricReader`
101 that collects metrics based on a user-configurable time interval, and passes the
102 metrics to the configured exporter.
103 """
104
105 def __init__(
106 self,
107 exporter: MetricExporter,
108 export_interval_millis: Optional[float] = None,
109 export_timeout_millis: Optional[float] = None,
110 ) -> None:
111 super().__init__(preferred_temporality=exporter.preferred_temporality)
112 self._exporter = exporter
113 if export_interval_millis is None:
114 try:
115 export_interval_millis = float(
116 environ.get("OTEL_METRIC_EXPORT_INTERVAL", 60000)
117 )
118 except ValueError:
119 _logger.warning(
120 "Found invalid value for export interval, using default"
121 )
122 export_interval_millis = 60000
123 if export_timeout_millis is None:
124 try:
125 export_timeout_millis = float(
126 environ.get("OTEL_METRIC_EXPORT_TIMEOUT", 30000)
127 )
128 except ValueError:
129 _logger.warning(
130 "Found invalid value for export timeout, using default"
131 )
132 export_timeout_millis = 30000
133 self._export_interval_millis = export_interval_millis
134 self._export_timeout_millis = export_timeout_millis
135 self._shutdown = False
136 self._shutdown_event = Event()
137 self._shutdown_once = Once()
138 self._daemon_thread = Thread(target=self._ticker, daemon=True)
139 self._daemon_thread.start()
140 if hasattr(os, "register_at_fork"):
141 os.register_at_fork(
142 after_in_child=self._at_fork_reinit
143 ) # pylint: disable=protected-access
144
145 def _at_fork_reinit(self):
146 self._daemon_thread = Thread(target=self._ticker, daemon=True)
147 self._daemon_thread.start()
148
149 def _ticker(self) -> None:
150 interval_secs = self._export_interval_millis / 1e3
151 while not self._shutdown_event.wait(interval_secs):
152 self.collect()
153 # one last collection below before shutting down completely
154 self.collect()
155
156 def _receive_metrics(self, metrics: Iterable[Metric]) -> None:
157 if metrics is None:
158 return
159 token = attach(set_value(_SUPPRESS_INSTRUMENTATION_KEY, True))
160 try:
161 self._exporter.export(metrics)
162 except Exception as e: # pylint: disable=broad-except,invalid-name
163 _logger.exception("Exception while exporting metrics %s", str(e))
164 detach(token)
165
166 def shutdown(self) -> bool:
167 def _shutdown():
168 self._shutdown = True
169
170 did_set = self._shutdown_once.do_once(_shutdown)
171 if not did_set:
172 _logger.warning("Can't shutdown multiple times")
173 return False
174
175 self._shutdown_event.set()
176 self._daemon_thread.join()
177 self._exporter.shutdown()
178 return True
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py
@@ -18,8 +18,8 @@
from enum import Enum
from os import environ, linesep
from sys import stdout
-from threading import Event, Thread
-from typing import IO, Callable, Iterable, Optional, Sequence
+from threading import Event, RLock, Thread
+from typing import IO, Callable, Iterable, List, Optional, Sequence
from opentelemetry.context import (
_SUPPRESS_INSTRUMENTATION_KEY,
@@ -96,6 +96,36 @@
pass
+class InMemoryMetricReader(MetricReader):
+ """Implementation of :class:`MetricReader` that returns its metrics from :func:`metrics`.
+
+ This is useful for e.g. unit tests.
+ """
+
+ def __init__(
+ self,
+ preferred_temporality: AggregationTemporality = AggregationTemporality.CUMULATIVE,
+ ) -> None:
+ super().__init__(preferred_temporality=preferred_temporality)
+ self._lock = RLock()
+ self._metrics: List[Metric] = []
+
+ def get_metrics(self) -> List[Metric]:
+ """Reads and returns current metrics from the SDK"""
+ with self._lock:
+ self.collect()
+ metrics = self._metrics
+ self._metrics = []
+ return metrics
+
+ def _receive_metrics(self, metrics: Iterable[Metric]):
+ with self._lock:
+ self._metrics = list(metrics)
+
+ def shutdown(self) -> bool:
+ return True
+
+
class PeriodicExportingMetricReader(MetricReader):
"""`PeriodicExportingMetricReader` is an implementation of `MetricReader`
that collects metrics based on a user-configurable time interval, and passes the
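The reader added here is aimed at tests: instead of exporting on a timer, callers pull metrics on demand via `get_metrics()`, which collects and then drains the buffer. A minimal test sketch follows; the import paths assume the pre-1.12 `_metrics` layout used in this patch and may differ in later SDK releases:

```python
from opentelemetry.sdk._metrics import MeterProvider
from opentelemetry.sdk._metrics.export import InMemoryMetricReader

reader = InMemoryMetricReader()
provider = MeterProvider(metric_readers=[reader])
meter = provider.get_meter("test-meter")

counter = meter.create_counter("requests")
counter.add(1, {"route": "/home"})

# get_metrics() triggers a collection and returns the buffered points,
# clearing the internal list so each call sees only fresh data.
metrics = reader.get_metrics()
assert metrics, "expected at least one metric after recording"
```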
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py\n@@ -18,8 +18,8 @@\n from enum import Enum\n from os import environ, linesep\n from sys import stdout\n-from threading import Event, Thread\n-from typing import IO, Callable, Iterable, Optional, Sequence\n+from threading import Event, RLock, Thread\n+from typing import IO, Callable, Iterable, List, Optional, Sequence\n \n from opentelemetry.context import (\n _SUPPRESS_INSTRUMENTATION_KEY,\n@@ -96,6 +96,36 @@\n pass\n \n \n+class InMemoryMetricReader(MetricReader):\n+ \"\"\"Implementation of :class:`MetricReader` that returns its metrics from :func:`metrics`.\n+\n+ This is useful for e.g. unit tests.\n+ \"\"\"\n+\n+ def __init__(\n+ self,\n+ preferred_temporality: AggregationTemporality = AggregationTemporality.CUMULATIVE,\n+ ) -> None:\n+ super().__init__(preferred_temporality=preferred_temporality)\n+ self._lock = RLock()\n+ self._metrics: List[Metric] = []\n+\n+ def get_metrics(self) -> List[Metric]:\n+ \"\"\"Reads and returns current metrics from the SDK\"\"\"\n+ with self._lock:\n+ self.collect()\n+ metrics = self._metrics\n+ self._metrics = []\n+ return metrics\n+\n+ def _receive_metrics(self, metrics: Iterable[Metric]):\n+ with self._lock:\n+ self._metrics = list(metrics)\n+\n+ def shutdown(self) -> bool:\n+ return True\n+\n+\n class PeriodicExportingMetricReader(MetricReader):\n \"\"\"`PeriodicExportingMetricReader` is an implementation of `MetricReader`\n that collects metrics based on a user-configurable time interval, and passes the\n", "issue": "Implement InMemoryMetricExporter\nSee [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk_exporters/in-memory.md). 
This will be great for testing.\r\n\r\nIMO this should be a \"pull exporter\" (metric reader atm) that has a method `get_metrics()` or similar to return metrics from the SDK.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nimport os\nfrom abc import ABC, abstractmethod\nfrom enum import Enum\nfrom os import environ, linesep\nfrom sys import stdout\nfrom threading import Event, Thread\nfrom typing import IO, Callable, Iterable, Optional, Sequence\n\nfrom opentelemetry.context import (\n _SUPPRESS_INSTRUMENTATION_KEY,\n attach,\n detach,\n set_value,\n)\nfrom opentelemetry.sdk._metrics.metric_reader import MetricReader\nfrom opentelemetry.sdk._metrics.point import AggregationTemporality, Metric\nfrom opentelemetry.util._once import Once\n\n_logger = logging.getLogger(__name__)\n\n\nclass MetricExportResult(Enum):\n SUCCESS = 0\n FAILURE = 1\n\n\nclass MetricExporter(ABC):\n \"\"\"Interface for exporting metrics.\n\n Interface to be implemented by services that want to export metrics received\n in their own format.\n \"\"\"\n\n @property\n def preferred_temporality(self) -> AggregationTemporality:\n return AggregationTemporality.CUMULATIVE\n\n @abstractmethod\n def export(self, metrics: Sequence[Metric]) -> \"MetricExportResult\":\n \"\"\"Exports a batch of telemetry data.\n\n Args:\n metrics: The list of `opentelemetry.sdk._metrics.data.MetricData` objects to be exported\n\n Returns:\n The result of the export\n \"\"\"\n\n @abstractmethod\n def shutdown(self) -> None:\n \"\"\"Shuts down the exporter.\n\n Called when the SDK is shut down.\n \"\"\"\n\n\nclass ConsoleMetricExporter(MetricExporter):\n \"\"\"Implementation of :class:`MetricExporter` that prints metrics to the\n console.\n\n This class can be used for diagnostic purposes. 
It prints the exported\n metrics to the console STDOUT.\n \"\"\"\n\n def __init__(\n self,\n out: IO = stdout,\n formatter: Callable[[Metric], str] = lambda metric: metric.to_json()\n + linesep,\n ):\n self.out = out\n self.formatter = formatter\n\n def export(self, metrics: Sequence[Metric]) -> MetricExportResult:\n for metric in metrics:\n self.out.write(self.formatter(metric))\n self.out.flush()\n return MetricExportResult.SUCCESS\n\n def shutdown(self) -> None:\n pass\n\n\nclass PeriodicExportingMetricReader(MetricReader):\n \"\"\"`PeriodicExportingMetricReader` is an implementation of `MetricReader`\n that collects metrics based on a user-configurable time interval, and passes the\n metrics to the configured exporter.\n \"\"\"\n\n def __init__(\n self,\n exporter: MetricExporter,\n export_interval_millis: Optional[float] = None,\n export_timeout_millis: Optional[float] = None,\n ) -> None:\n super().__init__(preferred_temporality=exporter.preferred_temporality)\n self._exporter = exporter\n if export_interval_millis is None:\n try:\n export_interval_millis = float(\n environ.get(\"OTEL_METRIC_EXPORT_INTERVAL\", 60000)\n )\n except ValueError:\n _logger.warning(\n \"Found invalid value for export interval, using default\"\n )\n export_interval_millis = 60000\n if export_timeout_millis is None:\n try:\n export_timeout_millis = float(\n environ.get(\"OTEL_METRIC_EXPORT_TIMEOUT\", 30000)\n )\n except ValueError:\n _logger.warning(\n \"Found invalid value for export timeout, using default\"\n )\n export_timeout_millis = 30000\n self._export_interval_millis = export_interval_millis\n self._export_timeout_millis = export_timeout_millis\n self._shutdown = False\n self._shutdown_event = Event()\n self._shutdown_once = Once()\n self._daemon_thread = Thread(target=self._ticker, daemon=True)\n self._daemon_thread.start()\n if hasattr(os, \"register_at_fork\"):\n os.register_at_fork(\n after_in_child=self._at_fork_reinit\n ) # pylint: disable=protected-access\n\n def _at_fork_reinit(self):\n self._daemon_thread = Thread(target=self._ticker, daemon=True)\n self._daemon_thread.start()\n\n def _ticker(self) -> None:\n interval_secs = self._export_interval_millis / 1e3\n while not self._shutdown_event.wait(interval_secs):\n self.collect()\n # one last collection below before shutting down completely\n self.collect()\n\n def _receive_metrics(self, metrics: Iterable[Metric]) -> None:\n if metrics is None:\n return\n token = attach(set_value(_SUPPRESS_INSTRUMENTATION_KEY, True))\n try:\n self._exporter.export(metrics)\n except Exception as e: # pylint: disable=broad-except,invalid-name\n _logger.exception(\"Exception while exporting metrics %s\", str(e))\n detach(token)\n\n def shutdown(self) -> bool:\n def _shutdown():\n self._shutdown = True\n\n did_set = self._shutdown_once.do_once(_shutdown)\n if not did_set:\n _logger.warning(\"Can't shutdown multiple times\")\n return False\n\n self._shutdown_event.set()\n self._daemon_thread.join()\n self._exporter.shutdown()\n return True\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# 
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nimport os\nfrom abc import ABC, abstractmethod\nfrom enum import Enum\nfrom os import environ, linesep\nfrom sys import stdout\nfrom threading import Event, RLock, Thread\nfrom typing import IO, Callable, Iterable, List, Optional, Sequence\n\nfrom opentelemetry.context import (\n _SUPPRESS_INSTRUMENTATION_KEY,\n attach,\n detach,\n set_value,\n)\nfrom opentelemetry.sdk._metrics.metric_reader import MetricReader\nfrom opentelemetry.sdk._metrics.point import AggregationTemporality, Metric\nfrom opentelemetry.util._once import Once\n\n_logger = logging.getLogger(__name__)\n\n\nclass MetricExportResult(Enum):\n SUCCESS = 0\n FAILURE = 1\n\n\nclass MetricExporter(ABC):\n \"\"\"Interface for exporting metrics.\n\n Interface to be implemented by services that want to export metrics received\n in their own format.\n \"\"\"\n\n @property\n def preferred_temporality(self) -> AggregationTemporality:\n return AggregationTemporality.CUMULATIVE\n\n @abstractmethod\n def export(self, metrics: Sequence[Metric]) -> \"MetricExportResult\":\n \"\"\"Exports a batch of telemetry data.\n\n Args:\n metrics: The list of `opentelemetry.sdk._metrics.data.MetricData` objects to be exported\n\n Returns:\n The result of the export\n \"\"\"\n\n @abstractmethod\n def shutdown(self) -> None:\n \"\"\"Shuts down the exporter.\n\n Called when the SDK is shut down.\n \"\"\"\n\n\nclass ConsoleMetricExporter(MetricExporter):\n \"\"\"Implementation of :class:`MetricExporter` that prints metrics to the\n console.\n\n This class can be used for diagnostic purposes. It prints the exported\n metrics to the console STDOUT.\n \"\"\"\n\n def __init__(\n self,\n out: IO = stdout,\n formatter: Callable[[Metric], str] = lambda metric: metric.to_json()\n + linesep,\n ):\n self.out = out\n self.formatter = formatter\n\n def export(self, metrics: Sequence[Metric]) -> MetricExportResult:\n for metric in metrics:\n self.out.write(self.formatter(metric))\n self.out.flush()\n return MetricExportResult.SUCCESS\n\n def shutdown(self) -> None:\n pass\n\n\nclass InMemoryMetricReader(MetricReader):\n \"\"\"Implementation of :class:`MetricReader` that returns its metrics from :func:`metrics`.\n\n This is useful for e.g. 
unit tests.\n \"\"\"\n\n def __init__(\n self,\n preferred_temporality: AggregationTemporality = AggregationTemporality.CUMULATIVE,\n ) -> None:\n super().__init__(preferred_temporality=preferred_temporality)\n self._lock = RLock()\n self._metrics: List[Metric] = []\n\n def get_metrics(self) -> List[Metric]:\n \"\"\"Reads and returns current metrics from the SDK\"\"\"\n with self._lock:\n self.collect()\n metrics = self._metrics\n self._metrics = []\n return metrics\n\n def _receive_metrics(self, metrics: Iterable[Metric]):\n with self._lock:\n self._metrics = list(metrics)\n\n def shutdown(self) -> bool:\n return True\n\n\nclass PeriodicExportingMetricReader(MetricReader):\n \"\"\"`PeriodicExportingMetricReader` is an implementation of `MetricReader`\n that collects metrics based on a user-configurable time interval, and passes the\n metrics to the configured exporter.\n \"\"\"\n\n def __init__(\n self,\n exporter: MetricExporter,\n export_interval_millis: Optional[float] = None,\n export_timeout_millis: Optional[float] = None,\n ) -> None:\n super().__init__(preferred_temporality=exporter.preferred_temporality)\n self._exporter = exporter\n if export_interval_millis is None:\n try:\n export_interval_millis = float(\n environ.get(\"OTEL_METRIC_EXPORT_INTERVAL\", 60000)\n )\n except ValueError:\n _logger.warning(\n \"Found invalid value for export interval, using default\"\n )\n export_interval_millis = 60000\n if export_timeout_millis is None:\n try:\n export_timeout_millis = float(\n environ.get(\"OTEL_METRIC_EXPORT_TIMEOUT\", 30000)\n )\n except ValueError:\n _logger.warning(\n \"Found invalid value for export timeout, using default\"\n )\n export_timeout_millis = 30000\n self._export_interval_millis = export_interval_millis\n self._export_timeout_millis = export_timeout_millis\n self._shutdown = False\n self._shutdown_event = Event()\n self._shutdown_once = Once()\n self._daemon_thread = Thread(target=self._ticker, daemon=True)\n self._daemon_thread.start()\n if hasattr(os, \"register_at_fork\"):\n os.register_at_fork(\n after_in_child=self._at_fork_reinit\n ) # pylint: disable=protected-access\n\n def _at_fork_reinit(self):\n self._daemon_thread = Thread(target=self._ticker, daemon=True)\n self._daemon_thread.start()\n\n def _ticker(self) -> None:\n interval_secs = self._export_interval_millis / 1e3\n while not self._shutdown_event.wait(interval_secs):\n self.collect()\n # one last collection below before shutting down completely\n self.collect()\n\n def _receive_metrics(self, metrics: Iterable[Metric]) -> None:\n if metrics is None:\n return\n token = attach(set_value(_SUPPRESS_INSTRUMENTATION_KEY, True))\n try:\n self._exporter.export(metrics)\n except Exception as e: # pylint: disable=broad-except,invalid-name\n _logger.exception(\"Exception while exporting metrics %s\", str(e))\n detach(token)\n\n def shutdown(self) -> bool:\n def _shutdown():\n self._shutdown = True\n\n did_set = self._shutdown_once.do_once(_shutdown)\n if not did_set:\n _logger.warning(\"Can't shutdown multiple times\")\n return False\n\n self._shutdown_event.set()\n self._daemon_thread.join()\n self._exporter.shutdown()\n return True\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_metrics/export/__init__.py"}]} | 2,042 | 463 |
gh_patches_debug_23064 | rasdani/github-patches | git_diff | modoboa__modoboa-515 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
handle_mailbox_operations command not working
Hello,
This is a new Modoboa 1.1.0 installation. When I try to run:
```
python /opt/modoboa_admin/manage.py handle_mailbox_operations
```
I get the following error:
```
NotDefined: Application 'admin' and/or parameter 'HANDLE_MAILBOXES' not defined
```
According to the [documentation](http://modoboa.readthedocs.org/en/1.1.0/getting_started/configuration.html#admin-params) there should be an option in Modoboa->Parameters->General to activate this HANDLE_MAILBOXES. But I don't see it anywhere.
I tried to outsmart the system by inserting the value into the lib_parameter table, but no luck. I guess something else is required.
```
insert into lib_parameter (name, value) values ('admin.HANDLE_MAILBOXES', 'yes')
```
Am I missing something? Here is a screenshot of my admin interface, logged in as the default admin user:

--- END ISSUE ---
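For orientation before the files below: the settings form deletes the `handle_mailboxes` field whenever it cannot locate a Dovecot 2.x binary, so this symptom usually means the lookup failed (e.g. the web server's PATH lacks `/usr/sbin`). A minimal standalone sketch of a more robust lookup — `KNOWN_PATHS` is an assumption mirroring common packaging layouts, not Modoboa's actual code:

```python
import os
import shutil

KNOWN_PATHS = ("/usr/sbin/dovecot", "/usr/local/sbin/dovecot")  # assumed locations


def find_dovecot():
    """Locate the dovecot binary, tolerating a restricted PATH."""
    dpath = shutil.which("dovecot")  # honours the current PATH, like `which`
    if dpath:
        return dpath
    for fpath in KNOWN_PATHS:  # fall back to well-known install paths
        if os.path.isfile(fpath) and os.access(fpath, os.X_OK):
            return fpath
    return None


if __name__ == "__main__":
    print(find_dovecot() or "dovecot binary not found")
```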
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modoboa/extensions/admin/app_settings.py`
Content:
```
1 from django import forms
2 from django.utils.translation import ugettext_lazy
3 from modoboa.lib.formutils import YesNoField, SeparatorField
4 from modoboa.lib.sysutils import exec_cmd
5 from modoboa.lib import parameters
6
7
8 class AdminParametersForm(parameters.AdminParametersForm):
9 app = "admin"
10
11 mbsep = SeparatorField(label=ugettext_lazy("Mailboxes"))
12
13 handle_mailboxes = YesNoField(
14 label=ugettext_lazy("Handle mailboxes on filesystem"),
15 initial="no",
16 help_text=ugettext_lazy("Rename or remove mailboxes on the filesystem when they get renamed or removed within Modoboa")
17 )
18
19 mailboxes_owner = forms.CharField(
20 label=ugettext_lazy("Mailboxes ower"),
21 initial="vmail",
22 help_text=ugettext_lazy("The UNIX account who owns mailboxes on the filesystem")
23 )
24
25 default_domain_quota = forms.IntegerField(
26 label=ugettext_lazy("Default domain quota"),
27 initial=0,
28 help_text=ugettext_lazy(
29 "Default quota (in MB) applied to freshly created domains with no "
30 "value specified. A value of 0 means no quota."
31 ),
32 widget=forms.TextInput(attrs={'class': 'span2'})
33 )
34
35 auto_account_removal = YesNoField(
36 label=ugettext_lazy("Automatic account removal"),
37 initial="no",
38 help_text=ugettext_lazy("When a mailbox is removed, also remove the associated account")
39 )
40
41 # Visibility rules
42 visibility_rules = {
43 "mailboxes_owner": "handle_mailboxes=yes",
44 }
45
46 def __init__(self, *args, **kwargs):
47 super(AdminParametersForm, self).__init__(*args, **kwargs)
48 hide_fields = False
49 code, output = exec_cmd("which dovecot")
50 if not code:
51 dpath = output.strip()
52 try:
53 code, version = exec_cmd("%s --version" % dpath)
54 except OSError:
55 hide_fields = True
56 else:
57 if code or not version.strip().startswith("2"):
58 hide_fields = True
59 else:
60 hide_fields = True
61 if hide_fields:
62 del self.fields["handle_mailboxes"]
63 del self.fields["mailboxes_owner"]
64
65 def clean_default_domain_quota(self):
66 """Ensure quota is a positive integer."""
67 if self.cleaned_data['default_domain_quota'] < 0:
68 raise forms.ValidationError(
69 ugettext_lazy('Must be a positive integer')
70 )
71 return self.cleaned_data['default_domain_quota']
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/modoboa/extensions/admin/app_settings.py b/modoboa/extensions/admin/app_settings.py
--- a/modoboa/extensions/admin/app_settings.py
+++ b/modoboa/extensions/admin/app_settings.py
@@ -3,6 +3,7 @@
from modoboa.lib.formutils import YesNoField, SeparatorField
from modoboa.lib.sysutils import exec_cmd
from modoboa.lib import parameters
+import os
class AdminParametersForm(parameters.AdminParametersForm):
@@ -46,9 +47,16 @@
def __init__(self, *args, **kwargs):
super(AdminParametersForm, self).__init__(*args, **kwargs)
hide_fields = False
+ dpath = None
code, output = exec_cmd("which dovecot")
+ known_paths = ("/usr/sbin/dovecot", "/usr/local/sbin/dovecot")
if not code:
dpath = output.strip()
+ else:
+ for fpath in known_paths:
+ if os.path.isfile(fpath) and os.access(fpath, os.X_OK):
+ dpath = fpath
+ if dpath:
try:
code, version = exec_cmd("%s --version" % dpath)
except OSError:
| {"golden_diff": "diff --git a/modoboa/extensions/admin/app_settings.py b/modoboa/extensions/admin/app_settings.py\n--- a/modoboa/extensions/admin/app_settings.py\n+++ b/modoboa/extensions/admin/app_settings.py\n@@ -3,6 +3,7 @@\n from modoboa.lib.formutils import YesNoField, SeparatorField\n from modoboa.lib.sysutils import exec_cmd\n from modoboa.lib import parameters\n+import os\n \n \n class AdminParametersForm(parameters.AdminParametersForm):\n@@ -46,9 +47,16 @@\n def __init__(self, *args, **kwargs):\n super(AdminParametersForm, self).__init__(*args, **kwargs)\n hide_fields = False\n+ dpath = None\n code, output = exec_cmd(\"which dovecot\")\n+ known_paths = (\"/usr/sbin/dovecot\", \"/usr/local/sbin/dovecot\")\n if not code:\n dpath = output.strip()\n+ else:\n+ for fpath in known_paths:\n+ if os.path.isfile(fpath) and os.access(fpath, os.X_OK):\n+ dpath = fpath\n+ if dpath:\n try:\n code, version = exec_cmd(\"%s --version\" % dpath)\n except OSError:\n", "issue": "handle_mailbox_operations command not working\nHello,\n\nThis is a new Modoboa 1.1.0 installation. When I try to run:\n\n```\npython /opt/modoboa_admin/manage.py handle_mailbox_operations\n```\n\nI get the following error:\n\n```\nNotDefined: Application 'admin' and/or parameter 'HANDLE_MAILBOXES' not defined\n```\n\nAccording to the [documentation](http://modoboa.readthedocs.org/en/1.1.0/getting_started/configuration.html#admin-params) there should be an option in Modoboa->Parameters->General to activate this HANDLE_MAILBOXES. But I don't see it anywhere.\n\nI tried to outsmart the system by inserting the value in the lib_parameter table but no luck. I guess something else is required.\n\n```\ninsert into lib_parameter (name, value) values ('admin.HANDLE_MAILBOXES', 'yes')\n```\n\nAm I missing something ? Here is the screenshot of my admin interface, logged as the default admin user:\n\n\n", "before_files": [{"content": "from django import forms\nfrom django.utils.translation import ugettext_lazy\nfrom modoboa.lib.formutils import YesNoField, SeparatorField\nfrom modoboa.lib.sysutils import exec_cmd\nfrom modoboa.lib import parameters\n\n\nclass AdminParametersForm(parameters.AdminParametersForm):\n app = \"admin\"\n\n mbsep = SeparatorField(label=ugettext_lazy(\"Mailboxes\"))\n\n handle_mailboxes = YesNoField(\n label=ugettext_lazy(\"Handle mailboxes on filesystem\"),\n initial=\"no\",\n help_text=ugettext_lazy(\"Rename or remove mailboxes on the filesystem when they get renamed or removed within Modoboa\")\n )\n\n mailboxes_owner = forms.CharField(\n label=ugettext_lazy(\"Mailboxes ower\"),\n initial=\"vmail\",\n help_text=ugettext_lazy(\"The UNIX account who owns mailboxes on the filesystem\")\n )\n\n default_domain_quota = forms.IntegerField(\n label=ugettext_lazy(\"Default domain quota\"),\n initial=0,\n help_text=ugettext_lazy(\n \"Default quota (in MB) applied to freshly created domains with no \"\n \"value specified. 
A value of 0 means no quota.\"\n ),\n widget=forms.TextInput(attrs={'class': 'span2'})\n )\n\n auto_account_removal = YesNoField(\n label=ugettext_lazy(\"Automatic account removal\"),\n initial=\"no\",\n help_text=ugettext_lazy(\"When a mailbox is removed, also remove the associated account\")\n )\n\n # Visibility rules\n visibility_rules = {\n \"mailboxes_owner\": \"handle_mailboxes=yes\",\n }\n\n def __init__(self, *args, **kwargs):\n super(AdminParametersForm, self).__init__(*args, **kwargs)\n hide_fields = False\n code, output = exec_cmd(\"which dovecot\")\n if not code:\n dpath = output.strip()\n try:\n code, version = exec_cmd(\"%s --version\" % dpath)\n except OSError:\n hide_fields = True\n else:\n if code or not version.strip().startswith(\"2\"):\n hide_fields = True\n else:\n hide_fields = True\n if hide_fields:\n del self.fields[\"handle_mailboxes\"]\n del self.fields[\"mailboxes_owner\"]\n\n def clean_default_domain_quota(self):\n \"\"\"Ensure quota is a positive integer.\"\"\"\n if self.cleaned_data['default_domain_quota'] < 0:\n raise forms.ValidationError(\n ugettext_lazy('Must be a positive integer')\n )\n return self.cleaned_data['default_domain_quota']\n", "path": "modoboa/extensions/admin/app_settings.py"}], "after_files": [{"content": "from django import forms\nfrom django.utils.translation import ugettext_lazy\nfrom modoboa.lib.formutils import YesNoField, SeparatorField\nfrom modoboa.lib.sysutils import exec_cmd\nfrom modoboa.lib import parameters\nimport os\n\n\nclass AdminParametersForm(parameters.AdminParametersForm):\n app = \"admin\"\n\n mbsep = SeparatorField(label=ugettext_lazy(\"Mailboxes\"))\n\n handle_mailboxes = YesNoField(\n label=ugettext_lazy(\"Handle mailboxes on filesystem\"),\n initial=\"no\",\n help_text=ugettext_lazy(\"Rename or remove mailboxes on the filesystem when they get renamed or removed within Modoboa\")\n )\n\n mailboxes_owner = forms.CharField(\n label=ugettext_lazy(\"Mailboxes ower\"),\n initial=\"vmail\",\n help_text=ugettext_lazy(\"The UNIX account who owns mailboxes on the filesystem\")\n )\n\n default_domain_quota = forms.IntegerField(\n label=ugettext_lazy(\"Default domain quota\"),\n initial=0,\n help_text=ugettext_lazy(\n \"Default quota (in MB) applied to freshly created domains with no \"\n \"value specified. 
A value of 0 means no quota.\"\n ),\n widget=forms.TextInput(attrs={'class': 'span2'})\n )\n\n auto_account_removal = YesNoField(\n label=ugettext_lazy(\"Automatic account removal\"),\n initial=\"no\",\n help_text=ugettext_lazy(\"When a mailbox is removed, also remove the associated account\")\n )\n\n # Visibility rules\n visibility_rules = {\n \"mailboxes_owner\": \"handle_mailboxes=yes\",\n }\n\n def __init__(self, *args, **kwargs):\n super(AdminParametersForm, self).__init__(*args, **kwargs)\n hide_fields = False\n dpath = None\n code, output = exec_cmd(\"which dovecot\")\n known_paths = (\"/usr/sbin/dovecot\", \"/usr/local/sbin/dovecot\")\n if not code:\n dpath = output.strip()\n else:\n for fpath in known_paths:\n if os.path.isfile(fpath) and os.access(fpath, os.X_OK):\n dpath = fpath\n if dpath:\n try:\n code, version = exec_cmd(\"%s --version\" % dpath)\n except OSError:\n hide_fields = True\n else:\n if code or not version.strip().startswith(\"2\"):\n hide_fields = True\n else:\n hide_fields = True\n if hide_fields:\n del self.fields[\"handle_mailboxes\"]\n del self.fields[\"mailboxes_owner\"]\n\n def clean_default_domain_quota(self):\n \"\"\"Ensure quota is a positive integer.\"\"\"\n if self.cleaned_data['default_domain_quota'] < 0:\n raise forms.ValidationError(\n ugettext_lazy('Must be a positive integer')\n )\n return self.cleaned_data['default_domain_quota']\n", "path": "modoboa/extensions/admin/app_settings.py"}]} | 1,206 | 271 |
gh_patches_debug_25467 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-175 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IntegrityError in language tree
I just found a bug causing an `IntegrityError` in the language tree. The error can be reproduced reliably in the current state of the develop branch.
Steps to reproduce:
- In the network admin view:
- Create a new region
- Create at least two languages (in the following steps, we assume the two languages to be German and English; this works with any other languages as well)
- In the region view (in the region we just created):
- Create a new language node for the base language (German in this example)
- **The bug occurs in the next steps, so I provide a more precise description of them:** in the language tree view, click on "create language tree node"
- Choose "English" as language, "German" as source language, check the checkbox for language activation
- click on "save", a success message should show up
- click on "save"; a success message should show up
- now the form fields should have the following contents:
- language: "English"
- source language: "German"
- activate language: is checked (`true`)
- change language field to "German", as all languages can be chosen again
- now the form fields should have the following contents:
- language: "German"
- source language: "German"
- activate language: is checked (`true`)
- click on "save" again
- `IntegrityError` occurs
--- END ISSUE ---
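Before reading the file below, note the pattern at play: a POST handler should redirect after a *successful* save in both the create and the edit branch; re-rendering the bound form instead lets a second submit act on stale language choices. A hedged Django-style sketch of the intended control flow — `NodeForm`, the template and URL names are placeholders, not the project's real identifiers:

```python
from django.contrib import messages
from django.shortcuts import redirect, render


def save_language_node_view(request, site_slug, language_tree_node_id=None):
    form = NodeForm(data=request.POST, site_slug=site_slug)  # placeholder form class
    if not form.is_valid():
        messages.error(request, "Errors have occurred.")
        return render(request, "language_tree/tree_node.html", {"form": form})
    # Capture the saved node in *both* branches so the redirect always fires.
    node = form.save_language_node(language_tree_node_id=language_tree_node_id)
    messages.success(request, "Language tree node was saved successfully.")
    return redirect(
        "edit_language_tree_node",
        language_tree_node_id=node.id,
        site_slug=site_slug,
    )
```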
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/cms/views/language_tree/language_tree_node.py`
Content:
```
1 """
2
3 Returns:
4 [type]: [description]
5 """
6
7 from django.contrib import messages
8 from django.contrib.auth.decorators import login_required
9 from django.contrib.auth.mixins import PermissionRequiredMixin
10 from django.utils.translation import ugettext as _
11 from django.utils.decorators import method_decorator
12 from django.views.generic import TemplateView
13 from django.shortcuts import render, redirect
14
15 from .language_tree_node_form import LanguageTreeNodeForm
16 from ...models import Language, LanguageTreeNode, Site
17 from ...decorators import region_permission_required
18
19
20 @method_decorator(login_required, name='dispatch')
21 @method_decorator(region_permission_required, name='dispatch')
22 class LanguageTreeNodeView(PermissionRequiredMixin, TemplateView):
23 permission_required = 'cms.manage_language_tree'
24 raise_exception = True
25
26 template_name = 'language_tree/tree_node.html'
27 base_context = {'current_menu_item': 'language_tree'}
28
29 def get(self, request, *args, **kwargs):
30 language_tree_node_id = self.kwargs.get('language_tree_node_id')
31 # limit possible parents to nodes of current region
32 parent_queryset = Site.get_current_site(request).language_tree_nodes
33 # limit possible languages to those which are not yet included in the tree
34 language_queryset = Language.objects.exclude(
35 language_tree_nodes__in=parent_queryset.exclude(id=language_tree_node_id)
36 )
37 if language_tree_node_id:
38 language_tree_node = LanguageTreeNode.objects.get(id=language_tree_node_id)
39 children = language_tree_node.get_descendants(include_self=True)
40 parent_queryset = parent_queryset.difference(children)
41 form = LanguageTreeNodeForm(initial={
42 'language': language_tree_node.language,
43 'parent': language_tree_node.parent,
44 'active': language_tree_node.active,
45 })
46 else:
47 form = LanguageTreeNodeForm()
48 form.fields['parent'].queryset = parent_queryset
49 form.fields['language'].queryset = language_queryset
50 return render(request, self.template_name, {
51 'form': form, **self.base_context})
52
53 def post(self, request, site_slug, language_tree_node_id=None):
54 # TODO: error handling
55 form = LanguageTreeNodeForm(data=request.POST, site_slug=site_slug)
56 if form.is_valid():
57 if language_tree_node_id:
58 form.save_language_node(
59 language_tree_node_id=language_tree_node_id,
60 )
61 messages.success(request, _('Language tree node was saved successfully.'))
62 else:
63 language_tree_node = form.save_language_node()
64 messages.success(request, _('Language tree node was created successfully.'))
65 return redirect('edit_language_tree_node', **{
66 'language_tree_node_id': language_tree_node.id,
67 'site_slug': site_slug,
68 })
69 # TODO: improve messages
70 else:
71 messages.error(request, _('Errors have occurred.'))
72
73 return render(request, self.template_name, {
74 'form': form, **self.base_context})
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/cms/views/language_tree/language_tree_node.py b/backend/cms/views/language_tree/language_tree_node.py
--- a/backend/cms/views/language_tree/language_tree_node.py
+++ b/backend/cms/views/language_tree/language_tree_node.py
@@ -55,17 +55,17 @@
form = LanguageTreeNodeForm(data=request.POST, site_slug=site_slug)
if form.is_valid():
if language_tree_node_id:
- form.save_language_node(
+ language_tree_node = form.save_language_node(
language_tree_node_id=language_tree_node_id,
)
messages.success(request, _('Language tree node was saved successfully.'))
else:
language_tree_node = form.save_language_node()
messages.success(request, _('Language tree node was created successfully.'))
- return redirect('edit_language_tree_node', **{
- 'language_tree_node_id': language_tree_node.id,
- 'site_slug': site_slug,
- })
+ return redirect('edit_language_tree_node', **{
+ 'language_tree_node_id': language_tree_node.id,
+ 'site_slug': site_slug,
+ })
# TODO: improve messages
else:
messages.error(request, _('Errors have occurred.'))
| {"golden_diff": "diff --git a/backend/cms/views/language_tree/language_tree_node.py b/backend/cms/views/language_tree/language_tree_node.py\n--- a/backend/cms/views/language_tree/language_tree_node.py\n+++ b/backend/cms/views/language_tree/language_tree_node.py\n@@ -55,17 +55,17 @@\n form = LanguageTreeNodeForm(data=request.POST, site_slug=site_slug)\n if form.is_valid():\n if language_tree_node_id:\n- form.save_language_node(\n+ language_tree_node = form.save_language_node(\n language_tree_node_id=language_tree_node_id,\n )\n messages.success(request, _('Language tree node was saved successfully.'))\n else:\n language_tree_node = form.save_language_node()\n messages.success(request, _('Language tree node was created successfully.'))\n- return redirect('edit_language_tree_node', **{\n- 'language_tree_node_id': language_tree_node.id,\n- 'site_slug': site_slug,\n- })\n+ return redirect('edit_language_tree_node', **{\n+ 'language_tree_node_id': language_tree_node.id,\n+ 'site_slug': site_slug,\n+ })\n # TODO: improve messages\n else:\n messages.error(request, _('Errors have occurred.'))\n", "issue": "IntegrityError in language tree\nI just found a bug causing an `IntegrityError` in the language tree. The error can be reproduced reliably in the current state of the develop branch.\r\n\r\nSteps to reproduce:\r\n- In the network admin view:\r\n - Create a new region\r\n - Create at least two languages (in the following steps, we assume the two languages to be German and Englisch, works with any other languages as well)\r\n- In the region view (in the region we just created):\r\n - Create a new language node for the base language (German in this example)\r\n - **Bug occurs in the next steps, therefore I provide a more precise description of the following steps:** in the language tree view, click on \"create language tree node\"\r\n - Choose \"English\" as language, \"German\" as source language, check the checkbox for language activation\r\n - click on \"save\", a success message should show up\r\n - click on \"save\" again without changing any form fields\r\n - now the form fields should have the following contents:\r\n - language: \"English\"\r\n - source language: \"German\"\r\n - activate language: is checked (`true`)\r\n - change language field to \"German\", as all languages can be chosen again\r\n - now the form fields should have the following contents:\r\n - language: \"German\"\r\n - source language: \"German\"\r\n - activate language: is checked (`true`)\r\n - click on \"save\" again\r\n - `IntegrityError` occurs\n", "before_files": [{"content": "\"\"\"\n\nReturns:\n [type]: [description]\n\"\"\"\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.utils.translation import ugettext as _\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import TemplateView\nfrom django.shortcuts import render, redirect\n\nfrom .language_tree_node_form import LanguageTreeNodeForm\nfrom ...models import Language, LanguageTreeNode, Site\nfrom ...decorators import region_permission_required\n\n\n@method_decorator(login_required, name='dispatch')\n@method_decorator(region_permission_required, name='dispatch')\nclass LanguageTreeNodeView(PermissionRequiredMixin, TemplateView):\n permission_required = 'cms.manage_language_tree'\n raise_exception = True\n\n template_name = 'language_tree/tree_node.html'\n base_context = {'current_menu_item': 
'language_tree'}\n\n def get(self, request, *args, **kwargs):\n language_tree_node_id = self.kwargs.get('language_tree_node_id')\n # limit possible parents to nodes of current region\n parent_queryset = Site.get_current_site(request).language_tree_nodes\n # limit possible languages to those which are not yet included in the tree\n language_queryset = Language.objects.exclude(\n language_tree_nodes__in=parent_queryset.exclude(id=language_tree_node_id)\n )\n if language_tree_node_id:\n language_tree_node = LanguageTreeNode.objects.get(id=language_tree_node_id)\n children = language_tree_node.get_descendants(include_self=True)\n parent_queryset = parent_queryset.difference(children)\n form = LanguageTreeNodeForm(initial={\n 'language': language_tree_node.language,\n 'parent': language_tree_node.parent,\n 'active': language_tree_node.active,\n })\n else:\n form = LanguageTreeNodeForm()\n form.fields['parent'].queryset = parent_queryset\n form.fields['language'].queryset = language_queryset\n return render(request, self.template_name, {\n 'form': form, **self.base_context})\n\n def post(self, request, site_slug, language_tree_node_id=None):\n # TODO: error handling\n form = LanguageTreeNodeForm(data=request.POST, site_slug=site_slug)\n if form.is_valid():\n if language_tree_node_id:\n form.save_language_node(\n language_tree_node_id=language_tree_node_id,\n )\n messages.success(request, _('Language tree node was saved successfully.'))\n else:\n language_tree_node = form.save_language_node()\n messages.success(request, _('Language tree node was created successfully.'))\n return redirect('edit_language_tree_node', **{\n 'language_tree_node_id': language_tree_node.id,\n 'site_slug': site_slug,\n })\n # TODO: improve messages\n else:\n messages.error(request, _('Errors have occurred.'))\n\n return render(request, self.template_name, {\n 'form': form, **self.base_context})\n", "path": "backend/cms/views/language_tree/language_tree_node.py"}], "after_files": [{"content": "\"\"\"\n\nReturns:\n [type]: [description]\n\"\"\"\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.utils.translation import ugettext as _\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import TemplateView\nfrom django.shortcuts import render, redirect\n\nfrom .language_tree_node_form import LanguageTreeNodeForm\nfrom ...models import Language, LanguageTreeNode, Site\nfrom ...decorators import region_permission_required\n\n\n@method_decorator(login_required, name='dispatch')\n@method_decorator(region_permission_required, name='dispatch')\nclass LanguageTreeNodeView(PermissionRequiredMixin, TemplateView):\n permission_required = 'cms.manage_language_tree'\n raise_exception = True\n\n template_name = 'language_tree/tree_node.html'\n base_context = {'current_menu_item': 'language_tree'}\n\n def get(self, request, *args, **kwargs):\n language_tree_node_id = self.kwargs.get('language_tree_node_id')\n # limit possible parents to nodes of current region\n parent_queryset = Site.get_current_site(request).language_tree_nodes\n # limit possible languages to those which are not yet included in the tree\n language_queryset = Language.objects.exclude(\n language_tree_nodes__in=parent_queryset.exclude(id=language_tree_node_id)\n )\n if language_tree_node_id:\n language_tree_node = LanguageTreeNode.objects.get(id=language_tree_node_id)\n children = 
language_tree_node.get_descendants(include_self=True)\n parent_queryset = parent_queryset.difference(children)\n form = LanguageTreeNodeForm(initial={\n 'language': language_tree_node.language,\n 'parent': language_tree_node.parent,\n 'active': language_tree_node.active,\n })\n else:\n form = LanguageTreeNodeForm()\n form.fields['parent'].queryset = parent_queryset\n form.fields['language'].queryset = language_queryset\n return render(request, self.template_name, {\n 'form': form, **self.base_context})\n\n def post(self, request, site_slug, language_tree_node_id=None):\n # TODO: error handling\n form = LanguageTreeNodeForm(data=request.POST, site_slug=site_slug)\n if form.is_valid():\n if language_tree_node_id:\n language_tree_node = form.save_language_node(\n language_tree_node_id=language_tree_node_id,\n )\n messages.success(request, _('Language tree node was saved successfully.'))\n else:\n language_tree_node = form.save_language_node()\n messages.success(request, _('Language tree node was created successfully.'))\n return redirect('edit_language_tree_node', **{\n 'language_tree_node_id': language_tree_node.id,\n 'site_slug': site_slug,\n })\n # TODO: improve messages\n else:\n messages.error(request, _('Errors have occurred.'))\n\n return render(request, self.template_name, {\n 'form': form, **self.base_context})\n", "path": "backend/cms/views/language_tree/language_tree_node.py"}]} | 1,323 | 258 |
gh_patches_debug_43552 | rasdani/github-patches | git_diff | streamlink__streamlink-4202 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.ard_mediathek: rewrite plugin
Resolves #4191
One issue I couldn't fix is the text encoding of the metadata, which gets messed up by `validate.parse_html()`. See the VOD title down below...
https://github.com/streamlink/streamlink/blob/175d4748561c7154bb80c5a47dae22039e45d4ce/src/streamlink/utils/parse.py#L54-L55
Some VODs also have a second title, e.g. if it's a TV show, but I couldn't be bothered to implement this. Not important.
----
Das Erste - Live:
```
$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/' best
[cli.output][debug] Opening subprocess: mpv "--force-media-title=Das Erste - Das Erste" -
```
WDR - Live:
```
$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/live/Y3JpZDovL3dkci5kZS9CZWl0cmFnLTNkYTY2NGRlLTE4YzItNDY1MC1hNGZmLTRmNjQxNDcyMDcyYg/' best
[cli.output][debug] Opening subprocess: mpv "--force-media-title=WDR - WDR Fernsehen im Livestream" -
```
VOD
```
$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/video/dokus-im-ersten/wirecard-die-milliarden-luege/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3JlcG9ydGFnZSBfIGRva3VtZW50YXRpb24gaW0gZXJzdGVuL2NlMjQ0OWM4LTQ4YTUtNGIyNC1iMTdlLWNhOTNjMDQ5OTc4Zg/' best
[cli.output][debug] Opening subprocess: mpv "--force-media-title=Das Erste - Wirecard - Die Milliarden-Lüge" -
```
--- END ISSUE ---
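For readers unfamiliar with streamlink's validation DSL used below: a schema validates and reshapes JSON in one pass, and `validate.union_get` extracts several keys as a tuple. A tiny self-contained illustration (requires streamlink installed; the sample data is made up):

```python
from streamlink.plugin.api import validate

stream_schema = validate.Schema(
    validate.all(
        {
            "_quality": validate.any(str, int),
            "_stream": validate.url(),
        },
        # after the dict validates, pull the two values out as a tuple
        validate.union_get("_quality", "_stream"),
    )
)

quality, url = stream_schema.validate(
    {"_quality": 3, "_stream": "https://example.com/video.m3u8"}
)
print(quality, url)  # -> 3 https://example.com/video.m3u8
```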
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/ard_mediathek.py`
Content:
```
1 import logging
2 import re
3
4 from streamlink.plugin import Plugin, pluginmatcher
5 from streamlink.plugin.api import validate
6 from streamlink.stream.hls import HLSStream
7
8
9 log = logging.getLogger(__name__)
10
11
12 @pluginmatcher(re.compile(
13 r"https?://(?:(\w+\.)?ardmediathek\.de/|mediathek\.daserste\.de/)"
14 ))
15 class ARDMediathek(Plugin):
16 def _get_streams(self):
17 data_json = self.session.http.get(self.url, schema=validate.Schema(
18 validate.parse_html(),
19 validate.xml_findtext(".//script[@id='fetchedContextValue'][@type='application/json']"),
20 validate.any(None, validate.all(
21 validate.parse_json(),
22 {str: dict},
23 validate.transform(lambda obj: list(obj.items())),
24 validate.filter(lambda item: item[0].startswith("https://api.ardmediathek.de/page-gateway/pages/")),
25 validate.any(validate.get((0, 1)), [])
26 ))
27 ))
28 if not data_json:
29 return
30
31 schema_data = validate.Schema({
32 "id": str,
33 "widgets": validate.all(
34 [dict],
35 validate.filter(lambda item: item.get("mediaCollection")),
36 validate.get(0),
37 {
38 "geoblocked": bool,
39 "publicationService": {
40 "name": str,
41 },
42 "title": str,
43 "mediaCollection": {
44 "embedded": {
45 "_mediaArray": [{
46 "_mediaStreamArray": [{
47 "_quality": validate.any(str, int),
48 "_stream": validate.url()
49 }]
50 }]
51 }
52 }
53 }
54 )
55 })
56 data = schema_data.validate(data_json)
57
58 log.debug(f"Found media id: {data['id']}")
59 data_media = data["widgets"]
60
61 if data_media["geoblocked"]:
62 log.info("The content is not available in your region")
63 return
64
65 self.author = data_media["publicationService"]["name"]
66 self.title = data_media["title"]
67
68 for media in data_media["mediaCollection"]["embedded"]["_mediaArray"]:
69 for stream in media["_mediaStreamArray"]:
70 if stream["_quality"] != "auto" or ".m3u8" not in stream["_stream"]:
71 continue
72 return HLSStream.parse_variant_playlist(self.session, stream["_stream"])
73
74
75 __plugin__ = ARDMediathek
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/ard_mediathek.py b/src/streamlink/plugins/ard_mediathek.py
--- a/src/streamlink/plugins/ard_mediathek.py
+++ b/src/streamlink/plugins/ard_mediathek.py
@@ -4,6 +4,7 @@
from streamlink.plugin import Plugin, pluginmatcher
from streamlink.plugin.api import validate
from streamlink.stream.hls import HLSStream
+from streamlink.stream.http import HTTPStream
log = logging.getLogger(__name__)
@@ -13,6 +14,14 @@
r"https?://(?:(\w+\.)?ardmediathek\.de/|mediathek\.daserste\.de/)"
))
class ARDMediathek(Plugin):
+ _QUALITY_MAP = {
+ 4: "1080p",
+ 3: "720p",
+ 2: "540p",
+ 1: "360p",
+ 0: "270p"
+ }
+
def _get_streams(self):
data_json = self.session.http.get(self.url, schema=validate.Schema(
validate.parse_html(),
@@ -34,42 +43,64 @@
[dict],
validate.filter(lambda item: item.get("mediaCollection")),
validate.get(0),
- {
- "geoblocked": bool,
- "publicationService": {
- "name": str,
+ validate.any(None, validate.all(
+ {
+ "geoblocked": bool,
+ "publicationService": {
+ "name": str,
+ },
+ "show": validate.any(None, validate.all(
+ {"title": str},
+ validate.get("title")
+ )),
+ "title": str,
+ "mediaCollection": {
+ "embedded": {
+ "_mediaArray": [validate.all(
+ {
+ "_mediaStreamArray": [validate.all(
+ {
+ "_quality": validate.any(str, int),
+ "_stream": validate.url(),
+ },
+ validate.union_get("_quality", "_stream")
+ )]
+ },
+ validate.get("_mediaStreamArray"),
+ validate.transform(dict)
+ )]
+ }
+ },
},
- "title": str,
- "mediaCollection": {
- "embedded": {
- "_mediaArray": [{
- "_mediaStreamArray": [{
- "_quality": validate.any(str, int),
- "_stream": validate.url()
- }]
- }]
- }
- }
- }
+ validate.union_get(
+ "geoblocked",
+ ("mediaCollection", "embedded", "_mediaArray", 0),
+ ("publicationService", "name"),
+ "title",
+ "show",
+ )
+ ))
)
})
data = schema_data.validate(data_json)
log.debug(f"Found media id: {data['id']}")
- data_media = data["widgets"]
+ if not data["widgets"]:
+ log.info("The content is unavailable")
+ return
- if data_media["geoblocked"]:
+ geoblocked, media, self.author, self.title, show = data["widgets"]
+ if geoblocked:
log.info("The content is not available in your region")
return
+ if show:
+ self.title = f"{show}: {self.title}"
- self.author = data_media["publicationService"]["name"]
- self.title = data_media["title"]
-
- for media in data_media["mediaCollection"]["embedded"]["_mediaArray"]:
- for stream in media["_mediaStreamArray"]:
- if stream["_quality"] != "auto" or ".m3u8" not in stream["_stream"]:
- continue
- return HLSStream.parse_variant_playlist(self.session, stream["_stream"])
+ if media.get("auto"):
+ yield from HLSStream.parse_variant_playlist(self.session, media.get("auto")).items()
+ else:
+ for quality, stream in media.items():
+ yield self._QUALITY_MAP.get(quality, quality), HTTPStream(self.session, stream)
__plugin__ = ARDMediathek
| {"golden_diff": "diff --git a/src/streamlink/plugins/ard_mediathek.py b/src/streamlink/plugins/ard_mediathek.py\n--- a/src/streamlink/plugins/ard_mediathek.py\n+++ b/src/streamlink/plugins/ard_mediathek.py\n@@ -4,6 +4,7 @@\n from streamlink.plugin import Plugin, pluginmatcher\n from streamlink.plugin.api import validate\n from streamlink.stream.hls import HLSStream\n+from streamlink.stream.http import HTTPStream\n \n \n log = logging.getLogger(__name__)\n@@ -13,6 +14,14 @@\n r\"https?://(?:(\\w+\\.)?ardmediathek\\.de/|mediathek\\.daserste\\.de/)\"\n ))\n class ARDMediathek(Plugin):\n+ _QUALITY_MAP = {\n+ 4: \"1080p\",\n+ 3: \"720p\",\n+ 2: \"540p\",\n+ 1: \"360p\",\n+ 0: \"270p\"\n+ }\n+\n def _get_streams(self):\n data_json = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n@@ -34,42 +43,64 @@\n [dict],\n validate.filter(lambda item: item.get(\"mediaCollection\")),\n validate.get(0),\n- {\n- \"geoblocked\": bool,\n- \"publicationService\": {\n- \"name\": str,\n+ validate.any(None, validate.all(\n+ {\n+ \"geoblocked\": bool,\n+ \"publicationService\": {\n+ \"name\": str,\n+ },\n+ \"show\": validate.any(None, validate.all(\n+ {\"title\": str},\n+ validate.get(\"title\")\n+ )),\n+ \"title\": str,\n+ \"mediaCollection\": {\n+ \"embedded\": {\n+ \"_mediaArray\": [validate.all(\n+ {\n+ \"_mediaStreamArray\": [validate.all(\n+ {\n+ \"_quality\": validate.any(str, int),\n+ \"_stream\": validate.url(),\n+ },\n+ validate.union_get(\"_quality\", \"_stream\")\n+ )]\n+ },\n+ validate.get(\"_mediaStreamArray\"),\n+ validate.transform(dict)\n+ )]\n+ }\n+ },\n },\n- \"title\": str,\n- \"mediaCollection\": {\n- \"embedded\": {\n- \"_mediaArray\": [{\n- \"_mediaStreamArray\": [{\n- \"_quality\": validate.any(str, int),\n- \"_stream\": validate.url()\n- }]\n- }]\n- }\n- }\n- }\n+ validate.union_get(\n+ \"geoblocked\",\n+ (\"mediaCollection\", \"embedded\", \"_mediaArray\", 0),\n+ (\"publicationService\", \"name\"),\n+ \"title\",\n+ \"show\",\n+ )\n+ ))\n )\n })\n data = schema_data.validate(data_json)\n \n log.debug(f\"Found media id: {data['id']}\")\n- data_media = data[\"widgets\"]\n+ if not data[\"widgets\"]:\n+ log.info(\"The content is unavailable\")\n+ return\n \n- if data_media[\"geoblocked\"]:\n+ geoblocked, media, self.author, self.title, show = data[\"widgets\"]\n+ if geoblocked:\n log.info(\"The content is not available in your region\")\n return\n+ if show:\n+ self.title = f\"{show}: {self.title}\"\n \n- self.author = data_media[\"publicationService\"][\"name\"]\n- self.title = data_media[\"title\"]\n-\n- for media in data_media[\"mediaCollection\"][\"embedded\"][\"_mediaArray\"]:\n- for stream in media[\"_mediaStreamArray\"]:\n- if stream[\"_quality\"] != \"auto\" or \".m3u8\" not in stream[\"_stream\"]:\n- continue\n- return HLSStream.parse_variant_playlist(self.session, stream[\"_stream\"])\n+ if media.get(\"auto\"):\n+ yield from HLSStream.parse_variant_playlist(self.session, media.get(\"auto\")).items()\n+ else:\n+ for quality, stream in media.items():\n+ yield self._QUALITY_MAP.get(quality, quality), HTTPStream(self.session, stream)\n \n \n __plugin__ = ARDMediathek\n", "issue": "plugins.ard_mediathek: rewrite plugin\nResolves #4191 \r\n\r\nOne issue I couldn't fix is the text encoding of the metadata which gets messed up by `validate.parse_html()`. See the VOD title down below...\r\nhttps://github.com/streamlink/streamlink/blob/175d4748561c7154bb80c5a47dae22039e45d4ce/src/streamlink/utils/parse.py#L54-L55\r\n\r\nSome VODs also have a second title, eg. 
if it's a TV show, but I couldn't be bothered to implement this. Not important.\r\n\r\n----\r\n\r\nDas Erste - Live:\r\n```\r\n$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/' best\r\n[cli.output][debug] Opening subprocess: mpv \"--force-media-title=Das Erste - Das Erste\" -\r\n```\r\n\r\nWDR - Live:\r\n```\r\n$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/live/Y3JpZDovL3dkci5kZS9CZWl0cmFnLTNkYTY2NGRlLTE4YzItNDY1MC1hNGZmLTRmNjQxNDcyMDcyYg/' best\r\n[cli.output][debug] Opening subprocess: mpv \"--force-media-title=WDR - WDR Fernsehen im Livestream\" -\r\n```\r\n\r\nVOD\r\n```\r\n$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/video/dokus-im-ersten/wirecard-die-milliarden-luege/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3JlcG9ydGFnZSBfIGRva3VtZW50YXRpb24gaW0gZXJzdGVuL2NlMjQ0OWM4LTQ4YTUtNGIyNC1iMTdlLWNhOTNjMDQ5OTc4Zg/' best\r\n[cli.output][debug] Opening subprocess: mpv \"--force-media-title=Das Erste - Wirecard - Die Milliarden-L\u00c3\u00bcge\" -\r\n```\n", "before_files": [{"content": "import logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:(\\w+\\.)?ardmediathek\\.de/|mediathek\\.daserste\\.de/)\"\n))\nclass ARDMediathek(Plugin):\n def _get_streams(self):\n data_json = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_findtext(\".//script[@id='fetchedContextValue'][@type='application/json']\"),\n validate.any(None, validate.all(\n validate.parse_json(),\n {str: dict},\n validate.transform(lambda obj: list(obj.items())),\n validate.filter(lambda item: item[0].startswith(\"https://api.ardmediathek.de/page-gateway/pages/\")),\n validate.any(validate.get((0, 1)), [])\n ))\n ))\n if not data_json:\n return\n\n schema_data = validate.Schema({\n \"id\": str,\n \"widgets\": validate.all(\n [dict],\n validate.filter(lambda item: item.get(\"mediaCollection\")),\n validate.get(0),\n {\n \"geoblocked\": bool,\n \"publicationService\": {\n \"name\": str,\n },\n \"title\": str,\n \"mediaCollection\": {\n \"embedded\": {\n \"_mediaArray\": [{\n \"_mediaStreamArray\": [{\n \"_quality\": validate.any(str, int),\n \"_stream\": validate.url()\n }]\n }]\n }\n }\n }\n )\n })\n data = schema_data.validate(data_json)\n\n log.debug(f\"Found media id: {data['id']}\")\n data_media = data[\"widgets\"]\n\n if data_media[\"geoblocked\"]:\n log.info(\"The content is not available in your region\")\n return\n\n self.author = data_media[\"publicationService\"][\"name\"]\n self.title = data_media[\"title\"]\n\n for media in data_media[\"mediaCollection\"][\"embedded\"][\"_mediaArray\"]:\n for stream in media[\"_mediaStreamArray\"]:\n if stream[\"_quality\"] != \"auto\" or \".m3u8\" not in stream[\"_stream\"]:\n continue\n return HLSStream.parse_variant_playlist(self.session, stream[\"_stream\"])\n\n\n__plugin__ = ARDMediathek\n", "path": "src/streamlink/plugins/ard_mediathek.py"}], "after_files": [{"content": "import logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.stream.http import HTTPStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n 
r\"https?://(?:(\\w+\\.)?ardmediathek\\.de/|mediathek\\.daserste\\.de/)\"\n))\nclass ARDMediathek(Plugin):\n _QUALITY_MAP = {\n 4: \"1080p\",\n 3: \"720p\",\n 2: \"540p\",\n 1: \"360p\",\n 0: \"270p\"\n }\n\n def _get_streams(self):\n data_json = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_findtext(\".//script[@id='fetchedContextValue'][@type='application/json']\"),\n validate.any(None, validate.all(\n validate.parse_json(),\n {str: dict},\n validate.transform(lambda obj: list(obj.items())),\n validate.filter(lambda item: item[0].startswith(\"https://api.ardmediathek.de/page-gateway/pages/\")),\n validate.any(validate.get((0, 1)), [])\n ))\n ))\n if not data_json:\n return\n\n schema_data = validate.Schema({\n \"id\": str,\n \"widgets\": validate.all(\n [dict],\n validate.filter(lambda item: item.get(\"mediaCollection\")),\n validate.get(0),\n validate.any(None, validate.all(\n {\n \"geoblocked\": bool,\n \"publicationService\": {\n \"name\": str,\n },\n \"show\": validate.any(None, validate.all(\n {\"title\": str},\n validate.get(\"title\")\n )),\n \"title\": str,\n \"mediaCollection\": {\n \"embedded\": {\n \"_mediaArray\": [validate.all(\n {\n \"_mediaStreamArray\": [validate.all(\n {\n \"_quality\": validate.any(str, int),\n \"_stream\": validate.url(),\n },\n validate.union_get(\"_quality\", \"_stream\")\n )]\n },\n validate.get(\"_mediaStreamArray\"),\n validate.transform(dict)\n )]\n }\n },\n },\n validate.union_get(\n \"geoblocked\",\n (\"mediaCollection\", \"embedded\", \"_mediaArray\", 0),\n (\"publicationService\", \"name\"),\n \"title\",\n \"show\",\n )\n ))\n )\n })\n data = schema_data.validate(data_json)\n\n log.debug(f\"Found media id: {data['id']}\")\n if not data[\"widgets\"]:\n log.info(\"The content is unavailable\")\n return\n\n geoblocked, media, self.author, self.title, show = data[\"widgets\"]\n if geoblocked:\n log.info(\"The content is not available in your region\")\n return\n if show:\n self.title = f\"{show}: {self.title}\"\n\n if media.get(\"auto\"):\n yield from HLSStream.parse_variant_playlist(self.session, media.get(\"auto\")).items()\n else:\n for quality, stream in media.items():\n yield self._QUALITY_MAP.get(quality, quality), HTTPStream(self.session, stream)\n\n\n__plugin__ = ARDMediathek\n", "path": "src/streamlink/plugins/ard_mediathek.py"}]} | 1,483 | 922 |
gh_patches_debug_13091 | rasdani/github-patches | git_diff | PrefectHQ__prefect-11999 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
no import statement for wait_for_flow_run
### First check
- [X] I added a descriptive title to this issue.
- [X] I used GitHub search to find a similar request and didn't find it 😇
### Describe the issue
There is no import statement for `wait_for_flow_run`, so typing this code into PyCharm flags `wait_for_flow_run` as an error. Searching the internet, the import statement used to be
_from prefect.tasks.prefect import wait_for_flow_run_
Unfortunately, that import no longer works.
### Describe the proposed change
put the correct import statement in the docs which is
_from prefect.flow_runs import wait_for_flow_run_
--- END ISSUE ---
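For quick reference, the corrected import in a complete example — this mirrors the docstring of the file below, with a placeholder deployment id:

```python
import asyncio

from prefect import get_client
from prefect.flow_runs import wait_for_flow_run  # the import the docs omit


async def main():
    async with get_client() as client:
        flow_run = await client.create_flow_run_from_deployment(
            deployment_id="my-deployment-id"  # placeholder
        )
        flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)
        print(flow_run.state)


if __name__ == "__main__":
    asyncio.run(main())
```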
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/flow_runs.py`
Content:
```
1 from typing import Optional
2 from uuid import UUID
3
4 import anyio
5
6 from prefect.client.orchestration import PrefectClient
7 from prefect.client.schemas import FlowRun
8 from prefect.client.utilities import inject_client
9 from prefect.exceptions import FlowRunWaitTimeout
10 from prefect.logging import get_logger
11
12
13 @inject_client
14 async def wait_for_flow_run(
15 flow_run_id: UUID,
16 timeout: Optional[int] = 10800,
17 poll_interval: int = 5,
18 client: Optional[PrefectClient] = None,
19 log_states: bool = False,
20 ) -> FlowRun:
21 """
22 Waits for the prefect flow run to finish and returns the FlowRun
23
24 Args:
25 flow_run_id: The flow run ID for the flow run to wait for.
26 timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).
27 poll_interval: The poll interval in seconds. Defaults to 5.
28
29 Returns:
30 FlowRun: The finished flow run.
31
32 Raises:
33 prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.
34
35 Examples:
36 Create a flow run for a deployment and wait for it to finish:
37 ```python
38 import asyncio
39
40 from prefect import get_client
41
42 async def main():
43 async with get_client() as client:
44 flow_run = await client.create_flow_run_from_deployment(deployment_id="my-deployment-id")
45 flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)
46 print(flow_run.state)
47
48 if __name__ == "__main__":
49 asyncio.run(main())
50
51 ```
52
53 Trigger multiple flow runs and wait for them to finish:
54 ```python
55 import asyncio
56
57 from prefect import get_client
58
59 async def main(num_runs: int):
60 async with get_client() as client:
61 flow_runs = [
62 await client.create_flow_run_from_deployment(deployment_id="my-deployment-id")
63 for _
64 in range(num_runs)
65 ]
66 coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]
67 finished_flow_runs = await asyncio.gather(*coros)
68 print([flow_run.state for flow_run in finished_flow_runs])
69
70 if __name__ == "__main__":
71 asyncio.run(main(num_runs=10))
72
73 ```
74 """
75 assert client is not None, "Client injection failed"
76 logger = get_logger()
77 with anyio.move_on_after(timeout):
78 while True:
79 flow_run = await client.read_flow_run(flow_run_id)
80 flow_state = flow_run.state
81 if log_states:
82 logger.info(f"Flow run is in state {flow_run.state.name!r}")
83 if flow_state and flow_state.is_final():
84 return flow_run
85 await anyio.sleep(poll_interval)
86 raise FlowRunWaitTimeout(
87 f"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds"
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/flow_runs.py b/src/prefect/flow_runs.py
--- a/src/prefect/flow_runs.py
+++ b/src/prefect/flow_runs.py
@@ -38,6 +38,7 @@
import asyncio
from prefect import get_client
+ from prefect.flow_runs import wait_for_flow_run
async def main():
async with get_client() as client:
@@ -55,6 +56,7 @@
import asyncio
from prefect import get_client
+ from prefect.flow_runs import wait_for_flow_run
async def main(num_runs: int):
async with get_client() as client:
| {"golden_diff": "diff --git a/src/prefect/flow_runs.py b/src/prefect/flow_runs.py\n--- a/src/prefect/flow_runs.py\n+++ b/src/prefect/flow_runs.py\n@@ -38,6 +38,7 @@\n import asyncio\n \n from prefect import get_client\n+ from prefect.flow_runs import wait_for_flow_run\n \n async def main():\n async with get_client() as client:\n@@ -55,6 +56,7 @@\n import asyncio\n \n from prefect import get_client\n+ from prefect.flow_runs import wait_for_flow_run\n \n async def main(num_runs: int):\n async with get_client() as client:\n", "issue": "no import statement for wait_for_flow_run\n### First check\r\n\r\n- [X] I added a descriptive title to this issue.\r\n- [X] I used GitHub search to find a similar request and didn't find it \ud83d\ude07\r\n\r\n### Describe the issue\r\n\r\nThere is no import statement for wait_for_flow_run so typing this code into pycharm shows wait_for_flow_run as an error. Searching the internets, the import statement used to be\r\n\r\n_from prefect.tasks.prefect import wait_for_flow_run_\r\n\r\nyeah, that doesn't work anymore.\r\n\r\n### Describe the proposed change\r\n\r\nput the correct import statement in the docs which is \r\n\r\n_from prefect.flow_runs import wait_for_flow_run_\r\n\n", "before_files": [{"content": "from typing import Optional\nfrom uuid import UUID\n\nimport anyio\n\nfrom prefect.client.orchestration import PrefectClient\nfrom prefect.client.schemas import FlowRun\nfrom prefect.client.utilities import inject_client\nfrom prefect.exceptions import FlowRunWaitTimeout\nfrom prefect.logging import get_logger\n\n\n@inject_client\nasync def wait_for_flow_run(\n flow_run_id: UUID,\n timeout: Optional[int] = 10800,\n poll_interval: int = 5,\n client: Optional[PrefectClient] = None,\n log_states: bool = False,\n) -> FlowRun:\n \"\"\"\n Waits for the prefect flow run to finish and returns the FlowRun\n\n Args:\n flow_run_id: The flow run ID for the flow run to wait for.\n timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).\n poll_interval: The poll interval in seconds. 
Defaults to 5.\n\n Returns:\n FlowRun: The finished flow run.\n\n Raises:\n prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.\n\n Examples:\n Create a flow run for a deployment and wait for it to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n\n async def main():\n async with get_client() as client:\n flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n print(flow_run.state)\n\n if __name__ == \"__main__\":\n asyncio.run(main())\n\n ```\n\n Trigger multiple flow runs and wait for them to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n\n async def main(num_runs: int):\n async with get_client() as client:\n flow_runs = [\n await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n for _\n in range(num_runs)\n ]\n coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n finished_flow_runs = await asyncio.gather(*coros)\n print([flow_run.state for flow_run in finished_flow_runs])\n\n if __name__ == \"__main__\":\n asyncio.run(main(num_runs=10))\n\n ```\n \"\"\"\n assert client is not None, \"Client injection failed\"\n logger = get_logger()\n with anyio.move_on_after(timeout):\n while True:\n flow_run = await client.read_flow_run(flow_run_id)\n flow_state = flow_run.state\n if log_states:\n logger.info(f\"Flow run is in state {flow_run.state.name!r}\")\n if flow_state and flow_state.is_final():\n return flow_run\n await anyio.sleep(poll_interval)\n raise FlowRunWaitTimeout(\n f\"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds\"\n )\n", "path": "src/prefect/flow_runs.py"}], "after_files": [{"content": "from typing import Optional\nfrom uuid import UUID\n\nimport anyio\n\nfrom prefect.client.orchestration import PrefectClient\nfrom prefect.client.schemas import FlowRun\nfrom prefect.client.utilities import inject_client\nfrom prefect.exceptions import FlowRunWaitTimeout\nfrom prefect.logging import get_logger\n\n\n@inject_client\nasync def wait_for_flow_run(\n flow_run_id: UUID,\n timeout: Optional[int] = 10800,\n poll_interval: int = 5,\n client: Optional[PrefectClient] = None,\n log_states: bool = False,\n) -> FlowRun:\n \"\"\"\n Waits for the prefect flow run to finish and returns the FlowRun\n\n Args:\n flow_run_id: The flow run ID for the flow run to wait for.\n timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).\n poll_interval: The poll interval in seconds. 
Defaults to 5.\n\n Returns:\n FlowRun: The finished flow run.\n\n Raises:\n prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.\n\n Examples:\n Create a flow run for a deployment and wait for it to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n from prefect.flow_runs import wait_for_flow_run\n\n async def main():\n async with get_client() as client:\n flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n print(flow_run.state)\n\n if __name__ == \"__main__\":\n asyncio.run(main())\n\n ```\n\n Trigger multiple flow runs and wait for them to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n from prefect.flow_runs import wait_for_flow_run\n\n async def main(num_runs: int):\n async with get_client() as client:\n flow_runs = [\n await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n for _\n in range(num_runs)\n ]\n coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n finished_flow_runs = await asyncio.gather(*coros)\n print([flow_run.state for flow_run in finished_flow_runs])\n\n if __name__ == \"__main__\":\n asyncio.run(main(num_runs=10))\n\n ```\n \"\"\"\n assert client is not None, \"Client injection failed\"\n logger = get_logger()\n with anyio.move_on_after(timeout):\n while True:\n flow_run = await client.read_flow_run(flow_run_id)\n flow_state = flow_run.state\n if log_states:\n logger.info(f\"Flow run is in state {flow_run.state.name!r}\")\n if flow_state and flow_state.is_final():\n return flow_run\n await anyio.sleep(poll_interval)\n raise FlowRunWaitTimeout(\n f\"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds\"\n )\n", "path": "src/prefect/flow_runs.py"}]} | 1,205 | 146 |
gh_patches_debug_3915 | rasdani/github-patches | git_diff | fossasia__open-event-server-890 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show return model of sponsor types list in Swagger spec
Currently no return model (or schema) is shown for the GET API to get sponsor types used in an Event

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `open_event/api/sponsors.py`
Content:
```
1 from flask.ext.restplus import Resource, Namespace
2
3 from open_event.models.sponsor import Sponsor as SponsorModel
4
5 from .helpers.helpers import get_paginated_list, requires_auth, get_object_in_event
6 from .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \
7 PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES
8 from .helpers import custom_fields as fields
9
10 api = Namespace('sponsors', description='Sponsors', path='/')
11
12 SPONSOR = api.model('Sponsor', {
13 'id': fields.Integer(required=True),
14 'name': fields.String(),
15 'url': fields.Uri(),
16 'logo': fields.ImageUri(),
17 'description': fields.String(),
18 'level': fields.String(),
19 'sponsor_type': fields.String(),
20 })
21
22 SPONSOR_PAGINATED = api.clone('SponsorPaginated', PAGINATED_MODEL, {
23 'results': fields.List(fields.Nested(SPONSOR))
24 })
25
26 SPONSOR_POST = api.clone('SponsorPost', SPONSOR)
27 del SPONSOR_POST['id']
28
29
30 # Create DAO
31 class SponsorDAO(ServiceDAO):
32 def list_types(self, event_id):
33 sponsors = self.list(event_id)
34 return list(set(
35 sponsor.sponsor_type for sponsor in sponsors
36 if sponsor.sponsor_type))
37
38
39 DAO = SponsorDAO(SponsorModel, SPONSOR_POST)
40
41
42 @api.route('/events/<int:event_id>/sponsors/<int:sponsor_id>')
43 @api.response(404, 'Sponsor not found')
44 @api.response(400, 'Sponsor does not belong to event')
45 class Sponsor(Resource):
46 @api.doc('get_sponsor')
47 @api.marshal_with(SPONSOR)
48 def get(self, event_id, sponsor_id):
49 """Fetch a sponsor given its id"""
50 return DAO.get(event_id, sponsor_id)
51
52 @requires_auth
53 @api.doc('delete_sponsor')
54 @api.marshal_with(SPONSOR)
55 def delete(self, event_id, sponsor_id):
56 """Delete a sponsor given its id"""
57 return DAO.delete(event_id, sponsor_id)
58
59 @requires_auth
60 @api.doc('update_sponsor', responses=PUT_RESPONSES)
61 @api.marshal_with(SPONSOR)
62 @api.expect(SPONSOR_POST)
63 def put(self, event_id, sponsor_id):
64 """Update a sponsor given its id"""
65 return DAO.update(event_id, sponsor_id, self.api.payload)
66
67
68 @api.route('/events/<int:event_id>/sponsors')
69 class SponsorList(Resource):
70 @api.doc('list_sponsors')
71 @api.marshal_list_with(SPONSOR)
72 def get(self, event_id):
73 """List all sponsors"""
74 return DAO.list(event_id)
75
76 @requires_auth
77 @api.doc('create_sponsor', responses=POST_RESPONSES)
78 @api.marshal_with(SPONSOR)
79 @api.expect(SPONSOR_POST)
80 def post(self, event_id):
81 """Create a sponsor"""
82 return DAO.create(
83 event_id,
84 self.api.payload,
85 self.api.url_for(self, event_id=event_id)
86 )
87
88
89 @api.route('/events/<int:event_id>/sponsors/types')
90 class SponsorTypesList(Resource):
91 @api.doc('list_sponsor_types')
92 def get(self, event_id):
93 """List all sponsor types"""
94 return DAO.list_types(event_id)
95
96
97 @api.route('/events/<int:event_id>/sponsors/page')
98 class SponsorListPaginated(Resource, PaginatedResourceBase):
99 @api.doc('list_sponsors_paginated', params=PAGE_PARAMS)
100 @api.marshal_with(SPONSOR_PAGINATED)
101 def get(self, event_id):
102 """List sponsors in a paginated manner"""
103 return get_paginated_list(
104 SponsorModel,
105 self.api.url_for(self, event_id=event_id),
106 args=self.parser.parse_args(),
107 event_id=event_id
108 )
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/open_event/api/sponsors.py b/open_event/api/sponsors.py
--- a/open_event/api/sponsors.py
+++ b/open_event/api/sponsors.py
@@ -88,7 +88,7 @@
@api.route('/events/<int:event_id>/sponsors/types')
class SponsorTypesList(Resource):
- @api.doc('list_sponsor_types')
+ @api.doc('list_sponsor_types', model=[fields.String()])
def get(self, event_id):
"""List all sponsor types"""
return DAO.list_types(event_id)
| {"golden_diff": "diff --git a/open_event/api/sponsors.py b/open_event/api/sponsors.py\n--- a/open_event/api/sponsors.py\n+++ b/open_event/api/sponsors.py\n@@ -88,7 +88,7 @@\n \n @api.route('/events/<int:event_id>/sponsors/types')\n class SponsorTypesList(Resource):\n- @api.doc('list_sponsor_types')\n+ @api.doc('list_sponsor_types', model=[fields.String()])\n def get(self, event_id):\n \"\"\"List all sponsor types\"\"\"\n return DAO.list_types(event_id)\n", "issue": "Show return model of sponsor types list in Swagger spec\nCurrently no return model (or schema) is shown for the GET API to get sponsor types used in a Event\n\n\n\n", "before_files": [{"content": "from flask.ext.restplus import Resource, Namespace\n\nfrom open_event.models.sponsor import Sponsor as SponsorModel\n\nfrom .helpers.helpers import get_paginated_list, requires_auth, get_object_in_event\nfrom .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \\\n PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES\nfrom .helpers import custom_fields as fields\n\napi = Namespace('sponsors', description='Sponsors', path='/')\n\nSPONSOR = api.model('Sponsor', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n 'url': fields.Uri(),\n 'logo': fields.ImageUri(),\n 'description': fields.String(),\n 'level': fields.String(),\n 'sponsor_type': fields.String(),\n})\n\nSPONSOR_PAGINATED = api.clone('SponsorPaginated', PAGINATED_MODEL, {\n 'results': fields.List(fields.Nested(SPONSOR))\n})\n\nSPONSOR_POST = api.clone('SponsorPost', SPONSOR)\ndel SPONSOR_POST['id']\n\n\n# Create DAO\nclass SponsorDAO(ServiceDAO):\n def list_types(self, event_id):\n sponsors = self.list(event_id)\n return list(set(\n sponsor.sponsor_type for sponsor in sponsors\n if sponsor.sponsor_type))\n\n\nDAO = SponsorDAO(SponsorModel, SPONSOR_POST)\n\n\[email protected]('/events/<int:event_id>/sponsors/<int:sponsor_id>')\[email protected](404, 'Sponsor not found')\[email protected](400, 'Sponsor does not belong to event')\nclass Sponsor(Resource):\n @api.doc('get_sponsor')\n @api.marshal_with(SPONSOR)\n def get(self, event_id, sponsor_id):\n \"\"\"Fetch a sponsor given its id\"\"\"\n return DAO.get(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('delete_sponsor')\n @api.marshal_with(SPONSOR)\n def delete(self, event_id, sponsor_id):\n \"\"\"Delete a sponsor given its id\"\"\"\n return DAO.delete(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('update_sponsor', responses=PUT_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def put(self, event_id, sponsor_id):\n \"\"\"Update a sponsor given its id\"\"\"\n return DAO.update(event_id, sponsor_id, self.api.payload)\n\n\[email protected]('/events/<int:event_id>/sponsors')\nclass SponsorList(Resource):\n @api.doc('list_sponsors')\n @api.marshal_list_with(SPONSOR)\n def get(self, event_id):\n \"\"\"List all sponsors\"\"\"\n return DAO.list(event_id)\n\n @requires_auth\n @api.doc('create_sponsor', responses=POST_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def post(self, event_id):\n \"\"\"Create a sponsor\"\"\"\n return DAO.create(\n event_id,\n self.api.payload,\n self.api.url_for(self, event_id=event_id)\n )\n\n\[email protected]('/events/<int:event_id>/sponsors/types')\nclass SponsorTypesList(Resource):\n @api.doc('list_sponsor_types')\n def get(self, event_id):\n \"\"\"List all sponsor types\"\"\"\n return DAO.list_types(event_id)\n\n\[email protected]('/events/<int:event_id>/sponsors/page')\nclass SponsorListPaginated(Resource, 
PaginatedResourceBase):\n @api.doc('list_sponsors_paginated', params=PAGE_PARAMS)\n @api.marshal_with(SPONSOR_PAGINATED)\n def get(self, event_id):\n \"\"\"List sponsors in a paginated manner\"\"\"\n return get_paginated_list(\n SponsorModel,\n self.api.url_for(self, event_id=event_id),\n args=self.parser.parse_args(),\n event_id=event_id\n )\n", "path": "open_event/api/sponsors.py"}], "after_files": [{"content": "from flask.ext.restplus import Resource, Namespace\n\nfrom open_event.models.sponsor import Sponsor as SponsorModel\n\nfrom .helpers.helpers import get_paginated_list, requires_auth, get_object_in_event\nfrom .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \\\n PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES\nfrom .helpers import custom_fields as fields\n\napi = Namespace('sponsors', description='Sponsors', path='/')\n\nSPONSOR = api.model('Sponsor', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n 'url': fields.Uri(),\n 'logo': fields.ImageUri(),\n 'description': fields.String(),\n 'level': fields.String(),\n 'sponsor_type': fields.String(),\n})\n\nSPONSOR_PAGINATED = api.clone('SponsorPaginated', PAGINATED_MODEL, {\n 'results': fields.List(fields.Nested(SPONSOR))\n})\n\nSPONSOR_POST = api.clone('SponsorPost', SPONSOR)\ndel SPONSOR_POST['id']\n\n\n# Create DAO\nclass SponsorDAO(ServiceDAO):\n def list_types(self, event_id):\n sponsors = self.list(event_id)\n return list(set(\n sponsor.sponsor_type for sponsor in sponsors\n if sponsor.sponsor_type))\n\n\nDAO = SponsorDAO(SponsorModel, SPONSOR_POST)\n\n\[email protected]('/events/<int:event_id>/sponsors/<int:sponsor_id>')\[email protected](404, 'Sponsor not found')\[email protected](400, 'Sponsor does not belong to event')\nclass Sponsor(Resource):\n @api.doc('get_sponsor')\n @api.marshal_with(SPONSOR)\n def get(self, event_id, sponsor_id):\n \"\"\"Fetch a sponsor given its id\"\"\"\n return DAO.get(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('delete_sponsor')\n @api.marshal_with(SPONSOR)\n def delete(self, event_id, sponsor_id):\n \"\"\"Delete a sponsor given its id\"\"\"\n return DAO.delete(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('update_sponsor', responses=PUT_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def put(self, event_id, sponsor_id):\n \"\"\"Update a sponsor given its id\"\"\"\n return DAO.update(event_id, sponsor_id, self.api.payload)\n\n\[email protected]('/events/<int:event_id>/sponsors')\nclass SponsorList(Resource):\n @api.doc('list_sponsors')\n @api.marshal_list_with(SPONSOR)\n def get(self, event_id):\n \"\"\"List all sponsors\"\"\"\n return DAO.list(event_id)\n\n @requires_auth\n @api.doc('create_sponsor', responses=POST_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def post(self, event_id):\n \"\"\"Create a sponsor\"\"\"\n return DAO.create(\n event_id,\n self.api.payload,\n self.api.url_for(self, event_id=event_id)\n )\n\n\[email protected]('/events/<int:event_id>/sponsors/types')\nclass SponsorTypesList(Resource):\n @api.doc('list_sponsor_types', model=[fields.String()])\n def get(self, event_id):\n \"\"\"List all sponsor types\"\"\"\n return DAO.list_types(event_id)\n\n\[email protected]('/events/<int:event_id>/sponsors/page')\nclass SponsorListPaginated(Resource, PaginatedResourceBase):\n @api.doc('list_sponsors_paginated', params=PAGE_PARAMS)\n @api.marshal_with(SPONSOR_PAGINATED)\n def get(self, event_id):\n \"\"\"List sponsors in a paginated manner\"\"\"\n return get_paginated_list(\n 
SponsorModel,\n self.api.url_for(self, event_id=event_id),\n args=self.parser.parse_args(),\n event_id=event_id\n )\n", "path": "open_event/api/sponsors.py"}]} | 1,415 | 119 |
gh_patches_debug_14502 | rasdani/github-patches | git_diff | conan-io__conan-3839 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Conan doesn't keep the username used to log in to the server anymore
From conan 1.8,
When authentication is required by the conan server, the username is now always asked for, even though it was specified via `conan user`. In older versions, only the password was required.
To reproduce:
```
$ conan user -c
$ conan user username
Changed user of remote 'server' from 'None' (anonymous) to 'username'
$ conan search -r server *
Please log in to "server" to perform this action. Execute "conan user" command.
Remote 'server' username:
```
To help us debug your issue please explain:
- [X] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [X] I've specified the Conan version, operating system version and any tool that can be relevant.
- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/userio.py`
Content:
```
1 import os
2 import sys
3 from conans.client.output import ConanOutput
4 from conans.errors import InvalidNameException, ConanException
5 import getpass
6 from six.moves import input as raw_input
7
8
9 class UserIO(object):
10 """Class to interact with the user, used to show messages and ask for information"""
11
12 def __init__(self, ins=sys.stdin, out=None):
13 """
14 Params:
15 ins: input stream
16 out: ConanOutput, should have "write" method
17 """
18 self._ins = ins
19 if not out:
20 out = ConanOutput(sys.stdout)
21 self.out = out
22 self._interactive = True
23
24 def disable_input(self):
25 self._interactive = False
26
27 def _raise_if_non_interactive(self):
28 if not self._interactive:
29 raise ConanException("Conan interactive mode disabled")
30
31 def raw_input(self):
32 self._raise_if_non_interactive()
33 return raw_input()
34
35 def get_pass(self):
36 self._raise_if_non_interactive()
37 return getpass.getpass("")
38
39 def request_login(self, remote_name, username=None):
40 """Request user to input their name and password
41 :param username If username is specified it only request password"""
42 if self._interactive:
43 self.out.write("Remote '%s' username: " % remote_name)
44 username = self.get_username(remote_name)
45
46 if self._interactive:
47 self.out.write('Please enter a password for "%s" account: ' % username)
48 try:
49 pwd = self.get_password(remote_name)
50 except ConanException:
51 raise
52 except Exception as e:
53 raise ConanException('Cancelled pass %s' % e)
54 return username, pwd
55
56 def get_username(self, remote_name):
57 """Overridable for testing purpose"""
58 return self._get_env_username(remote_name) or self.raw_input()
59
60 def get_password(self, remote_name):
61 """Overridable for testing purpose"""
62 return self._get_env_password(remote_name) or self.get_pass()
63
64 def request_string(self, msg, default_value=None):
65 """Request user to input a msg
66 :param msg Name of the msg
67 """
68 self._raise_if_non_interactive()
69
70 if default_value:
71 self.out.input_text('%s (%s): ' % (msg, default_value))
72 else:
73 self.out.input_text('%s: ' % msg)
74 s = self._ins.readline().replace("\n", "")
75 if default_value is not None and s == '':
76 return default_value
77 return s
78
79 def request_boolean(self, msg, default_option=None):
80 """Request user to input a boolean"""
81 ret = None
82 while ret is None:
83 if default_option is True:
84 s = self.request_string("%s (YES/no)" % msg)
85 elif default_option is False:
86 s = self.request_string("%s (NO/yes)" % msg)
87 else:
88 s = self.request_string("%s (yes/no)" % msg)
89 if default_option is not None and s == '':
90 return default_option
91 if s.lower() in ['yes', 'y']:
92 ret = True
93 elif s.lower() in ['no', 'n']:
94 ret = False
95 else:
96 self.out.error("%s is not a valid answer" % s)
97 return ret
98
99 def _get_env_password(self, remote_name):
100 """
101 Try CONAN_PASSWORD_REMOTE_NAME or CONAN_PASSWORD or return None
102 """
103 remote_name = remote_name.replace("-", "_").upper()
104 var_name = "CONAN_PASSWORD_%s" % remote_name
105 ret = os.getenv(var_name, None) or os.getenv("CONAN_PASSWORD", None)
106 if ret:
107 self.out.info("Got password '******' from environment")
108 return ret
109
110 def _get_env_username(self, remote_name):
111 """
112 Try CONAN_LOGIN_USERNAME_REMOTE_NAME or CONAN_LOGIN_USERNAME or return None
113 """
114 remote_name = remote_name.replace("-", "_").upper()
115 var_name = "CONAN_LOGIN_USERNAME_%s" % remote_name
116 ret = os.getenv(var_name, None) or os.getenv("CONAN_LOGIN_USERNAME", None)
117
118 if ret:
119 self.out.info("Got username '%s' from environment" % ret)
120 return ret
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conans/client/userio.py b/conans/client/userio.py
--- a/conans/client/userio.py
+++ b/conans/client/userio.py
@@ -39,9 +39,11 @@
def request_login(self, remote_name, username=None):
"""Request user to input their name and password
:param username If username is specified it only request password"""
- if self._interactive:
- self.out.write("Remote '%s' username: " % remote_name)
- username = self.get_username(remote_name)
+
+ if not username:
+ if self._interactive:
+ self.out.write("Remote '%s' username: " % remote_name)
+ username = self.get_username(remote_name)
if self._interactive:
self.out.write('Please enter a password for "%s" account: ' % username)
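A quick, hypothetical sanity check for the patched behaviour (not from the repository's test suite): when a username is already known, `request_login` must only prompt for the password.

```python
from conans.client.userio import UserIO


class StubUserIO(UserIO):
    # Fails loudly if the patched code still re-requests the username.
    def get_username(self, remote_name):
        raise AssertionError("username should not be requested again")

    def get_password(self, remote_name):
        return "secret"


user, pwd = StubUserIO().request_login("server", username="username")
assert (user, pwd) == ("username", "secret")
```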
| {"golden_diff": "diff --git a/conans/client/userio.py b/conans/client/userio.py\n--- a/conans/client/userio.py\n+++ b/conans/client/userio.py\n@@ -39,9 +39,11 @@\n def request_login(self, remote_name, username=None):\n \"\"\"Request user to input their name and password\n :param username If username is specified it only request password\"\"\"\n- if self._interactive:\n- self.out.write(\"Remote '%s' username: \" % remote_name)\n- username = self.get_username(remote_name)\n+\n+ if not username:\n+ if self._interactive:\n+ self.out.write(\"Remote '%s' username: \" % remote_name)\n+ username = self.get_username(remote_name)\n \n if self._interactive:\n self.out.write('Please enter a password for \"%s\" account: ' % username)\n", "issue": "Conan doesn't keep the username to log to server anymore\nFrom conan 1.8,\r\n\r\nWhen an authentication is required by the conan server, the username is now always asked event though it was specified by conan user. In older version, only the password was required.\r\n\r\nTo reproduce:\r\n```\r\n$ conan user -c\r\n$ conan user username\r\nChanged user of remote 'server' from 'None' (anonymous) to 'username'\r\n$ conan search -r server *\r\nPlease log in to \"server\" to perform this action. Execute \"conan user\" command.\r\nRemote 'server' username:\r\n```\r\n\r\nTo help us debug your issue please explain:\r\n\r\n- [X] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).\r\n- [X] I've specified the Conan version, operating system version and any tool that can be relevant.\r\n- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.\r\n\r\n\n", "before_files": [{"content": "import os\nimport sys\nfrom conans.client.output import ConanOutput\nfrom conans.errors import InvalidNameException, ConanException\nimport getpass\nfrom six.moves import input as raw_input\n\n\nclass UserIO(object):\n \"\"\"Class to interact with the user, used to show messages and ask for information\"\"\"\n\n def __init__(self, ins=sys.stdin, out=None):\n \"\"\"\n Params:\n ins: input stream\n out: ConanOutput, should have \"write\" method\n \"\"\"\n self._ins = ins\n if not out:\n out = ConanOutput(sys.stdout)\n self.out = out\n self._interactive = True\n\n def disable_input(self):\n self._interactive = False\n\n def _raise_if_non_interactive(self):\n if not self._interactive:\n raise ConanException(\"Conan interactive mode disabled\")\n\n def raw_input(self):\n self._raise_if_non_interactive()\n return raw_input()\n\n def get_pass(self):\n self._raise_if_non_interactive()\n return getpass.getpass(\"\")\n\n def request_login(self, remote_name, username=None):\n \"\"\"Request user to input their name and password\n :param username If username is specified it only request password\"\"\"\n if self._interactive:\n self.out.write(\"Remote '%s' username: \" % remote_name)\n username = self.get_username(remote_name)\n\n if self._interactive:\n self.out.write('Please enter a password for \"%s\" account: ' % username)\n try:\n pwd = self.get_password(remote_name)\n except ConanException:\n raise\n except Exception as e:\n raise ConanException('Cancelled pass %s' % e)\n return username, pwd\n\n def get_username(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return self._get_env_username(remote_name) or self.raw_input()\n\n def get_password(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return self._get_env_password(remote_name) or 
self.get_pass()\n\n def request_string(self, msg, default_value=None):\n \"\"\"Request user to input a msg\n :param msg Name of the msg\n \"\"\"\n self._raise_if_non_interactive()\n\n if default_value:\n self.out.input_text('%s (%s): ' % (msg, default_value))\n else:\n self.out.input_text('%s: ' % msg)\n s = self._ins.readline().replace(\"\\n\", \"\")\n if default_value is not None and s == '':\n return default_value\n return s\n\n def request_boolean(self, msg, default_option=None):\n \"\"\"Request user to input a boolean\"\"\"\n ret = None\n while ret is None:\n if default_option is True:\n s = self.request_string(\"%s (YES/no)\" % msg)\n elif default_option is False:\n s = self.request_string(\"%s (NO/yes)\" % msg)\n else:\n s = self.request_string(\"%s (yes/no)\" % msg)\n if default_option is not None and s == '':\n return default_option\n if s.lower() in ['yes', 'y']:\n ret = True\n elif s.lower() in ['no', 'n']:\n ret = False\n else:\n self.out.error(\"%s is not a valid answer\" % s)\n return ret\n\n def _get_env_password(self, remote_name):\n \"\"\"\n Try CONAN_PASSWORD_REMOTE_NAME or CONAN_PASSWORD or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_PASSWORD_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_PASSWORD\", None)\n if ret:\n self.out.info(\"Got password '******' from environment\")\n return ret\n\n def _get_env_username(self, remote_name):\n \"\"\"\n Try CONAN_LOGIN_USERNAME_REMOTE_NAME or CONAN_LOGIN_USERNAME or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_LOGIN_USERNAME_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_LOGIN_USERNAME\", None)\n\n if ret:\n self.out.info(\"Got username '%s' from environment\" % ret)\n return ret\n", "path": "conans/client/userio.py"}], "after_files": [{"content": "import os\nimport sys\nfrom conans.client.output import ConanOutput\nfrom conans.errors import InvalidNameException, ConanException\nimport getpass\nfrom six.moves import input as raw_input\n\n\nclass UserIO(object):\n \"\"\"Class to interact with the user, used to show messages and ask for information\"\"\"\n\n def __init__(self, ins=sys.stdin, out=None):\n \"\"\"\n Params:\n ins: input stream\n out: ConanOutput, should have \"write\" method\n \"\"\"\n self._ins = ins\n if not out:\n out = ConanOutput(sys.stdout)\n self.out = out\n self._interactive = True\n\n def disable_input(self):\n self._interactive = False\n\n def _raise_if_non_interactive(self):\n if not self._interactive:\n raise ConanException(\"Conan interactive mode disabled\")\n\n def raw_input(self):\n self._raise_if_non_interactive()\n return raw_input()\n\n def get_pass(self):\n self._raise_if_non_interactive()\n return getpass.getpass(\"\")\n\n def request_login(self, remote_name, username=None):\n \"\"\"Request user to input their name and password\n :param username If username is specified it only request password\"\"\"\n\n if not username:\n if self._interactive:\n self.out.write(\"Remote '%s' username: \" % remote_name)\n username = self.get_username(remote_name)\n\n if self._interactive:\n self.out.write('Please enter a password for \"%s\" account: ' % username)\n try:\n pwd = self.get_password(remote_name)\n except ConanException:\n raise\n except Exception as e:\n raise ConanException('Cancelled pass %s' % e)\n return username, pwd\n\n def get_username(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return 
self._get_env_username(remote_name) or self.raw_input()\n\n def get_password(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return self._get_env_password(remote_name) or self.get_pass()\n\n def request_string(self, msg, default_value=None):\n \"\"\"Request user to input a msg\n :param msg Name of the msg\n \"\"\"\n self._raise_if_non_interactive()\n\n if default_value:\n self.out.input_text('%s (%s): ' % (msg, default_value))\n else:\n self.out.input_text('%s: ' % msg)\n s = self._ins.readline().replace(\"\\n\", \"\")\n if default_value is not None and s == '':\n return default_value\n return s\n\n def request_boolean(self, msg, default_option=None):\n \"\"\"Request user to input a boolean\"\"\"\n ret = None\n while ret is None:\n if default_option is True:\n s = self.request_string(\"%s (YES/no)\" % msg)\n elif default_option is False:\n s = self.request_string(\"%s (NO/yes)\" % msg)\n else:\n s = self.request_string(\"%s (yes/no)\" % msg)\n if default_option is not None and s == '':\n return default_option\n if s.lower() in ['yes', 'y']:\n ret = True\n elif s.lower() in ['no', 'n']:\n ret = False\n else:\n self.out.error(\"%s is not a valid answer\" % s)\n return ret\n\n def _get_env_password(self, remote_name):\n \"\"\"\n Try CONAN_PASSWORD_REMOTE_NAME or CONAN_PASSWORD or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_PASSWORD_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_PASSWORD\", None)\n if ret:\n self.out.info(\"Got password '******' from environment\")\n return ret\n\n def _get_env_username(self, remote_name):\n \"\"\"\n Try CONAN_LOGIN_USERNAME_REMOTE_NAME or CONAN_LOGIN_USERNAME or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_LOGIN_USERNAME_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_LOGIN_USERNAME\", None)\n\n if ret:\n self.out.info(\"Got username '%s' from environment\" % ret)\n return ret\n", "path": "conans/client/userio.py"}]} | 1,659 | 186 |
gh_patches_debug_3843 | rasdani/github-patches | git_diff | jazzband__pip-tools-1105 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
--upgrade-package downgrades unrelated pre-release package when --pre not given
<!-- Describe the issue briefly here. -->
#### Environment Versions
1. OS Type: macOS 10.15.4
1. Python version: 3.7.7
1. pip version: 20.0.2
1. pip-tools version: 4.5.1
#### Steps to replicate
(Note: this example will stop working when `gevent` releases 1.5 final but it can be replicated with any other package that currently has a pre-release version.)
1. Example `req.in` file:
```
click<7
gevent
```
2. `pip-compile req.in`
Output:
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile req.in
#
click==6.7 # via -r req.in
gevent==1.4.0 # via -r req.in
greenlet==0.4.15 # via gevent
```
3. Upgrade gevent to pre-relese
`pip-compile --pre --upgrade-package gevent req.in`
Output:
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --pre req.in
#
click==6.7 # via -r req.in
gevent==1.5a4 # via -r req.in
greenlet==0.4.15 # via gevent
```
4. Remove version pin of `click` in `.in` file:
```
click
gevent
```
5. Upgrade click:
`pip-compile --upgrade-package click req.in`
Output:
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile req.in
#
click==6.7 # via -r req.in
gevent==1.4.0 # via -r req.in
greenlet==0.4.15 # via gevent
```
#### Expected result
Once a package has been resolved to a pre-release version it should never "magically" be downgraded, especially when the change only concerns unrelated packages.
I could see that there may be an argument for a plain `pip-compile` run to revert to the non-prerelease version, but I would disagree even there. But for `--upgrade-package` I see no way where this is correct behaviour.
#### Actual result
Not giving `--pre` at any time after it has been used once and a package is resolved to a pre-release version will downgrade it back to the last released version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/repositories/local.py`
Content:
```
1 # coding: utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from contextlib import contextmanager
5
6 from pip._internal.utils.hashes import FAVORITE_HASH
7
8 from .._compat import PIP_VERSION
9 from .base import BaseRepository
10
11 from piptools.utils import as_tuple, key_from_ireq, make_install_requirement
12
13
14 def ireq_satisfied_by_existing_pin(ireq, existing_pin):
15 """
16 Return True if the given InstallationRequirement is satisfied by the
17 previously encountered version pin.
18 """
19 version = next(iter(existing_pin.req.specifier)).version
20 return version in ireq.req.specifier
21
22
23 class LocalRequirementsRepository(BaseRepository):
24 """
25 The LocalRequirementsRepository proxied the _real_ repository by first
26 checking if a requirement can be satisfied by existing pins (i.e. the
27 result of a previous compile step).
28
29 In effect, if a requirement can be satisfied with a version pinned in the
30 requirements file, we prefer that version over the best match found in
31 PyPI. This keeps updates to the requirements.txt down to a minimum.
32 """
33
34 def __init__(self, existing_pins, proxied_repository):
35 self.repository = proxied_repository
36 self.existing_pins = existing_pins
37
38 @property
39 def options(self):
40 return self.repository.options
41
42 @property
43 def finder(self):
44 return self.repository.finder
45
46 @property
47 def session(self):
48 return self.repository.session
49
50 @property
51 def DEFAULT_INDEX_URL(self):
52 return self.repository.DEFAULT_INDEX_URL
53
54 def clear_caches(self):
55 self.repository.clear_caches()
56
57 def freshen_build_caches(self):
58 self.repository.freshen_build_caches()
59
60 def find_best_match(self, ireq, prereleases=None):
61 key = key_from_ireq(ireq)
62 existing_pin = self.existing_pins.get(key)
63 if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):
64 project, version, _ = as_tuple(existing_pin)
65 return make_install_requirement(
66 project, version, ireq.extras, constraint=ireq.constraint
67 )
68 else:
69 return self.repository.find_best_match(ireq, prereleases)
70
71 def get_dependencies(self, ireq):
72 return self.repository.get_dependencies(ireq)
73
74 def get_hashes(self, ireq):
75 key = key_from_ireq(ireq)
76 existing_pin = self.existing_pins.get(key)
77 if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):
78 if PIP_VERSION[:2] <= (20, 0):
79 hashes = existing_pin.options.get("hashes", {})
80 else:
81 hashes = existing_pin.hash_options
82 hexdigests = hashes.get(FAVORITE_HASH)
83 if hexdigests:
84 return {
85 ":".join([FAVORITE_HASH, hexdigest]) for hexdigest in hexdigests
86 }
87 return self.repository.get_hashes(ireq)
88
89 @contextmanager
90 def allow_all_wheels(self):
91 with self.repository.allow_all_wheels():
92 yield
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/piptools/repositories/local.py b/piptools/repositories/local.py
--- a/piptools/repositories/local.py
+++ b/piptools/repositories/local.py
@@ -17,7 +17,9 @@
previously encountered version pin.
"""
version = next(iter(existing_pin.req.specifier)).version
- return version in ireq.req.specifier
+ return ireq.req.specifier.contains(
+ version, prereleases=existing_pin.req.specifier.prereleases
+ )
class LocalRequirementsRepository(BaseRepository):
| {"golden_diff": "diff --git a/piptools/repositories/local.py b/piptools/repositories/local.py\n--- a/piptools/repositories/local.py\n+++ b/piptools/repositories/local.py\n@@ -17,7 +17,9 @@\n previously encountered version pin.\n \"\"\"\n version = next(iter(existing_pin.req.specifier)).version\n- return version in ireq.req.specifier\n+ return ireq.req.specifier.contains(\n+ version, prereleases=existing_pin.req.specifier.prereleases\n+ )\n \n \n class LocalRequirementsRepository(BaseRepository):\n", "issue": "--upgrade-package downgrades unrelated pre-release package when --pre not given\n<!-- Describe the issue briefly here. -->\r\n\r\n#### Environment Versions\r\n\r\n1. OS Type: macOS 10.15.4\r\n1. Python version: 3.7.7\r\n1. pip version: 20.0.2\r\n1. pip-tools version: 4.5.1\r\n\r\n#### Steps to replicate\r\n\r\n(Note: this example will stop working when `gevent` releases 1.5 final but it can be replicated with any other package that currently has a pre-release version.)\r\n\r\n1. Example `req.in` file:\r\n ```\r\n click<7\r\n gevent\r\n ```\r\n2. `pip-compile req.in`\r\n Output:\r\n ```\r\n #\r\n # This file is autogenerated by pip-compile\r\n # To update, run:\r\n #\r\n # pip-compile req.in\r\n #\r\n click==6.7 # via -r req.in\r\n gevent==1.4.0 # via -r req.in\r\n greenlet==0.4.15 # via gevent\r\n ```\r\n3. Upgrade gevent to pre-relese\r\n `pip-compile --pre --upgrade-package gevent req.in`\r\n Output:\r\n ```\r\n #\r\n # This file is autogenerated by pip-compile\r\n # To update, run:\r\n #\r\n # pip-compile --pre req.in\r\n #\r\n click==6.7 # via -r req.in\r\n gevent==1.5a4 # via -r req.in\r\n greenlet==0.4.15 # via gevent\r\n ```\r\n4. Remove version pin of `click` in `.in` file:\r\n ```\r\n click\r\n gevent\r\n ```\r\n5. Upgrade click:\r\n `pip-compile --upgrade-package click req.in`\r\n Output:\r\n ```\r\n #\r\n # This file is autogenerated by pip-compile\r\n # To update, run:\r\n #\r\n # pip-compile req.in\r\n #\r\n click==6.7 # via -r req.in\r\n gevent==1.4.0 # via -r req.in\r\n greenlet==0.4.15 # via gevent\r\n ```\r\n\r\n#### Expected result\r\n\r\nOnce a package has been resolved to a pre-release version it should never \"magically\" be downgraded. Especially if only unrelated other packages are concerned.\r\n\r\nI could see that there may be an argument for a plain `pip-compile` run to revert to the non-prerelease version, but I would disagree even there. 
But for `--upgrade-package` I see no way where this is correct behaviour.\r\n\r\n#### Actual result\r\n\r\nNot giving `--pre` at any time after it has been used once and a package is resolved to a pre-release version will downgrade it back to the last released version.\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom contextlib import contextmanager\n\nfrom pip._internal.utils.hashes import FAVORITE_HASH\n\nfrom .._compat import PIP_VERSION\nfrom .base import BaseRepository\n\nfrom piptools.utils import as_tuple, key_from_ireq, make_install_requirement\n\n\ndef ireq_satisfied_by_existing_pin(ireq, existing_pin):\n \"\"\"\n Return True if the given InstallationRequirement is satisfied by the\n previously encountered version pin.\n \"\"\"\n version = next(iter(existing_pin.req.specifier)).version\n return version in ireq.req.specifier\n\n\nclass LocalRequirementsRepository(BaseRepository):\n \"\"\"\n The LocalRequirementsRepository proxied the _real_ repository by first\n checking if a requirement can be satisfied by existing pins (i.e. the\n result of a previous compile step).\n\n In effect, if a requirement can be satisfied with a version pinned in the\n requirements file, we prefer that version over the best match found in\n PyPI. This keeps updates to the requirements.txt down to a minimum.\n \"\"\"\n\n def __init__(self, existing_pins, proxied_repository):\n self.repository = proxied_repository\n self.existing_pins = existing_pins\n\n @property\n def options(self):\n return self.repository.options\n\n @property\n def finder(self):\n return self.repository.finder\n\n @property\n def session(self):\n return self.repository.session\n\n @property\n def DEFAULT_INDEX_URL(self):\n return self.repository.DEFAULT_INDEX_URL\n\n def clear_caches(self):\n self.repository.clear_caches()\n\n def freshen_build_caches(self):\n self.repository.freshen_build_caches()\n\n def find_best_match(self, ireq, prereleases=None):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n project, version, _ = as_tuple(existing_pin)\n return make_install_requirement(\n project, version, ireq.extras, constraint=ireq.constraint\n )\n else:\n return self.repository.find_best_match(ireq, prereleases)\n\n def get_dependencies(self, ireq):\n return self.repository.get_dependencies(ireq)\n\n def get_hashes(self, ireq):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n if PIP_VERSION[:2] <= (20, 0):\n hashes = existing_pin.options.get(\"hashes\", {})\n else:\n hashes = existing_pin.hash_options\n hexdigests = hashes.get(FAVORITE_HASH)\n if hexdigests:\n return {\n \":\".join([FAVORITE_HASH, hexdigest]) for hexdigest in hexdigests\n }\n return self.repository.get_hashes(ireq)\n\n @contextmanager\n def allow_all_wheels(self):\n with self.repository.allow_all_wheels():\n yield\n", "path": "piptools/repositories/local.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom contextlib import contextmanager\n\nfrom pip._internal.utils.hashes import FAVORITE_HASH\n\nfrom .._compat import PIP_VERSION\nfrom .base import BaseRepository\n\nfrom piptools.utils import as_tuple, key_from_ireq, make_install_requirement\n\n\ndef ireq_satisfied_by_existing_pin(ireq, 
existing_pin):\n \"\"\"\n Return True if the given InstallationRequirement is satisfied by the\n previously encountered version pin.\n \"\"\"\n version = next(iter(existing_pin.req.specifier)).version\n return ireq.req.specifier.contains(\n version, prereleases=existing_pin.req.specifier.prereleases\n )\n\n\nclass LocalRequirementsRepository(BaseRepository):\n \"\"\"\n The LocalRequirementsRepository proxied the _real_ repository by first\n checking if a requirement can be satisfied by existing pins (i.e. the\n result of a previous compile step).\n\n In effect, if a requirement can be satisfied with a version pinned in the\n requirements file, we prefer that version over the best match found in\n PyPI. This keeps updates to the requirements.txt down to a minimum.\n \"\"\"\n\n def __init__(self, existing_pins, proxied_repository):\n self.repository = proxied_repository\n self.existing_pins = existing_pins\n\n @property\n def options(self):\n return self.repository.options\n\n @property\n def finder(self):\n return self.repository.finder\n\n @property\n def session(self):\n return self.repository.session\n\n @property\n def DEFAULT_INDEX_URL(self):\n return self.repository.DEFAULT_INDEX_URL\n\n def clear_caches(self):\n self.repository.clear_caches()\n\n def freshen_build_caches(self):\n self.repository.freshen_build_caches()\n\n def find_best_match(self, ireq, prereleases=None):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n project, version, _ = as_tuple(existing_pin)\n return make_install_requirement(\n project, version, ireq.extras, constraint=ireq.constraint\n )\n else:\n return self.repository.find_best_match(ireq, prereleases)\n\n def get_dependencies(self, ireq):\n return self.repository.get_dependencies(ireq)\n\n def get_hashes(self, ireq):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n if PIP_VERSION[:2] <= (20, 0):\n hashes = existing_pin.options.get(\"hashes\", {})\n else:\n hashes = existing_pin.hash_options\n hexdigests = hashes.get(FAVORITE_HASH)\n if hexdigests:\n return {\n \":\".join([FAVORITE_HASH, hexdigest]) for hexdigest in hexdigests\n }\n return self.repository.get_hashes(ireq)\n\n @contextmanager\n def allow_all_wheels(self):\n with self.repository.allow_all_wheels():\n yield\n", "path": "piptools/repositories/local.py"}]} | 1,749 | 121 |
gh_patches_debug_60613 | rasdani/github-patches | git_diff | cloudtools__troposphere-552 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support AutoScalingCreationPolicy
From the docs, this is a top-level property of a [CreationPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html#cfn-attributes-creationpolicy-properties). It is used with the [AutoScalingReplacingUpdate](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-replacingupdate) update policy to specify the MinSuccessfulInstancesPercent property.
The docs have a good example of this:
``` json
"UpdatePolicy" : {
"AutoScalingReplacingUpdate" : {
"WillReplace" : "true"
},
"CreationPolicy" : {
"ResourceSignal" : {
"Count" : { "Ref" : "ResourceSignalsOnCreate"},
"Timeout" : "PT10M"
},
"AutoScalingCreationPolicy" : {
"MinSuccessfulInstancesPercent" : { "Ref" : "MinSuccessfulPercentParameter" }
}
}
```
I might take a crack at this but I figured I'd file an issue first if only so that I can reference it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/policies.py`
Content:
```
1 from . import AWSProperty, AWSAttribute, validate_pausetime
2 from .validators import positive_integer, integer, boolean
3
4
5 class AutoScalingRollingUpdate(AWSProperty):
6 props = {
7 'MaxBatchSize': (positive_integer, False),
8 'MinInstancesInService': (integer, False),
9 'MinSuccessfulInstancesPercent': (integer, False),
10 'PauseTime': (validate_pausetime, False),
11 'SuspendProcesses': ([basestring], False),
12 'WaitOnResourceSignals': (boolean, False),
13 }
14
15
16 class AutoScalingScheduledAction(AWSProperty):
17 props = {
18 'IgnoreUnmodifiedGroupSizeProperties': (boolean, False),
19 }
20
21
22 class AutoScalingReplacingUpdate(AWSProperty):
23 props = {
24 'WillReplace': (boolean, False),
25 }
26
27
28 class UpdatePolicy(AWSAttribute):
29 props = {
30 'AutoScalingRollingUpdate': (AutoScalingRollingUpdate, False),
31 'AutoScalingScheduledAction': (AutoScalingScheduledAction, False),
32 'AutoScalingReplacingUpdate': (AutoScalingReplacingUpdate, False),
33 }
34
35
36 class ResourceSignal(AWSProperty):
37 props = {
38 'Count': (positive_integer, False),
39 'Timeout': (validate_pausetime, False),
40 }
41
42
43 class CreationPolicy(AWSAttribute):
44 props = {
45 'ResourceSignal': (ResourceSignal, True),
46 }
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/policies.py b/troposphere/policies.py
--- a/troposphere/policies.py
+++ b/troposphere/policies.py
@@ -40,7 +40,14 @@
}
+class AutoScalingCreationPolicy(AWSProperty):
+ props = {
+ 'MinSuccessfulInstancesPercent': (integer, False),
+ }
+
+
class CreationPolicy(AWSAttribute):
props = {
+ 'AutoScalingCreationPolicy': (AutoScalingCreationPolicy, False),
'ResourceSignal': (ResourceSignal, True),
}
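With the patch applied, the issue's CloudFormation snippet maps onto troposphere roughly like this (a sketch with placeholder values):

```python
from troposphere.policies import (
    AutoScalingCreationPolicy,
    CreationPolicy,
    ResourceSignal,
)

# Mirrors the CreationPolicy block from the issue's JSON example.
creation_policy = CreationPolicy(
    ResourceSignal=ResourceSignal(Count=3, Timeout="PT10M"),
    AutoScalingCreationPolicy=AutoScalingCreationPolicy(
        MinSuccessfulInstancesPercent=80,
    ),
)
```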
| {"golden_diff": "diff --git a/troposphere/policies.py b/troposphere/policies.py\n--- a/troposphere/policies.py\n+++ b/troposphere/policies.py\n@@ -40,7 +40,14 @@\n }\n \n \n+class AutoScalingCreationPolicy(AWSProperty):\n+ props = {\n+ 'MinSuccessfulInstancesPercent': (integer, False),\n+ }\n+\n+\n class CreationPolicy(AWSAttribute):\n props = {\n+ 'AutoScalingCreationPolicy': (AutoScalingCreationPolicy, False),\n 'ResourceSignal': (ResourceSignal, True),\n }\n", "issue": "Support AutoScalingCreationPolicy\nFrom the docs, this is a top-level property of a [CreationPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html#cfn-attributes-creationpolicy-properties). It is used for the [AutoScalingReplacingPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-replacingupdate) to specify the MinSuccessfulInstancesPercent property.\n\nThe docs have a good example of this:\n\n``` json\n\"UpdatePolicy\" : {\n \"AutoScalingReplacingUpdate\" : {\n \"WillReplace\" : \"true\"\n },\n\"CreationPolicy\" : {\n \"ResourceSignal\" : {\n \"Count\" : { \"Ref\" : \"ResourceSignalsOnCreate\"},\n \"Timeout\" : \"PT10M\"\n },\n \"AutoScalingCreationPolicy\" : {\n \"MinSuccessfulInstancesPercent\" : { \"Ref\" : \"MinSuccessfulPercentParameter\" }\n }\n}\n```\n\nI might take a crack at this but I figured I'd file an issue first if only so that I can reference it.\n\n", "before_files": [{"content": "from . import AWSProperty, AWSAttribute, validate_pausetime\nfrom .validators import positive_integer, integer, boolean\n\n\nclass AutoScalingRollingUpdate(AWSProperty):\n props = {\n 'MaxBatchSize': (positive_integer, False),\n 'MinInstancesInService': (integer, False),\n 'MinSuccessfulInstancesPercent': (integer, False),\n 'PauseTime': (validate_pausetime, False),\n 'SuspendProcesses': ([basestring], False),\n 'WaitOnResourceSignals': (boolean, False),\n }\n\n\nclass AutoScalingScheduledAction(AWSProperty):\n props = {\n 'IgnoreUnmodifiedGroupSizeProperties': (boolean, False),\n }\n\n\nclass AutoScalingReplacingUpdate(AWSProperty):\n props = {\n 'WillReplace': (boolean, False),\n }\n\n\nclass UpdatePolicy(AWSAttribute):\n props = {\n 'AutoScalingRollingUpdate': (AutoScalingRollingUpdate, False),\n 'AutoScalingScheduledAction': (AutoScalingScheduledAction, False),\n 'AutoScalingReplacingUpdate': (AutoScalingReplacingUpdate, False),\n }\n\n\nclass ResourceSignal(AWSProperty):\n props = {\n 'Count': (positive_integer, False),\n 'Timeout': (validate_pausetime, False),\n }\n\n\nclass CreationPolicy(AWSAttribute):\n props = {\n 'ResourceSignal': (ResourceSignal, True),\n }\n", "path": "troposphere/policies.py"}], "after_files": [{"content": "from . 
import AWSProperty, AWSAttribute, validate_pausetime\nfrom .validators import positive_integer, integer, boolean\n\n\nclass AutoScalingRollingUpdate(AWSProperty):\n props = {\n 'MaxBatchSize': (positive_integer, False),\n 'MinInstancesInService': (integer, False),\n 'MinSuccessfulInstancesPercent': (integer, False),\n 'PauseTime': (validate_pausetime, False),\n 'SuspendProcesses': ([basestring], False),\n 'WaitOnResourceSignals': (boolean, False),\n }\n\n\nclass AutoScalingScheduledAction(AWSProperty):\n props = {\n 'IgnoreUnmodifiedGroupSizeProperties': (boolean, False),\n }\n\n\nclass AutoScalingReplacingUpdate(AWSProperty):\n props = {\n 'WillReplace': (boolean, False),\n }\n\n\nclass UpdatePolicy(AWSAttribute):\n props = {\n 'AutoScalingRollingUpdate': (AutoScalingRollingUpdate, False),\n 'AutoScalingScheduledAction': (AutoScalingScheduledAction, False),\n 'AutoScalingReplacingUpdate': (AutoScalingReplacingUpdate, False),\n }\n\n\nclass ResourceSignal(AWSProperty):\n props = {\n 'Count': (positive_integer, False),\n 'Timeout': (validate_pausetime, False),\n }\n\n\nclass AutoScalingCreationPolicy(AWSProperty):\n props = {\n 'MinSuccessfulInstancesPercent': (integer, False),\n }\n\n\nclass CreationPolicy(AWSAttribute):\n props = {\n 'AutoScalingCreationPolicy': (AutoScalingCreationPolicy, False),\n 'ResourceSignal': (ResourceSignal, True),\n }\n", "path": "troposphere/policies.py"}]} | 883 | 125 |
gh_patches_debug_30995 | rasdani/github-patches | git_diff | pypa__pip-4224 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip search picks an older version if the returned list of versions is not ordered
* Pip version: 9.0.1
* Python version: 2.7
* Operating System: Ubuntu/CentOS
### Description:
For a list of versions returned by a local PyPI server that is ill-ordered, like
```[{...'versions': ['1.0.249', '1.0.251', '1.0.250'], 'name':...}...]```
search picks the last element of the version list, assuming it is the highest.
```version = hit.get('versions', ['-'])[-1]```
at https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L107 and https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L99
Rather it should do something like
```version = highest_version(hit.get('versions', ['-']))```
--- END ISSUE ---
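To see the failure mode in isolation before reading the code, compare the two picks on the issue's version list (a sketch using the `packaging` library directly):

```python
from packaging.version import parse as parse_version

versions = ['1.0.249', '1.0.251', '1.0.250']

assert versions[-1] == '1.0.250'                       # current pick: last element
assert max(versions, key=parse_version) == '1.0.251'   # highest_version() pick
```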
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/commands/search.py`
Content:
```
1 from __future__ import absolute_import
2
3 import logging
4 import sys
5 import textwrap
6
7 from pip.basecommand import Command, SUCCESS
8 from pip.compat import OrderedDict
9 from pip.download import PipXmlrpcTransport
10 from pip.models import PyPI
11 from pip.utils import get_terminal_size
12 from pip.utils.logging import indent_log
13 from pip.exceptions import CommandError
14 from pip.status_codes import NO_MATCHES_FOUND
15 from pip._vendor.packaging.version import parse as parse_version
16 from pip._vendor import pkg_resources
17 from pip._vendor.six.moves import xmlrpc_client
18
19
20 logger = logging.getLogger(__name__)
21
22
23 class SearchCommand(Command):
24 """Search for PyPI packages whose name or summary contains <query>."""
25 name = 'search'
26 usage = """
27 %prog [options] <query>"""
28 summary = 'Search PyPI for packages.'
29
30 def __init__(self, *args, **kw):
31 super(SearchCommand, self).__init__(*args, **kw)
32 self.cmd_opts.add_option(
33 '-i', '--index',
34 dest='index',
35 metavar='URL',
36 default=PyPI.pypi_url,
37 help='Base URL of Python Package Index (default %default)')
38
39 self.parser.insert_option_group(0, self.cmd_opts)
40
41 def run(self, options, args):
42 if not args:
43 raise CommandError('Missing required argument (search query).')
44 query = args
45 pypi_hits = self.search(query, options)
46 hits = transform_hits(pypi_hits)
47
48 terminal_width = None
49 if sys.stdout.isatty():
50 terminal_width = get_terminal_size()[0]
51
52 print_results(hits, terminal_width=terminal_width)
53 if pypi_hits:
54 return SUCCESS
55 return NO_MATCHES_FOUND
56
57 def search(self, query, options):
58 index_url = options.index
59 with self._build_session(options) as session:
60 transport = PipXmlrpcTransport(index_url, session)
61 pypi = xmlrpc_client.ServerProxy(index_url, transport)
62 hits = pypi.search({'name': query, 'summary': query}, 'or')
63 return hits
64
65
66 def transform_hits(hits):
67 """
68 The list from pypi is really a list of versions. We want a list of
69 packages with the list of versions stored inline. This converts the
70 list from pypi into one we can use.
71 """
72 packages = OrderedDict()
73 for hit in hits:
74 name = hit['name']
75 summary = hit['summary']
76 version = hit['version']
77
78 if name not in packages.keys():
79 packages[name] = {
80 'name': name,
81 'summary': summary,
82 'versions': [version],
83 }
84 else:
85 packages[name]['versions'].append(version)
86
87 # if this is the highest version, replace summary and score
88 if version == highest_version(packages[name]['versions']):
89 packages[name]['summary'] = summary
90
91 return list(packages.values())
92
93
94 def print_results(hits, name_column_width=None, terminal_width=None):
95 if not hits:
96 return
97 if name_column_width is None:
98 name_column_width = max([
99 len(hit['name']) + len(hit.get('versions', ['-'])[-1])
100 for hit in hits
101 ]) + 4
102
103 installed_packages = [p.project_name for p in pkg_resources.working_set]
104 for hit in hits:
105 name = hit['name']
106 summary = hit['summary'] or ''
107 version = hit.get('versions', ['-'])[-1]
108 if terminal_width is not None:
109 target_width = terminal_width - name_column_width - 5
110 if target_width > 10:
111 # wrap and indent summary to fit terminal
112 summary = textwrap.wrap(summary, target_width)
113 summary = ('\n' + ' ' * (name_column_width + 3)).join(summary)
114
115 line = '%-*s - %s' % (name_column_width,
116 '%s (%s)' % (name, version), summary)
117 try:
118 logger.info(line)
119 if name in installed_packages:
120 dist = pkg_resources.get_distribution(name)
121 with indent_log():
122 latest = highest_version(hit['versions'])
123 if dist.version == latest:
124 logger.info('INSTALLED: %s (latest)', dist.version)
125 else:
126 logger.info('INSTALLED: %s', dist.version)
127 logger.info('LATEST: %s', latest)
128 except UnicodeEncodeError:
129 pass
130
131
132 def highest_version(versions):
133 return max(versions, key=parse_version)
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pip/commands/search.py b/pip/commands/search.py
--- a/pip/commands/search.py
+++ b/pip/commands/search.py
@@ -96,7 +96,7 @@
return
if name_column_width is None:
name_column_width = max([
- len(hit['name']) + len(hit.get('versions', ['-'])[-1])
+ len(hit['name']) + len(highest_version(hit.get('versions', ['-'])))
for hit in hits
]) + 4
@@ -104,7 +104,7 @@
for hit in hits:
name = hit['name']
summary = hit['summary'] or ''
- version = hit.get('versions', ['-'])[-1]
+ latest = highest_version(hit.get('versions', ['-']))
if terminal_width is not None:
target_width = terminal_width - name_column_width - 5
if target_width > 10:
@@ -113,13 +113,12 @@
summary = ('\n' + ' ' * (name_column_width + 3)).join(summary)
line = '%-*s - %s' % (name_column_width,
- '%s (%s)' % (name, version), summary)
+ '%s (%s)' % (name, latest), summary)
try:
logger.info(line)
if name in installed_packages:
dist = pkg_resources.get_distribution(name)
with indent_log():
- latest = highest_version(hit['versions'])
if dist.version == latest:
logger.info('INSTALLED: %s (latest)', dist.version)
else:
| {"golden_diff": "diff --git a/pip/commands/search.py b/pip/commands/search.py\n--- a/pip/commands/search.py\n+++ b/pip/commands/search.py\n@@ -96,7 +96,7 @@\n return\n if name_column_width is None:\n name_column_width = max([\n- len(hit['name']) + len(hit.get('versions', ['-'])[-1])\n+ len(hit['name']) + len(highest_version(hit.get('versions', ['-'])))\n for hit in hits\n ]) + 4\n \n@@ -104,7 +104,7 @@\n for hit in hits:\n name = hit['name']\n summary = hit['summary'] or ''\n- version = hit.get('versions', ['-'])[-1]\n+ latest = highest_version(hit.get('versions', ['-']))\n if terminal_width is not None:\n target_width = terminal_width - name_column_width - 5\n if target_width > 10:\n@@ -113,13 +113,12 @@\n summary = ('\\n' + ' ' * (name_column_width + 3)).join(summary)\n \n line = '%-*s - %s' % (name_column_width,\n- '%s (%s)' % (name, version), summary)\n+ '%s (%s)' % (name, latest), summary)\n try:\n logger.info(line)\n if name in installed_packages:\n dist = pkg_resources.get_distribution(name)\n with indent_log():\n- latest = highest_version(hit['versions'])\n if dist.version == latest:\n logger.info('INSTALLED: %s (latest)', dist.version)\n else:\n", "issue": "pip search picks older version if returned list of versions are not ordered\n* Pip version: 9.0.1\r\n* Python version: 2.7\r\n* Operating System: Ubuntu/CentOS\r\n\r\n### Description:\r\n\r\nFor a list of versions returned by local pypi server that was ill-ordered like\r\n```[{...'versions': ['1.0.249', '1.0.251', '1.0.250'], 'name':...}...]```\r\n\r\nsearch picks the top element among all the versions returned to it.\r\n```version = hit.get('versions', ['-'])[-1]```\r\n at https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L107 and https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L99\r\n\r\nRather it should do something like\r\n```version = highest_version(hit.get('versions', ['-']))```\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport sys\nimport textwrap\n\nfrom pip.basecommand import Command, SUCCESS\nfrom pip.compat import OrderedDict\nfrom pip.download import PipXmlrpcTransport\nfrom pip.models import PyPI\nfrom pip.utils import get_terminal_size\nfrom pip.utils.logging import indent_log\nfrom pip.exceptions import CommandError\nfrom pip.status_codes import NO_MATCHES_FOUND\nfrom pip._vendor.packaging.version import parse as parse_version\nfrom pip._vendor import pkg_resources\nfrom pip._vendor.six.moves import xmlrpc_client\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass SearchCommand(Command):\n \"\"\"Search for PyPI packages whose name or summary contains <query>.\"\"\"\n name = 'search'\n usage = \"\"\"\n %prog [options] <query>\"\"\"\n summary = 'Search PyPI for packages.'\n\n def __init__(self, *args, **kw):\n super(SearchCommand, self).__init__(*args, **kw)\n self.cmd_opts.add_option(\n '-i', '--index',\n dest='index',\n metavar='URL',\n default=PyPI.pypi_url,\n help='Base URL of Python Package Index (default %default)')\n\n self.parser.insert_option_group(0, self.cmd_opts)\n\n def run(self, options, args):\n if not args:\n raise CommandError('Missing required argument (search query).')\n query = args\n pypi_hits = self.search(query, options)\n hits = transform_hits(pypi_hits)\n\n terminal_width = None\n if sys.stdout.isatty():\n terminal_width = get_terminal_size()[0]\n\n print_results(hits, terminal_width=terminal_width)\n if pypi_hits:\n return SUCCESS\n return NO_MATCHES_FOUND\n\n def search(self, query, options):\n 
index_url = options.index\n with self._build_session(options) as session:\n transport = PipXmlrpcTransport(index_url, session)\n pypi = xmlrpc_client.ServerProxy(index_url, transport)\n hits = pypi.search({'name': query, 'summary': query}, 'or')\n return hits\n\n\ndef transform_hits(hits):\n \"\"\"\n The list from pypi is really a list of versions. We want a list of\n packages with the list of versions stored inline. This converts the\n list from pypi into one we can use.\n \"\"\"\n packages = OrderedDict()\n for hit in hits:\n name = hit['name']\n summary = hit['summary']\n version = hit['version']\n\n if name not in packages.keys():\n packages[name] = {\n 'name': name,\n 'summary': summary,\n 'versions': [version],\n }\n else:\n packages[name]['versions'].append(version)\n\n # if this is the highest version, replace summary and score\n if version == highest_version(packages[name]['versions']):\n packages[name]['summary'] = summary\n\n return list(packages.values())\n\n\ndef print_results(hits, name_column_width=None, terminal_width=None):\n if not hits:\n return\n if name_column_width is None:\n name_column_width = max([\n len(hit['name']) + len(hit.get('versions', ['-'])[-1])\n for hit in hits\n ]) + 4\n\n installed_packages = [p.project_name for p in pkg_resources.working_set]\n for hit in hits:\n name = hit['name']\n summary = hit['summary'] or ''\n version = hit.get('versions', ['-'])[-1]\n if terminal_width is not None:\n target_width = terminal_width - name_column_width - 5\n if target_width > 10:\n # wrap and indent summary to fit terminal\n summary = textwrap.wrap(summary, target_width)\n summary = ('\\n' + ' ' * (name_column_width + 3)).join(summary)\n\n line = '%-*s - %s' % (name_column_width,\n '%s (%s)' % (name, version), summary)\n try:\n logger.info(line)\n if name in installed_packages:\n dist = pkg_resources.get_distribution(name)\n with indent_log():\n latest = highest_version(hit['versions'])\n if dist.version == latest:\n logger.info('INSTALLED: %s (latest)', dist.version)\n else:\n logger.info('INSTALLED: %s', dist.version)\n logger.info('LATEST: %s', latest)\n except UnicodeEncodeError:\n pass\n\n\ndef highest_version(versions):\n return max(versions, key=parse_version)\n", "path": "pip/commands/search.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport sys\nimport textwrap\n\nfrom pip.basecommand import Command, SUCCESS\nfrom pip.compat import OrderedDict\nfrom pip.download import PipXmlrpcTransport\nfrom pip.models import PyPI\nfrom pip.utils import get_terminal_size\nfrom pip.utils.logging import indent_log\nfrom pip.exceptions import CommandError\nfrom pip.status_codes import NO_MATCHES_FOUND\nfrom pip._vendor.packaging.version import parse as parse_version\nfrom pip._vendor import pkg_resources\nfrom pip._vendor.six.moves import xmlrpc_client\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass SearchCommand(Command):\n \"\"\"Search for PyPI packages whose name or summary contains <query>.\"\"\"\n name = 'search'\n usage = \"\"\"\n %prog [options] <query>\"\"\"\n summary = 'Search PyPI for packages.'\n\n def __init__(self, *args, **kw):\n super(SearchCommand, self).__init__(*args, **kw)\n self.cmd_opts.add_option(\n '-i', '--index',\n dest='index',\n metavar='URL',\n default=PyPI.pypi_url,\n help='Base URL of Python Package Index (default %default)')\n\n self.parser.insert_option_group(0, self.cmd_opts)\n\n def run(self, options, args):\n if not args:\n raise CommandError('Missing required argument (search 
query).')\n query = args\n pypi_hits = self.search(query, options)\n hits = transform_hits(pypi_hits)\n\n terminal_width = None\n if sys.stdout.isatty():\n terminal_width = get_terminal_size()[0]\n\n print_results(hits, terminal_width=terminal_width)\n if pypi_hits:\n return SUCCESS\n return NO_MATCHES_FOUND\n\n def search(self, query, options):\n index_url = options.index\n with self._build_session(options) as session:\n transport = PipXmlrpcTransport(index_url, session)\n pypi = xmlrpc_client.ServerProxy(index_url, transport)\n hits = pypi.search({'name': query, 'summary': query}, 'or')\n return hits\n\n\ndef transform_hits(hits):\n \"\"\"\n The list from pypi is really a list of versions. We want a list of\n packages with the list of versions stored inline. This converts the\n list from pypi into one we can use.\n \"\"\"\n packages = OrderedDict()\n for hit in hits:\n name = hit['name']\n summary = hit['summary']\n version = hit['version']\n\n if name not in packages.keys():\n packages[name] = {\n 'name': name,\n 'summary': summary,\n 'versions': [version],\n }\n else:\n packages[name]['versions'].append(version)\n\n # if this is the highest version, replace summary and score\n if version == highest_version(packages[name]['versions']):\n packages[name]['summary'] = summary\n\n return list(packages.values())\n\n\ndef print_results(hits, name_column_width=None, terminal_width=None):\n if not hits:\n return\n if name_column_width is None:\n name_column_width = max([\n len(hit['name']) + len(highest_version(hit.get('versions', ['-'])))\n for hit in hits\n ]) + 4\n\n installed_packages = [p.project_name for p in pkg_resources.working_set]\n for hit in hits:\n name = hit['name']\n summary = hit['summary'] or ''\n latest = highest_version(hit.get('versions', ['-']))\n if terminal_width is not None:\n target_width = terminal_width - name_column_width - 5\n if target_width > 10:\n # wrap and indent summary to fit terminal\n summary = textwrap.wrap(summary, target_width)\n summary = ('\\n' + ' ' * (name_column_width + 3)).join(summary)\n\n line = '%-*s - %s' % (name_column_width,\n '%s (%s)' % (name, latest), summary)\n try:\n logger.info(line)\n if name in installed_packages:\n dist = pkg_resources.get_distribution(name)\n with indent_log():\n if dist.version == latest:\n logger.info('INSTALLED: %s (latest)', dist.version)\n else:\n logger.info('INSTALLED: %s', dist.version)\n logger.info('LATEST: %s', latest)\n except UnicodeEncodeError:\n pass\n\n\ndef highest_version(versions):\n return max(versions, key=parse_version)\n", "path": "pip/commands/search.py"}]} | 1,740 | 359 |
gh_patches_debug_16875 | rasdani/github-patches | git_diff | getsentry__sentry-python-2105 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'async_generator_athrow' object has no attribute '__qualname__'
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.19.1
### Steps to Reproduce
I'm trying to use the `asyncio` integration like this:
```python
sentry_sdk.init(dsn=os.environ.get("SENTRY_DSN"), traces_sample_rate=0.1, integrations=[AsyncioIntegration()])
```
I keep on getting a traceback that seems to be a Sentry-specific issue.
### Expected Result
No tracebacks are repeatedly printed in the logs
### Actual Result
I see this traceback repeatedly printed in the logs:
```python
Task exception was never retrieved
future: <Task finished name='Task-1512' coro=<patch_asyncio.<locals>._sentry_task_factory.<locals>._coro_creating_hub_and_span() done, defined at /usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py:34> exception=AttributeError("'async_generator_athrow' object has no attribute '__qualname__'")>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py", line 40, in _coro_creating_hub_and_span
with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
AttributeError: 'async_generator_athrow' object has no attribute '__qualname__'
```
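
For reference, a minimal sketch of the kind of defensive lookup that avoids the crash — the helper name is my own invention, not part of the SDK:

```python
# Hypothetical helper: objects like async_generator_athrow have neither
# __qualname__ nor __name__, so fall back to a placeholder string.
def get_coro_name(coro):
    return (
        getattr(coro, "__qualname__", None)
        or getattr(coro, "__name__", None)
        or "coroutine without __name__"
    )
```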
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/asyncio.py`
Content:
```
1 from __future__ import absolute_import
2 import sys
3
4 from sentry_sdk._compat import reraise
5 from sentry_sdk.consts import OP
6 from sentry_sdk.hub import Hub
7 from sentry_sdk.integrations import Integration, DidNotEnable
8 from sentry_sdk._types import TYPE_CHECKING
9 from sentry_sdk.utils import event_from_exception
10
11 try:
12 import asyncio
13 from asyncio.tasks import Task
14 except ImportError:
15 raise DidNotEnable("asyncio not available")
16
17
18 if TYPE_CHECKING:
19 from typing import Any
20
21 from sentry_sdk._types import ExcInfo
22
23
24 def patch_asyncio():
25 # type: () -> None
26 orig_task_factory = None
27 try:
28 loop = asyncio.get_running_loop()
29 orig_task_factory = loop.get_task_factory()
30
31 def _sentry_task_factory(loop, coro):
32 # type: (Any, Any) -> Any
33
34 async def _coro_creating_hub_and_span():
35 # type: () -> Any
36 hub = Hub(Hub.current)
37 result = None
38
39 with hub:
40 with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
41 try:
42 result = await coro
43 except Exception:
44 reraise(*_capture_exception(hub))
45
46 return result
47
48 # Trying to use user set task factory (if there is one)
49 if orig_task_factory:
50 return orig_task_factory(loop, _coro_creating_hub_and_span())
51
52 # The default task factory in `asyncio` does not have its own function
53 # but is just a couple of lines in `asyncio.base_events.create_task()`
54 # Those lines are copied here.
55
56 # WARNING:
57 # If the default behavior of the task creation in asyncio changes,
58 # this will break!
59 task = Task(_coro_creating_hub_and_span(), loop=loop)
60 if task._source_traceback: # type: ignore
61 del task._source_traceback[-1] # type: ignore
62
63 return task
64
65 loop.set_task_factory(_sentry_task_factory)
66 except RuntimeError:
67 # When there is no running loop, we have nothing to patch.
68 pass
69
70
71 def _capture_exception(hub):
72 # type: (Hub) -> ExcInfo
73 exc_info = sys.exc_info()
74
75 integration = hub.get_integration(AsyncioIntegration)
76 if integration is not None:
77 # If an integration is there, a client has to be there.
78 client = hub.client # type: Any
79
80 event, hint = event_from_exception(
81 exc_info,
82 client_options=client.options,
83 mechanism={"type": "asyncio", "handled": False},
84 )
85 hub.capture_event(event, hint=hint)
86
87 return exc_info
88
89
90 class AsyncioIntegration(Integration):
91 identifier = "asyncio"
92
93 @staticmethod
94 def setup_once():
95 # type: () -> None
96 patch_asyncio()
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py
--- a/sentry_sdk/integrations/asyncio.py
+++ b/sentry_sdk/integrations/asyncio.py
@@ -21,6 +21,15 @@
from sentry_sdk._types import ExcInfo
+def get_name(coro):
+ # type: (Any) -> str
+ return (
+ getattr(coro, "__qualname__", None)
+ or getattr(coro, "__name__", None)
+ or "coroutine without __name__"
+ )
+
+
def patch_asyncio():
# type: () -> None
orig_task_factory = None
@@ -37,7 +46,7 @@
result = None
with hub:
- with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
+ with hub.start_span(op=OP.FUNCTION, description=get_name(coro)):
try:
result = await coro
except Exception:
| {"golden_diff": "diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py\n--- a/sentry_sdk/integrations/asyncio.py\n+++ b/sentry_sdk/integrations/asyncio.py\n@@ -21,6 +21,15 @@\n from sentry_sdk._types import ExcInfo\n \n \n+def get_name(coro):\n+ # type: (Any) -> str\n+ return (\n+ getattr(coro, \"__qualname__\", None)\n+ or getattr(coro, \"__name__\", None)\n+ or \"coroutine without __name__\"\n+ )\n+\n+\n def patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n@@ -37,7 +46,7 @@\n result = None\n \n with hub:\n- with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n+ with hub.start_span(op=OP.FUNCTION, description=get_name(coro)):\n try:\n result = await coro\n except Exception:\n", "issue": "AttributeError: 'async_generator_athrow' object has no attribute '__qualname__'\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n1.19.1\r\n\r\n### Steps to Reproduce\r\n\r\nI'm trying to use the `asyncio` integration like this:\r\n\r\n```python\r\nsentry_sdk.init(dsn=os.environ.get(\"SENTRY_DSN\"), traces_sample_rate=0.1, integrations=[AsyncioIntegration()])\r\n```\r\n\r\nI keep on getting a traceback that seems to be a Sentry-specific issue.\r\n\r\n### Expected Result\r\n\r\nNo tracebacks repeatedly occur\r\n\r\n### Actual Result\r\n\r\nI see this traceback repeatedly printed in the logs:\r\n\r\n```python\r\nTask exception was never retrieved\r\nfuture: <Task finished name='Task-1512' coro=<patch_asyncio.<locals>._sentry_task_factory.<locals>._coro_creating_hub_and_span() done, defined at /usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py:34> exception=AttributeError(\"'async_generator_athrow' object has no attribute '__qualname__'\")>\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py\", line 40, in _coro_creating_hub_and_span\r\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\r\nAttributeError: 'async_generator_athrow' object has no attribute '__qualname__'\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nimport sys\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.consts import OP\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk._types import TYPE_CHECKING\nfrom sentry_sdk.utils import event_from_exception\n\ntry:\n import asyncio\n from asyncio.tasks import Task\nexcept ImportError:\n raise DidNotEnable(\"asyncio not available\")\n\n\nif TYPE_CHECKING:\n from typing import Any\n\n from sentry_sdk._types import ExcInfo\n\n\ndef patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n try:\n loop = asyncio.get_running_loop()\n orig_task_factory = loop.get_task_factory()\n\n def _sentry_task_factory(loop, coro):\n # type: (Any, Any) -> Any\n\n async def _coro_creating_hub_and_span():\n # type: () -> Any\n hub = Hub(Hub.current)\n result = None\n\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n try:\n result = await coro\n except Exception:\n reraise(*_capture_exception(hub))\n\n return result\n\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span())\n\n # The default task factory in `asyncio` does not have its own function\n # but is just a couple of lines in `asyncio.base_events.create_task()`\n # Those lines are copied here.\n\n # WARNING:\n # If the 
default behavior of the task creation in asyncio changes,\n # this will break!\n task = Task(_coro_creating_hub_and_span(), loop=loop)\n if task._source_traceback: # type: ignore\n del task._source_traceback[-1] # type: ignore\n\n return task\n\n loop.set_task_factory(_sentry_task_factory)\n except RuntimeError:\n # When there is no running loop, we have nothing to patch.\n pass\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n exc_info = sys.exc_info()\n\n integration = hub.get_integration(AsyncioIntegration)\n if integration is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"asyncio\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n\n\nclass AsyncioIntegration(Integration):\n identifier = \"asyncio\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n patch_asyncio()\n", "path": "sentry_sdk/integrations/asyncio.py"}], "after_files": [{"content": "from __future__ import absolute_import\nimport sys\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.consts import OP\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk._types import TYPE_CHECKING\nfrom sentry_sdk.utils import event_from_exception\n\ntry:\n import asyncio\n from asyncio.tasks import Task\nexcept ImportError:\n raise DidNotEnable(\"asyncio not available\")\n\n\nif TYPE_CHECKING:\n from typing import Any\n\n from sentry_sdk._types import ExcInfo\n\n\ndef get_name(coro):\n # type: (Any) -> str\n return (\n getattr(coro, \"__qualname__\", None)\n or getattr(coro, \"__name__\", None)\n or \"coroutine without __name__\"\n )\n\n\ndef patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n try:\n loop = asyncio.get_running_loop()\n orig_task_factory = loop.get_task_factory()\n\n def _sentry_task_factory(loop, coro):\n # type: (Any, Any) -> Any\n\n async def _coro_creating_hub_and_span():\n # type: () -> Any\n hub = Hub(Hub.current)\n result = None\n\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=get_name(coro)):\n try:\n result = await coro\n except Exception:\n reraise(*_capture_exception(hub))\n\n return result\n\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span())\n\n # The default task factory in `asyncio` does not have its own function\n # but is just a couple of lines in `asyncio.base_events.create_task()`\n # Those lines are copied here.\n\n # WARNING:\n # If the default behavior of the task creation in asyncio changes,\n # this will break!\n task = Task(_coro_creating_hub_and_span(), loop=loop)\n if task._source_traceback: # type: ignore\n del task._source_traceback[-1] # type: ignore\n\n return task\n\n loop.set_task_factory(_sentry_task_factory)\n except RuntimeError:\n # When there is no running loop, we have nothing to patch.\n pass\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n exc_info = sys.exc_info()\n\n integration = hub.get_integration(AsyncioIntegration)\n if integration is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"asyncio\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n\n\nclass 
AsyncioIntegration(Integration):\n identifier = \"asyncio\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n patch_asyncio()\n", "path": "sentry_sdk/integrations/asyncio.py"}]} | 1,422 | 235 |
gh_patches_debug_8350 | rasdani/github-patches | git_diff | getsentry__sentry-5984 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auto assign should occur as actor
When using 'Fixes XXX' annotation in a commit, I noticed that while Sentry auto assigned to me (expected), it did so on behalf of itself instead of my user.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/receivers/releases.py`
Content:
```
1 from __future__ import absolute_import, print_function
2
3 from django.db import IntegrityError, transaction
4 from django.db.models.signals import post_save
5
6 from sentry.models import (
7 Activity, Commit, GroupAssignee, GroupCommitResolution, Release, TagValue
8 )
9 from sentry.tasks.clear_expired_resolutions import clear_expired_resolutions
10
11
12 def ensure_release_exists(instance, created, **kwargs):
13 if instance.key != 'sentry:release':
14 return
15
16 if instance.data and instance.data.get('release_id'):
17 return
18
19 try:
20 with transaction.atomic():
21 release = Release.objects.create(
22 organization_id=instance.project.organization_id,
23 version=instance.value,
24 date_added=instance.first_seen,
25 )
26 except IntegrityError:
27 release = Release.objects.get(
28 organization_id=instance.project.organization_id,
29 version=instance.value,
30 )
31 release.update(date_added=instance.first_seen)
32 else:
33 instance.update(data={'release_id': release.id})
34
35 release.add_project(instance.project)
36
37
38 def resolve_group_resolutions(instance, created, **kwargs):
39 if not created:
40 return
41
42 clear_expired_resolutions.delay(release_id=instance.id)
43
44
45 def resolved_in_commit(instance, created, **kwargs):
46 # TODO(dcramer): we probably should support an updated message
47 if not created:
48 return
49
50 groups = instance.find_referenced_groups()
51 for group in groups:
52 try:
53 with transaction.atomic():
54 GroupCommitResolution.objects.create(
55 group_id=group.id,
56 commit_id=instance.id,
57 )
58 if instance.author:
59 user_list = list(instance.author.find_users())
60 else:
61 user_list = ()
62 if user_list:
63 Activity.objects.create(
64 project_id=group.project_id,
65 group=group,
66 type=Activity.SET_RESOLVED_IN_COMMIT,
67 ident=instance.id,
68 user=user_list[0],
69 data={
70 'commit': instance.id,
71 }
72 )
73 GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])
74 else:
75 Activity.objects.create(
76 project_id=group.project_id,
77 group=group,
78 type=Activity.SET_RESOLVED_IN_COMMIT,
79 ident=instance.id,
80 data={
81 'commit': instance.id,
82 }
83 )
84 except IntegrityError:
85 pass
86
87
88 post_save.connect(
89 resolve_group_resolutions, sender=Release, dispatch_uid="resolve_group_resolutions", weak=False
90 )
91
92 post_save.connect(
93 ensure_release_exists, sender=TagValue, dispatch_uid="ensure_release_exists", weak=False
94 )
95
96 post_save.connect(
97 resolved_in_commit,
98 sender=Commit,
99 dispatch_uid="resolved_in_commit",
100 weak=False,
101 )
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/receivers/releases.py b/src/sentry/receivers/releases.py
--- a/src/sentry/receivers/releases.py
+++ b/src/sentry/receivers/releases.py
@@ -70,7 +70,8 @@
'commit': instance.id,
}
)
- GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])
+ GroupAssignee.objects.assign(
+ group=group, assigned_to=user_list[0], acting_user=user_list[0])
else:
Activity.objects.create(
project_id=group.project_id,
| {"golden_diff": "diff --git a/src/sentry/receivers/releases.py b/src/sentry/receivers/releases.py\n--- a/src/sentry/receivers/releases.py\n+++ b/src/sentry/receivers/releases.py\n@@ -70,7 +70,8 @@\n 'commit': instance.id,\n }\n )\n- GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])\n+ GroupAssignee.objects.assign(\n+ group=group, assigned_to=user_list[0], acting_user=user_list[0])\n else:\n Activity.objects.create(\n project_id=group.project_id,\n", "issue": "Auto assign should occur as actor\nWhen using 'Fixes XXX' annotation in a commit, I noticed that while Sentry auto assigned to me (expected), it did so on behalf of itself instead of my user.\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nfrom django.db import IntegrityError, transaction\nfrom django.db.models.signals import post_save\n\nfrom sentry.models import (\n Activity, Commit, GroupAssignee, GroupCommitResolution, Release, TagValue\n)\nfrom sentry.tasks.clear_expired_resolutions import clear_expired_resolutions\n\n\ndef ensure_release_exists(instance, created, **kwargs):\n if instance.key != 'sentry:release':\n return\n\n if instance.data and instance.data.get('release_id'):\n return\n\n try:\n with transaction.atomic():\n release = Release.objects.create(\n organization_id=instance.project.organization_id,\n version=instance.value,\n date_added=instance.first_seen,\n )\n except IntegrityError:\n release = Release.objects.get(\n organization_id=instance.project.organization_id,\n version=instance.value,\n )\n release.update(date_added=instance.first_seen)\n else:\n instance.update(data={'release_id': release.id})\n\n release.add_project(instance.project)\n\n\ndef resolve_group_resolutions(instance, created, **kwargs):\n if not created:\n return\n\n clear_expired_resolutions.delay(release_id=instance.id)\n\n\ndef resolved_in_commit(instance, created, **kwargs):\n # TODO(dcramer): we probably should support an updated message\n if not created:\n return\n\n groups = instance.find_referenced_groups()\n for group in groups:\n try:\n with transaction.atomic():\n GroupCommitResolution.objects.create(\n group_id=group.id,\n commit_id=instance.id,\n )\n if instance.author:\n user_list = list(instance.author.find_users())\n else:\n user_list = ()\n if user_list:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n user=user_list[0],\n data={\n 'commit': instance.id,\n }\n )\n GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])\n else:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n data={\n 'commit': instance.id,\n }\n )\n except IntegrityError:\n pass\n\n\npost_save.connect(\n resolve_group_resolutions, sender=Release, dispatch_uid=\"resolve_group_resolutions\", weak=False\n)\n\npost_save.connect(\n ensure_release_exists, sender=TagValue, dispatch_uid=\"ensure_release_exists\", weak=False\n)\n\npost_save.connect(\n resolved_in_commit,\n sender=Commit,\n dispatch_uid=\"resolved_in_commit\",\n weak=False,\n)\n", "path": "src/sentry/receivers/releases.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function\n\nfrom django.db import IntegrityError, transaction\nfrom django.db.models.signals import post_save\n\nfrom sentry.models import (\n Activity, Commit, GroupAssignee, GroupCommitResolution, Release, TagValue\n)\nfrom 
sentry.tasks.clear_expired_resolutions import clear_expired_resolutions\n\n\ndef ensure_release_exists(instance, created, **kwargs):\n if instance.key != 'sentry:release':\n return\n\n if instance.data and instance.data.get('release_id'):\n return\n\n try:\n with transaction.atomic():\n release = Release.objects.create(\n organization_id=instance.project.organization_id,\n version=instance.value,\n date_added=instance.first_seen,\n )\n except IntegrityError:\n release = Release.objects.get(\n organization_id=instance.project.organization_id,\n version=instance.value,\n )\n release.update(date_added=instance.first_seen)\n else:\n instance.update(data={'release_id': release.id})\n\n release.add_project(instance.project)\n\n\ndef resolve_group_resolutions(instance, created, **kwargs):\n if not created:\n return\n\n clear_expired_resolutions.delay(release_id=instance.id)\n\n\ndef resolved_in_commit(instance, created, **kwargs):\n # TODO(dcramer): we probably should support an updated message\n if not created:\n return\n\n groups = instance.find_referenced_groups()\n for group in groups:\n try:\n with transaction.atomic():\n GroupCommitResolution.objects.create(\n group_id=group.id,\n commit_id=instance.id,\n )\n if instance.author:\n user_list = list(instance.author.find_users())\n else:\n user_list = ()\n if user_list:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n user=user_list[0],\n data={\n 'commit': instance.id,\n }\n )\n GroupAssignee.objects.assign(\n group=group, assigned_to=user_list[0], acting_user=user_list[0])\n else:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n data={\n 'commit': instance.id,\n }\n )\n except IntegrityError:\n pass\n\n\npost_save.connect(\n resolve_group_resolutions, sender=Release, dispatch_uid=\"resolve_group_resolutions\", weak=False\n)\n\npost_save.connect(\n ensure_release_exists, sender=TagValue, dispatch_uid=\"ensure_release_exists\", weak=False\n)\n\npost_save.connect(\n resolved_in_commit,\n sender=Commit,\n dispatch_uid=\"resolved_in_commit\",\n weak=False,\n)\n", "path": "src/sentry/receivers/releases.py"}]} | 1,161 | 129 |
gh_patches_debug_61141 | rasdani/github-patches | git_diff | e2nIEE__pandapower-2263 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] __format_version__ not increased.
### Issue Description
The `__format_version__` in `_version.py` has not been increased even though the format got changed!
This is an issue in the develop branch **not** in master!
In my fork I made an update to many test cases since I changed the format, so I saved many networks as files; they contain the current format_version (2.14.0). After merging the current version of develop I got some tests that suddenly failed even though my code should not mess with them. So I did a little digging and found that the expected and actual results differ in the `net.res_switch_est` DataFrame. This is because the expected result only contains the old columns while the actual result contains the updated columns.
This is because the expected results are loaded from file using the `pandapower.from_json` function, and since the format version there is the same as the current format version in `_version.py`, the conversion to the newest format is not done. So the network is returned as loaded from file.
The actual results, however, are the product of a conversion from a different network type. So they are the output of a converter that creates a new pandapowerNet. These then contain all new columns.
If new columns are added, `__format_version__` should be incremented at least in the bugfix number. But I would expect that this constitutes at least a minor release, as a new format version most likely breaks backwards compatibility. With a bugfix version I would expect to be able to go backwards and forwards without issue. But this is not the case if the format version changes! A 2.13.1 network should successfully load on 2.13.0, but this will not work if new columns are added. So this change should be reflected by an increase of the format version to at least 2.15.0, in my opinion.
The breaking commit is 516f8af, as it changed the format without changing the format version.
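
To illustrate the semantics at stake, a toy comparison — not pandapower code, just assuming a loader that compares the stored version against `__format_version__` with `packaging`:

```python
# Toy illustration: converters only run when the version stored in the
# file is older than the library's current format version.
from packaging.version import Version

stored_format = Version("2.14.0")   # version written into the saved network
current_format = Version("2.14.0")  # unchanged __format_version__
if stored_format < current_format:
    print("convert network to the newest format")
else:
    print("return network as loaded")  # the failing case described above
```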
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandapower/_version.py`
Content:
```
1 import importlib.metadata
2
3 __version__ = importlib.metadata.version("pandapower")
4 __format_version__ = "2.14.0"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pandapower/_version.py b/pandapower/_version.py
--- a/pandapower/_version.py
+++ b/pandapower/_version.py
@@ -1,4 +1,4 @@
import importlib.metadata
__version__ = importlib.metadata.version("pandapower")
-__format_version__ = "2.14.0"
+__format_version__ = "2.15.0"
| {"golden_diff": "diff --git a/pandapower/_version.py b/pandapower/_version.py\n--- a/pandapower/_version.py\n+++ b/pandapower/_version.py\n@@ -1,4 +1,4 @@\n import importlib.metadata\n \n __version__ = importlib.metadata.version(\"pandapower\")\n-__format_version__ = \"2.14.0\"\n+__format_version__ = \"2.15.0\"\n", "issue": "[bug] __format_version__ not increased.\n### Issue Description\r\n\r\nThe `__format_version__` in `_version.py` has not been increased eventhough the format got changed!\r\n\r\nThis is an issue in the develop branch **not** in master!\r\n\r\nIn my fork I made an update to many test cases since I changed the format, so I saved many networks in as files, they contain the current format_version (2.14.0). After merging the current version of develop I got some tests that suddenly failed eventhough my code should not mess with them. So I did a little diging and found that the expected and actual results differ in `net.res_switch_est` DataFrame. This is because the expected result only contains the old columns while the actual result contains the updated columns.\r\n\r\nThis is because the expected results are loaded form file using the `pandapower.from_json` function and since then format version is the same as the current format verison in `_version.py` the conversion to the newest format is not done. So the network is returned as loaded from file.\r\nThe actual results however are a product of a conversion from a different network type. So they are the output of a converter that creates a new pandapowerNet. These then contain all new columns.\r\n\r\nIf new columns are added `__format_version__` should be incremented at least in the bugfix number. But I would expect that this constitutes at least a minor release as a new format version most likely breaks backwards compatibility. On a bugfix version I would expect I go backwards and forwards without issue. But this is not the case if the format version changes! A 2.13.1 Network should sucessfully load on 2.13.0 but this will not work if new columns are added. So this change should be reflected by an increase of the format verison to at least 2.15.0 in my opinion.\r\n\r\nThe breaking commit is 516f8af as it changed the format without changeing the format version.\r\n\n", "before_files": [{"content": "import importlib.metadata\n\n__version__ = importlib.metadata.version(\"pandapower\")\n__format_version__ = \"2.14.0\"\n", "path": "pandapower/_version.py"}], "after_files": [{"content": "import importlib.metadata\n\n__version__ = importlib.metadata.version(\"pandapower\")\n__format_version__ = \"2.15.0\"\n", "path": "pandapower/_version.py"}]} | 721 | 97 |
gh_patches_debug_9115 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2637 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider cvs is broken
During the global build at 2021-08-18-14-42-26, spider **cvs** failed with **0 features** and **9870 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/cvs.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/cvs.py`
Content:
```
1 import json
2 import scrapy
3 import re
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7 DAYS = [
8 'Mo',
9 'Tu',
10 'We',
11 'Th',
12 'Fr',
13 'Sa',
14 'Su'
15 ]
16
17
18 class CVSSpider(scrapy.Spider):
19
20 name = "cvs"
21 item_attributes = { 'brand': "CVS", 'brand_wikidata': "Q2078880" }
22 allowed_domains = ["www.cvs.com"]
23 download_delay = 0.5
24 start_urls = (
25 'https://www.cvs.com/store-locator/cvs-pharmacy-locations',
26 )
27
28 def parse_hours(self, hours):
29 opening_hours = OpeningHours()
30
31 for group in hours:
32 if 'closed' in group:
33 continue
34 if 'open 24 hours' in group:
35 days = re.search(r'([a-zA-Z\-]+)\s+open 24 hours', group).groups()[0]
36 open_time, close_time = '00:00:00', '23:59:00'
37 else:
38 try:
39 days, open_time, close_time = re.search(r'([a-zA-Z\-]+)\s+([\d:\sapm]+)-([\d:\sapm]+)', group).groups()
40 except AttributeError:
41 continue # no hours listed, just day
42 try:
43 start_day, end_day = days.split('-')
44 except ValueError:
45 start_day, end_day = days, days
46 for day in DAYS[DAYS.index(start_day):DAYS.index(end_day) + 1]:
47 if 'm' in open_time:
48 open_time = open_time.strip(' apm') + ":00"
49 if 'm' in close_time:
50 close_time = close_time.strip(' apm') + ":00"
51 opening_hours.add_range(day=day,
52 open_time=open_time.strip(),
53 close_time=close_time.strip(),
54 time_format='%H:%M:%S')
55
56 return opening_hours.as_opening_hours()
57
58 def parse_stores(self, response):
59 try:
60 data = json.loads(response.xpath('//script[@type="application/ld+json" and contains(text(), "streetAddress")]/text()').extract_first())[0]
61 except json.decoder.JSONDecodeError:
62 # one malformed json body on this store:
63 # https://www.cvs.com/store-locator/cvs-pharmacy-address/84+South+Avenue+tops+Plaza+-Hilton-NY-14468/storeid=5076
64 data = response.xpath('//script[@type="application/ld+json" and contains(text(), "streetAddress")]/text()').extract_first()
65 data = re.sub(r'"tops Plaza\s*"', '', data)
66 data = json.loads(data)[0]
67 except TypeError:
68 return # empty store page
69
70 properties = {
71 'name': data["name"],
72 'ref': re.search(r'.+/?storeid=(.+)', response.url).group(1),
73 'addr_full': data["address"]["streetAddress"].strip(', '),
74 'city': data["address"]["addressLocality"],
75 'state': data["address"]["addressRegion"],
76 'postcode': data["address"]["postalCode"],
77 'country': data["address"]["addressCountry"],
78 'phone': data["address"].get("telephone"),
79 'website': data.get("url") or response.url,
80 'lat': float(data["geo"]["latitude"]),
81 'lon': float(data["geo"]["longitude"]),
82 }
83
84 hours = self.parse_hours(data["openingHours"])
85 if hours:
86 properties["opening_hours"] = hours
87
88 yield GeojsonPointItem(**properties)
89
90 def parse_city_stores(self, response):
91 stores = response.xpath('//div[@class="each-store"]')
92
93 for store in stores:
94
95 direction = store.xpath('normalize-space(.//span[@class="store-number"]/a/@href)').extract_first()
96 if direction:
97 yield scrapy.Request(response.urljoin(direction), callback=self.parse_stores)
98
99 def parse_state(self, response):
100 city_urls = response.xpath('//div[@class="states"]/ul/li/a/@href').extract()
101 for path in city_urls:
102 yield scrapy.Request(response.urljoin(path), callback=self.parse_city_stores)
103
104 def parse(self, response):
105 urls = response.xpath('//div[@class="states"]/ul/li/a/@href').extract()
106 for path in urls:
107 yield scrapy.Request(response.urljoin(path), callback=self.parse_state)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/cvs.py b/locations/spiders/cvs.py
--- a/locations/spiders/cvs.py
+++ b/locations/spiders/cvs.py
@@ -77,8 +77,8 @@
'country': data["address"]["addressCountry"],
'phone': data["address"].get("telephone"),
'website': data.get("url") or response.url,
- 'lat': float(data["geo"]["latitude"]),
- 'lon': float(data["geo"]["longitude"]),
+ 'lat': data["geo"]["latitude"] or None,
+ 'lon': data["geo"]["longitude"] or None,
}
hours = self.parse_hours(data["openingHours"])
| {"golden_diff": "diff --git a/locations/spiders/cvs.py b/locations/spiders/cvs.py\n--- a/locations/spiders/cvs.py\n+++ b/locations/spiders/cvs.py\n@@ -77,8 +77,8 @@\n 'country': data[\"address\"][\"addressCountry\"],\n 'phone': data[\"address\"].get(\"telephone\"),\n 'website': data.get(\"url\") or response.url,\n- 'lat': float(data[\"geo\"][\"latitude\"]),\n- 'lon': float(data[\"geo\"][\"longitude\"]),\n+ 'lat': data[\"geo\"][\"latitude\"] or None,\n+ 'lon': data[\"geo\"][\"longitude\"] or None,\n }\n \n hours = self.parse_hours(data[\"openingHours\"])\n", "issue": "Spider cvs is broken\nDuring the global build at 2021-08-18-14-42-26, spider **cvs** failed with **0 features** and **9870 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/cvs.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson))\n", "before_files": [{"content": "import json\nimport scrapy\nimport re\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAYS = [\n 'Mo',\n 'Tu',\n 'We',\n 'Th',\n 'Fr',\n 'Sa',\n 'Su'\n]\n\n\nclass CVSSpider(scrapy.Spider):\n\n name = \"cvs\"\n item_attributes = { 'brand': \"CVS\", 'brand_wikidata': \"Q2078880\" }\n allowed_domains = [\"www.cvs.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://www.cvs.com/store-locator/cvs-pharmacy-locations',\n )\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for group in hours:\n if 'closed' in group:\n continue\n if 'open 24 hours' in group:\n days = re.search(r'([a-zA-Z\\-]+)\\s+open 24 hours', group).groups()[0]\n open_time, close_time = '00:00:00', '23:59:00'\n else:\n try:\n days, open_time, close_time = re.search(r'([a-zA-Z\\-]+)\\s+([\\d:\\sapm]+)-([\\d:\\sapm]+)', group).groups()\n except AttributeError:\n continue # no hours listed, just day\n try:\n start_day, end_day = days.split('-')\n except ValueError:\n start_day, end_day = days, days\n for day in DAYS[DAYS.index(start_day):DAYS.index(end_day) + 1]:\n if 'm' in open_time:\n open_time = open_time.strip(' apm') + \":00\"\n if 'm' in close_time:\n close_time = close_time.strip(' apm') + \":00\"\n opening_hours.add_range(day=day,\n open_time=open_time.strip(),\n close_time=close_time.strip(),\n time_format='%H:%M:%S')\n\n return opening_hours.as_opening_hours()\n\n def parse_stores(self, response):\n try:\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first())[0]\n except json.decoder.JSONDecodeError:\n # one malformed json body on this store:\n # https://www.cvs.com/store-locator/cvs-pharmacy-address/84+South+Avenue+tops+Plaza+-Hilton-NY-14468/storeid=5076\n data = response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first()\n data = re.sub(r'\"tops Plaza\\s*\"', '', data)\n data = json.loads(data)[0]\n except TypeError:\n return # empty store page\n\n properties = {\n 'name': data[\"name\"],\n 'ref': re.search(r'.+/?storeid=(.+)', response.url).group(1),\n 'addr_full': data[\"address\"][\"streetAddress\"].strip(', '),\n 'city': data[\"address\"][\"addressLocality\"],\n 'state': data[\"address\"][\"addressRegion\"],\n 'postcode': data[\"address\"][\"postalCode\"],\n 'country': data[\"address\"][\"addressCountry\"],\n 'phone': data[\"address\"].get(\"telephone\"),\n 
'website': data.get(\"url\") or response.url,\n 'lat': float(data[\"geo\"][\"latitude\"]),\n 'lon': float(data[\"geo\"][\"longitude\"]),\n }\n\n hours = self.parse_hours(data[\"openingHours\"])\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n\n def parse_city_stores(self, response):\n stores = response.xpath('//div[@class=\"each-store\"]')\n\n for store in stores:\n\n direction = store.xpath('normalize-space(.//span[@class=\"store-number\"]/a/@href)').extract_first()\n if direction:\n yield scrapy.Request(response.urljoin(direction), callback=self.parse_stores)\n\n def parse_state(self, response):\n city_urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in city_urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_city_stores)\n\n def parse(self, response):\n urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_state)\n", "path": "locations/spiders/cvs.py"}], "after_files": [{"content": "import json\nimport scrapy\nimport re\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAYS = [\n 'Mo',\n 'Tu',\n 'We',\n 'Th',\n 'Fr',\n 'Sa',\n 'Su'\n]\n\n\nclass CVSSpider(scrapy.Spider):\n\n name = \"cvs\"\n item_attributes = { 'brand': \"CVS\", 'brand_wikidata': \"Q2078880\" }\n allowed_domains = [\"www.cvs.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://www.cvs.com/store-locator/cvs-pharmacy-locations',\n )\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for group in hours:\n if 'closed' in group:\n continue\n if 'open 24 hours' in group:\n days = re.search(r'([a-zA-Z\\-]+)\\s+open 24 hours', group).groups()[0]\n open_time, close_time = '00:00:00', '23:59:00'\n else:\n try:\n days, open_time, close_time = re.search(r'([a-zA-Z\\-]+)\\s+([\\d:\\sapm]+)-([\\d:\\sapm]+)', group).groups()\n except AttributeError:\n continue # no hours listed, just day\n try:\n start_day, end_day = days.split('-')\n except ValueError:\n start_day, end_day = days, days\n for day in DAYS[DAYS.index(start_day):DAYS.index(end_day) + 1]:\n if 'm' in open_time:\n open_time = open_time.strip(' apm') + \":00\"\n if 'm' in close_time:\n close_time = close_time.strip(' apm') + \":00\"\n opening_hours.add_range(day=day,\n open_time=open_time.strip(),\n close_time=close_time.strip(),\n time_format='%H:%M:%S')\n\n return opening_hours.as_opening_hours()\n\n def parse_stores(self, response):\n try:\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first())[0]\n except json.decoder.JSONDecodeError:\n # one malformed json body on this store:\n # https://www.cvs.com/store-locator/cvs-pharmacy-address/84+South+Avenue+tops+Plaza+-Hilton-NY-14468/storeid=5076\n data = response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first()\n data = re.sub(r'\"tops Plaza\\s*\"', '', data)\n data = json.loads(data)[0]\n except TypeError:\n return # empty store page\n\n properties = {\n 'name': data[\"name\"],\n 'ref': re.search(r'.+/?storeid=(.+)', response.url).group(1),\n 'addr_full': data[\"address\"][\"streetAddress\"].strip(', '),\n 'city': data[\"address\"][\"addressLocality\"],\n 'state': data[\"address\"][\"addressRegion\"],\n 'postcode': data[\"address\"][\"postalCode\"],\n 'country': data[\"address\"][\"addressCountry\"],\n 
'phone': data[\"address\"].get(\"telephone\"),\n 'website': data.get(\"url\") or response.url,\n 'lat': data[\"geo\"][\"latitude\"] or None,\n 'lon': data[\"geo\"][\"longitude\"] or None,\n }\n\n hours = self.parse_hours(data[\"openingHours\"])\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n\n def parse_city_stores(self, response):\n stores = response.xpath('//div[@class=\"each-store\"]')\n\n for store in stores:\n\n direction = store.xpath('normalize-space(.//span[@class=\"store-number\"]/a/@href)').extract_first()\n if direction:\n yield scrapy.Request(response.urljoin(direction), callback=self.parse_stores)\n\n def parse_state(self, response):\n city_urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in city_urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_city_stores)\n\n def parse(self, response):\n urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_state)\n", "path": "locations/spiders/cvs.py"}]} | 1,654 | 154 |
gh_patches_debug_25010 | rasdani/github-patches | git_diff | beetbox__beets-908 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mbsync: Deal with albums that have multiple copies of the same recording
The current way the mbsync plugin obtains the track mapping is to use the MusicBrainz recording ID from each track; it's a workaround to handle "missing or extra tracks". This method is based on the assumption that, for each MB release, there are no multiple tracks with the same MB recording ID. It usually works, and in my case only 4 out of 700+ albums disobey this assumption. But for these four albums, I have to fix them by tagging the track numbers by hand and re-importing.
Considering it's called "mbsync", why not assume that the track numbers in the metadata are not corrupt and use them if possible, falling back to the MB recording ID approach if they are corrupted (missing or extra tracks detected)?
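
A rough sketch of the proposed fallback, using the field names from `beetsplug/mbsync.py` below (purely illustrative, not tested):

```python
# Sketch: collect candidate tracks per recording ID, then disambiguate
# duplicates by disc and track number instead of mapping arbitrarily.
def build_mapping(items, album_info):
    track_index = {}
    for track_info in album_info.tracks:
        track_index.setdefault(track_info.track_id, []).append(track_info)

    mapping = {}
    for item in items:
        candidates = track_index.get(item.mb_trackid, [])
        if len(candidates) == 1:
            mapping[item] = candidates[0]
            continue
        for c in candidates:
            # fall back to disc and track position for duplicated IDs
            if c.medium == item.disc and c.medium_index == item.track:
                mapping[item] = c
                break
    return mapping
```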
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/mbsync.py`
Content:
```
1 # This file is part of beets.
2 # Copyright 2014, Jakob Schnitzer.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Update library's tags using MusicBrainz.
16 """
17 import logging
18
19 from beets.plugins import BeetsPlugin
20 from beets import autotag, library, ui, util
21 from beets.autotag import hooks
22 from beets import config
23
24 log = logging.getLogger('beets')
25
26
27 def mbsync_singletons(lib, query, move, pretend, write):
28 """Retrieve and apply info from the autotagger for items matched by
29 query.
30 """
31 for item in lib.items(query + ['singleton:true']):
32 if not item.mb_trackid:
33 log.info(u'Skipping singleton {0}: has no mb_trackid'
34 .format(item.title))
35 continue
36
37 # Get the MusicBrainz recording info.
38 track_info = hooks.track_for_mbid(item.mb_trackid)
39 if not track_info:
40 log.info(u'Recording ID not found: {0}'.format(item.mb_trackid))
41 continue
42
43 # Apply.
44 with lib.transaction():
45 autotag.apply_item_metadata(item, track_info)
46 apply_item_changes(lib, item, move, pretend, write)
47
48
49 def mbsync_albums(lib, query, move, pretend, write):
50 """Retrieve and apply info from the autotagger for albums matched by
51 query and their items.
52 """
53 # Process matching albums.
54 for a in lib.albums(query):
55 if not a.mb_albumid:
56 log.info(u'Skipping album {0}: has no mb_albumid'.format(a.id))
57 continue
58
59 items = list(a.items())
60
61 # Get the MusicBrainz album information.
62 album_info = hooks.album_for_mbid(a.mb_albumid)
63 if not album_info:
64 log.info(u'Release ID not found: {0}'.format(a.mb_albumid))
65 continue
66
67 # Construct a track mapping according to MBIDs. This should work
68 # for albums that have missing or extra tracks.
69 mapping = {}
70 for item in items:
71 for track_info in album_info.tracks:
72 if item.mb_trackid == track_info.track_id:
73 mapping[item] = track_info
74 break
75
76 # Apply.
77 with lib.transaction():
78 autotag.apply_metadata(album_info, mapping)
79 changed = False
80 for item in items:
81 item_changed = ui.show_model_changes(item)
82 changed |= item_changed
83 if item_changed:
84 apply_item_changes(lib, item, move, pretend, write)
85
86 if not changed:
87 # No change to any item.
88 continue
89
90 if not pretend:
91 # Update album structure to reflect an item in it.
92 for key in library.Album.item_keys:
93 a[key] = items[0][key]
94 a.store()
95
96 # Move album art (and any inconsistent items).
97 if move and lib.directory in util.ancestry(items[0].path):
98 log.debug(u'moving album {0}'.format(a.id))
99 a.move()
100
101
102 def apply_item_changes(lib, item, move, pretend, write):
103 """Store, move and write the item according to the arguments.
104 """
105 if not pretend:
106 # Move the item if it's in the library.
107 if move and lib.directory in util.ancestry(item.path):
108 item.move(with_album=False)
109
110 if write:
111 item.try_write()
112 item.store()
113
114
115 def mbsync_func(lib, opts, args):
116 """Command handler for the mbsync function.
117 """
118 move = opts.move
119 pretend = opts.pretend
120 write = opts.write
121 query = ui.decargs(args)
122
123 mbsync_singletons(lib, query, move, pretend, write)
124 mbsync_albums(lib, query, move, pretend, write)
125
126
127 class MBSyncPlugin(BeetsPlugin):
128 def __init__(self):
129 super(MBSyncPlugin, self).__init__()
130
131 def commands(self):
132 cmd = ui.Subcommand('mbsync',
133 help='update metadata from musicbrainz')
134 cmd.parser.add_option('-p', '--pretend', action='store_true',
135 help='show all changes but do nothing')
136 cmd.parser.add_option('-M', '--nomove', action='store_false',
137 default=True, dest='move',
138 help="don't move files in library")
139 cmd.parser.add_option('-W', '--nowrite', action='store_false',
140 default=config['import']['write'], dest='write',
141 help="don't write updated metadata to files")
142 cmd.func = mbsync_func
143 return [cmd]
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/beetsplug/mbsync.py b/beetsplug/mbsync.py
--- a/beetsplug/mbsync.py
+++ b/beetsplug/mbsync.py
@@ -64,13 +64,29 @@
log.info(u'Release ID not found: {0}'.format(a.mb_albumid))
continue
+ # Construct an hash mapping recording MBIDs to their information. A
+ # release can have recording MBIDs that appear multiple times in the
+ # same release.
+ track_index = {}
+ for track_info in album_info.tracks:
+ if track_info.track_id in track_index:
+ track_index[track_info.track_id].append(track_info)
+ else:
+ track_index[track_info.track_id] = [track_info]
+
# Construct a track mapping according to MBIDs. This should work
- # for albums that have missing or extra tracks.
+ # for albums that have missing or extra tracks. If a mapping is
+ # ambiguous, the items' disc and track number need to match in order
+ # for an item to be mapped.
mapping = {}
for item in items:
- for track_info in album_info.tracks:
- if item.mb_trackid == track_info.track_id:
- mapping[item] = track_info
+ candidates = track_index.get(item.mb_trackid, [])
+ if len(candidates) == 1:
+ mapping[item] = candidates[0]
+ continue
+ for c in candidates:
+ if c.medium_index == item.track and c.medium == item.disc:
+ mapping[item] = c
break
# Apply.
| {"golden_diff": "diff --git a/beetsplug/mbsync.py b/beetsplug/mbsync.py\n--- a/beetsplug/mbsync.py\n+++ b/beetsplug/mbsync.py\n@@ -64,13 +64,29 @@\n log.info(u'Release ID not found: {0}'.format(a.mb_albumid))\n continue\n \n+ # Construct an hash mapping recording MBIDs to their information. A\n+ # release can have recording MBIDs that appear multiple times in the\n+ # same release.\n+ track_index = {}\n+ for track_info in album_info.tracks:\n+ if track_info.track_id in track_index:\n+ track_index[track_info.track_id].append(track_info)\n+ else:\n+ track_index[track_info.track_id] = [track_info]\n+\n # Construct a track mapping according to MBIDs. This should work\n- # for albums that have missing or extra tracks.\n+ # for albums that have missing or extra tracks. If a mapping is\n+ # ambiguous, the items' disc and track number need to match in order\n+ # for an item to be mapped.\n mapping = {}\n for item in items:\n- for track_info in album_info.tracks:\n- if item.mb_trackid == track_info.track_id:\n- mapping[item] = track_info\n+ candidates = track_index.get(item.mb_trackid, [])\n+ if len(candidates) == 1:\n+ mapping[item] = candidates[0]\n+ continue\n+ for c in candidates:\n+ if c.medium_index == item.track and c.medium == item.disc:\n+ mapping[item] = c\n break\n \n # Apply.\n", "issue": "mbsync: Deal with albums that have multiple copies of the same recording\nthe current way mbsync plugin used to obtain track mapping list is to use the MusicBrainz recoding ID from each track, it's a workaround to handle \"missing or extra tracks\". This method is based on an assumption that for each MB release, there are no multiple tracks with same MB recording ID. It usually works, and in my case, only 4 out of 700+ albums disobey this assumption. But for this four albums, I have to fix them by tag track number by hand and re-import.\n\nConsidering it's called \"mbsync\", Why not make an assumption that track number in metadata is not corrupt and use it if possible, or fallback to MB recording ID way if it's corrupted(missing or extra track detected)\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2014, Jakob Schnitzer.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Update library's tags using MusicBrainz.\n\"\"\"\nimport logging\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import autotag, library, ui, util\nfrom beets.autotag import hooks\nfrom beets import config\n\nlog = logging.getLogger('beets')\n\n\ndef mbsync_singletons(lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for items matched by\n query.\n \"\"\"\n for item in lib.items(query + ['singleton:true']):\n if not item.mb_trackid:\n log.info(u'Skipping singleton {0}: has no mb_trackid'\n .format(item.title))\n continue\n\n # Get the MusicBrainz recording info.\n track_info = hooks.track_for_mbid(item.mb_trackid)\n if not track_info:\n log.info(u'Recording ID not found: {0}'.format(item.mb_trackid))\n continue\n\n # 
Apply.\n with lib.transaction():\n autotag.apply_item_metadata(item, track_info)\n apply_item_changes(lib, item, move, pretend, write)\n\n\ndef mbsync_albums(lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for albums matched by\n query and their items.\n \"\"\"\n # Process matching albums.\n for a in lib.albums(query):\n if not a.mb_albumid:\n log.info(u'Skipping album {0}: has no mb_albumid'.format(a.id))\n continue\n\n items = list(a.items())\n\n # Get the MusicBrainz album information.\n album_info = hooks.album_for_mbid(a.mb_albumid)\n if not album_info:\n log.info(u'Release ID not found: {0}'.format(a.mb_albumid))\n continue\n\n # Construct a track mapping according to MBIDs. This should work\n # for albums that have missing or extra tracks.\n mapping = {}\n for item in items:\n for track_info in album_info.tracks:\n if item.mb_trackid == track_info.track_id:\n mapping[item] = track_info\n break\n\n # Apply.\n with lib.transaction():\n autotag.apply_metadata(album_info, mapping)\n changed = False\n for item in items:\n item_changed = ui.show_model_changes(item)\n changed |= item_changed\n if item_changed:\n apply_item_changes(lib, item, move, pretend, write)\n\n if not changed:\n # No change to any item.\n continue\n\n if not pretend:\n # Update album structure to reflect an item in it.\n for key in library.Album.item_keys:\n a[key] = items[0][key]\n a.store()\n\n # Move album art (and any inconsistent items).\n if move and lib.directory in util.ancestry(items[0].path):\n log.debug(u'moving album {0}'.format(a.id))\n a.move()\n\n\ndef apply_item_changes(lib, item, move, pretend, write):\n \"\"\"Store, move and write the item according to the arguments.\n \"\"\"\n if not pretend:\n # Move the item if it's in the library.\n if move and lib.directory in util.ancestry(item.path):\n item.move(with_album=False)\n\n if write:\n item.try_write()\n item.store()\n\n\ndef mbsync_func(lib, opts, args):\n \"\"\"Command handler for the mbsync function.\n \"\"\"\n move = opts.move\n pretend = opts.pretend\n write = opts.write\n query = ui.decargs(args)\n\n mbsync_singletons(lib, query, move, pretend, write)\n mbsync_albums(lib, query, move, pretend, write)\n\n\nclass MBSyncPlugin(BeetsPlugin):\n def __init__(self):\n super(MBSyncPlugin, self).__init__()\n\n def commands(self):\n cmd = ui.Subcommand('mbsync',\n help='update metadata from musicbrainz')\n cmd.parser.add_option('-p', '--pretend', action='store_true',\n help='show all changes but do nothing')\n cmd.parser.add_option('-M', '--nomove', action='store_false',\n default=True, dest='move',\n help=\"don't move files in library\")\n cmd.parser.add_option('-W', '--nowrite', action='store_false',\n default=config['import']['write'], dest='write',\n help=\"don't write updated metadata to files\")\n cmd.func = mbsync_func\n return [cmd]\n", "path": "beetsplug/mbsync.py"}], "after_files": [{"content": "# This file is part of beets.\n# Copyright 2014, Jakob Schnitzer.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies 
or substantial portions of the Software.\n\n\"\"\"Update library's tags using MusicBrainz.\n\"\"\"\nimport logging\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import autotag, library, ui, util\nfrom beets.autotag import hooks\nfrom beets import config\n\nlog = logging.getLogger('beets')\n\n\ndef mbsync_singletons(lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for items matched by\n query.\n \"\"\"\n for item in lib.items(query + ['singleton:true']):\n if not item.mb_trackid:\n log.info(u'Skipping singleton {0}: has no mb_trackid'\n .format(item.title))\n continue\n\n # Get the MusicBrainz recording info.\n track_info = hooks.track_for_mbid(item.mb_trackid)\n if not track_info:\n log.info(u'Recording ID not found: {0}'.format(item.mb_trackid))\n continue\n\n # Apply.\n with lib.transaction():\n autotag.apply_item_metadata(item, track_info)\n apply_item_changes(lib, item, move, pretend, write)\n\n\ndef mbsync_albums(lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for albums matched by\n query and their items.\n \"\"\"\n # Process matching albums.\n for a in lib.albums(query):\n if not a.mb_albumid:\n log.info(u'Skipping album {0}: has no mb_albumid'.format(a.id))\n continue\n\n items = list(a.items())\n\n # Get the MusicBrainz album information.\n album_info = hooks.album_for_mbid(a.mb_albumid)\n if not album_info:\n log.info(u'Release ID not found: {0}'.format(a.mb_albumid))\n continue\n\n # Construct an hash mapping recording MBIDs to their information. A\n # release can have recording MBIDs that appear multiple times in the\n # same release.\n track_index = {}\n for track_info in album_info.tracks:\n if track_info.track_id in track_index:\n track_index[track_info.track_id].append(track_info)\n else:\n track_index[track_info.track_id] = [track_info]\n\n # Construct a track mapping according to MBIDs. This should work\n # for albums that have missing or extra tracks. 
If a mapping is\n # ambiguous, the items' disc and track number need to match in order\n # for an item to be mapped.\n mapping = {}\n for item in items:\n candidates = track_index.get(item.mb_trackid, [])\n if len(candidates) == 1:\n mapping[item] = candidates[0]\n continue\n for c in candidates:\n if c.medium_index == item.track and c.medium == item.disc:\n mapping[item] = c\n break\n\n # Apply.\n with lib.transaction():\n autotag.apply_metadata(album_info, mapping)\n changed = False\n for item in items:\n item_changed = ui.show_model_changes(item)\n changed |= item_changed\n if item_changed:\n apply_item_changes(lib, item, move, pretend, write)\n\n if not changed:\n # No change to any item.\n continue\n\n if not pretend:\n # Update album structure to reflect an item in it.\n for key in library.Album.item_keys:\n a[key] = items[0][key]\n a.store()\n\n # Move album art (and any inconsistent items).\n if move and lib.directory in util.ancestry(items[0].path):\n log.debug(u'moving album {0}'.format(a.id))\n a.move()\n\n\ndef apply_item_changes(lib, item, move, pretend, write):\n \"\"\"Store, move and write the item according to the arguments.\n \"\"\"\n if not pretend:\n # Move the item if it's in the library.\n if move and lib.directory in util.ancestry(item.path):\n item.move(with_album=False)\n\n if write:\n item.try_write()\n item.store()\n\n\ndef mbsync_func(lib, opts, args):\n \"\"\"Command handler for the mbsync function.\n \"\"\"\n move = opts.move\n pretend = opts.pretend\n write = opts.write\n query = ui.decargs(args)\n\n mbsync_singletons(lib, query, move, pretend, write)\n mbsync_albums(lib, query, move, pretend, write)\n\n\nclass MBSyncPlugin(BeetsPlugin):\n def __init__(self):\n super(MBSyncPlugin, self).__init__()\n\n def commands(self):\n cmd = ui.Subcommand('mbsync',\n help='update metadata from musicbrainz')\n cmd.parser.add_option('-p', '--pretend', action='store_true',\n help='show all changes but do nothing')\n cmd.parser.add_option('-M', '--nomove', action='store_false',\n default=True, dest='move',\n help=\"don't move files in library\")\n cmd.parser.add_option('-W', '--nowrite', action='store_false',\n default=config['import']['write'], dest='write',\n help=\"don't write updated metadata to files\")\n cmd.func = mbsync_func\n return [cmd]\n", "path": "beetsplug/mbsync.py"}]} | 1,873 | 365 |
gh_patches_debug_29204 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-426 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in 45.state-management sample
To create the user profile property, the sample should refer to the `UserState`, but it is currently referring to the `ConversationState`.

Current code: `self.user_profile = self.conversation_state.create_property("UserProfile")`

Expected code: `self.user_profile = self.user_state.create_property("UserProfile")`
--- END ISSUE ---
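A small sketch of the intended wiring, assuming the standard `UserState`/`ConversationState` constructors over a shared `MemoryStorage` (the accessor variable names are illustrative):

```python
# Sketch: each state bucket creates its own property accessors, so the
# user profile must come from UserState, not ConversationState.
from botbuilder.core import ConversationState, MemoryStorage, UserState

storage = MemoryStorage()
conversation_state = ConversationState(storage)
user_state = UserState(storage)

# Scoped per conversation:
conversation_data_accessor = conversation_state.create_property("ConversationData")
# Scoped per user -- this is the line the sample got wrong:
user_profile_accessor = user_state.create_property("UserProfile")
```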
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `samples/45.state-management/bots/state_management_bot.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import time
5 import pytz
6 from datetime import datetime
7
8 from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState
9 from botbuilder.schema import ChannelAccount
10
11 from data_models import ConversationData, UserProfile
12
13
14 class StateManagementBot(ActivityHandler):
15 def __init__(self, conversation_state: ConversationState, user_state: UserState):
16 if conversation_state is None:
17 raise TypeError(
18 "[StateManagementBot]: Missing parameter. conversation_state is required but None was given"
19 )
20 if user_state is None:
21 raise TypeError(
22 "[StateManagementBot]: Missing parameter. user_state is required but None was given"
23 )
24
25 self.conversation_state = conversation_state
26 self.user_state = user_state
27
28 self.conversation_data = self.conversation_state.create_property(
29 "ConversationData"
30 )
31 self.user_profile = self.conversation_state.create_property("UserProfile")
32
33 async def on_turn(self, turn_context: TurnContext):
34 await super().on_turn(turn_context)
35
36 await self.conversation_state.save_changes(turn_context)
37 await self.user_state.save_changes(turn_context)
38
39 async def on_members_added_activity(
40 self, members_added: [ChannelAccount], turn_context: TurnContext
41 ):
42 for member in members_added:
43 if member.id != turn_context.activity.recipient.id:
44 await turn_context.send_activity(
45 "Welcome to State Bot Sample. Type anything to get started."
46 )
47
48 async def on_message_activity(self, turn_context: TurnContext):
49 # Get the state properties from the turn context.
50 user_profile = await self.user_profile.get(turn_context, UserProfile)
51 conversation_data = await self.conversation_data.get(
52 turn_context, ConversationData
53 )
54
55 if user_profile.name is None:
56 # First time around this is undefined, so we will prompt user for name.
57 if conversation_data.prompted_for_user_name:
58 # Set the name to what the user provided.
59 user_profile.name = turn_context.activity.text
60
61 # Acknowledge that we got their name.
62 await turn_context.send_activity(
63 f"Thanks { user_profile.name }. To see conversation data, type anything."
64 )
65
66 # Reset the flag to allow the bot to go though the cycle again.
67 conversation_data.prompted_for_user_name = False
68 else:
69 # Prompt the user for their name.
70 await turn_context.send_activity("What is your name?")
71
72 # Set the flag to true, so we don't prompt in the next turn.
73 conversation_data.prompted_for_user_name = True
74 else:
75 # Add message details to the conversation data.
76 conversation_data.timestamp = self.__datetime_from_utc_to_local(
77 turn_context.activity.timestamp
78 )
79 conversation_data.channel_id = turn_context.activity.channel_id
80
81 # Display state data.
82 await turn_context.send_activity(
83 f"{ user_profile.name } sent: { turn_context.activity.text }"
84 )
85 await turn_context.send_activity(
86 f"Message received at: { conversation_data.timestamp }"
87 )
88 await turn_context.send_activity(
89 f"Message received from: { conversation_data.channel_id }"
90 )
91
92 def __datetime_from_utc_to_local(self, utc_datetime):
93 now_timestamp = time.time()
94 offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(
95 now_timestamp
96 )
97 result = utc_datetime + offset
98 return result.strftime("%I:%M:%S %p, %A, %B %d of %Y")
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/samples/45.state-management/bots/state_management_bot.py b/samples/45.state-management/bots/state_management_bot.py
--- a/samples/45.state-management/bots/state_management_bot.py
+++ b/samples/45.state-management/bots/state_management_bot.py
@@ -2,7 +2,6 @@
# Licensed under the MIT License.
import time
-import pytz
from datetime import datetime
from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState
@@ -25,10 +24,10 @@
self.conversation_state = conversation_state
self.user_state = user_state
- self.conversation_data = self.conversation_state.create_property(
+ self.conversation_data_accessor = self.conversation_state.create_property(
"ConversationData"
)
- self.user_profile = self.conversation_state.create_property("UserProfile")
+ self.user_profile_accessor = self.user_state.create_property("UserProfile")
async def on_turn(self, turn_context: TurnContext):
await super().on_turn(turn_context)
@@ -47,8 +46,8 @@
async def on_message_activity(self, turn_context: TurnContext):
# Get the state properties from the turn context.
- user_profile = await self.user_profile.get(turn_context, UserProfile)
- conversation_data = await self.conversation_data.get(
+ user_profile = await self.user_profile_accessor.get(turn_context, UserProfile)
+ conversation_data = await self.conversation_data_accessor.get(
turn_context, ConversationData
)
| {"golden_diff": "diff --git a/samples/45.state-management/bots/state_management_bot.py b/samples/45.state-management/bots/state_management_bot.py\n--- a/samples/45.state-management/bots/state_management_bot.py\n+++ b/samples/45.state-management/bots/state_management_bot.py\n@@ -2,7 +2,6 @@\n # Licensed under the MIT License.\n \n import time\n-import pytz\n from datetime import datetime\n \n from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState\n@@ -25,10 +24,10 @@\n self.conversation_state = conversation_state\n self.user_state = user_state\n \n- self.conversation_data = self.conversation_state.create_property(\n+ self.conversation_data_accessor = self.conversation_state.create_property(\n \"ConversationData\"\n )\n- self.user_profile = self.conversation_state.create_property(\"UserProfile\")\n+ self.user_profile_accessor = self.user_state.create_property(\"UserProfile\")\n \n async def on_turn(self, turn_context: TurnContext):\n await super().on_turn(turn_context)\n@@ -47,8 +46,8 @@\n \n async def on_message_activity(self, turn_context: TurnContext):\n # Get the state properties from the turn context.\n- user_profile = await self.user_profile.get(turn_context, UserProfile)\n- conversation_data = await self.conversation_data.get(\n+ user_profile = await self.user_profile_accessor.get(turn_context, UserProfile)\n+ conversation_data = await self.conversation_data_accessor.get(\n turn_context, ConversationData\n )\n", "issue": "Bug in 45.state-management sample\n\r\nTo create the user profile property , it should refer the UserState but in the sample its referring the \r\nconversationstate.\r\n\r\nCurrent code : self.user_profile = self.conversation_state.create_property(\"UserProfile\")\r\n\r\nExpected code : self.user_profile = self.user_state.create_property(\"UserProfile\")\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport time\nimport pytz\nfrom datetime import datetime\n\nfrom botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState\nfrom botbuilder.schema import ChannelAccount\n\nfrom data_models import ConversationData, UserProfile\n\n\nclass StateManagementBot(ActivityHandler):\n def __init__(self, conversation_state: ConversationState, user_state: UserState):\n if conversation_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. conversation_state is required but None was given\"\n )\n if user_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. user_state is required but None was given\"\n )\n\n self.conversation_state = conversation_state\n self.user_state = user_state\n\n self.conversation_data = self.conversation_state.create_property(\n \"ConversationData\"\n )\n self.user_profile = self.conversation_state.create_property(\"UserProfile\")\n\n async def on_turn(self, turn_context: TurnContext):\n await super().on_turn(turn_context)\n\n await self.conversation_state.save_changes(turn_context)\n await self.user_state.save_changes(turn_context)\n\n async def on_members_added_activity(\n self, members_added: [ChannelAccount], turn_context: TurnContext\n ):\n for member in members_added:\n if member.id != turn_context.activity.recipient.id:\n await turn_context.send_activity(\n \"Welcome to State Bot Sample. 
Type anything to get started.\"\n )\n\n async def on_message_activity(self, turn_context: TurnContext):\n # Get the state properties from the turn context.\n user_profile = await self.user_profile.get(turn_context, UserProfile)\n conversation_data = await self.conversation_data.get(\n turn_context, ConversationData\n )\n\n if user_profile.name is None:\n # First time around this is undefined, so we will prompt user for name.\n if conversation_data.prompted_for_user_name:\n # Set the name to what the user provided.\n user_profile.name = turn_context.activity.text\n\n # Acknowledge that we got their name.\n await turn_context.send_activity(\n f\"Thanks { user_profile.name }. To see conversation data, type anything.\"\n )\n\n # Reset the flag to allow the bot to go though the cycle again.\n conversation_data.prompted_for_user_name = False\n else:\n # Prompt the user for their name.\n await turn_context.send_activity(\"What is your name?\")\n\n # Set the flag to true, so we don't prompt in the next turn.\n conversation_data.prompted_for_user_name = True\n else:\n # Add message details to the conversation data.\n conversation_data.timestamp = self.__datetime_from_utc_to_local(\n turn_context.activity.timestamp\n )\n conversation_data.channel_id = turn_context.activity.channel_id\n\n # Display state data.\n await turn_context.send_activity(\n f\"{ user_profile.name } sent: { turn_context.activity.text }\"\n )\n await turn_context.send_activity(\n f\"Message received at: { conversation_data.timestamp }\"\n )\n await turn_context.send_activity(\n f\"Message received from: { conversation_data.channel_id }\"\n )\n\n def __datetime_from_utc_to_local(self, utc_datetime):\n now_timestamp = time.time()\n offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(\n now_timestamp\n )\n result = utc_datetime + offset\n return result.strftime(\"%I:%M:%S %p, %A, %B %d of %Y\")\n", "path": "samples/45.state-management/bots/state_management_bot.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport time\nfrom datetime import datetime\n\nfrom botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState\nfrom botbuilder.schema import ChannelAccount\n\nfrom data_models import ConversationData, UserProfile\n\n\nclass StateManagementBot(ActivityHandler):\n def __init__(self, conversation_state: ConversationState, user_state: UserState):\n if conversation_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. conversation_state is required but None was given\"\n )\n if user_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. user_state is required but None was given\"\n )\n\n self.conversation_state = conversation_state\n self.user_state = user_state\n\n self.conversation_data_accessor = self.conversation_state.create_property(\n \"ConversationData\"\n )\n self.user_profile_accessor = self.user_state.create_property(\"UserProfile\")\n\n async def on_turn(self, turn_context: TurnContext):\n await super().on_turn(turn_context)\n\n await self.conversation_state.save_changes(turn_context)\n await self.user_state.save_changes(turn_context)\n\n async def on_members_added_activity(\n self, members_added: [ChannelAccount], turn_context: TurnContext\n ):\n for member in members_added:\n if member.id != turn_context.activity.recipient.id:\n await turn_context.send_activity(\n \"Welcome to State Bot Sample. 
Type anything to get started.\"\n )\n\n async def on_message_activity(self, turn_context: TurnContext):\n # Get the state properties from the turn context.\n user_profile = await self.user_profile_accessor.get(turn_context, UserProfile)\n conversation_data = await self.conversation_data_accessor.get(\n turn_context, ConversationData\n )\n\n if user_profile.name is None:\n # First time around this is undefined, so we will prompt user for name.\n if conversation_data.prompted_for_user_name:\n # Set the name to what the user provided.\n user_profile.name = turn_context.activity.text\n\n # Acknowledge that we got their name.\n await turn_context.send_activity(\n f\"Thanks { user_profile.name }. To see conversation data, type anything.\"\n )\n\n # Reset the flag to allow the bot to go though the cycle again.\n conversation_data.prompted_for_user_name = False\n else:\n # Prompt the user for their name.\n await turn_context.send_activity(\"What is your name?\")\n\n # Set the flag to true, so we don't prompt in the next turn.\n conversation_data.prompted_for_user_name = True\n else:\n # Add message details to the conversation data.\n conversation_data.timestamp = self.__datetime_from_utc_to_local(\n turn_context.activity.timestamp\n )\n conversation_data.channel_id = turn_context.activity.channel_id\n\n # Display state data.\n await turn_context.send_activity(\n f\"{ user_profile.name } sent: { turn_context.activity.text }\"\n )\n await turn_context.send_activity(\n f\"Message received at: { conversation_data.timestamp }\"\n )\n await turn_context.send_activity(\n f\"Message received from: { conversation_data.channel_id }\"\n )\n\n def __datetime_from_utc_to_local(self, utc_datetime):\n now_timestamp = time.time()\n offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(\n now_timestamp\n )\n result = utc_datetime + offset\n return result.strftime(\"%I:%M:%S %p, %A, %B %d of %Y\")\n", "path": "samples/45.state-management/bots/state_management_bot.py"}]} | 1,280 | 334 |
gh_patches_debug_5466 | rasdani/github-patches | git_diff | docker__docker-py-820 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
POST /volumes is now POST /volumes/create
https://github.com/docker/docker/pull/17136
--- END ISSUE ---
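For context, a sketch of how the fixed method would be exercised from docker-py of that era; the volume name is illustrative and a running Docker daemon is assumed:

```python
# Sketch: with the endpoint fixed, volume creation posts to /volumes/create.
import docker

client = docker.Client()  # docker-py 1.x client
vol = client.create_volume(name="data", driver="local")
print(vol["Name"])  # the daemon echoes back the created volume
```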
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/api/volume.py`
Content:
```
1 from .. import utils
2
3
4 class VolumeApiMixin(object):
5 @utils.minimum_version('1.21')
6 def volumes(self, filters=None):
7 params = {
8 'filter': utils.convert_filters(filters) if filters else None
9 }
10 url = self._url('/volumes')
11 return self._result(self._get(url, params=params), True)
12
13 @utils.minimum_version('1.21')
14 def create_volume(self, name, driver=None, driver_opts=None):
15 url = self._url('/volumes')
16 if driver_opts is not None and not isinstance(driver_opts, dict):
17 raise TypeError('driver_opts must be a dictionary')
18
19 data = {
20 'Name': name,
21 'Driver': driver,
22 'DriverOpts': driver_opts,
23 }
24 return self._result(self._post_json(url, data=data), True)
25
26 @utils.minimum_version('1.21')
27 def inspect_volume(self, name):
28 url = self._url('/volumes/{0}', name)
29 return self._result(self._get(url), True)
30
31 @utils.minimum_version('1.21')
32 def remove_volume(self, name):
33 url = self._url('/volumes/{0}', name)
34 resp = self._delete(url)
35 self._raise_for_status(resp)
36 return True
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/api/volume.py b/docker/api/volume.py
--- a/docker/api/volume.py
+++ b/docker/api/volume.py
@@ -12,7 +12,7 @@
@utils.minimum_version('1.21')
def create_volume(self, name, driver=None, driver_opts=None):
- url = self._url('/volumes')
+ url = self._url('/volumes/create')
if driver_opts is not None and not isinstance(driver_opts, dict):
raise TypeError('driver_opts must be a dictionary')
| {"golden_diff": "diff --git a/docker/api/volume.py b/docker/api/volume.py\n--- a/docker/api/volume.py\n+++ b/docker/api/volume.py\n@@ -12,7 +12,7 @@\n \n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n- url = self._url('/volumes')\n+ url = self._url('/volumes/create')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n", "issue": "POST /volumes is now POST /volumes/create\nhttps://github.com/docker/docker/pull/17136\n\n", "before_files": [{"content": "from .. import utils\n\n\nclass VolumeApiMixin(object):\n @utils.minimum_version('1.21')\n def volumes(self, filters=None):\n params = {\n 'filter': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/volumes')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n url = self._url('/volumes')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n\n data = {\n 'Name': name,\n 'Driver': driver,\n 'DriverOpts': driver_opts,\n }\n return self._result(self._post_json(url, data=data), True)\n\n @utils.minimum_version('1.21')\n def inspect_volume(self, name):\n url = self._url('/volumes/{0}', name)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.21')\n def remove_volume(self, name):\n url = self._url('/volumes/{0}', name)\n resp = self._delete(url)\n self._raise_for_status(resp)\n return True\n", "path": "docker/api/volume.py"}], "after_files": [{"content": "from .. import utils\n\n\nclass VolumeApiMixin(object):\n @utils.minimum_version('1.21')\n def volumes(self, filters=None):\n params = {\n 'filter': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/volumes')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n url = self._url('/volumes/create')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n\n data = {\n 'Name': name,\n 'Driver': driver,\n 'DriverOpts': driver_opts,\n }\n return self._result(self._post_json(url, data=data), True)\n\n @utils.minimum_version('1.21')\n def inspect_volume(self, name):\n url = self._url('/volumes/{0}', name)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.21')\n def remove_volume(self, name):\n url = self._url('/volumes/{0}', name)\n resp = self._delete(url)\n self._raise_for_status(resp)\n return True\n", "path": "docker/api/volume.py"}]} | 634 | 120 |
gh_patches_debug_5879 | rasdani/github-patches | git_diff | inventree__InvenTree-1860 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Migration warns for phantom part changes
Here is the warning:
```
Your models in app(s): 'part' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
```
Running `manage.py makemigrations` does **not** generate a new migration file...
Running `manage.py showmigrations part` shows all part migrations are complete.
I found this warning with both PostgreSQL and SQLite3, in case the database backend is relevant.
--- END ISSUE ---
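The warning typically appears when a field computes its kwargs (such as currency choices) from runtime settings, so `migrate` sees different field arguments than the recorded migration. A minimal sketch of the guard pattern, where `get_runtime_currency` is a hypothetical helper standing in for the real settings lookup:

```python
import sys

from djmoney.models.fields import MoneyField


def get_runtime_currency():
    # Hypothetical stand-in for reading the configured default currency.
    return 'USD'


class StableMoneyField(MoneyField):
    """Freezes dynamic kwargs during migration commands (sketch)."""

    def __init__(self, **kwargs):
        if 'migrate' in sys.argv or 'makemigrations' in sys.argv:
            # Both commands must see identical field arguments; otherwise
            # `migrate` keeps warning about changes no migration records.
            kwargs['default_currency'] = ''
            kwargs['currency_choices'] = []
        else:
            kwargs.setdefault('default_currency', get_runtime_currency())
        super().__init__(**kwargs)
```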
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/InvenTree/fields.py`
Content:
```
1 """ Custom fields used in InvenTree """
2
3 # -*- coding: utf-8 -*-
4 from __future__ import unicode_literals
5 import sys
6
7 from .validators import allowable_url_schemes
8
9 from django.utils.translation import ugettext_lazy as _
10
11 from django.forms.fields import URLField as FormURLField
12 from django.db import models as models
13 from django.core import validators
14 from django import forms
15
16 from decimal import Decimal
17
18 from djmoney.models.fields import MoneyField as ModelMoneyField
19 from djmoney.forms.fields import MoneyField
20 from djmoney.models.validators import MinMoneyValidator
21
22 import InvenTree.helpers
23
24
25 class InvenTreeURLFormField(FormURLField):
26 """ Custom URL form field with custom scheme validators """
27
28 default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
29
30
31 class InvenTreeURLField(models.URLField):
32 """ Custom URL field which has custom scheme validators """
33
34 default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
35
36 def formfield(self, **kwargs):
37 return super().formfield(**{
38 'form_class': InvenTreeURLFormField
39 })
40
41
42 def money_kwargs():
43 """ returns the database settings for MoneyFields """
44 from common.settings import currency_code_mappings, currency_code_default
45
46 kwargs = {}
47 kwargs['currency_choices'] = currency_code_mappings()
48 kwargs['default_currency'] = currency_code_default()
49 return kwargs
50
51
52 class InvenTreeModelMoneyField(ModelMoneyField):
53 """
54 Custom MoneyField for clean migrations while using dynamic currency settings
55 """
56
57 def __init__(self, **kwargs):
58 # detect if creating migration
59 if 'makemigrations' in sys.argv:
60 # remove currency information for a clean migration
61 kwargs['default_currency'] = ''
62 kwargs['currency_choices'] = []
63 else:
64 # set defaults
65 kwargs.update(money_kwargs())
66
67 # Set a minimum value validator
68 validators = kwargs.get('validators', [])
69
70 if len(validators) == 0:
71 validators.append(
72 MinMoneyValidator(0),
73 )
74
75 kwargs['validators'] = validators
76
77 super().__init__(**kwargs)
78
79 def formfield(self, **kwargs):
80 """ override form class to use own function """
81 kwargs['form_class'] = InvenTreeMoneyField
82 return super().formfield(**kwargs)
83
84
85 class InvenTreeMoneyField(MoneyField):
86 """ custom MoneyField for clean migrations while using dynamic currency settings """
87 def __init__(self, *args, **kwargs):
88 # override initial values with the real info from database
89 kwargs.update(money_kwargs())
90 super().__init__(*args, **kwargs)
91
92
93 class DatePickerFormField(forms.DateField):
94 """
95 Custom date-picker field
96 """
97
98 def __init__(self, **kwargs):
99
100 help_text = kwargs.get('help_text', _('Enter date'))
101 label = kwargs.get('label', None)
102 required = kwargs.get('required', False)
103 initial = kwargs.get('initial', None)
104
105 widget = forms.DateInput(
106 attrs={
107 'type': 'date',
108 }
109 )
110
111 forms.DateField.__init__(
112 self,
113 required=required,
114 initial=initial,
115 help_text=help_text,
116 widget=widget,
117 label=label
118 )
119
120
121 def round_decimal(value, places):
122 """
123 Round value to the specified number of places.
124 """
125
126 if value is not None:
127 # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options
128 return value.quantize(Decimal(10) ** -places)
129 return value
130
131
132 class RoundingDecimalFormField(forms.DecimalField):
133 def to_python(self, value):
134 value = super(RoundingDecimalFormField, self).to_python(value)
135 value = round_decimal(value, self.decimal_places)
136 return value
137
138 def prepare_value(self, value):
139 """
140 Override the 'prepare_value' method, to remove trailing zeros when displaying.
141 Why? It looks nice!
142 """
143
144 if type(value) == Decimal:
145 return InvenTree.helpers.normalize(value)
146 else:
147 return value
148
149
150 class RoundingDecimalField(models.DecimalField):
151 def to_python(self, value):
152 value = super(RoundingDecimalField, self).to_python(value)
153 return round_decimal(value, self.decimal_places)
154
155 def formfield(self, **kwargs):
156 defaults = {
157 'form_class': RoundingDecimalFormField
158 }
159
160 defaults.update(kwargs)
161
162 return super().formfield(**kwargs)
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py
--- a/InvenTree/InvenTree/fields.py
+++ b/InvenTree/InvenTree/fields.py
@@ -55,7 +55,7 @@
def __init__(self, **kwargs):
# detect if creating migration
- if 'makemigrations' in sys.argv:
+ if 'migrate' in sys.argv or 'makemigrations' in sys.argv:
# remove currency information for a clean migration
kwargs['default_currency'] = ''
kwargs['currency_choices'] = []
| {"golden_diff": "diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py\n--- a/InvenTree/InvenTree/fields.py\n+++ b/InvenTree/InvenTree/fields.py\n@@ -55,7 +55,7 @@\n \n def __init__(self, **kwargs):\n # detect if creating migration\n- if 'makemigrations' in sys.argv:\n+ if 'migrate' in sys.argv or 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n", "issue": "Migration warns for phantom part changes \nHere is the warning:\r\n\r\n```\r\nYour models in app(s): 'part' have changes that are not yet reflected in a migration, and so won't be applied.\r\nRun 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.\r\n```\r\n\r\nRunning `manage.py makemigrations` does **not** generate new migration file...\r\n\r\nRunning `manage.py showmigrations part` shows all part migrations are complete.\r\n\r\nI found this warning both with PostGreSQL and SQLite3, if that has anything to do with backend dependency.\n", "before_files": [{"content": "\"\"\" Custom fields used in InvenTree \"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nimport sys\n\nfrom .validators import allowable_url_schemes\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.forms.fields import URLField as FormURLField\nfrom django.db import models as models\nfrom django.core import validators\nfrom django import forms\n\nfrom decimal import Decimal\n\nfrom djmoney.models.fields import MoneyField as ModelMoneyField\nfrom djmoney.forms.fields import MoneyField\nfrom djmoney.models.validators import MinMoneyValidator\n\nimport InvenTree.helpers\n\n\nclass InvenTreeURLFormField(FormURLField):\n \"\"\" Custom URL form field with custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n\nclass InvenTreeURLField(models.URLField):\n \"\"\" Custom URL field which has custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n def formfield(self, **kwargs):\n return super().formfield(**{\n 'form_class': InvenTreeURLFormField\n })\n\n\ndef money_kwargs():\n \"\"\" returns the database settings for MoneyFields \"\"\"\n from common.settings import currency_code_mappings, currency_code_default\n\n kwargs = {}\n kwargs['currency_choices'] = currency_code_mappings()\n kwargs['default_currency'] = currency_code_default()\n return kwargs\n\n\nclass InvenTreeModelMoneyField(ModelMoneyField):\n \"\"\"\n Custom MoneyField for clean migrations while using dynamic currency settings\n \"\"\"\n \n def __init__(self, **kwargs):\n # detect if creating migration\n if 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n else:\n # set defaults\n kwargs.update(money_kwargs())\n\n # Set a minimum value validator\n validators = kwargs.get('validators', [])\n\n if len(validators) == 0:\n validators.append(\n MinMoneyValidator(0),\n )\n\n kwargs['validators'] = validators\n\n super().__init__(**kwargs)\n\n def formfield(self, **kwargs):\n \"\"\" override form class to use own function \"\"\"\n kwargs['form_class'] = InvenTreeMoneyField\n return super().formfield(**kwargs)\n\n\nclass InvenTreeMoneyField(MoneyField):\n \"\"\" custom MoneyField for clean migrations while using dynamic currency settings \"\"\"\n def __init__(self, *args, 
**kwargs):\n # override initial values with the real info from database\n kwargs.update(money_kwargs())\n super().__init__(*args, **kwargs)\n\n\nclass DatePickerFormField(forms.DateField):\n \"\"\"\n Custom date-picker field\n \"\"\"\n\n def __init__(self, **kwargs):\n\n help_text = kwargs.get('help_text', _('Enter date'))\n label = kwargs.get('label', None)\n required = kwargs.get('required', False)\n initial = kwargs.get('initial', None)\n\n widget = forms.DateInput(\n attrs={\n 'type': 'date',\n }\n )\n\n forms.DateField.__init__(\n self,\n required=required,\n initial=initial,\n help_text=help_text,\n widget=widget,\n label=label\n )\n\n\ndef round_decimal(value, places):\n \"\"\"\n Round value to the specified number of places.\n \"\"\"\n\n if value is not None:\n # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options\n return value.quantize(Decimal(10) ** -places)\n return value\n\n\nclass RoundingDecimalFormField(forms.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalFormField, self).to_python(value)\n value = round_decimal(value, self.decimal_places)\n return value\n\n def prepare_value(self, value):\n \"\"\"\n Override the 'prepare_value' method, to remove trailing zeros when displaying.\n Why? It looks nice!\n \"\"\"\n\n if type(value) == Decimal:\n return InvenTree.helpers.normalize(value)\n else:\n return value\n\n\nclass RoundingDecimalField(models.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalField, self).to_python(value)\n return round_decimal(value, self.decimal_places)\n\n def formfield(self, **kwargs):\n defaults = {\n 'form_class': RoundingDecimalFormField\n }\n\n defaults.update(kwargs)\n\n return super().formfield(**kwargs)\n", "path": "InvenTree/InvenTree/fields.py"}], "after_files": [{"content": "\"\"\" Custom fields used in InvenTree \"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nimport sys\n\nfrom .validators import allowable_url_schemes\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.forms.fields import URLField as FormURLField\nfrom django.db import models as models\nfrom django.core import validators\nfrom django import forms\n\nfrom decimal import Decimal\n\nfrom djmoney.models.fields import MoneyField as ModelMoneyField\nfrom djmoney.forms.fields import MoneyField\nfrom djmoney.models.validators import MinMoneyValidator\n\nimport InvenTree.helpers\nimport common.settings\n\n\nclass InvenTreeURLFormField(FormURLField):\n \"\"\" Custom URL form field with custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n\nclass InvenTreeURLField(models.URLField):\n \"\"\" Custom URL field which has custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n def formfield(self, **kwargs):\n return super().formfield(**{\n 'form_class': InvenTreeURLFormField\n })\n\n\ndef money_kwargs():\n \"\"\" returns the database settings for MoneyFields \"\"\"\n kwargs = {}\n kwargs['currency_choices'] = common.settings.currency_code_mappings()\n kwargs['default_currency'] = common.settings.currency_code_default\n return kwargs\n\n\nclass InvenTreeModelMoneyField(ModelMoneyField):\n \"\"\"\n Custom MoneyField for clean migrations while using dynamic currency settings\n \"\"\"\n \n def __init__(self, **kwargs):\n # detect if creating migration\n if 'migrate' in sys.argv or 'makemigrations' in sys.argv:\n # remove currency 
information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n else:\n # set defaults\n kwargs.update(money_kwargs())\n\n # Set a minimum value validator\n validators = kwargs.get('validators', [])\n\n if len(validators) == 0:\n validators.append(\n MinMoneyValidator(0),\n )\n\n kwargs['validators'] = validators\n\n super().__init__(**kwargs)\n\n def formfield(self, **kwargs):\n \"\"\" override form class to use own function \"\"\"\n kwargs['form_class'] = InvenTreeMoneyField\n return super().formfield(**kwargs)\n\n\nclass InvenTreeMoneyField(MoneyField):\n \"\"\" custom MoneyField for clean migrations while using dynamic currency settings \"\"\"\n def __init__(self, *args, **kwargs):\n # override initial values with the real info from database\n kwargs.update(money_kwargs())\n super().__init__(*args, **kwargs)\n\n\nclass DatePickerFormField(forms.DateField):\n \"\"\"\n Custom date-picker field\n \"\"\"\n\n def __init__(self, **kwargs):\n\n help_text = kwargs.get('help_text', _('Enter date'))\n label = kwargs.get('label', None)\n required = kwargs.get('required', False)\n initial = kwargs.get('initial', None)\n\n widget = forms.DateInput(\n attrs={\n 'type': 'date',\n }\n )\n\n forms.DateField.__init__(\n self,\n required=required,\n initial=initial,\n help_text=help_text,\n widget=widget,\n label=label\n )\n\n\ndef round_decimal(value, places):\n \"\"\"\n Round value to the specified number of places.\n \"\"\"\n\n if value is not None:\n # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options\n return value.quantize(Decimal(10) ** -places)\n return value\n\n\nclass RoundingDecimalFormField(forms.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalFormField, self).to_python(value)\n value = round_decimal(value, self.decimal_places)\n return value\n\n def prepare_value(self, value):\n \"\"\"\n Override the 'prepare_value' method, to remove trailing zeros when displaying.\n Why? It looks nice!\n \"\"\"\n\n if type(value) == Decimal:\n return InvenTree.helpers.normalize(value)\n else:\n return value\n\n\nclass RoundingDecimalField(models.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalField, self).to_python(value)\n return round_decimal(value, self.decimal_places)\n\n def formfield(self, **kwargs):\n defaults = {\n 'form_class': RoundingDecimalFormField\n }\n\n defaults.update(kwargs)\n\n return super().formfield(**kwargs)\n", "path": "InvenTree/InvenTree/fields.py"}]} | 1,748 | 144 |
gh_patches_debug_14841 | rasdani/github-patches | git_diff | koxudaxi__datamodel-code-generator-421 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IPv4Address doesn't import from pydantic.validators
**Describe the bug**
When using `format: ipv4`, the following import is added to the output:
```py
from pydantic import IPv4Address
```
This isn't a valid import.
**To Reproduce**
Example schema:
```yaml
openapi: 3.0.0
info:
version: 0.0.1
title: Foo API
paths:
/foo:
get:
responses:
"200":
description: Success
components:
schemas:
Foo:
type: object
properties:
ip:
type: string
format: ipv4
```
Used commandline:
```
$ datamodel-codegen --input openapi.yaml
```
**Expected behavior**
When using `format: ipv4`, the following import is added to the output:
```py
from pydantic.validators import IPv4Address
```
**Version:**
- OS: MacOS
- Python version: `3.9.2`
- datamodel-code-generator version: `0.8.2`
**Additional context**
None
--- END ISSUE ---
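Worth noting: pydantic validates IP fields against the standard library's `ipaddress` types, so the import that actually resolves comes from `ipaddress` rather than `pydantic` (this matches the fix applied below, even though the report expected `pydantic.validators`). A sketch of the output the generator should emit for `format: ipv4`:

```python
from ipaddress import IPv4Address
from typing import Optional

from pydantic import BaseModel


class Foo(BaseModel):
    ip: Optional[IPv4Address] = None  # pydantic coerces valid strings


print(Foo(ip="10.0.0.1").ip)  # -> IPv4Address('10.0.0.1')
```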
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `datamodel_code_generator/model/pydantic/imports.py`
Content:
```
1 from datamodel_code_generator.imports import Import
2
3 IMPORT_CONSTR = Import.from_full_path('pydantic.constr')
4 IMPORT_CONINT = Import.from_full_path('pydantic.conint')
5 IMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')
6 IMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')
7 IMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')
8 IMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')
9 IMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')
10 IMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')
11 IMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')
12 IMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')
13 IMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')
14 IMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')
15 IMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')
16 IMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')
17 IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')
18 IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')
19 IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')
20 IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')
21 IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')
22 IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')
23 IMPORT_FIELD = Import.from_full_path('pydantic.Field')
24 IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')
25 IMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')
26 IMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')
27 IMPORT_STRICT_BOOL = Import.from_full_path('pydantic.StrictBool')
28 IMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')
29 IMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/datamodel_code_generator/model/pydantic/imports.py b/datamodel_code_generator/model/pydantic/imports.py
--- a/datamodel_code_generator/model/pydantic/imports.py
+++ b/datamodel_code_generator/model/pydantic/imports.py
@@ -17,8 +17,8 @@
IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')
IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')
IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')
-IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')
-IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')
+IMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')
+IMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')
IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')
IMPORT_FIELD = Import.from_full_path('pydantic.Field')
IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')
| {"golden_diff": "diff --git a/datamodel_code_generator/model/pydantic/imports.py b/datamodel_code_generator/model/pydantic/imports.py\n--- a/datamodel_code_generator/model/pydantic/imports.py\n+++ b/datamodel_code_generator/model/pydantic/imports.py\n@@ -17,8 +17,8 @@\n IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\n IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\n IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\n-IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')\n-IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')\n+IMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')\n+IMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')\n IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\n IMPORT_FIELD = Import.from_full_path('pydantic.Field')\n IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\n", "issue": "IPv4Address doesn't import from pydantic.validators\n**Describe the bug**\r\n\r\nWhen using `format: ipv4`, the following import is added to the output:\r\n\r\n```py\r\nfrom pydantic import IPv4Address\r\n```\r\n\r\nThis isn't a valid import.\r\n\r\n**To Reproduce**\r\n\r\nExample schema:\r\n```yaml\r\nopenapi: 3.0.0\r\n\r\ninfo:\r\n version: 0.0.1\r\n title: Foo API\r\n\r\npaths:\r\n /foo:\r\n get:\r\n responses:\r\n \"200\":\r\n description: Success\r\n\r\ncomponents:\r\n schemas:\r\n Foo:\r\n type: object\r\n properties:\r\n ip:\r\n type: string\r\n format: ipv4\r\n```\r\n\r\nUsed commandline:\r\n```\r\n$ datamodel-codegen --input openapi.yaml\r\n```\r\n\r\n**Expected behavior**\r\n\r\nWhen using `format: ipv4`, the following import is added to the output:\r\n\r\n```py\r\nfrom pydantic.validators import IPv4Address\r\n```\r\n\r\n**Version:**\r\n - OS: MacOS\r\n - Python version: `3.9.2`\r\n - datamodel-code-generator version: `0.8.2`\r\n\r\n**Additional context**\r\nNone\r\n\n", "before_files": [{"content": "from datamodel_code_generator.imports import Import\n\nIMPORT_CONSTR = Import.from_full_path('pydantic.constr')\nIMPORT_CONINT = Import.from_full_path('pydantic.conint')\nIMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')\nIMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')\nIMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')\nIMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')\nIMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')\nIMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')\nIMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')\nIMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')\nIMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')\nIMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')\nIMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')\nIMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')\nIMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\nIMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\nIMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\nIMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')\nIMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')\nIMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\nIMPORT_FIELD = Import.from_full_path('pydantic.Field')\nIMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\nIMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')\nIMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')\nIMPORT_STRICT_BOOL = 
Import.from_full_path('pydantic.StrictBool')\nIMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')\nIMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')\n", "path": "datamodel_code_generator/model/pydantic/imports.py"}], "after_files": [{"content": "from datamodel_code_generator.imports import Import\n\nIMPORT_CONSTR = Import.from_full_path('pydantic.constr')\nIMPORT_CONINT = Import.from_full_path('pydantic.conint')\nIMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')\nIMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')\nIMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')\nIMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')\nIMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')\nIMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')\nIMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')\nIMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')\nIMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')\nIMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')\nIMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')\nIMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')\nIMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\nIMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\nIMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\nIMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')\nIMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')\nIMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\nIMPORT_FIELD = Import.from_full_path('pydantic.Field')\nIMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\nIMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')\nIMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')\nIMPORT_STRICT_BOOL = Import.from_full_path('pydantic.StrictBool')\nIMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')\nIMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')\n", "path": "datamodel_code_generator/model/pydantic/imports.py"}]} | 999 | 230 |
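
The fix above replaces the invalid `pydantic.IPv4Address` import with the standard-library `ipaddress.IPv4Address`. A minimal sketch of the corrected usage, assuming pydantic v1.x (the model name is illustrative):

```python
from ipaddress import IPv4Address  # the type lives in the standard library

from pydantic import BaseModel

class Foo(BaseModel):
    ip: IPv4Address  # pydantic v1 validates stdlib IP types natively

print(Foo(ip="192.0.2.1").ip)  # IPv4Address('192.0.2.1')
# "from pydantic import IPv4Address" raises ImportError -- the bug fixed above
```
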
gh_patches_debug_417 | rasdani/github-patches | git_diff | python__python-docs-es-1712 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Translate 'library/base64.po'
This needs to reach 100% translated.
The rendered version of this file will be available at https://docs.python.org/es/3.10/library/base64.html once translated.
Meanwhile, the English version is shown.
Current stats for `library/base64.po`:
* Fuzzy: 4
* Percent translated: 90.9%
* Entries: 50 / 55
* Untranslated: 5
Please comment here if you want this file to be assigned to you, and a member will assign it to you as soon as possible so you can start working on it.
Remember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/translate.py`
Content:
```
1 import os
2 import re
3 import sys
4 from typing import Dict, Tuple
5
6 import polib
7
8 VERBOSE = False
9 DEBUG = False
10 SKIP_TRANSLATED_ENTRIES = True
11
12 try:
13 from deep_translator import GoogleTranslator
14 except ImportError:
15 print("Error: This util script needs `deep_translator` to be installed")
16 sys.exit(1)
17
18 _patterns = [
19 ":c:func:`[^`]+`",
20 ":c:type:`[^`]+`",
21 ":c:macro:`[^`]+`",
22 ":c:member:`[^`]+`",
23 ":c:data:`[^`]+`",
24 ":py:data:`[^`]+`",
25 ":py:mod:`[^`]+`",
26 ":func:`[^`]+`",
27 ":mod:`[^`]+`",
28 ":ref:`[^`]+`",
29 ":class:`[^`]+`",
30 ":pep:`[^`]+`",
31 ":data:`[^`]+`",
32 ":exc:`[^`]+`",
33 ":term:`[^`]+`",
34 ":meth:`[^`]+`",
35 ":envvar:`[^`]+`",
36 ":file:`[^`]+`",
37 ":attr:`[^`]+`",
38 ":const:`[^`]+`",
39 ":issue:`[^`]+`",
40 ":opcode:`[^`]+`",
41 ":option:`[^`]+`",
42 ":program:`[^`]+`",
43 ":keyword:`[^`]+`",
44 ":RFC:`[^`]+`",
45 ":doc:`[^`]+`",
46 "``[^`]+``",
47 "`[^`]+`__",
48 "`[^`]+`_",
49 "\*\*.+\*\*", # bold text between **
50 "\*.+\*", # italic text between *
51 ]
52
53 _exps = [re.compile(e) for e in _patterns]
54
55 def protect_sphinx_directives(s: str) -> Tuple[dict, str]:
56 """
57 Parameters:
58 string containing the text to translate
59
60 Returns:
61 dictionary containing all the placeholder text as keys
62 and the correct value.
63 """
64
65 i = 0
66 d: Dict[str, str] = {}
67 for exp in _exps:
68 matches = exp.findall(s)
69 if DEBUG:
70 print(exp, matches)
71 for match in matches:
72 ph = f"XASDF{str(i).zfill(2)}"
73 s = s.replace(match, ph)
74 if ph in d and VERBOSE:
75 print(f"Error: {ph} is already in the dictionary")
76 print("new", match)
77 print("old", d[ph])
78 d[ph] = match
79 i += 1
80 return d, s
81
82
83 def undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:
84 for ph, value in placeholders.items():
85 translated_text = translated_text.replace(ph, value)
86 if DEBUG:
87 print(ph, value)
88 print(translated_text)
89 return translated_text
90
91
92 if __name__ == "__main__":
93 filename = sys.argv[1]
94 if not os.path.isfile(filename):
95 print(f"File not found: '{filename}'")
96 sys.exit(-1)
97
98 po = polib.pofile(filename)
99 translator = GoogleTranslator(source="en", target="es")
100
101 for entry in po:
102 # If the entry has already a translation, skip.
103 if SKIP_TRANSLATED_ENTRIES and entry.msgstr:
104 continue
105
106 print("\nEN|", entry.msgid)
107 placeholders, temp_text = protect_sphinx_directives(entry.msgid)
108 if VERBOSE:
109 print(temp_text)
110 print(placeholders)
111
112 # Translate the temporary text without sphinx statements
113 translated_text = translator.translate(temp_text)
114
115 # Recover sphinx statements
116 real_text = undo_sphinx_directives_protection(placeholders, translated_text)
117 print("ES|", real_text)
118
119 # Replace the po file translated entry
120 entry.msgstr = real_text
121
122 # Save the file after all the entries are translated
123 po.save()
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/translate.py b/scripts/translate.py
--- a/scripts/translate.py
+++ b/scripts/translate.py
@@ -42,6 +42,7 @@
":program:`[^`]+`",
":keyword:`[^`]+`",
":RFC:`[^`]+`",
+ ":rfc:`[^`]+`",
":doc:`[^`]+`",
"``[^`]+``",
"`[^`]+`__",
| {"golden_diff": "diff --git a/scripts/translate.py b/scripts/translate.py\n--- a/scripts/translate.py\n+++ b/scripts/translate.py\n@@ -42,6 +42,7 @@\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n+ \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n", "issue": "Translate 'library/base64.po'\nThis needs to reach 100% translated.\n\nThe rendered version of this file will be available at https://docs.python.org/es/3.10/library/base64.html once translated.\nMeanwhile, the English version is shown.\n\nCurrent stats for `library/base64.po`:\n\n* Fuzzy: 4\n* Percent translated: 90.9%\n* Entries: 50 / 55\n* Untranslated: 5\n\nPlease, comment here if you want this file to be assigned to you and an member will assign it to you as soon as possible, so you can start working on it.\n\nRemember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).\n", "before_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*.+\\*\\*\", # bold text between **\n \"\\*.+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n print(placeholders)\n\n # Translate the temporary text without sphinx 
statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}], "after_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*.+\\*\\*\", # bold text between **\n \"\\*.+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n print(placeholders)\n\n # Translate the temporary text without sphinx statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}]} | 1,572 | 103 |
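
The patch adds a lowercase `:rfc:` pattern so RFC roles are masked before machine translation, like the other Sphinx roles. A short round-trip illustration reusing the two helpers defined in `scripts/translate.py` above (the import path and the sample string are hypothetical):

```python
from translate import (  # hypothetical import of the helpers shown above
    protect_sphinx_directives,
    undo_sphinx_directives_protection,
)

text = "Ver :rfc:`2045` y ``base64.b64encode``"
placeholders, protected = protect_sphinx_directives(text)
print(protected)  # "Ver XASDF00 y XASDF01" -- roles hidden from the translator
translated = protected  # a real run would pass this through GoogleTranslator
print(undo_sphinx_directives_protection(placeholders, translated))
# restores ":rfc:`2045`" intact only once the lowercase pattern exists
```
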
gh_patches_debug_5952 | rasdani/github-patches | git_diff | Kinto__kinto-386 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Activate POST on collections
```
$ curl -H "Content-Type: application/json" \
-X POST -d '{"data": {"test": "some_data"}}' --user testuser:abc123 \
https://kinto.dev.mozaws.net/v1/buckets/test_bucket/collections
{"errno":115,"message":"Method not allowed on this endpoint.","code":405,"error":"Method Not Allowed"}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/collections.py`
Content:
```
1 import colander
2 import jsonschema
3 from cliquet import resource
4 from jsonschema import exceptions as jsonschema_exceptions
5
6 from kinto.views import NameGenerator
7
8
9 class JSONSchemaMapping(colander.SchemaNode):
10 def schema_type(self, **kw):
11 return colander.Mapping(unknown='preserve')
12
13 def deserialize(self, cstruct=colander.null):
14 # Start by deserializing a simple mapping.
15 validated = super(JSONSchemaMapping, self).deserialize(cstruct)
16
17 # In case it is optional in parent schema.
18 if not validated or validated in (colander.null, colander.drop):
19 return validated
20
21 try:
22 jsonschema.Draft4Validator.check_schema(validated)
23 except jsonschema_exceptions.SchemaError as e:
24 self.raise_invalid(e.path.pop() + e.message)
25 return validated
26
27
28 class CollectionSchema(resource.ResourceSchema):
29 schema = JSONSchemaMapping(missing=colander.drop)
30 cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)
31
32 class Options:
33 preserve_unknown = True
34
35
36 @resource.register(name='collection',
37 collection_methods=('GET',),
38 collection_path='/buckets/{{bucket_id}}/collections',
39 record_path='/buckets/{{bucket_id}}/collections/{{id}}')
40 class Collection(resource.ProtectedResource):
41 mapping = CollectionSchema()
42 permissions = ('read', 'write', 'record:create')
43
44 def __init__(self, *args, **kwargs):
45 super(Collection, self).__init__(*args, **kwargs)
46 self.model.id_generator = NameGenerator()
47
48 def get_parent_id(self, request):
49 bucket_id = request.matchdict['bucket_id']
50 parent_id = '/buckets/%s' % bucket_id
51 return parent_id
52
53 def delete(self):
54 result = super(Collection, self).delete()
55
56 # Delete records.
57 storage = self.model.storage
58 parent_id = '%s/collections/%s' % (self.model.parent_id,
59 self.record_id)
60 storage.delete_all(collection_id='record',
61 parent_id=parent_id,
62 with_deleted=False)
63 storage.purge_deleted(collection_id='record', parent_id=parent_id)
64
65 return result
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/views/collections.py b/kinto/views/collections.py
--- a/kinto/views/collections.py
+++ b/kinto/views/collections.py
@@ -34,7 +34,7 @@
@resource.register(name='collection',
- collection_methods=('GET',),
+ collection_methods=('GET', 'POST'),
collection_path='/buckets/{{bucket_id}}/collections',
record_path='/buckets/{{bucket_id}}/collections/{{id}}')
class Collection(resource.ProtectedResource):
| {"golden_diff": "diff --git a/kinto/views/collections.py b/kinto/views/collections.py\n--- a/kinto/views/collections.py\n+++ b/kinto/views/collections.py\n@@ -34,7 +34,7 @@\n \n \n @resource.register(name='collection',\n- collection_methods=('GET',),\n+ collection_methods=('GET', 'POST'),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\n class Collection(resource.ProtectedResource):\n", "issue": "Activate POST on collections\n```\n$ curl -H \"Content-Type: application/json\" \\\n -X POST -d '{\"data\": {\"test\": \"some_data\"}}' --user testuser:abc123 \\\n https://kinto.dev.mozaws.net/v1/buckets/test_bucket/collections\n\n{\"errno\":115,\"message\":\"Method not allowed on this endpoint.\",\"code\":405,\"error\":\"Method Not Allowed\"}\n```\n\n", "before_files": [{"content": "import colander\nimport jsonschema\nfrom cliquet import resource\nfrom jsonschema import exceptions as jsonschema_exceptions\n\nfrom kinto.views import NameGenerator\n\n\nclass JSONSchemaMapping(colander.SchemaNode):\n def schema_type(self, **kw):\n return colander.Mapping(unknown='preserve')\n\n def deserialize(self, cstruct=colander.null):\n # Start by deserializing a simple mapping.\n validated = super(JSONSchemaMapping, self).deserialize(cstruct)\n\n # In case it is optional in parent schema.\n if not validated or validated in (colander.null, colander.drop):\n return validated\n\n try:\n jsonschema.Draft4Validator.check_schema(validated)\n except jsonschema_exceptions.SchemaError as e:\n self.raise_invalid(e.path.pop() + e.message)\n return validated\n\n\nclass CollectionSchema(resource.ResourceSchema):\n schema = JSONSchemaMapping(missing=colander.drop)\n cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)\n\n class Options:\n preserve_unknown = True\n\n\[email protected](name='collection',\n collection_methods=('GET',),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\nclass Collection(resource.ProtectedResource):\n mapping = CollectionSchema()\n permissions = ('read', 'write', 'record:create')\n\n def __init__(self, *args, **kwargs):\n super(Collection, self).__init__(*args, **kwargs)\n self.model.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = '/buckets/%s' % bucket_id\n return parent_id\n\n def delete(self):\n result = super(Collection, self).delete()\n\n # Delete records.\n storage = self.model.storage\n parent_id = '%s/collections/%s' % (self.model.parent_id,\n self.record_id)\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/collections.py"}], "after_files": [{"content": "import colander\nimport jsonschema\nfrom cliquet import resource\nfrom jsonschema import exceptions as jsonschema_exceptions\n\nfrom kinto.views import NameGenerator\n\n\nclass JSONSchemaMapping(colander.SchemaNode):\n def schema_type(self, **kw):\n return colander.Mapping(unknown='preserve')\n\n def deserialize(self, cstruct=colander.null):\n # Start by deserializing a simple mapping.\n validated = super(JSONSchemaMapping, self).deserialize(cstruct)\n\n # In case it is optional in parent schema.\n if not validated or validated in (colander.null, colander.drop):\n return validated\n\n try:\n jsonschema.Draft4Validator.check_schema(validated)\n except 
jsonschema_exceptions.SchemaError as e:\n self.raise_invalid(e.path.pop() + e.message)\n return validated\n\n\nclass CollectionSchema(resource.ResourceSchema):\n schema = JSONSchemaMapping(missing=colander.drop)\n cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)\n\n class Options:\n preserve_unknown = True\n\n\[email protected](name='collection',\n collection_methods=('GET', 'POST'),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\nclass Collection(resource.ProtectedResource):\n mapping = CollectionSchema()\n permissions = ('read', 'write', 'record:create')\n\n def __init__(self, *args, **kwargs):\n super(Collection, self).__init__(*args, **kwargs)\n self.model.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = '/buckets/%s' % bucket_id\n return parent_id\n\n def delete(self):\n result = super(Collection, self).delete()\n\n # Delete records.\n storage = self.model.storage\n parent_id = '%s/collections/%s' % (self.model.parent_id,\n self.record_id)\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/collections.py"}]} | 943 | 108 |
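
With `'POST'` added to `collection_methods`, the registered resource exposes creation on the plural endpoint. A smoke test mirroring the curl call from the issue (same demo server URL and credentials as quoted there; not guaranteed to be live):

```python
import requests

resp = requests.post(
    "https://kinto.dev.mozaws.net/v1/buckets/test_bucket/collections",
    json={"data": {"test": "some_data"}},
    auth=("testuser", "abc123"),
)
assert resp.status_code != 405  # was "Method Not Allowed" before the patch
print(resp.json())  # on success: the created collection with a generated id
```
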
gh_patches_debug_5335 | rasdani/github-patches | git_diff | Nitrate__Nitrate-415 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Importing XML fails with a wrong xml_version
Importing XML is not working; it reports a wrong xml_version of 1.1.
I exported a test case, generated the XML file, and tried to import that same file, but it does not work.
Thanks in advance.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/tcms/settings/product.py`
Content:
```
1 # Django settings for product env.
2
3 from tcms.settings.common import * # noqa
4
5 # Debug settings
6 DEBUG = False
7 TEMPLATE_DEBUG = DEBUG
8
9 # Database settings
10 DATABASES = {
11 'default': {
12 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],
13 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),
14 'USER': env.get('NITRATE_DB_USER', 'nitrate'),
15 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),
16 'HOST': env.get('NITRATE_DB_HOST', ''),
17 'PORT': env.get('NITRATE_DB_PORT', ''),
18 },
19 }
20
21 # For Kerberos authentication, uncomment out RemoteUserMiddleware.
22 # MIDDLEWARE += (
23 # 'django.contrib.auth.middleware.RemoteUserMiddleware',
24 # )
25
26 # Remote kerberos authentication backends
27 # AUTHENTICATION_BACKENDS = (
28 # 'tcms.auth.backends.ModAuthKerbBackend',
29 # )
30
31 # To enable database routers for read/write separation.
32 # DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']
33
34 # Kerberos realm
35 # KRB5_REALM = 'EXAMPLE.COM'
36
37 # User authentication by Bugzilla settings
38 # BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'
39
40
41 TEMPLATES[0].update({
42 'DIRS': ['/usr/share/nitrate/templates'],
43 })
44
45 # Set the default send mail address
46 EMAIL_HOST = 'smtp.example.com'
47 EMAIL_FROM = '[email protected]'
48
49 # Site-specific messages
50
51 # First run - to determine if it needs to prompt user or not.
52 FIRST_RUN = False
53
54 # You can add a help link on the footer of home page as following format:
55 # ('http://foo.com', 'foo')
56 FOOTER_LINKS = (
57 ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),
58 ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),
59 )
60
61 # added for nitrate3.4 compatibility
62 DEFAULT_GROUPS = ['default']
63 TESTOPIA_XML_VERSION = '1.0'
64
65 # admin settings
66 ADMINS = (
67 # ('Your Name', '[email protected]'),
68 )
69
70 DEFAULT_PAGE_SIZE = 100
71
```
Path: `docker/released/product.py`
Content:
```
1 # Django settings for product env.
2
3 from tcms.settings.common import * # noqa
4
5 # Debug settings
6 DEBUG = False
7 TEMPLATE_DEBUG = DEBUG
8
9 # Database settings
10 DATABASES = {
11 'default': {
12 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],
13 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),
14 'USER': env.get('NITRATE_DB_USER', 'nitrate'),
15 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),
16 'HOST': env.get('NITRATE_DB_HOST', ''),
17 'PORT': env.get('NITRATE_DB_PORT', ''),
18 },
19 }
20
21 AUTHENTICATION_BACKENDS = (
22 'django.contrib.auth.backends.ModelBackend',
23 )
24
25 TEMPLATES[0].update({
26 'DIRS': ['/usr/share/nitrate/templates'],
27 })
28
29 # Set the default send mail address
30 EMAIL_HOST = 'smtp.example.com'
31 EMAIL_FROM = '[email protected]'
32
33 # Site-specific messages
34
35 # First run - to determine if it needs to prompt user or not.
36 FIRST_RUN = False
37
38 # You can add a help link on the footer of home page as following format:
39 # ('http://foo.com', 'foo')
40 FOOTER_LINKS = (
41 ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),
42 ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),
43 )
44
45 # added for nitrate3.4 compatibility
46 DEFAULT_GROUPS = ['default']
47 TESTOPIA_XML_VERSION = '1.0'
48
49 ADMINS = (
50 )
51
52 DEFAULT_PAGE_SIZE = 100
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/released/product.py b/docker/released/product.py
--- a/docker/released/product.py
+++ b/docker/released/product.py
@@ -44,7 +44,6 @@
# added for nitrate3.4 compatibility
DEFAULT_GROUPS = ['default']
-TESTOPIA_XML_VERSION = '1.0'
ADMINS = (
)
diff --git a/src/tcms/settings/product.py b/src/tcms/settings/product.py
--- a/src/tcms/settings/product.py
+++ b/src/tcms/settings/product.py
@@ -60,7 +60,6 @@
# added for nitrate3.4 compatibility
DEFAULT_GROUPS = ['default']
-TESTOPIA_XML_VERSION = '1.0'
# admin settings
ADMINS = (
| {"golden_diff": "diff --git a/docker/released/product.py b/docker/released/product.py\n--- a/docker/released/product.py\n+++ b/docker/released/product.py\n@@ -44,7 +44,6 @@\n \n # added for nitrate3.4 compatibility\n DEFAULT_GROUPS = ['default']\n-TESTOPIA_XML_VERSION = '1.0'\n \n ADMINS = (\n )\ndiff --git a/src/tcms/settings/product.py b/src/tcms/settings/product.py\n--- a/src/tcms/settings/product.py\n+++ b/src/tcms/settings/product.py\n@@ -60,7 +60,6 @@\n \n # added for nitrate3.4 compatibility\n DEFAULT_GROUPS = ['default']\n-TESTOPIA_XML_VERSION = '1.0'\n \n # admin settings\n ADMINS = (\n", "issue": "import xml says Worng xml_version\nimport xml in not working says worng xml_version 1.1\r\n\r\ni export the test case and generate xml and try to import same not work\r\n\r\nthanks in advance\n", "before_files": [{"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\n# For Kerberos authentication, uncomment out RemoteUserMiddleware.\n# MIDDLEWARE += (\n# 'django.contrib.auth.middleware.RemoteUserMiddleware',\n# )\n\n# Remote kerberos authentication backends\n# AUTHENTICATION_BACKENDS = (\n# 'tcms.auth.backends.ModAuthKerbBackend',\n# )\n\n# To enable database routers for read/write separation.\n# DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']\n\n# Kerberos realm\n# KRB5_REALM = 'EXAMPLE.COM'\n\n# User authentication by Bugzilla settings\n# BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'\n\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\nTESTOPIA_XML_VERSION = '1.0'\n\n# admin settings\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "src/tcms/settings/product.py"}, {"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\nAUTHENTICATION_BACKENDS = (\n 'django.contrib.auth.backends.ModelBackend',\n)\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or 
not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\nTESTOPIA_XML_VERSION = '1.0'\n\nADMINS = (\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "docker/released/product.py"}], "after_files": [{"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\n# For Kerberos authentication, uncomment out RemoteUserMiddleware.\n# MIDDLEWARE += (\n# 'django.contrib.auth.middleware.RemoteUserMiddleware',\n# )\n\n# Remote kerberos authentication backends\n# AUTHENTICATION_BACKENDS = (\n# 'tcms.auth.backends.ModAuthKerbBackend',\n# )\n\n# To enable database routers for read/write separation.\n# DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']\n\n# Kerberos realm\n# KRB5_REALM = 'EXAMPLE.COM'\n\n# User authentication by Bugzilla settings\n# BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'\n\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\n\n# admin settings\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "src/tcms/settings/product.py"}, {"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\nAUTHENTICATION_BACKENDS = (\n 'django.contrib.auth.backends.ModelBackend',\n)\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 
compatibility\nDEFAULT_GROUPS = ['default']\n\nADMINS = (\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "docker/released/product.py"}]} | 1,401 | 167 |
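
The patch deletes the pinned `TESTOPIA_XML_VERSION` from both product settings modules so the value defined in `tcms.settings.common` applies again. A self-contained analogue of the shadowing (the common default shown is an assumption, not taken from the repository):

```python
common = {"TESTOPIA_XML_VERSION": "1.1"}  # assumed default in settings/common.py

product = dict(common)                    # "from tcms.settings.common import *"
product["TESTOPIA_XML_VERSION"] = "1.0"   # the stale pin the patch removes

assert product["TESTOPIA_XML_VERSION"] != common["TESTOPIA_XML_VERSION"]
# With the pin deleted, product settings inherit the common value again.
```
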
gh_patches_debug_25743 | rasdani/github-patches | git_diff | getsentry__sentry-python-3099 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`sentry_sdk.init` breaks `import exceptiongroup` in virtualenv activated with `activate_this.py`
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
2.2.1
### Steps to Reproduce
```console
$ docker run --rm -it ubuntu:22.04
root@e264f830878b:/# apt update
root@e264f830878b:/# apt install -y python3-apport virtualenv
root@e264f830878b:/# virtualenv venv
root@e264f830878b:/# venv/bin/pip install exceptiongroup sentry-sdk
…
Successfully installed certifi-2024.2.2 exceptiongroup-1.2.1 sentry-sdk-2.2.1 urllib3-2.2.1
root@e264f830878b:/# cat > test.py <<EOF
exec(open("venv/bin/activate_this.py").read(), {"__file__": "venv/bin/activate_this.py"})
import sentry_sdk
sentry_sdk.init(dsn="https://[email protected]/1234")
import exceptiongroup
EOF
root@e264f830878b:/# python3 test.py
```
### Expected Result
No error.
### Actual Result
```pytb
Traceback (most recent call last):
File "//test.py", line 4, in <module>
import exceptiongroup
File "/venv/lib/python3.10/site-packages/exceptiongroup/__init__.py", line 20, in <module>
from ._formatting import (
File "/venv/lib/python3.10/site-packages/exceptiongroup/_formatting.py", line 394, in <module>
assert sys.excepthook is apport_python_hook.apport_excepthook
AssertionError
Sentry is attempting to send 2 pending events
Waiting up to 2 seconds
Press Ctrl-C to quit
```
The [relevant code within `exceptiongroup`](https://github.com/agronholm/exceptiongroup/blob/1.2.1/src/exceptiongroup/_formatting.py#L374-L400) is
```python
if getattr(sys.excepthook, "__name__", None) in (
"apport_excepthook",
# on ubuntu 22.10 the hook was renamed to partial_apport_excepthook
"partial_apport_excepthook",
):
…
import apport_python_hook
assert sys.excepthook is apport_python_hook.apport_excepthook
```
which fails because Sentry has patched `sys.excepthook` but retained the same `__name__`, due to `functools.wraps` within the `ensure_integration_enabled` decorator, used by `_make_excepthook` as of
- #2906
(cc @sentrivana)
This is arguably poor code within `exceptiongroup` (I opened agronholm/exceptiongroup#123), but Sentry should avoid breaking it since it’s a popular library; for example, it’s a dependency of IPython.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/excepthook.py`
Content:
```
1 import sys
2
3 import sentry_sdk
4 from sentry_sdk.utils import (
5 capture_internal_exceptions,
6 ensure_integration_enabled,
7 event_from_exception,
8 )
9 from sentry_sdk.integrations import Integration
10
11 from sentry_sdk._types import TYPE_CHECKING
12
13 if TYPE_CHECKING:
14 from typing import Callable
15 from typing import Any
16 from typing import Type
17 from typing import Optional
18
19 from types import TracebackType
20
21 Excepthook = Callable[
22 [Type[BaseException], BaseException, Optional[TracebackType]],
23 Any,
24 ]
25
26
27 class ExcepthookIntegration(Integration):
28 identifier = "excepthook"
29
30 always_run = False
31
32 def __init__(self, always_run=False):
33 # type: (bool) -> None
34
35 if not isinstance(always_run, bool):
36 raise ValueError(
37 "Invalid value for always_run: %s (must be type boolean)"
38 % (always_run,)
39 )
40 self.always_run = always_run
41
42 @staticmethod
43 def setup_once():
44 # type: () -> None
45 sys.excepthook = _make_excepthook(sys.excepthook)
46
47
48 def _make_excepthook(old_excepthook):
49 # type: (Excepthook) -> Excepthook
50 @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)
51 def sentry_sdk_excepthook(type_, value, traceback):
52 # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None
53 integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)
54
55 if _should_send(integration.always_run):
56 with capture_internal_exceptions():
57 event, hint = event_from_exception(
58 (type_, value, traceback),
59 client_options=sentry_sdk.get_client().options,
60 mechanism={"type": "excepthook", "handled": False},
61 )
62 sentry_sdk.capture_event(event, hint=hint)
63
64 return old_excepthook(type_, value, traceback)
65
66 return sentry_sdk_excepthook
67
68
69 def _should_send(always_run=False):
70 # type: (bool) -> bool
71 if always_run:
72 return True
73
74 if hasattr(sys, "ps1"):
75 # Disable the excepthook for interactive Python shells, otherwise
76 # every typo gets sent to Sentry.
77 return False
78
79 return True
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/excepthook.py b/sentry_sdk/integrations/excepthook.py
--- a/sentry_sdk/integrations/excepthook.py
+++ b/sentry_sdk/integrations/excepthook.py
@@ -3,7 +3,6 @@
import sentry_sdk
from sentry_sdk.utils import (
capture_internal_exceptions,
- ensure_integration_enabled,
event_from_exception,
)
from sentry_sdk.integrations import Integration
@@ -47,11 +46,16 @@
def _make_excepthook(old_excepthook):
# type: (Excepthook) -> Excepthook
- @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)
def sentry_sdk_excepthook(type_, value, traceback):
# type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None
integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)
+ # Note: If we replace this with ensure_integration_enabled then
+ # we break the exceptiongroup backport;
+ # See: https://github.com/getsentry/sentry-python/issues/3097
+ if integration is None:
+ return old_excepthook(type_, value, traceback)
+
if _should_send(integration.always_run):
with capture_internal_exceptions():
event, hint = event_from_exception(
| {"golden_diff": "diff --git a/sentry_sdk/integrations/excepthook.py b/sentry_sdk/integrations/excepthook.py\n--- a/sentry_sdk/integrations/excepthook.py\n+++ b/sentry_sdk/integrations/excepthook.py\n@@ -3,7 +3,6 @@\n import sentry_sdk\n from sentry_sdk.utils import (\n capture_internal_exceptions,\n- ensure_integration_enabled,\n event_from_exception,\n )\n from sentry_sdk.integrations import Integration\n@@ -47,11 +46,16 @@\n \n def _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n- @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n \n+ # Note: If we replace this with ensure_integration_enabled then\n+ # we break the exceptiongroup backport;\n+ # See: https://github.com/getsentry/sentry-python/issues/3097\n+ if integration is None:\n+ return old_excepthook(type_, value, traceback)\n+\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n", "issue": "`sentry_sdk.init` breaks `import exceptiongroup` in virtualenv activated with `activate_this.py`\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n2.2.1\r\n\r\n### Steps to Reproduce\r\n\r\n```console\r\n$ docker run --rm -it ubuntu:22.04\r\nroot@e264f830878b:/# apt update\r\nroot@e264f830878b:/# apt install -y python3-apport virtualenv\r\nroot@e264f830878b:/# virtualenv venv\r\nroot@e264f830878b:/# venv/bin/pip install exceptiongroup sentry-sdk\r\n\u2026\r\nSuccessfully installed certifi-2024.2.2 exceptiongroup-1.2.1 sentry-sdk-2.2.1 urllib3-2.2.1\r\nroot@e264f830878b:/# cat > test.py <<EOF\r\nexec(open(\"venv/bin/activate_this.py\").read(), {\"__file__\": \"venv/bin/activate_this.py\"})\r\nimport sentry_sdk\r\nsentry_sdk.init(dsn=\"https://[email protected]/1234\")\r\nimport exceptiongroup\r\nEOF\r\nroot@e264f830878b:/# python3 test.py\r\n```\r\n\r\n### Expected Result\r\n\r\nNo error.\r\n\r\n### Actual Result\r\n\r\n```pytb\r\nTraceback (most recent call last):\r\n File \"//test.py\", line 4, in <module>\r\n import exceptiongroup\r\n File \"/venv/lib/python3.10/site-packages/exceptiongroup/__init__.py\", line 20, in <module>\r\n from ._formatting import (\r\n File \"/venv/lib/python3.10/site-packages/exceptiongroup/_formatting.py\", line 394, in <module>\r\n assert sys.excepthook is apport_python_hook.apport_excepthook\r\nAssertionError\r\nSentry is attempting to send 2 pending events\r\nWaiting up to 2 seconds\r\nPress Ctrl-C to quit\r\n```\r\n\r\nThe [relevant code within `exceptiongroup`](https://github.com/agronholm/exceptiongroup/blob/1.2.1/src/exceptiongroup/_formatting.py#L374-L400) is\r\n\r\n```python\r\nif getattr(sys.excepthook, \"__name__\", None) in (\r\n \"apport_excepthook\",\r\n # on ubuntu 22.10 the hook was renamed to partial_apport_excepthook\r\n \"partial_apport_excepthook\",\r\n):\r\n \u2026\r\n import apport_python_hook\r\n\r\n assert sys.excepthook is apport_python_hook.apport_excepthook\r\n```\r\n\r\nwhich fails because Sentry has patched `sys.excepthook` but retained the same `__name__`, due to `functools.wraps` within the `ensure_integration_enabled` decorator, used by `_make_excepthook` as of\r\n\r\n- #2906\r\n\r\n(cc @sentrivana)\r\n\r\nThis is arguably poor code within `exceptiongroup` (I opened agronholm/exceptiongroup#123), but Sentry should avoid breaking it 
since it\u2019s a popular library; for example, it\u2019s a dependency of IPython.\n", "before_files": [{"content": "import sys\n\nimport sentry_sdk\nfrom sentry_sdk.utils import (\n capture_internal_exceptions,\n ensure_integration_enabled,\n event_from_exception,\n)\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import Callable\n from typing import Any\n from typing import Type\n from typing import Optional\n\n from types import TracebackType\n\n Excepthook = Callable[\n [Type[BaseException], BaseException, Optional[TracebackType]],\n Any,\n ]\n\n\nclass ExcepthookIntegration(Integration):\n identifier = \"excepthook\"\n\n always_run = False\n\n def __init__(self, always_run=False):\n # type: (bool) -> None\n\n if not isinstance(always_run, bool):\n raise ValueError(\n \"Invalid value for always_run: %s (must be type boolean)\"\n % (always_run,)\n )\n self.always_run = always_run\n\n @staticmethod\n def setup_once():\n # type: () -> None\n sys.excepthook = _make_excepthook(sys.excepthook)\n\n\ndef _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n (type_, value, traceback),\n client_options=sentry_sdk.get_client().options,\n mechanism={\"type\": \"excepthook\", \"handled\": False},\n )\n sentry_sdk.capture_event(event, hint=hint)\n\n return old_excepthook(type_, value, traceback)\n\n return sentry_sdk_excepthook\n\n\ndef _should_send(always_run=False):\n # type: (bool) -> bool\n if always_run:\n return True\n\n if hasattr(sys, \"ps1\"):\n # Disable the excepthook for interactive Python shells, otherwise\n # every typo gets sent to Sentry.\n return False\n\n return True\n", "path": "sentry_sdk/integrations/excepthook.py"}], "after_files": [{"content": "import sys\n\nimport sentry_sdk\nfrom sentry_sdk.utils import (\n capture_internal_exceptions,\n event_from_exception,\n)\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import Callable\n from typing import Any\n from typing import Type\n from typing import Optional\n\n from types import TracebackType\n\n Excepthook = Callable[\n [Type[BaseException], BaseException, Optional[TracebackType]],\n Any,\n ]\n\n\nclass ExcepthookIntegration(Integration):\n identifier = \"excepthook\"\n\n always_run = False\n\n def __init__(self, always_run=False):\n # type: (bool) -> None\n\n if not isinstance(always_run, bool):\n raise ValueError(\n \"Invalid value for always_run: %s (must be type boolean)\"\n % (always_run,)\n )\n self.always_run = always_run\n\n @staticmethod\n def setup_once():\n # type: () -> None\n sys.excepthook = _make_excepthook(sys.excepthook)\n\n\ndef _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n\n # Note: If we replace this with ensure_integration_enabled then\n # we break the exceptiongroup backport;\n # See: 
https://github.com/getsentry/sentry-python/issues/3097\n if integration is None:\n return old_excepthook(type_, value, traceback)\n\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n (type_, value, traceback),\n client_options=sentry_sdk.get_client().options,\n mechanism={\"type\": \"excepthook\", \"handled\": False},\n )\n sentry_sdk.capture_event(event, hint=hint)\n\n return old_excepthook(type_, value, traceback)\n\n return sentry_sdk_excepthook\n\n\ndef _should_send(always_run=False):\n # type: (bool) -> bool\n if always_run:\n return True\n\n if hasattr(sys, \"ps1\"):\n # Disable the excepthook for interactive Python shells, otherwise\n # every typo gets sent to Sentry.\n return False\n\n return True\n", "path": "sentry_sdk/integrations/excepthook.py"}]} | 1,675 | 321 |
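
The mechanism behind the breakage is reproducible in isolation: `functools.wraps` copies `__name__` onto the wrapper, so exceptiongroup's name-based check matches while its identity assert fails. A stand-alone sketch (the apport hook here is a stand-in, not the real module):

```python
import functools
import sys

def apport_excepthook(type_, value, tb):  # stand-in for apport's real hook
    pass

def make_hook(old):
    @functools.wraps(old)  # copies __name__, as ensure_integration_enabled did
    def wrapper(type_, value, tb):
        return old(type_, value, tb)
    return wrapper

sys.excepthook = make_hook(apport_excepthook)
print(sys.excepthook.__name__)              # 'apport_excepthook'
print(sys.excepthook is apport_excepthook)  # False -> exceptiongroup's assert fails
```
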
gh_patches_debug_30470 | rasdani/github-patches | git_diff | vega__altair-2643 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
x-axis tick labels in Natural Disasters case study need clean up
See:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/examples/natural_disasters.py`
Content:
```
1 """
2 Natural Disasters
3 -----------------
4 This example shows a visualization of global deaths from natural disasters.
5 """
6 # category: case studies
7 import altair as alt
8 from vega_datasets import data
9
10 source = data.disasters.url
11
12 alt.Chart(source).mark_circle(
13 opacity=0.8,
14 stroke='black',
15 strokeWidth=1
16 ).encode(
17 alt.X('Year:O', axis=alt.Axis(labelAngle=0)),
18 alt.Y('Entity:N'),
19 alt.Size('Deaths:Q',
20 scale=alt.Scale(range=[0, 4000]),
21 legend=alt.Legend(title='Annual Global Deaths')
22 ),
23 alt.Color('Entity:N', legend=None)
24 ).properties(
25 width=450,
26 height=320
27 ).transform_filter(
28 alt.datum.Entity != 'All natural disasters'
29 )
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/altair/examples/natural_disasters.py b/altair/examples/natural_disasters.py
--- a/altair/examples/natural_disasters.py
+++ b/altair/examples/natural_disasters.py
@@ -1,7 +1,7 @@
"""
-Natural Disasters
------------------
-This example shows a visualization of global deaths from natural disasters.
+Global Deaths from Natural Disasters
+------------------------------------
+This example shows a proportional symbols visualization of deaths from natural disasters by year and type.
"""
# category: case studies
import altair as alt
@@ -9,21 +9,44 @@
source = data.disasters.url
-alt.Chart(source).mark_circle(
+alt.Chart(source).transform_filter(
+ alt.datum.Entity != 'All natural disasters'
+).mark_circle(
opacity=0.8,
stroke='black',
- strokeWidth=1
+ strokeWidth=1,
+ strokeOpacity=0.4
).encode(
- alt.X('Year:O', axis=alt.Axis(labelAngle=0)),
- alt.Y('Entity:N'),
- alt.Size('Deaths:Q',
- scale=alt.Scale(range=[0, 4000]),
- legend=alt.Legend(title='Annual Global Deaths')
+ x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),
+ y=alt.Y(
+ 'Entity:N',
+ sort=alt.EncodingSortField(field="Deaths", op="sum", order='descending'),
+ title=None
+ ),
+ size=alt.Size('Deaths:Q',
+ scale=alt.Scale(range=[0, 2500]),
+ legend=alt.Legend(title='Deaths', clipHeight=30, format='s')
),
- alt.Color('Entity:N', legend=None)
+ color=alt.Color('Entity:N', legend=None),
+ tooltip=[
+ "Entity:N",
+ alt.Tooltip("Year:T", format='%Y'),
+ alt.Tooltip("Deaths:Q", format='~s')
+ ],
).properties(
width=450,
- height=320
-).transform_filter(
- alt.datum.Entity != 'All natural disasters'
+ height=320,
+ title=alt.TitleParams(
+ text="Global Deaths from Natural Disasters (1900-2017)",
+ subtitle="The size of the bubble represents the total death count per year, by type of disaster",
+ anchor='start'
+ )
+).configure_axisY(
+ domain=False,
+ ticks=False,
+ offset=10
+).configure_axisX(
+ grid=False,
+).configure_view(
+ stroke=None
)
| {"golden_diff": "diff --git a/altair/examples/natural_disasters.py b/altair/examples/natural_disasters.py\n--- a/altair/examples/natural_disasters.py\n+++ b/altair/examples/natural_disasters.py\n@@ -1,7 +1,7 @@\n \"\"\"\n-Natural Disasters\n------------------\n-This example shows a visualization of global deaths from natural disasters.\n+Global Deaths from Natural Disasters\n+------------------------------------\n+This example shows a proportional symbols visualization of deaths from natural disasters by year and type.\n \"\"\"\n # category: case studies\n import altair as alt\n@@ -9,21 +9,44 @@\n \n source = data.disasters.url\n \n-alt.Chart(source).mark_circle(\n+alt.Chart(source).transform_filter(\n+ alt.datum.Entity != 'All natural disasters'\n+).mark_circle(\n opacity=0.8,\n stroke='black',\n- strokeWidth=1\n+ strokeWidth=1,\n+ strokeOpacity=0.4\n ).encode(\n- alt.X('Year:O', axis=alt.Axis(labelAngle=0)),\n- alt.Y('Entity:N'),\n- alt.Size('Deaths:Q',\n- scale=alt.Scale(range=[0, 4000]),\n- legend=alt.Legend(title='Annual Global Deaths')\n+ x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),\n+ y=alt.Y(\n+ 'Entity:N',\n+ sort=alt.EncodingSortField(field=\"Deaths\", op=\"sum\", order='descending'),\n+ title=None\n+ ),\n+ size=alt.Size('Deaths:Q',\n+ scale=alt.Scale(range=[0, 2500]),\n+ legend=alt.Legend(title='Deaths', clipHeight=30, format='s')\n ),\n- alt.Color('Entity:N', legend=None)\n+ color=alt.Color('Entity:N', legend=None),\n+ tooltip=[\n+ \"Entity:N\", \n+ alt.Tooltip(\"Year:T\", format='%Y'), \n+ alt.Tooltip(\"Deaths:Q\", format='~s')\n+ ],\n ).properties(\n width=450,\n- height=320\n-).transform_filter(\n- alt.datum.Entity != 'All natural disasters'\n+ height=320,\n+ title=alt.TitleParams(\n+ text=\"Global Deaths from Natural Disasters (1900-2017)\",\n+ subtitle=\"The size of the bubble represents the total death count per year, by type of disaster\",\n+ anchor='start'\n+ )\n+).configure_axisY(\n+ domain=False,\n+ ticks=False,\n+ offset=10\n+).configure_axisX(\n+ grid=False,\n+).configure_view(\n+ stroke=None\n )\n", "issue": "x-axis tick labels in Natural Disasters case study need clean up\nSee:\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nNatural Disasters\n-----------------\nThis example shows a visualization of global deaths from natural disasters.\n\"\"\"\n# category: case studies\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.disasters.url\n\nalt.Chart(source).mark_circle(\n opacity=0.8,\n stroke='black',\n strokeWidth=1\n).encode(\n alt.X('Year:O', axis=alt.Axis(labelAngle=0)),\n alt.Y('Entity:N'),\n alt.Size('Deaths:Q',\n scale=alt.Scale(range=[0, 4000]),\n legend=alt.Legend(title='Annual Global Deaths')\n ),\n alt.Color('Entity:N', legend=None)\n).properties(\n width=450,\n height=320\n).transform_filter(\n alt.datum.Entity != 'All natural disasters'\n)\n", "path": "altair/examples/natural_disasters.py"}], "after_files": [{"content": "\"\"\"\nGlobal Deaths from Natural Disasters\n------------------------------------\nThis example shows a proportional symbols visualization of deaths from natural disasters by year and type.\n\"\"\"\n# category: case studies\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.disasters.url\n\nalt.Chart(source).transform_filter(\n alt.datum.Entity != 'All natural disasters'\n).mark_circle(\n opacity=0.8,\n stroke='black',\n strokeWidth=1,\n strokeOpacity=0.4\n).encode(\n x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),\n y=alt.Y(\n 
'Entity:N',\n sort=alt.EncodingSortField(field=\"Deaths\", op=\"sum\", order='descending'),\n title=None\n ),\n size=alt.Size('Deaths:Q',\n scale=alt.Scale(range=[0, 2500]),\n legend=alt.Legend(title='Deaths', clipHeight=30, format='s')\n ),\n color=alt.Color('Entity:N', legend=None),\n tooltip=[\n \"Entity:N\", \n alt.Tooltip(\"Year:T\", format='%Y'), \n alt.Tooltip(\"Deaths:Q\", format='~s')\n ],\n).properties(\n width=450,\n height=320,\n title=alt.TitleParams(\n text=\"Global Deaths from Natural Disasters (1900-2017)\",\n subtitle=\"The size of the bubble represents the total death count per year, by type of disaster\",\n anchor='start'\n )\n).configure_axisY(\n domain=False,\n ticks=False,\n offset=10\n).configure_axisX(\n grid=False,\n).configure_view(\n stroke=None\n)\n", "path": "altair/examples/natural_disasters.py"}]} | 590 | 609 |
gh_patches_debug_31552 | rasdani/github-patches | git_diff | CTFd__CTFd-1516 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change Configs detail API GET/PATCH for a more structured response
The API endpoints for GET, PATCH /api/v1/configs/{config_key} return badly structured data. They should return better-structured data instead.
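For illustration, the difference in shape could look roughly like this (the key and values are hypothetical; the `id` field is assumed from the `Configs` model for illustration only):

```python
# Current GET /api/v1/configs/<key> response: only the bare value is returned.
current_detail_response = {"success": True, "data": "MyCTF"}

# Desired shape, closer to ConfigDetailedSuccessResponse, mirroring the list endpoint.
desired_detail_response = {
    "success": True,
    "data": {"id": 1, "key": "ctf_name", "value": "MyCTF"},
}
```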
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/api/v1/config.py`
Content:
```
1 from typing import List
2
3 from flask import request
4 from flask_restx import Namespace, Resource
5
6 from CTFd.api.v1.helpers.models import build_model_filters
7 from CTFd.api.v1.helpers.request import validate_args
8 from CTFd.api.v1.helpers.schemas import sqlalchemy_to_pydantic
9 from CTFd.api.v1.schemas import APIDetailedSuccessResponse, APIListSuccessResponse
10 from CTFd.cache import clear_config, clear_standings
11 from CTFd.constants import RawEnum
12 from CTFd.models import Configs, db
13 from CTFd.schemas.config import ConfigSchema
14 from CTFd.utils import get_config, set_config
15 from CTFd.utils.decorators import admins_only
16
17 configs_namespace = Namespace("configs", description="Endpoint to retrieve Configs")
18
19 ConfigModel = sqlalchemy_to_pydantic(Configs)
20
21
22 class ConfigDetailedSuccessResponse(APIDetailedSuccessResponse):
23 data: ConfigModel
24
25
26 class ConfigListSuccessResponse(APIListSuccessResponse):
27 data: List[ConfigModel]
28
29
30 configs_namespace.schema_model(
31 "ConfigDetailedSuccessResponse", ConfigDetailedSuccessResponse.apidoc()
32 )
33
34 configs_namespace.schema_model(
35 "ConfigListSuccessResponse", ConfigListSuccessResponse.apidoc()
36 )
37
38
39 @configs_namespace.route("")
40 class ConfigList(Resource):
41 @admins_only
42 @configs_namespace.doc(
43 description="Endpoint to get Config objects in bulk",
44 responses={
45 200: ("Success", "ConfigListSuccessResponse"),
46 400: (
47 "An error occured processing the provided or stored data",
48 "APISimpleErrorResponse",
49 ),
50 },
51 )
52 @validate_args(
53 {
54 "key": (str, None),
55 "value": (str, None),
56 "q": (str, None),
57 "field": (RawEnum("ConfigFields", {"key": "key", "value": "value"}), None),
58 },
59 location="query",
60 )
61 def get(self, query_args):
62 q = query_args.pop("q", None)
63 field = str(query_args.pop("field", None))
64 filters = build_model_filters(model=Configs, query=q, field=field)
65
66 configs = Configs.query.filter_by(**query_args).filter(*filters).all()
67 schema = ConfigSchema(many=True)
68 response = schema.dump(configs)
69 if response.errors:
70 return {"success": False, "errors": response.errors}, 400
71
72 return {"success": True, "data": response.data}
73
74 @admins_only
75 @configs_namespace.doc(
76 description="Endpoint to get create a Config object",
77 responses={
78 200: ("Success", "ConfigDetailedSuccessResponse"),
79 400: (
80 "An error occured processing the provided or stored data",
81 "APISimpleErrorResponse",
82 ),
83 },
84 )
85 def post(self):
86 req = request.get_json()
87 schema = ConfigSchema()
88 response = schema.load(req)
89
90 if response.errors:
91 return {"success": False, "errors": response.errors}, 400
92
93 db.session.add(response.data)
94 db.session.commit()
95
96 response = schema.dump(response.data)
97 db.session.close()
98
99 clear_config()
100 clear_standings()
101
102 return {"success": True, "data": response.data}
103
104 @admins_only
105 @configs_namespace.doc(
106 description="Endpoint to get patch Config objects in bulk",
107 responses={200: ("Success", "APISimpleSuccessResponse")},
108 )
109 def patch(self):
110 req = request.get_json()
111
112 for key, value in req.items():
113 set_config(key=key, value=value)
114
115 clear_config()
116 clear_standings()
117
118 return {"success": True}
119
120
121 @configs_namespace.route("/<config_key>")
122 class Config(Resource):
123 @admins_only
124 # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
125 def get(self, config_key):
126
127 return {"success": True, "data": get_config(config_key)}
128
129 @admins_only
130 # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
131 def patch(self, config_key):
132 config = Configs.query.filter_by(key=config_key).first()
133 data = request.get_json()
134 if config:
135 schema = ConfigSchema(instance=config, partial=True)
136 response = schema.load(data)
137 else:
138 schema = ConfigSchema()
139 data["key"] = config_key
140 response = schema.load(data)
141 db.session.add(response.data)
142
143 if response.errors:
144 return response.errors, 400
145
146 db.session.commit()
147
148 response = schema.dump(response.data)
149 db.session.close()
150
151 clear_config()
152 clear_standings()
153
154 return {"success": True, "data": response.data}
155
156 @admins_only
157 @configs_namespace.doc(
158 description="Endpoint to delete a Config object",
159 responses={200: ("Success", "APISimpleSuccessResponse")},
160 )
161 def delete(self, config_key):
162 config = Configs.query.filter_by(key=config_key).first_or_404()
163
164 db.session.delete(config)
165 db.session.commit()
166 db.session.close()
167
168 clear_config()
169 clear_standings()
170
171 return {"success": True}
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/api/v1/config.py b/CTFd/api/v1/config.py
--- a/CTFd/api/v1/config.py
+++ b/CTFd/api/v1/config.py
@@ -11,7 +11,7 @@
from CTFd.constants import RawEnum
from CTFd.models import Configs, db
from CTFd.schemas.config import ConfigSchema
-from CTFd.utils import get_config, set_config
+from CTFd.utils import set_config
from CTFd.utils.decorators import admins_only
configs_namespace = Namespace("configs", description="Endpoint to retrieve Configs")
@@ -121,13 +121,33 @@
@configs_namespace.route("/<config_key>")
class Config(Resource):
@admins_only
- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
+ @configs_namespace.doc(
+ description="Endpoint to get a specific Config object",
+ responses={
+ 200: ("Success", "ConfigDetailedSuccessResponse"),
+ 400: (
+ "An error occured processing the provided or stored data",
+ "APISimpleErrorResponse",
+ ),
+ },
+ )
def get(self, config_key):
-
- return {"success": True, "data": get_config(config_key)}
+ config = Configs.query.filter_by(key=config_key).first_or_404()
+ schema = ConfigSchema()
+ response = schema.dump(config)
+ return {"success": True, "data": response.data}
@admins_only
- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
+ @configs_namespace.doc(
+ description="Endpoint to edit a specific Config object",
+ responses={
+ 200: ("Success", "ConfigDetailedSuccessResponse"),
+ 400: (
+ "An error occured processing the provided or stored data",
+ "APISimpleErrorResponse",
+ ),
+ },
+ )
def patch(self, config_key):
config = Configs.query.filter_by(key=config_key).first()
data = request.get_json()
| {"golden_diff": "diff --git a/CTFd/api/v1/config.py b/CTFd/api/v1/config.py\n--- a/CTFd/api/v1/config.py\n+++ b/CTFd/api/v1/config.py\n@@ -11,7 +11,7 @@\n from CTFd.constants import RawEnum\n from CTFd.models import Configs, db\n from CTFd.schemas.config import ConfigSchema\n-from CTFd.utils import get_config, set_config\n+from CTFd.utils import set_config\n from CTFd.utils.decorators import admins_only\n \n configs_namespace = Namespace(\"configs\", description=\"Endpoint to retrieve Configs\")\n@@ -121,13 +121,33 @@\n @configs_namespace.route(\"/<config_key>\")\n class Config(Resource):\n @admins_only\n- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506\n+ @configs_namespace.doc(\n+ description=\"Endpoint to get a specific Config object\",\n+ responses={\n+ 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n+ 400: (\n+ \"An error occured processing the provided or stored data\",\n+ \"APISimpleErrorResponse\",\n+ ),\n+ },\n+ )\n def get(self, config_key):\n-\n- return {\"success\": True, \"data\": get_config(config_key)}\n+ config = Configs.query.filter_by(key=config_key).first_or_404()\n+ schema = ConfigSchema()\n+ response = schema.dump(config)\n+ return {\"success\": True, \"data\": response.data}\n \n @admins_only\n- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506\n+ @configs_namespace.doc(\n+ description=\"Endpoint to edit a specific Config object\",\n+ responses={\n+ 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n+ 400: (\n+ \"An error occured processing the provided or stored data\",\n+ \"APISimpleErrorResponse\",\n+ ),\n+ },\n+ )\n def patch(self, config_key):\n config = Configs.query.filter_by(key=config_key).first()\n data = request.get_json()\n", "issue": "Change Configs detail API GET/PATCH for a more structured response\nThe API endpoints for GET, PATCH /api/v1/configs/{config_key} return badly structured data. This should return better structured data. 
\n", "before_files": [{"content": "from typing import List\n\nfrom flask import request\nfrom flask_restx import Namespace, Resource\n\nfrom CTFd.api.v1.helpers.models import build_model_filters\nfrom CTFd.api.v1.helpers.request import validate_args\nfrom CTFd.api.v1.helpers.schemas import sqlalchemy_to_pydantic\nfrom CTFd.api.v1.schemas import APIDetailedSuccessResponse, APIListSuccessResponse\nfrom CTFd.cache import clear_config, clear_standings\nfrom CTFd.constants import RawEnum\nfrom CTFd.models import Configs, db\nfrom CTFd.schemas.config import ConfigSchema\nfrom CTFd.utils import get_config, set_config\nfrom CTFd.utils.decorators import admins_only\n\nconfigs_namespace = Namespace(\"configs\", description=\"Endpoint to retrieve Configs\")\n\nConfigModel = sqlalchemy_to_pydantic(Configs)\n\n\nclass ConfigDetailedSuccessResponse(APIDetailedSuccessResponse):\n data: ConfigModel\n\n\nclass ConfigListSuccessResponse(APIListSuccessResponse):\n data: List[ConfigModel]\n\n\nconfigs_namespace.schema_model(\n \"ConfigDetailedSuccessResponse\", ConfigDetailedSuccessResponse.apidoc()\n)\n\nconfigs_namespace.schema_model(\n \"ConfigListSuccessResponse\", ConfigListSuccessResponse.apidoc()\n)\n\n\n@configs_namespace.route(\"\")\nclass ConfigList(Resource):\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get Config objects in bulk\",\n responses={\n 200: (\"Success\", \"ConfigListSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n @validate_args(\n {\n \"key\": (str, None),\n \"value\": (str, None),\n \"q\": (str, None),\n \"field\": (RawEnum(\"ConfigFields\", {\"key\": \"key\", \"value\": \"value\"}), None),\n },\n location=\"query\",\n )\n def get(self, query_args):\n q = query_args.pop(\"q\", None)\n field = str(query_args.pop(\"field\", None))\n filters = build_model_filters(model=Configs, query=q, field=field)\n\n configs = Configs.query.filter_by(**query_args).filter(*filters).all()\n schema = ConfigSchema(many=True)\n response = schema.dump(configs)\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get create a Config object\",\n responses={\n 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n def post(self):\n req = request.get_json()\n schema = ConfigSchema()\n response = schema.load(req)\n\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n db.session.add(response.data)\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get patch Config objects in bulk\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def patch(self):\n req = request.get_json()\n\n for key, value in req.items():\n set_config(key=key, value=value)\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n\n\n@configs_namespace.route(\"/<config_key>\")\nclass Config(Resource):\n @admins_only\n # TODO: This returns weirdly structured data. 
It should more closely match ConfigDetailedSuccessResponse #1506\n def get(self, config_key):\n\n return {\"success\": True, \"data\": get_config(config_key)}\n\n @admins_only\n # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506\n def patch(self, config_key):\n config = Configs.query.filter_by(key=config_key).first()\n data = request.get_json()\n if config:\n schema = ConfigSchema(instance=config, partial=True)\n response = schema.load(data)\n else:\n schema = ConfigSchema()\n data[\"key\"] = config_key\n response = schema.load(data)\n db.session.add(response.data)\n\n if response.errors:\n return response.errors, 400\n\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to delete a Config object\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def delete(self, config_key):\n config = Configs.query.filter_by(key=config_key).first_or_404()\n\n db.session.delete(config)\n db.session.commit()\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n", "path": "CTFd/api/v1/config.py"}], "after_files": [{"content": "from typing import List\n\nfrom flask import request\nfrom flask_restx import Namespace, Resource\n\nfrom CTFd.api.v1.helpers.models import build_model_filters\nfrom CTFd.api.v1.helpers.request import validate_args\nfrom CTFd.api.v1.helpers.schemas import sqlalchemy_to_pydantic\nfrom CTFd.api.v1.schemas import APIDetailedSuccessResponse, APIListSuccessResponse\nfrom CTFd.cache import clear_config, clear_standings\nfrom CTFd.constants import RawEnum\nfrom CTFd.models import Configs, db\nfrom CTFd.schemas.config import ConfigSchema\nfrom CTFd.utils import set_config\nfrom CTFd.utils.decorators import admins_only\n\nconfigs_namespace = Namespace(\"configs\", description=\"Endpoint to retrieve Configs\")\n\nConfigModel = sqlalchemy_to_pydantic(Configs)\n\n\nclass ConfigDetailedSuccessResponse(APIDetailedSuccessResponse):\n data: ConfigModel\n\n\nclass ConfigListSuccessResponse(APIListSuccessResponse):\n data: List[ConfigModel]\n\n\nconfigs_namespace.schema_model(\n \"ConfigDetailedSuccessResponse\", ConfigDetailedSuccessResponse.apidoc()\n)\n\nconfigs_namespace.schema_model(\n \"ConfigListSuccessResponse\", ConfigListSuccessResponse.apidoc()\n)\n\n\n@configs_namespace.route(\"\")\nclass ConfigList(Resource):\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get Config objects in bulk\",\n responses={\n 200: (\"Success\", \"ConfigListSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n @validate_args(\n {\n \"key\": (str, None),\n \"value\": (str, None),\n \"q\": (str, None),\n \"field\": (RawEnum(\"ConfigFields\", {\"key\": \"key\", \"value\": \"value\"}), None),\n },\n location=\"query\",\n )\n def get(self, query_args):\n q = query_args.pop(\"q\", None)\n field = str(query_args.pop(\"field\", None))\n filters = build_model_filters(model=Configs, query=q, field=field)\n\n configs = Configs.query.filter_by(**query_args).filter(*filters).all()\n schema = ConfigSchema(many=True)\n response = schema.dump(configs)\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n 
@configs_namespace.doc(\n description=\"Endpoint to get create a Config object\",\n responses={\n 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n def post(self):\n req = request.get_json()\n schema = ConfigSchema()\n response = schema.load(req)\n\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n db.session.add(response.data)\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get patch Config objects in bulk\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def patch(self):\n req = request.get_json()\n\n for key, value in req.items():\n set_config(key=key, value=value)\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n\n\n@configs_namespace.route(\"/<config_key>\")\nclass Config(Resource):\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get a specific Config object\",\n responses={\n 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n def get(self, config_key):\n config = Configs.query.filter_by(key=config_key).first_or_404()\n schema = ConfigSchema()\n response = schema.dump(config)\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to edit a specific Config object\",\n responses={\n 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n def patch(self, config_key):\n config = Configs.query.filter_by(key=config_key).first()\n data = request.get_json()\n if config:\n schema = ConfigSchema(instance=config, partial=True)\n response = schema.load(data)\n else:\n schema = ConfigSchema()\n data[\"key\"] = config_key\n response = schema.load(data)\n db.session.add(response.data)\n\n if response.errors:\n return response.errors, 400\n\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to delete a Config object\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def delete(self, config_key):\n config = Configs.query.filter_by(key=config_key).first_or_404()\n\n db.session.delete(config)\n db.session.commit()\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n", "path": "CTFd/api/v1/config.py"}]} | 1,864 | 485 |
gh_patches_debug_15874 | rasdani/github-patches | git_diff | kubeflow__pipelines-4104 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
allow output artifact store configuration (vs. hard-coded)
It seems like the output artifacts are always stored in a specific MinIO service, port, namespace, bucket, secrets, etc. (`minio-service.kubeflow:9000`).
See: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
It would be great to make this flexible, e.g. allow using S3, or changing namespace or bucket names.
I suggest making it configurable; I can open such a PR if we agree it's needed. A sketch of what that could look like follows.
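To make the suggestion concrete, here is a minimal sketch of one possible configuration surface — the env var names and helper are hypothetical, not an existing KFP API:

```python
import os

# Hypothetical knobs replacing the hard-coded minio-service.kubeflow:9000 default.
ARTIFACT_ENDPOINT = os.environ.get("KFP_ARTIFACT_ENDPOINT", "minio-service.kubeflow:9000")
ARTIFACT_BUCKET = os.environ.get("KFP_ARTIFACT_BUCKET", "mlpipeline")

def artifact_uri(key: str) -> str:
    # An S3-compatible store would slot in the same way as the bundled MinIO.
    return "s3://{}/{}".format(ARTIFACT_BUCKET, key)
```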
flexible pipeline service (host) path in client SDK
When creating an SDK `Client()`, the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicates a specific k8s namespace. It can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`); this seems like a potential bug.
If it's acceptable, I can submit a PR for the line change above.
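Assuming the one-line change above, usage could look like this (the namespace in the value is a placeholder):

```python
import os
import kfp

# Set once per deployment instead of passing host= to every Client() call.
os.environ["ML_PIPELINE_DNS_NAME"] = "ml-pipeline.my-namespace.svc.cluster.local:8888"
client = kfp.Client()  # would now fall back to the env var, not the hard-coded default
```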
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/kfp/dsl/_component_bridge.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 from typing import Any, Mapping
17 from ..components.structures import ComponentSpec, ComponentReference
18 from ..components._components import _default_component_name, _resolve_command_line_and_paths
19 from ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table
20 from .. import dsl
21
22
23 def _create_container_op_from_component_and_arguments(
24 component_spec: ComponentSpec,
25 arguments: Mapping[str, Any],
26 component_ref: ComponentReference = None,
27 ) -> 'dsl.ContainerOp':
28 # Check types of the reference arguments and serialize PipelineParams
29 arguments = arguments.copy()
30 for input_name, argument_value in arguments.items():
31 if isinstance(argument_value, dsl.PipelineParam):
32 input_type = component_spec._inputs_dict[input_name].type
33 reference_type = argument_value.param_type
34 dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input "{}" of component "{}": '.format(input_name, component_spec.name))
35
36 arguments[input_name] = str(argument_value)
37
38 resolved_cmd = _resolve_command_line_and_paths(
39 component_spec=component_spec,
40 arguments=arguments,
41 )
42
43 container_spec = component_spec.implementation.container
44
45 task = dsl.ContainerOp(
46 name=component_spec.name or _default_component_name,
47 image=container_spec.image,
48 command=resolved_cmd.command,
49 arguments=resolved_cmd.args,
50 file_outputs=resolved_cmd.output_paths,
51 artifact_argument_paths=[
52 dsl.InputArgumentPath(
53 argument=arguments[input_name],
54 input=input_name,
55 path=path,
56 )
57 for input_name, path in resolved_cmd.input_paths.items()
58 ],
59 )
60
61 component_meta = copy.copy(component_spec)
62 task._set_metadata(component_meta)
63 component_ref_without_spec = copy.copy(component_ref)
64 component_ref_without_spec.spec = None
65 task._component_ref = component_ref_without_spec
66
67 # Previously, ContainerOp had strict requirements for the output names, so we had to
68 # convert all the names before passing them to the ContainerOp constructor.
69 # Outputs with non-pythonic names could not be accessed using their original names.
70 # Now ContainerOp supports any output names, so we're now using the original output names.
71 # However to support legacy pipelines, we're also adding output references with pythonic names.
72 # TODO: Add warning when people use the legacy output names.
73 output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering
74 output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)
75 for output_name in output_names:
76 pythonic_output_name = output_name_to_python[output_name]
77 # Note: Some component outputs are currently missing from task.outputs (e.g. MLPipeline UI Metadata)
78 if pythonic_output_name not in task.outputs and output_name in task.outputs:
79 task.outputs[pythonic_output_name] = task.outputs[output_name]
80
81 if container_spec.env:
82 from kubernetes import client as k8s_client
83 for name, value in container_spec.env.items():
84 task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))
85
86 if component_spec.metadata:
87 for key, value in (component_spec.metadata.annotations or {}).items():
88 task.add_pod_annotation(key, value)
89 for key, value in (component_spec.metadata.labels or {}).items():
90 task.add_pod_label(key, value)
91
92 return task
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/kfp/dsl/_component_bridge.py b/sdk/python/kfp/dsl/_component_bridge.py
--- a/sdk/python/kfp/dsl/_component_bridge.py
+++ b/sdk/python/kfp/dsl/_component_bridge.py
@@ -84,9 +84,13 @@
task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))
if component_spec.metadata:
- for key, value in (component_spec.metadata.annotations or {}).items():
+ annotations = component_spec.metadata.annotations or {}
+ for key, value in annotations.items():
task.add_pod_annotation(key, value)
for key, value in (component_spec.metadata.labels or {}).items():
task.add_pod_label(key, value)
+ # Disabling the caching for the volatile components by default
+ if annotations.get('volatile_component', 'false') == 'true':
+ task.execution_options.caching_strategy.max_cache_staleness = 'P0D'
return task
| {"golden_diff": "diff --git a/sdk/python/kfp/dsl/_component_bridge.py b/sdk/python/kfp/dsl/_component_bridge.py\n--- a/sdk/python/kfp/dsl/_component_bridge.py\n+++ b/sdk/python/kfp/dsl/_component_bridge.py\n@@ -84,9 +84,13 @@\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n \n if component_spec.metadata:\n- for key, value in (component_spec.metadata.annotations or {}).items():\n+ annotations = component_spec.metadata.annotations or {}\n+ for key, value in annotations.items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n+ # Disabling the caching for the volatile components by default\n+ if annotations.get('volatile_component', 'false') == 'true':\n+ task.execution_options.caching_strategy.max_cache_staleness = 'P0D'\n \n return task\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom typing import Any, Mapping\nfrom ..components.structures import ComponentSpec, ComponentReference\nfrom ..components._components import _default_component_name, _resolve_command_line_and_paths\nfrom ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table\nfrom .. 
import dsl\n\n\ndef _create_container_op_from_component_and_arguments(\n component_spec: ComponentSpec,\n arguments: Mapping[str, Any],\n component_ref: ComponentReference = None,\n) -> 'dsl.ContainerOp':\n # Check types of the reference arguments and serialize PipelineParams\n arguments = arguments.copy()\n for input_name, argument_value in arguments.items():\n if isinstance(argument_value, dsl.PipelineParam):\n input_type = component_spec._inputs_dict[input_name].type\n reference_type = argument_value.param_type\n dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input \"{}\" of component \"{}\": '.format(input_name, component_spec.name))\n\n arguments[input_name] = str(argument_value)\n\n resolved_cmd = _resolve_command_line_and_paths(\n component_spec=component_spec,\n arguments=arguments,\n )\n\n container_spec = component_spec.implementation.container\n\n task = dsl.ContainerOp(\n name=component_spec.name or _default_component_name,\n image=container_spec.image,\n command=resolved_cmd.command,\n arguments=resolved_cmd.args,\n file_outputs=resolved_cmd.output_paths,\n artifact_argument_paths=[\n dsl.InputArgumentPath(\n argument=arguments[input_name],\n input=input_name,\n path=path,\n )\n for input_name, path in resolved_cmd.input_paths.items()\n ],\n )\n\n component_meta = copy.copy(component_spec)\n task._set_metadata(component_meta)\n component_ref_without_spec = copy.copy(component_ref)\n component_ref_without_spec.spec = None\n task._component_ref = component_ref_without_spec\n\n # Previously, ContainerOp had strict requirements for the output names, so we had to\n # convert all the names before passing them to the ContainerOp constructor.\n # Outputs with non-pythonic names could not be accessed using their original names.\n # Now ContainerOp supports any output names, so we're now using the original output names.\n # However to support legacy pipelines, we're also adding output references with pythonic names.\n # TODO: Add warning when people use the legacy output names.\n output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering\n output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)\n for output_name in output_names:\n pythonic_output_name = output_name_to_python[output_name]\n # Note: Some component outputs are currently missing from task.outputs (e.g. 
MLPipeline UI Metadata)\n if pythonic_output_name not in task.outputs and output_name in task.outputs:\n task.outputs[pythonic_output_name] = task.outputs[output_name]\n\n if container_spec.env:\n from kubernetes import client as k8s_client\n for name, value in container_spec.env.items():\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n\n if component_spec.metadata:\n for key, value in (component_spec.metadata.annotations or {}).items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n\n return task\n", "path": "sdk/python/kfp/dsl/_component_bridge.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom typing import Any, Mapping\nfrom ..components.structures import ComponentSpec, ComponentReference\nfrom ..components._components import _default_component_name, _resolve_command_line_and_paths\nfrom ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table\nfrom .. import dsl\n\n\ndef _create_container_op_from_component_and_arguments(\n component_spec: ComponentSpec,\n arguments: Mapping[str, Any],\n component_ref: ComponentReference = None,\n) -> 'dsl.ContainerOp':\n # Check types of the reference arguments and serialize PipelineParams\n arguments = arguments.copy()\n for input_name, argument_value in arguments.items():\n if isinstance(argument_value, dsl.PipelineParam):\n input_type = component_spec._inputs_dict[input_name].type\n reference_type = argument_value.param_type\n dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input \"{}\" of component \"{}\": '.format(input_name, component_spec.name))\n\n arguments[input_name] = str(argument_value)\n\n resolved_cmd = _resolve_command_line_and_paths(\n component_spec=component_spec,\n arguments=arguments,\n )\n\n container_spec = component_spec.implementation.container\n\n task = dsl.ContainerOp(\n name=component_spec.name or _default_component_name,\n image=container_spec.image,\n command=resolved_cmd.command,\n arguments=resolved_cmd.args,\n file_outputs=resolved_cmd.output_paths,\n artifact_argument_paths=[\n dsl.InputArgumentPath(\n argument=arguments[input_name],\n input=input_name,\n path=path,\n )\n for input_name, path in resolved_cmd.input_paths.items()\n ],\n )\n\n component_meta = copy.copy(component_spec)\n task._set_metadata(component_meta)\n component_ref_without_spec = copy.copy(component_ref)\n component_ref_without_spec.spec = None\n task._component_ref = component_ref_without_spec\n\n # Previously, ContainerOp had strict requirements for the output names, so we had to\n # convert all the names before passing them to the ContainerOp constructor.\n # Outputs with non-pythonic names could not be accessed using their original names.\n # Now ContainerOp supports any output names, so we're now using the original output names.\n # However to 
support legacy pipelines, we're also adding output references with pythonic names.\n # TODO: Add warning when people use the legacy output names.\n output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering\n output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)\n for output_name in output_names:\n pythonic_output_name = output_name_to_python[output_name]\n # Note: Some component outputs are currently missing from task.outputs (e.g. MLPipeline UI Metadata)\n if pythonic_output_name not in task.outputs and output_name in task.outputs:\n task.outputs[pythonic_output_name] = task.outputs[output_name]\n\n if container_spec.env:\n from kubernetes import client as k8s_client\n for name, value in container_spec.env.items():\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n\n if component_spec.metadata:\n annotations = component_spec.metadata.annotations or {}\n for key, value in annotations.items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n # Disabling the caching for the volatile components by default\n if annotations.get('volatile_component', 'false') == 'true':\n task.execution_options.caching_strategy.max_cache_staleness = 'P0D'\n\n return task\n", "path": "sdk/python/kfp/dsl/_component_bridge.py"}]} | 1,667 | 215 |
gh_patches_debug_5825 | rasdani/github-patches | git_diff | Kinto__kinto-500 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
POST with If-None-Match: * and provided id in body always returns 412
Detected using kinto-client v0.4.0 https://github.com/Kinto/kinto-client/blob/v0.4.0/src/requests.js#L188-L205
See https://github.com/mozilla-services/cliquet/issues/673
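For illustration, a minimal reproduction along these lines triggers the behavior (the server URL, collection, and credentials are placeholders):

```python
import uuid
import requests

resp = requests.post(
    "http://localhost:8888/v1/buckets/default/collections/tasks/records",
    json={"data": {"id": str(uuid.uuid4()), "title": "example"}},
    headers={"If-None-Match": "*"},
    auth=("token", "my-secret"),
)
print(resp.status_code)  # observed: 412; expected: 201, since no record with that id exists
```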
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import codecs
2 import os
3 import sys
4 from setuptools import setup, find_packages
5
6 here = os.path.abspath(os.path.dirname(__file__))
7
8
9 def read_file(filename):
10 """Open a related file and return its content."""
11 with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:
12 content = f.read()
13 return content
14
15 README = read_file('README.rst')
16 CHANGELOG = read_file('CHANGELOG.rst')
17 CONTRIBUTORS = read_file('CONTRIBUTORS.rst')
18
19 REQUIREMENTS = [
20 'waitress',
21 'cliquet>=3,<4',
22 'jsonschema',
23 ]
24
25 POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [
26 'cliquet[postgresql]>=3,<4'
27 ]
28
29 MONITORING_REQUIREMENTS = REQUIREMENTS + [
30 'cliquet[monitoring]>=3,<4'
31 ]
32
33 FXA_REQUIREMENTS = REQUIREMENTS + [
34 'cliquet-fxa<2'
35 ]
36
37 ENTRY_POINTS = {
38 'paste.app_factory': [
39 'main = kinto:main',
40 ],
41 'console_scripts': [
42 'kinto = kinto.__main__:main'
43 ],
44 }
45
46 DEPENDENCY_LINKS = [
47 ]
48
49 setup(name='kinto',
50 version='1.12.0.dev0',
51 description='Kinto Web Service - Store, Sync, Share, and Self-Host.',
52 long_description=README + "\n\n" + CHANGELOG + "\n\n" + CONTRIBUTORS,
53 license='Apache License (2.0)',
54 classifiers=[
55 "Programming Language :: Python",
56 "Programming Language :: Python :: 2",
57 "Programming Language :: Python :: 2.7",
58 "Programming Language :: Python :: 3",
59 "Programming Language :: Python :: 3.4",
60 "Programming Language :: Python :: 3.5",
61 "Programming Language :: Python :: Implementation :: CPython",
62 "Programming Language :: Python :: Implementation :: PyPy",
63 "Topic :: Internet :: WWW/HTTP",
64 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
65 "License :: OSI Approved :: Apache Software License"
66 ],
67 keywords="web sync json storage",
68 author='Mozilla Services',
69 author_email='[email protected]',
70 url='https://github.com/Kinto/kinto',
71 packages=find_packages(),
72 include_package_data=True,
73 zip_safe=False,
74 install_requires=REQUIREMENTS,
75 extras_require={
76 'postgresql': POSTGRESQL_REQUIREMENTS,
77 'monitoring': MONITORING_REQUIREMENTS,
78 'fxa': FXA_REQUIREMENTS,
79 ":python_version=='2.7'": ["functools32"],
80 },
81 test_suite="kinto.tests",
82 entry_points=ENTRY_POINTS,
83 dependency_links=DEPENDENCY_LINKS)
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,16 +18,16 @@
REQUIREMENTS = [
'waitress',
- 'cliquet>=3,<4',
+ 'cliquet>=3.1,<4',
'jsonschema',
]
POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [
- 'cliquet[postgresql]>=3,<4'
+ 'cliquet[postgresql]>=3.1,<4'
]
MONITORING_REQUIREMENTS = REQUIREMENTS + [
- 'cliquet[monitoring]>=3,<4'
+ 'cliquet[monitoring]>=3.1,<4'
]
FXA_REQUIREMENTS = REQUIREMENTS + [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,16 +18,16 @@\n \n REQUIREMENTS = [\n 'waitress',\n- 'cliquet>=3,<4',\n+ 'cliquet>=3.1,<4',\n 'jsonschema',\n ]\n \n POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n- 'cliquet[postgresql]>=3,<4'\n+ 'cliquet[postgresql]>=3.1,<4'\n ]\n \n MONITORING_REQUIREMENTS = REQUIREMENTS + [\n- 'cliquet[monitoring]>=3,<4'\n+ 'cliquet[monitoring]>=3.1,<4'\n ]\n \n FXA_REQUIREMENTS = REQUIREMENTS + [\n", "issue": "POST with If-None-Match: * and provided id in body always return 412\nDetected using kinto-client v0.4.0 https://github.com/Kinto/kinto-client/blob/v0.4.0/src/requests.js#L188-L205\n\nSee https://github.com/mozilla-services/cliquet/issues/673\n\n", "before_files": [{"content": "import codecs\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read_file(filename):\n \"\"\"Open a related file and return its content.\"\"\"\n with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:\n content = f.read()\n return content\n\nREADME = read_file('README.rst')\nCHANGELOG = read_file('CHANGELOG.rst')\nCONTRIBUTORS = read_file('CONTRIBUTORS.rst')\n\nREQUIREMENTS = [\n 'waitress',\n 'cliquet>=3,<4',\n 'jsonschema',\n]\n\nPOSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[postgresql]>=3,<4'\n]\n\nMONITORING_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[monitoring]>=3,<4'\n]\n\nFXA_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet-fxa<2'\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ],\n 'console_scripts': [\n 'kinto = kinto.__main__:main'\n ],\n}\n\nDEPENDENCY_LINKS = [\n]\n\nsetup(name='kinto',\n version='1.12.0.dev0',\n description='Kinto Web Service - Store, Sync, Share, and Self-Host.',\n long_description=README + \"\\n\\n\" + CHANGELOG + \"\\n\\n\" + CONTRIBUTORS,\n license='Apache License (2.0)',\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n \"License :: OSI Approved :: Apache Software License\"\n ],\n keywords=\"web sync json storage\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='https://github.com/Kinto/kinto',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={\n 'postgresql': POSTGRESQL_REQUIREMENTS,\n 'monitoring': MONITORING_REQUIREMENTS,\n 'fxa': FXA_REQUIREMENTS,\n \":python_version=='2.7'\": [\"functools32\"],\n },\n test_suite=\"kinto.tests\",\n entry_points=ENTRY_POINTS,\n dependency_links=DEPENDENCY_LINKS)\n", "path": "setup.py"}], "after_files": [{"content": "import codecs\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read_file(filename):\n \"\"\"Open a related file and return its content.\"\"\"\n with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:\n content = f.read()\n return content\n\nREADME = read_file('README.rst')\nCHANGELOG = read_file('CHANGELOG.rst')\nCONTRIBUTORS = read_file('CONTRIBUTORS.rst')\n\nREQUIREMENTS = [\n 'waitress',\n 
'cliquet>=3.1,<4',\n 'jsonschema',\n]\n\nPOSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[postgresql]>=3.1,<4'\n]\n\nMONITORING_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[monitoring]>=3.1,<4'\n]\n\nFXA_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet-fxa<2'\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ],\n 'console_scripts': [\n 'kinto = kinto.__main__:main'\n ],\n}\n\nDEPENDENCY_LINKS = [\n]\n\nsetup(name='kinto',\n version='1.12.0.dev0',\n description='Kinto Web Service - Store, Sync, Share, and Self-Host.',\n long_description=README + \"\\n\\n\" + CHANGELOG + \"\\n\\n\" + CONTRIBUTORS,\n license='Apache License (2.0)',\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n \"License :: OSI Approved :: Apache Software License\"\n ],\n keywords=\"web sync json storage\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='https://github.com/Kinto/kinto',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={\n 'postgresql': POSTGRESQL_REQUIREMENTS,\n 'monitoring': MONITORING_REQUIREMENTS,\n 'fxa': FXA_REQUIREMENTS,\n \":python_version=='2.7'\": [\"functools32\"],\n },\n test_suite=\"kinto.tests\",\n entry_points=ENTRY_POINTS,\n dependency_links=DEPENDENCY_LINKS)\n", "path": "setup.py"}]} | 1,091 | 166 |
gh_patches_debug_19521 | rasdani/github-patches | git_diff | streamlink__streamlink-453 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Less violent way of closing player when stream ends
Currently streamlink uses SIGKILL to close the player when a stream ends. This prevents the player from doing its own cleanup. For example, mpv leaves DPMS/screensaver disabled because of this. I know there is a --player-no-close option, but that has an unwanted side effect of not closing the player immediately in some situations.
I suggest fixing it by using SIGTERM instead:
```diff
diff -bur streamlink-0.1.0-orig/src/streamlink_cli/output.py streamlink-0.1.0/src/streamlink_cli/output.py
--- streamlink-0.1.0-orig/src/streamlink_cli/output.py 2016-11-21 21:56:29.000000000 +0200
+++ streamlink-0.1.0/src/streamlink_cli/output.py 2016-12-08 22:08:23.000000000 +0200
@@ -161,7 +161,7 @@
if self.kill:
with ignored(Exception):
- self.player.kill()
+ self.player.terminate()
self.player.wait()
def _write(self, data):
```
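If a player ever ignored SIGTERM, pairing it with a bounded grace period before falling back to SIGKILL would keep the player closing promptly. A sketch of how `_close()` could do this (the timeout value is arbitrary; polling keeps Python 2 compatibility, since `Popen.wait(timeout=...)` needs Python 3.3+):

```python
self.player.terminate()
waited, timeout = 0.0, 10.0
while self.player.poll() is None and waited < timeout:
    sleep(0.5)  # `sleep` is already imported at the top of output.py
    waited += 0.5
if self.player.returncode is None:  # still running after the grace period
    self.player.kill()
self.player.wait()
```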
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink_cli/output.py`
Content:
```
1 import os
2 import shlex
3 import subprocess
4 import sys
5
6 from time import sleep
7
8 import re
9
10 from .compat import is_win32, stdout
11 from .constants import DEFAULT_PLAYER_ARGUMENTS
12 from .utils import ignored
13
14 if is_win32:
15 import msvcrt
16
17
18 class Output(object):
19 def __init__(self):
20 self.opened = False
21
22 def open(self):
23 self._open()
24 self.opened = True
25
26 def close(self):
27 if self.opened:
28 self._close()
29
30 self.opened = False
31
32 def write(self, data):
33 if not self.opened:
34 raise IOError("Output is not opened")
35
36 return self._write(data)
37
38 def _open(self):
39 pass
40
41 def _close(self):
42 pass
43
44 def _write(self, data):
45 pass
46
47
48 class FileOutput(Output):
49 def __init__(self, filename=None, fd=None):
50 super(FileOutput, self).__init__()
51 self.filename = filename
52 self.fd = fd
53
54 def _open(self):
55 if self.filename:
56 self.fd = open(self.filename, "wb")
57
58 if is_win32:
59 msvcrt.setmode(self.fd.fileno(), os.O_BINARY)
60
61 def _close(self):
62 if self.fd is not stdout:
63 self.fd.close()
64
65 def _write(self, data):
66 self.fd.write(data)
67
68
69 class PlayerOutput(Output):
70 def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,
71 namedpipe=None):
72 super(PlayerOutput, self).__init__()
73 self.cmd = cmd
74 self.args = args
75 self.kill = kill
76 self.call = call
77 self.quiet = quiet
78
79 self.filename = filename
80 self.namedpipe = namedpipe
81 self.http = http
82
83 if self.namedpipe or self.filename or self.http:
84 self.stdin = sys.stdin
85 else:
86 self.stdin = subprocess.PIPE
87
88 if self.quiet:
89 self.stdout = open(os.devnull, "w")
90 self.stderr = open(os.devnull, "w")
91 else:
92 self.stdout = sys.stdout
93 self.stderr = sys.stderr
94
95 @property
96 def running(self):
97 sleep(0.5)
98 self.player.poll()
99 return self.player.returncode is None
100
101 def _create_arguments(self):
102 if self.namedpipe:
103 filename = self.namedpipe.path
104 elif self.filename:
105 filename = self.filename
106 elif self.http:
107 filename = self.http.url
108 else:
109 filename = "-"
110
111 args = self.args.format(filename=filename)
112 cmd = self.cmd
113 if is_win32:
114 return cmd + " " + args
115
116 return shlex.split(cmd) + shlex.split(args)
117
118 def _open(self):
119 try:
120 if self.call and self.filename:
121 self._open_call()
122 else:
123 self._open_subprocess()
124 finally:
125 if self.quiet:
126 # Output streams no longer needed in parent process
127 self.stdout.close()
128 self.stderr.close()
129
130 def _open_call(self):
131 subprocess.call(self._create_arguments(),
132 stdout=self.stdout,
133 stderr=self.stderr)
134
135 def _open_subprocess(self):
136 # Force bufsize=0 on all Python versions to avoid writing the
137 # unflushed buffer when closing a broken input pipe
138 self.player = subprocess.Popen(self._create_arguments(),
139 stdin=self.stdin, bufsize=0,
140 stdout=self.stdout,
141 stderr=self.stderr)
142 # Wait 0.5 seconds to see if program exited prematurely
143 if not self.running:
144 raise OSError("Process exited prematurely")
145
146 if self.namedpipe:
147 self.namedpipe.open("wb")
148 elif self.http:
149 self.http.open()
150
151 def _close(self):
152 # Close input to the player first to signal the end of the
153 # stream and allow the player to terminate of its own accord
154 if self.namedpipe:
155 self.namedpipe.close()
156 elif self.http:
157 self.http.close()
158 elif not self.filename:
159 self.player.stdin.close()
160
161 if self.kill:
162 with ignored(Exception):
163 self.player.kill()
164 self.player.wait()
165
166 def _write(self, data):
167 if self.namedpipe:
168 self.namedpipe.write(data)
169 elif self.http:
170 self.http.write(data)
171 else:
172 self.player.stdin.write(data)
173
174
175 __all__ = ["PlayerOutput", "FileOutput"]
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py
--- a/src/streamlink_cli/output.py
+++ b/src/streamlink_cli/output.py
@@ -67,6 +67,8 @@
class PlayerOutput(Output):
+ PLAYER_TERMINATE_TIMEOUT = 10.0
+
def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,
namedpipe=None):
super(PlayerOutput, self).__init__()
@@ -160,7 +162,15 @@
if self.kill:
with ignored(Exception):
- self.player.kill()
+ self.player.terminate()
+ if not is_win32:
+ t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT
+ while not self.player.poll() and t < timeout:
+ sleep(0.5)
+ t += 0.5
+
+ if not self.player.returncode:
+ self.player.kill()
self.player.wait()
def _write(self, data):
| {"golden_diff": "diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py\n--- a/src/streamlink_cli/output.py\n+++ b/src/streamlink_cli/output.py\n@@ -67,6 +67,8 @@\n \n \n class PlayerOutput(Output):\n+ PLAYER_TERMINATE_TIMEOUT = 10.0\n+\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,\n namedpipe=None):\n super(PlayerOutput, self).__init__()\n@@ -160,7 +162,15 @@\n \n if self.kill:\n with ignored(Exception):\n- self.player.kill()\n+ self.player.terminate()\n+ if not is_win32:\n+ t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT\n+ while not self.player.poll() and t < timeout:\n+ sleep(0.5)\n+ t += 0.5\n+\n+ if not self.player.returncode:\n+ self.player.kill()\n self.player.wait()\n \n def _write(self, data):\n", "issue": "Less violent way of closing player when stream ends\nCurrently streamlink uses SIGKILL to close the player when a stream ends. This prevents the player from doing its own cleanup. For example, mpv leaves DPMS/screensaver disabled because of this. I know there is --player-no-close option, but that has an unwanted side-effect of not closing the player immediately in some situations.\r\n\r\nI suggest fixing it by using SIGTERM instead:\r\n```diff\r\ndiff -bur streamlink-0.1.0-orig/src/streamlink_cli/output.py streamlink-0.1.0/src/streamlink_cli/output.py\r\n--- streamlink-0.1.0-orig/src/streamlink_cli/output.py 2016-11-21 21:56:29.000000000 +0200\r\n+++ streamlink-0.1.0/src/streamlink_cli/output.py 2016-12-08 22:08:23.000000000 +0200\r\n@@ -161,7 +161,7 @@\r\n \r\n if self.kill:\r\n with ignored(Exception):\r\n- self.player.kill()\r\n+ self.player.terminate()\r\n self.player.wait()\r\n \r\n def _write(self, data):\r\n```\n", "before_files": [{"content": "import os\nimport shlex\nimport subprocess\nimport sys\n\nfrom time import sleep\n\nimport re\n\nfrom .compat import is_win32, stdout\nfrom .constants import DEFAULT_PLAYER_ARGUMENTS\nfrom .utils import ignored\n\nif is_win32:\n import msvcrt\n\n\nclass Output(object):\n def __init__(self):\n self.opened = False\n\n def open(self):\n self._open()\n self.opened = True\n\n def close(self):\n if self.opened:\n self._close()\n\n self.opened = False\n\n def write(self, data):\n if not self.opened:\n raise IOError(\"Output is not opened\")\n\n return self._write(data)\n\n def _open(self):\n pass\n\n def _close(self):\n pass\n\n def _write(self, data):\n pass\n\n\nclass FileOutput(Output):\n def __init__(self, filename=None, fd=None):\n super(FileOutput, self).__init__()\n self.filename = filename\n self.fd = fd\n\n def _open(self):\n if self.filename:\n self.fd = open(self.filename, \"wb\")\n\n if is_win32:\n msvcrt.setmode(self.fd.fileno(), os.O_BINARY)\n\n def _close(self):\n if self.fd is not stdout:\n self.fd.close()\n\n def _write(self, data):\n self.fd.write(data)\n\n\nclass PlayerOutput(Output):\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,\n namedpipe=None):\n super(PlayerOutput, self).__init__()\n self.cmd = cmd\n self.args = args\n self.kill = kill\n self.call = call\n self.quiet = quiet\n\n self.filename = filename\n self.namedpipe = namedpipe\n self.http = http\n\n if self.namedpipe or self.filename or self.http:\n self.stdin = sys.stdin\n else:\n self.stdin = subprocess.PIPE\n\n if self.quiet:\n self.stdout = open(os.devnull, \"w\")\n self.stderr = open(os.devnull, \"w\")\n else:\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n @property\n def 
running(self):\n sleep(0.5)\n self.player.poll()\n return self.player.returncode is None\n\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n elif self.filename:\n filename = self.filename\n elif self.http:\n filename = self.http.url\n else:\n filename = \"-\"\n\n args = self.args.format(filename=filename)\n cmd = self.cmd\n if is_win32:\n return cmd + \" \" + args\n\n return shlex.split(cmd) + shlex.split(args)\n\n def _open(self):\n try:\n if self.call and self.filename:\n self._open_call()\n else:\n self._open_subprocess()\n finally:\n if self.quiet:\n # Output streams no longer needed in parent process\n self.stdout.close()\n self.stderr.close()\n\n def _open_call(self):\n subprocess.call(self._create_arguments(),\n stdout=self.stdout,\n stderr=self.stderr)\n\n def _open_subprocess(self):\n # Force bufsize=0 on all Python versions to avoid writing the\n # unflushed buffer when closing a broken input pipe\n self.player = subprocess.Popen(self._create_arguments(),\n stdin=self.stdin, bufsize=0,\n stdout=self.stdout,\n stderr=self.stderr)\n # Wait 0.5 seconds to see if program exited prematurely\n if not self.running:\n raise OSError(\"Process exited prematurely\")\n\n if self.namedpipe:\n self.namedpipe.open(\"wb\")\n elif self.http:\n self.http.open()\n\n def _close(self):\n # Close input to the player first to signal the end of the\n # stream and allow the player to terminate of its own accord\n if self.namedpipe:\n self.namedpipe.close()\n elif self.http:\n self.http.close()\n elif not self.filename:\n self.player.stdin.close()\n\n if self.kill:\n with ignored(Exception):\n self.player.kill()\n self.player.wait()\n\n def _write(self, data):\n if self.namedpipe:\n self.namedpipe.write(data)\n elif self.http:\n self.http.write(data)\n else:\n self.player.stdin.write(data)\n\n\n__all__ = [\"PlayerOutput\", \"FileOutput\"]\n", "path": "src/streamlink_cli/output.py"}], "after_files": [{"content": "import os\nimport shlex\nimport subprocess\nimport sys\n\nfrom time import sleep\n\nimport re\n\nfrom .compat import is_win32, stdout\nfrom .constants import DEFAULT_PLAYER_ARGUMENTS\nfrom .utils import ignored\n\nif is_win32:\n import msvcrt\n\n\nclass Output(object):\n def __init__(self):\n self.opened = False\n\n def open(self):\n self._open()\n self.opened = True\n\n def close(self):\n if self.opened:\n self._close()\n\n self.opened = False\n\n def write(self, data):\n if not self.opened:\n raise IOError(\"Output is not opened\")\n\n return self._write(data)\n\n def _open(self):\n pass\n\n def _close(self):\n pass\n\n def _write(self, data):\n pass\n\n\nclass FileOutput(Output):\n def __init__(self, filename=None, fd=None):\n super(FileOutput, self).__init__()\n self.filename = filename\n self.fd = fd\n\n def _open(self):\n if self.filename:\n self.fd = open(self.filename, \"wb\")\n\n if is_win32:\n msvcrt.setmode(self.fd.fileno(), os.O_BINARY)\n\n def _close(self):\n if self.fd is not stdout:\n self.fd.close()\n\n def _write(self, data):\n self.fd.write(data)\n\n\nclass PlayerOutput(Output):\n PLAYER_TERMINATE_TIMEOUT = 10.0\n\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,\n namedpipe=None):\n super(PlayerOutput, self).__init__()\n self.cmd = cmd\n self.args = args\n self.kill = kill\n self.call = call\n self.quiet = quiet\n\n self.filename = filename\n self.namedpipe = namedpipe\n self.http = http\n\n if self.namedpipe or self.filename or self.http:\n self.stdin = sys.stdin\n else:\n 
self.stdin = subprocess.PIPE\n\n if self.quiet:\n self.stdout = open(os.devnull, \"w\")\n self.stderr = open(os.devnull, \"w\")\n else:\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n @property\n def running(self):\n sleep(0.5)\n self.player.poll()\n return self.player.returncode is None\n\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n elif self.filename:\n filename = self.filename\n elif self.http:\n filename = self.http.url\n else:\n filename = \"-\"\n\n args = self.args.format(filename=filename)\n cmd = self.cmd\n if is_win32:\n return cmd + \" \" + args\n\n return shlex.split(cmd) + shlex.split(args)\n\n def _open(self):\n try:\n if self.call and self.filename:\n self._open_call()\n else:\n self._open_subprocess()\n finally:\n if self.quiet:\n # Output streams no longer needed in parent process\n self.stdout.close()\n self.stderr.close()\n\n def _open_call(self):\n subprocess.call(self._create_arguments(),\n stdout=self.stdout,\n stderr=self.stderr)\n\n def _open_subprocess(self):\n # Force bufsize=0 on all Python versions to avoid writing the\n # unflushed buffer when closing a broken input pipe\n self.player = subprocess.Popen(self._create_arguments(),\n stdin=self.stdin, bufsize=0,\n stdout=self.stdout,\n stderr=self.stderr)\n # Wait 0.5 seconds to see if program exited prematurely\n if not self.running:\n raise OSError(\"Process exited prematurely\")\n\n if self.namedpipe:\n self.namedpipe.open(\"wb\")\n elif self.http:\n self.http.open()\n\n def _close(self):\n # Close input to the player first to signal the end of the\n # stream and allow the player to terminate of its own accord\n if self.namedpipe:\n self.namedpipe.close()\n elif self.http:\n self.http.close()\n elif not self.filename:\n self.player.stdin.close()\n\n if self.kill:\n with ignored(Exception):\n self.player.terminate()\n if not is_win32:\n t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT\n while not self.player.poll() and t < timeout:\n sleep(0.5)\n t += 0.5\n\n if not self.player.returncode:\n self.player.kill()\n self.player.wait()\n\n def _write(self, data):\n if self.namedpipe:\n self.namedpipe.write(data)\n elif self.http:\n self.http.write(data)\n else:\n self.player.stdin.write(data)\n\n\n__all__ = [\"PlayerOutput\", \"FileOutput\"]\n", "path": "src/streamlink_cli/output.py"}]} | 1,948 | 238 |
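For the streamlink record above, a minimal standalone sketch of the terminate-then-kill pattern that the golden diff introduces. This is not the project's code: the `sleep` child is a POSIX-only stand-in for the media player, and the liveness test uses `poll() is None` explicitly (the patch's `if not self.player.returncode` would also treat a clean exit code of 0 as still running).

```python
import subprocess
import time

PLAYER_TERMINATE_TIMEOUT = 10.0  # mirrors the constant the patch adds

def close_player(player: subprocess.Popen) -> None:
    """Ask the child to exit cleanly, escalating to SIGKILL after a grace period."""
    player.terminate()                        # SIGTERM lets the player run its own cleanup
    waited = 0.0
    while player.poll() is None and waited < PLAYER_TERMINATE_TIMEOUT:
        time.sleep(0.5)
        waited += 0.5
    if player.returncode is None:             # still alive after the grace period
        player.kill()                         # SIGKILL as the last resort
    player.wait()                             # reap the child either way

if __name__ == "__main__":
    proc = subprocess.Popen(["sleep", "60"])  # hypothetical stand-in for mpv/vlc
    close_player(proc)
    print("player exited with", proc.returncode)
```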
gh_patches_debug_4727 | rasdani/github-patches | git_diff | kserve__kserve-658 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Help wanted] Add e2e test for canary rollout
/kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/kfserving/kfserving/constants/constants.py`
Content:
```
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 # KFServing K8S constants
18 KFSERVING_GROUP = 'serving.kubeflow.org'
19 KFSERVING_KIND = 'InferenceService'
20 KFSERVING_PLURAL = 'inferenceservices'
21 KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')
22
23 KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()
24
25 # INFERENCESERVICE credentials common constants
26 INFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'
27 INFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'
28 DEFAULT_SECRET_NAME = "kfserving-secret-"
29 DEFAULT_SA_NAME = "kfserving-service-credentials"
30
31 # S3 credentials constants
32 S3_ACCESS_KEY_ID_DEFAULT_NAME = "awsAccessKeyID"
33 S3_SECRET_ACCESS_KEY_DEFAULT_NAME = "awsSecretAccessKey"
34 S3_DEFAULT_CREDS_FILE = '~/.aws/credentials'
35
36 # GCS credentials constants
37 GCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'
38 GCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'
39
40 # Azure credentials constants
41 AZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/kfserving/kfserving/constants/constants.py b/python/kfserving/kfserving/constants/constants.py
--- a/python/kfserving/kfserving/constants/constants.py
+++ b/python/kfserving/kfserving/constants/constants.py
@@ -19,6 +19,7 @@
KFSERVING_KIND = 'InferenceService'
KFSERVING_PLURAL = 'inferenceservices'
KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')
+KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION
KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()
| {"golden_diff": "diff --git a/python/kfserving/kfserving/constants/constants.py b/python/kfserving/kfserving/constants/constants.py\n--- a/python/kfserving/kfserving/constants/constants.py\n+++ b/python/kfserving/kfserving/constants/constants.py\n@@ -19,6 +19,7 @@\n KFSERVING_KIND = 'InferenceService'\n KFSERVING_PLURAL = 'inferenceservices'\n KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\n+KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION\n \n KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n", "issue": "[Help wanted] Add e2e test for canary rollout\n/kind feature\r\n\r\n**Describe the solution you'd like**\r\n[A clear and concise description of what you want to happen.]\r\n\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\n# KFServing K8S constants\nKFSERVING_GROUP = 'serving.kubeflow.org'\nKFSERVING_KIND = 'InferenceService'\nKFSERVING_PLURAL = 'inferenceservices'\nKFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\n\nKFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n\n# INFERENCESERVICE credentials common constants\nINFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'\nINFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'\nDEFAULT_SECRET_NAME = \"kfserving-secret-\"\nDEFAULT_SA_NAME = \"kfserving-service-credentials\"\n\n# S3 credentials constants\nS3_ACCESS_KEY_ID_DEFAULT_NAME = \"awsAccessKeyID\"\nS3_SECRET_ACCESS_KEY_DEFAULT_NAME = \"awsSecretAccessKey\"\nS3_DEFAULT_CREDS_FILE = '~/.aws/credentials'\n\n# GCS credentials constants\nGCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'\nGCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'\n\n# Azure credentials constants\nAZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'\n", "path": "python/kfserving/kfserving/constants/constants.py"}], "after_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\n# KFServing K8S constants\nKFSERVING_GROUP = 'serving.kubeflow.org'\nKFSERVING_KIND = 'InferenceService'\nKFSERVING_PLURAL = 'inferenceservices'\nKFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\nKFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION\n\nKFSERVING_LOGLEVEL = 
os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n\n# INFERENCESERVICE credentials common constants\nINFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'\nINFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'\nDEFAULT_SECRET_NAME = \"kfserving-secret-\"\nDEFAULT_SA_NAME = \"kfserving-service-credentials\"\n\n# S3 credentials constants\nS3_ACCESS_KEY_ID_DEFAULT_NAME = \"awsAccessKeyID\"\nS3_SECRET_ACCESS_KEY_DEFAULT_NAME = \"awsSecretAccessKey\"\nS3_DEFAULT_CREDS_FILE = '~/.aws/credentials'\n\n# GCS credentials constants\nGCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'\nGCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'\n\n# Azure credentials constants\nAZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'\n", "path": "python/kfserving/kfserving/constants/constants.py"}]} | 801 | 154 |
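For the kserve record above, a hedged sketch of how the new `KFSERVING_API_VERSION` constant might be consumed when an e2e test builds a custom-resource body. The constants are copied from the patched module; the helper and its `spec` fields are illustrative, not the full InferenceService schema.

```python
import os

KFSERVING_GROUP = 'serving.kubeflow.org'
KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')
KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION

def make_inference_service(name: str, namespace: str, spec: dict) -> dict:
    # Kubernetes custom resources carry apiVersion = "<group>/<version>",
    # which is exactly the string the new constant packages up.
    return {
        "apiVersion": KFSERVING_API_VERSION,
        "kind": "InferenceService",
        "metadata": {"name": name, "namespace": namespace},
        "spec": spec,
    }

if __name__ == "__main__":
    body = make_inference_service("canary-demo", "kfserving-test", {"default": {}})
    print(body["apiVersion"])  # serving.kubeflow.org/v1alpha2 unless overridden
```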
gh_patches_debug_31066 | rasdani/github-patches | git_diff | getsentry__sentry-python-434 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception: raise OSError("handle is closed")
When I initialized sentry_sdk and used concurrent.futures.process.ProcessPoolExecutor, the exception will be raised after python exit.
```
from concurrent.futures.process import ProcessPoolExecutor
import sentry_sdk
sentry_sdk.init(dsn="")
def test():
...
if __name__ == "__main__":
with ProcessPoolExecutor(max_workers=4) as worker:
worker.submit(test)
```
The exception:
```
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py", line 101, in _python_exit
thread_wakeup.wakeup()
File "/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py", line 89, in wakeup
self._writer.send_bytes(b"")
File "/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py", line 183, in send_bytes
self._check_closed()
File "/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/threading.py`
Content:
```
1 from __future__ import absolute_import
2
3 import sys
4
5 from threading import Thread
6
7 from sentry_sdk import Hub
8 from sentry_sdk._compat import reraise
9 from sentry_sdk.utils import event_from_exception
10 from sentry_sdk.integrations import Integration
11
12 from sentry_sdk._types import MYPY
13
14 if MYPY:
15 from typing import Any
16
17
18 class ThreadingIntegration(Integration):
19 identifier = "threading"
20
21 def __init__(self, propagate_hub=False):
22 self.propagate_hub = propagate_hub
23
24 @staticmethod
25 def setup_once():
26 # type: () -> None
27 old_start = Thread.start
28
29 def sentry_start(self, *a, **kw):
30 hub = Hub.current
31 integration = hub.get_integration(ThreadingIntegration)
32 if integration is not None:
33 if not integration.propagate_hub:
34 hub_ = None
35 else:
36 hub_ = Hub(hub)
37
38 self.run = _wrap_run(hub_, self.run)
39
40 return old_start(self, *a, **kw) # type: ignore
41
42 Thread.start = sentry_start # type: ignore
43
44
45 def _wrap_run(parent_hub, old_run):
46 def run(*a, **kw):
47 hub = parent_hub or Hub.current
48
49 with hub:
50 try:
51 return old_run(*a, **kw)
52 except Exception:
53 reraise(*_capture_exception())
54
55 return run
56
57
58 def _capture_exception():
59 hub = Hub.current
60 exc_info = sys.exc_info()
61
62 if hub.get_integration(ThreadingIntegration) is not None:
63 # If an integration is there, a client has to be there.
64 client = hub.client # type: Any
65
66 event, hint = event_from_exception(
67 exc_info,
68 client_options=client.options,
69 mechanism={"type": "threading", "handled": False},
70 )
71 hub.capture_event(event, hint=hint)
72
73 return exc_info
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/threading.py b/sentry_sdk/integrations/threading.py
--- a/sentry_sdk/integrations/threading.py
+++ b/sentry_sdk/integrations/threading.py
@@ -1,15 +1,13 @@
from __future__ import absolute_import
import sys
-
-from threading import Thread
+from threading import Thread, current_thread
from sentry_sdk import Hub
from sentry_sdk._compat import reraise
-from sentry_sdk.utils import event_from_exception
-from sentry_sdk.integrations import Integration
-
from sentry_sdk._types import MYPY
+from sentry_sdk.integrations import Integration
+from sentry_sdk.utils import event_from_exception
if MYPY:
from typing import Any
@@ -34,21 +32,26 @@
hub_ = None
else:
hub_ = Hub(hub)
-
- self.run = _wrap_run(hub_, self.run)
+ # Patching instance methods in `start()` creates a reference cycle if
+ # done in a naive way. See
+ # https://github.com/getsentry/sentry-python/pull/434
+ #
+ # In threading module, using current_thread API will access current thread instance
+ # without holding it to avoid a reference cycle in an easier way.
+ self.run = _wrap_run(hub_, self.run.__func__)
return old_start(self, *a, **kw) # type: ignore
Thread.start = sentry_start # type: ignore
-def _wrap_run(parent_hub, old_run):
+def _wrap_run(parent_hub, old_run_func):
def run(*a, **kw):
hub = parent_hub or Hub.current
-
with hub:
try:
- return old_run(*a, **kw)
+ self = current_thread()
+ return old_run_func(self, *a, **kw)
except Exception:
reraise(*_capture_exception())
| {"golden_diff": "diff --git a/sentry_sdk/integrations/threading.py b/sentry_sdk/integrations/threading.py\n--- a/sentry_sdk/integrations/threading.py\n+++ b/sentry_sdk/integrations/threading.py\n@@ -1,15 +1,13 @@\n from __future__ import absolute_import\n \n import sys\n-\n-from threading import Thread\n+from threading import Thread, current_thread\n \n from sentry_sdk import Hub\n from sentry_sdk._compat import reraise\n-from sentry_sdk.utils import event_from_exception\n-from sentry_sdk.integrations import Integration\n-\n from sentry_sdk._types import MYPY\n+from sentry_sdk.integrations import Integration\n+from sentry_sdk.utils import event_from_exception\n \n if MYPY:\n from typing import Any\n@@ -34,21 +32,26 @@\n hub_ = None\n else:\n hub_ = Hub(hub)\n-\n- self.run = _wrap_run(hub_, self.run)\n+ # Patching instance methods in `start()` creates a reference cycle if\n+ # done in a naive way. See\n+ # https://github.com/getsentry/sentry-python/pull/434\n+ #\n+ # In threading module, using current_thread API will access current thread instance\n+ # without holding it to avoid a reference cycle in an easier way.\n+ self.run = _wrap_run(hub_, self.run.__func__)\n \n return old_start(self, *a, **kw) # type: ignore\n \n Thread.start = sentry_start # type: ignore\n \n \n-def _wrap_run(parent_hub, old_run):\n+def _wrap_run(parent_hub, old_run_func):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n-\n with hub:\n try:\n- return old_run(*a, **kw)\n+ self = current_thread()\n+ return old_run_func(self, *a, **kw)\n except Exception:\n reraise(*_capture_exception())\n", "issue": "Exception: raise OSError(\"handle is closed\")\nWhen I initialized sentry_sdk and used concurrent.futures.process.ProcessPoolExecutor, the exception will be raised after python exit.\r\n\r\n```\r\nfrom concurrent.futures.process import ProcessPoolExecutor\r\n\r\nimport sentry_sdk\r\n\r\nsentry_sdk.init(dsn=\"\")\r\n\r\n\r\ndef test():\r\n ...\r\n\r\n\r\nif __name__ == \"__main__\":\r\n with ProcessPoolExecutor(max_workers=4) as worker:\r\n worker.submit(test)\r\n```\r\n\r\nThe exception:\r\n```\r\nError in atexit._run_exitfuncs:\r\nTraceback (most recent call last):\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py\", line 101, in _python_exit\r\n thread_wakeup.wakeup()\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py\", line 89, in wakeup\r\n self._writer.send_bytes(b\"\")\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py\", line 183, in send_bytes\r\n self._check_closed()\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py\", line 136, in _check_closed\r\n raise OSError(\"handle is closed\")\r\nOSError: handle is closed\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom threading import Thread\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.utils import event_from_exception\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n\n\nclass ThreadingIntegration(Integration):\n identifier = \"threading\"\n\n def __init__(self, propagate_hub=False):\n self.propagate_hub = propagate_hub\n\n @staticmethod\n def setup_once():\n # type: () -> None\n old_start = Thread.start\n\n def sentry_start(self, *a, **kw):\n hub = Hub.current\n integration = hub.get_integration(ThreadingIntegration)\n if integration is not None:\n if not 
integration.propagate_hub:\n hub_ = None\n else:\n hub_ = Hub(hub)\n\n self.run = _wrap_run(hub_, self.run)\n\n return old_start(self, *a, **kw) # type: ignore\n\n Thread.start = sentry_start # type: ignore\n\n\ndef _wrap_run(parent_hub, old_run):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n\n with hub:\n try:\n return old_run(*a, **kw)\n except Exception:\n reraise(*_capture_exception())\n\n return run\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(ThreadingIntegration) is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"threading\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/threading.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport sys\nfrom threading import Thread, current_thread\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.utils import event_from_exception\n\nif MYPY:\n from typing import Any\n\n\nclass ThreadingIntegration(Integration):\n identifier = \"threading\"\n\n def __init__(self, propagate_hub=False):\n self.propagate_hub = propagate_hub\n\n @staticmethod\n def setup_once():\n # type: () -> None\n old_start = Thread.start\n\n def sentry_start(self, *a, **kw):\n hub = Hub.current\n integration = hub.get_integration(ThreadingIntegration)\n if integration is not None:\n if not integration.propagate_hub:\n hub_ = None\n else:\n hub_ = Hub(hub)\n # Patching instance methods in `start()` creates a reference cycle if\n # done in a naive way. See\n # https://github.com/getsentry/sentry-python/pull/434\n #\n # In threading module, using current_thread API will access current thread instance\n # without holding it to avoid a reference cycle in an easier way.\n self.run = _wrap_run(hub_, self.run.__func__)\n\n return old_start(self, *a, **kw) # type: ignore\n\n Thread.start = sentry_start # type: ignore\n\n\ndef _wrap_run(parent_hub, old_run_func):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n with hub:\n try:\n self = current_thread()\n return old_run_func(self, *a, **kw)\n except Exception:\n reraise(*_capture_exception())\n\n return run\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(ThreadingIntegration) is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"threading\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/threading.py"}]} | 1,126 | 441 |
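The sentry-python record above avoids a reference cycle: patching `self.run` with a closure over the bound method makes the instance reference itself through its own `__dict__`, keeping `Thread` objects alive longer than expected. A minimal sketch of the fixed wrapping pattern, assuming only the standard library:

```python
import threading
from threading import current_thread

def wrap_run(old_run_func):
    # old_run_func is the plain function (self.run.__func__ in the patch);
    # the instance is looked up at call time, so the closure never holds it.
    def run(*args, **kwargs):
        self = current_thread()
        return old_run_func(self, *args, **kwargs)
    return run

class Worker(threading.Thread):
    def run(self):
        print(f"{self.name}: doing work")

w = Worker()
w.run = wrap_run(type(w).run)  # no bound method captured, hence no self-cycle
w.start()
w.join()
```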
gh_patches_debug_18060 | rasdani/github-patches | git_diff | scrapy__scrapy-4378 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove SCRAPY_SELECTORS_BACKEND from scrapy.settings.deprecated
There is no trace of `SCRAPY_SELECTORS_BACKEND` in the code anymore, so that line should go.
It would be a good chance to review the rest of the lines in that file, some others may be worth removing as well.
Related to https://github.com/scrapy/scrapy/issues/4356
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/settings/deprecated.py`
Content:
```
1 import warnings
2 from scrapy.exceptions import ScrapyDeprecationWarning
3
4 DEPRECATED_SETTINGS = [
5 ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),
6 ('RESPONSE_CLASSES', 'no longer supported'),
7 ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),
8 ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),
9 ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),
10 ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),
11 ('SQLITE_DB', 'no longer supported'),
12 ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),
13 ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),
14 ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
15 ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
16 ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),
17 ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),
18 ]
19
20
21 def check_deprecated_settings(settings):
22 deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]
23 if deprecated:
24 msg = "You are using the following settings which are deprecated or obsolete"
25 msg += " (ask [email protected] for alternatives):"
26 msg = msg + "\n " + "\n ".join("%s: %s" % x for x in deprecated)
27 warnings.warn(msg, ScrapyDeprecationWarning)
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/settings/deprecated.py b/scrapy/settings/deprecated.py
--- a/scrapy/settings/deprecated.py
+++ b/scrapy/settings/deprecated.py
@@ -9,10 +9,8 @@
('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),
('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),
('SQLITE_DB', 'no longer supported'),
- ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),
('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),
('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
- ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),
('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),
]
| {"golden_diff": "diff --git a/scrapy/settings/deprecated.py b/scrapy/settings/deprecated.py\n--- a/scrapy/settings/deprecated.py\n+++ b/scrapy/settings/deprecated.py\n@@ -9,10 +9,8 @@\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n- ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n- ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n ]\n", "issue": "Remove SCRAPY_SELECTORS_BACKEND from scrapy.settings.deprecated\nThere is no trace of `SCRAPY_SELECTORS_BACKEND` in the code anymore, so that line should go.\r\n\r\nIt would be a good chance to review the rest of the lines in that file, some others may be worth removing as well.\r\n\r\nRelated to https://github.com/scrapy/scrapy/issues/4356\n", "before_files": [{"content": "import warnings\nfrom scrapy.exceptions import ScrapyDeprecationWarning\n\nDEPRECATED_SETTINGS = [\n ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),\n ('RESPONSE_CLASSES', 'no longer supported'),\n ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),\n ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n]\n\n\ndef check_deprecated_settings(settings):\n deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]\n if deprecated:\n msg = \"You are using the following settings which are deprecated or obsolete\"\n msg += \" (ask [email protected] for alternatives):\"\n msg = msg + \"\\n \" + \"\\n \".join(\"%s: %s\" % x for x in deprecated)\n warnings.warn(msg, ScrapyDeprecationWarning)\n", "path": "scrapy/settings/deprecated.py"}], "after_files": [{"content": "import warnings\nfrom scrapy.exceptions import ScrapyDeprecationWarning\n\nDEPRECATED_SETTINGS = [\n ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),\n ('RESPONSE_CLASSES', 'no longer supported'),\n ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),\n ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n]\n\n\ndef 
check_deprecated_settings(settings):\n deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]\n if deprecated:\n msg = \"You are using the following settings which are deprecated or obsolete\"\n msg += \" (ask [email protected] for alternatives):\"\n msg = msg + \"\\n \" + \"\\n \".join(\"%s: %s\" % x for x in deprecated)\n warnings.warn(msg, ScrapyDeprecationWarning)\n", "path": "scrapy/settings/deprecated.py"}]} | 740 | 218 |
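For the scrapy record above, a small runnable sketch of what `check_deprecated_settings` does after the cleanup. A plain dict and `DeprecationWarning` stand in for scrapy's `Settings` object and `ScrapyDeprecationWarning`; the table is trimmed to two entries.

```python
import warnings

DEPRECATED_SETTINGS = [
    ('SQLITE_DB', 'no longer supported'),
    ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),
]

def check_deprecated_settings(settings):
    deprecated = [x for x in DEPRECATED_SETTINGS if settings.get(x[0]) is not None]
    if deprecated:
        msg = "You are using the following settings which are deprecated or obsolete:"
        msg += "\n    " + "\n    ".join("%s: %s" % x for x in deprecated)
        warnings.warn(msg, DeprecationWarning)

if __name__ == "__main__":
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        check_deprecated_settings({'SQLITE_DB': 'scrapy.db'})  # triggers one entry
    print(caught[0].message)
```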
gh_patches_debug_22000 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-2442 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG]: colossalai run failed with unknown reason
### 🐛 Describe the bug
Some users have reported that they encounter launch failure when using `colossalai run`, but there is no error message to tell exactly what went wrong. A sample output is given below.
```text
Error: failed to run torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:29500 --rdzv_id=colossalai-default-job train.py --config config.py -s on 127.0.0.1
```
### Environment
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/cli/launcher/multinode_runner.py`
Content:
```
1 import fabric
2 from .hostinfo import HostInfo, HostInfoList
3 from multiprocessing import Pipe, Process
4 from multiprocessing import connection as mp_connection
5 import click
6
7
8 def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,
9 send_conn: mp_connection.Connection, env: dict) -> None:
10 """
11 Use fabric connection to execute command on local or remote hosts.
12
13 Args:
14 hostinfo (HostInfo): host information
15 workdir (str): the directory to execute the command
16 recv_conn (multiprocessing.connection.Connection): receive messages from the master sender
17 send_conn (multiprocessing.connection.Connection): send messages to the master receiver
18 env (dict): a dictionary for environment variables
19 """
20
21 fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)
22 finish = False
23 env_msg = ' '.join([f'{k}=\"{v}\"' for k, v in env.items()])
24
25 # keep listening until exit
26 while not finish:
27 # receive cmd
28 cmds = recv_conn.recv()
29
30 if cmds == 'exit':
31 # exit from the loop
32 finish = True
33 break
34 else:
35 # execute the commands
36 try:
37 # cd to execute directory
38 with fab_conn.cd(workdir):
39 # propagate the runtime environment
40 with fab_conn.prefix(f"export {env_msg}"):
41 if hostinfo.is_local_host:
42 # execute on the local machine
43 fab_conn.local(cmds, hide=False)
44 else:
45 # execute on the remote machine
46 fab_conn.run(cmds, hide=False)
47 send_conn.send('success')
48 except:
49 click.echo(f"Error: failed to run {cmds} on {hostinfo.hostname}")
50 send_conn.send('failure')
51
52 # shutdown
53 send_conn.send("finish")
54 fab_conn.close()
55
56
57 class MultiNodeRunner:
58 """
59 A runner to execute commands on an array of machines. This runner
60 is inspired by Nezha (https://github.com/zhuzilin/NeZha).
61 """
62
63 def __init__(self):
64 self.processes = {}
65 self.master_send_conns = {}
66 self.master_recv_conns = {}
67
68 def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:
69 """
70 Establish connections to a list of hosts
71
72 Args:
73 host_info_list (HostInfoList): a list of HostInfo objects
74 workdir (str): the directory where command is executed
75 env (dict): environment variables to propagate to hosts
76 """
77 for hostinfo in host_info_list:
78 master_send_conn, worker_recv_conn = Pipe()
79 master_recv_conn, worker_send_conn = Pipe()
80 p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))
81 p.start()
82 self.processes[hostinfo.hostname] = p
83 self.master_recv_conns[hostinfo.hostname] = master_recv_conn
84 self.master_send_conns[hostinfo.hostname] = master_send_conn
85
86 def send(self, hostinfo: HostInfo, cmd: str) -> None:
87 """
88 Send a command to a local/remote host.
89
90 Args:
91 hostinfo (HostInfo): host information
92 cmd (str): the command to execute
93 """
94
95 assert hostinfo.hostname in self.master_send_conns, \
96 f'{hostinfo} is not found in the current connections'
97 conn = self.master_send_conns[hostinfo.hostname]
98 conn.send(cmd)
99
100 def stop_all(self) -> None:
101 """
102 Stop connections to all hosts.
103 """
104
105 for hostname, conn in self.master_send_conns.items():
106 conn.send('exit')
107
108 def recv_from_all(self) -> dict:
109 """
110 Receive messages from all hosts
111
112 Returns:
113 msg_from_node (dict): a dictionry which contains messages from each node
114 """
115
116 msg_from_node = dict()
117 for hostname, conn in self.master_recv_conns.items():
118 msg_from_node[hostname] = conn.recv()
119 return msg_from_node
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/colossalai/cli/launcher/multinode_runner.py b/colossalai/cli/launcher/multinode_runner.py
--- a/colossalai/cli/launcher/multinode_runner.py
+++ b/colossalai/cli/launcher/multinode_runner.py
@@ -1,8 +1,10 @@
-import fabric
-from .hostinfo import HostInfo, HostInfoList
from multiprocessing import Pipe, Process
from multiprocessing import connection as mp_connection
+
import click
+import fabric
+
+from .hostinfo import HostInfo, HostInfoList
def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,
@@ -45,8 +47,10 @@
# execute on the remote machine
fab_conn.run(cmds, hide=False)
send_conn.send('success')
- except:
- click.echo(f"Error: failed to run {cmds} on {hostinfo.hostname}")
+ except Exception as e:
+ click.echo(
+ f"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}"
+ )
send_conn.send('failure')
# shutdown
| {"golden_diff": "diff --git a/colossalai/cli/launcher/multinode_runner.py b/colossalai/cli/launcher/multinode_runner.py\n--- a/colossalai/cli/launcher/multinode_runner.py\n+++ b/colossalai/cli/launcher/multinode_runner.py\n@@ -1,8 +1,10 @@\n-import fabric\n-from .hostinfo import HostInfo, HostInfoList\n from multiprocessing import Pipe, Process\n from multiprocessing import connection as mp_connection\n+\n import click\n+import fabric\n+\n+from .hostinfo import HostInfo, HostInfoList\n \n \n def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n@@ -45,8 +47,10 @@\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n- except:\n- click.echo(f\"Error: failed to run {cmds} on {hostinfo.hostname}\")\n+ except Exception as e:\n+ click.echo(\n+ f\"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}\"\n+ )\n send_conn.send('failure')\n \n # shutdown\n", "issue": "[BUG]: colossalai run failed with unknown reason\n### \ud83d\udc1b Describe the bug\n\nSome users have reported that they encounter launch failure when using `colossalai run`, but there is no error message to tell exactly what went wrong. A sample output is given below.\r\n\r\n```text\r\nError: failed to run torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:29500 --rdzv_id=colossalai-default-job train.py --config config.py -s on 127.0.0.1\r\n```\n\n### Environment\n\n_No response_\n", "before_files": [{"content": "import fabric\nfrom .hostinfo import HostInfo, HostInfoList\nfrom multiprocessing import Pipe, Process\nfrom multiprocessing import connection as mp_connection\nimport click\n\n\ndef run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n send_conn: mp_connection.Connection, env: dict) -> None:\n \"\"\"\n Use fabric connection to execute command on local or remote hosts.\n\n Args:\n hostinfo (HostInfo): host information\n workdir (str): the directory to execute the command\n recv_conn (multiprocessing.connection.Connection): receive messages from the master sender\n send_conn (multiprocessing.connection.Connection): send messages to the master receiver\n env (dict): a dictionary for environment variables\n \"\"\"\n\n fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)\n finish = False\n env_msg = ' '.join([f'{k}=\\\"{v}\\\"' for k, v in env.items()])\n\n # keep listening until exit\n while not finish:\n # receive cmd\n cmds = recv_conn.recv()\n\n if cmds == 'exit':\n # exit from the loop\n finish = True\n break\n else:\n # execute the commands\n try:\n # cd to execute directory\n with fab_conn.cd(workdir):\n # propagate the runtime environment\n with fab_conn.prefix(f\"export {env_msg}\"):\n if hostinfo.is_local_host:\n # execute on the local machine\n fab_conn.local(cmds, hide=False)\n else:\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n except:\n click.echo(f\"Error: failed to run {cmds} on {hostinfo.hostname}\")\n send_conn.send('failure')\n\n # shutdown\n send_conn.send(\"finish\")\n fab_conn.close()\n\n\nclass MultiNodeRunner:\n \"\"\"\n A runner to execute commands on an array of machines. 
This runner\n is inspired by Nezha (https://github.com/zhuzilin/NeZha).\n \"\"\"\n\n def __init__(self):\n self.processes = {}\n self.master_send_conns = {}\n self.master_recv_conns = {}\n\n def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:\n \"\"\"\n Establish connections to a list of hosts\n\n Args:\n host_info_list (HostInfoList): a list of HostInfo objects\n workdir (str): the directory where command is executed\n env (dict): environment variables to propagate to hosts\n \"\"\"\n for hostinfo in host_info_list:\n master_send_conn, worker_recv_conn = Pipe()\n master_recv_conn, worker_send_conn = Pipe()\n p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))\n p.start()\n self.processes[hostinfo.hostname] = p\n self.master_recv_conns[hostinfo.hostname] = master_recv_conn\n self.master_send_conns[hostinfo.hostname] = master_send_conn\n\n def send(self, hostinfo: HostInfo, cmd: str) -> None:\n \"\"\"\n Send a command to a local/remote host.\n\n Args:\n hostinfo (HostInfo): host information\n cmd (str): the command to execute\n \"\"\"\n\n assert hostinfo.hostname in self.master_send_conns, \\\n f'{hostinfo} is not found in the current connections'\n conn = self.master_send_conns[hostinfo.hostname]\n conn.send(cmd)\n\n def stop_all(self) -> None:\n \"\"\"\n Stop connections to all hosts.\n \"\"\"\n\n for hostname, conn in self.master_send_conns.items():\n conn.send('exit')\n\n def recv_from_all(self) -> dict:\n \"\"\"\n Receive messages from all hosts\n\n Returns:\n msg_from_node (dict): a dictionry which contains messages from each node\n \"\"\"\n\n msg_from_node = dict()\n for hostname, conn in self.master_recv_conns.items():\n msg_from_node[hostname] = conn.recv()\n return msg_from_node\n", "path": "colossalai/cli/launcher/multinode_runner.py"}], "after_files": [{"content": "from multiprocessing import Pipe, Process\nfrom multiprocessing import connection as mp_connection\n\nimport click\nimport fabric\n\nfrom .hostinfo import HostInfo, HostInfoList\n\n\ndef run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n send_conn: mp_connection.Connection, env: dict) -> None:\n \"\"\"\n Use fabric connection to execute command on local or remote hosts.\n\n Args:\n hostinfo (HostInfo): host information\n workdir (str): the directory to execute the command\n recv_conn (multiprocessing.connection.Connection): receive messages from the master sender\n send_conn (multiprocessing.connection.Connection): send messages to the master receiver\n env (dict): a dictionary for environment variables\n \"\"\"\n\n fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)\n finish = False\n env_msg = ' '.join([f'{k}=\\\"{v}\\\"' for k, v in env.items()])\n\n # keep listening until exit\n while not finish:\n # receive cmd\n cmds = recv_conn.recv()\n\n if cmds == 'exit':\n # exit from the loop\n finish = True\n break\n else:\n # execute the commands\n try:\n # cd to execute directory\n with fab_conn.cd(workdir):\n # propagate the runtime environment\n with fab_conn.prefix(f\"export {env_msg}\"):\n if hostinfo.is_local_host:\n # execute on the local machine\n fab_conn.local(cmds, hide=False)\n else:\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n except Exception as e:\n click.echo(\n f\"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}\"\n )\n send_conn.send('failure')\n\n # shutdown\n 
send_conn.send(\"finish\")\n fab_conn.close()\n\n\nclass MultiNodeRunner:\n \"\"\"\n A runner to execute commands on an array of machines. This runner\n is inspired by Nezha (https://github.com/zhuzilin/NeZha).\n \"\"\"\n\n def __init__(self):\n self.processes = {}\n self.master_send_conns = {}\n self.master_recv_conns = {}\n\n def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:\n \"\"\"\n Establish connections to a list of hosts\n\n Args:\n host_info_list (HostInfoList): a list of HostInfo objects\n workdir (str): the directory where command is executed\n env (dict): environment variables to propagate to hosts\n \"\"\"\n for hostinfo in host_info_list:\n master_send_conn, worker_recv_conn = Pipe()\n master_recv_conn, worker_send_conn = Pipe()\n p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))\n p.start()\n self.processes[hostinfo.hostname] = p\n self.master_recv_conns[hostinfo.hostname] = master_recv_conn\n self.master_send_conns[hostinfo.hostname] = master_send_conn\n\n def send(self, hostinfo: HostInfo, cmd: str) -> None:\n \"\"\"\n Send a command to a local/remote host.\n\n Args:\n hostinfo (HostInfo): host information\n cmd (str): the command to execute\n \"\"\"\n\n assert hostinfo.hostname in self.master_send_conns, \\\n f'{hostinfo} is not found in the current connections'\n conn = self.master_send_conns[hostinfo.hostname]\n conn.send(cmd)\n\n def stop_all(self) -> None:\n \"\"\"\n Stop connections to all hosts.\n \"\"\"\n\n for hostname, conn in self.master_send_conns.items():\n conn.send('exit')\n\n def recv_from_all(self) -> dict:\n \"\"\"\n Receive messages from all hosts\n\n Returns:\n msg_from_node (dict): a dictionry which contains messages from each node\n \"\"\"\n\n msg_from_node = dict()\n for hostname, conn in self.master_recv_conns.items():\n msg_from_node[hostname] = conn.recv()\n return msg_from_node\n", "path": "colossalai/cli/launcher/multinode_runner.py"}]} | 1,566 | 268 |
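The colossalai record above is an observability fix: binding the exception (`except Exception as e`) and echoing it gives the launch failure a diagnosable cause instead of a bare "Error: failed to run ...". A self-contained sketch of that pattern, with a fake runner standing in for the fabric call:

```python
def run_with_report(cmds, hostname, is_local_host, runner, echo=print):
    try:
        runner(cmds)
        return 'success'
    except Exception as e:  # the patch's key change: keep and report the cause
        echo(f"Error: failed to run {cmds} on {hostname}, "
             f"is localhost: {is_local_host}, exception: {e}")
        return 'failure'

def failing_runner(_cmds):
    raise ConnectionRefusedError("ssh: connect to host 127.0.0.1 port 22: refused")

if __name__ == "__main__":
    status = run_with_report("torchrun --nproc_per_node=4 train.py",
                             "127.0.0.1", True, failing_runner)
    print(status)  # 'failure', but the echoed line now names the real exception
```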
gh_patches_debug_32894 | rasdani/github-patches | git_diff | facebookresearch__hydra-609 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request] Allow @hydra.main() to take a config object and pass it through
# 🚀 Feature Request
Allow @hydra.main() to take a config and pass it through
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/main.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import functools
3 from typing import Callable, Optional
4
5 from ._internal.utils import get_args_parser, run_hydra
6 from .types import TaskFunction
7
8
9 def main(
10 config_path: Optional[str] = None,
11 config_name: Optional[str] = None,
12 strict: Optional[bool] = None,
13 ) -> Callable[[TaskFunction], Callable[[], None]]:
14 """
15 :param config_path: the config path, a directory relative to the declaring python file.
16 :param config_name: the name of the config (usually the file name without the .yaml extension)
17 :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an
18 existing key or if the code is accessing a non existent key
19 """
20
21 def main_decorator(task_function: TaskFunction) -> Callable[[], None]:
22 @functools.wraps(task_function)
23 def decorated_main() -> None:
24 run_hydra(
25 args_parser=get_args_parser(),
26 task_function=task_function,
27 config_path=config_path,
28 config_name=config_name,
29 strict=strict,
30 )
31
32 return decorated_main
33
34 return main_decorator
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/main.py b/hydra/main.py
--- a/hydra/main.py
+++ b/hydra/main.py
@@ -1,6 +1,8 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import functools
-from typing import Callable, Optional
+from typing import Any, Callable, Optional
+
+from omegaconf import DictConfig
from ._internal.utils import get_args_parser, run_hydra
from .types import TaskFunction
@@ -10,7 +12,7 @@
config_path: Optional[str] = None,
config_name: Optional[str] = None,
strict: Optional[bool] = None,
-) -> Callable[[TaskFunction], Callable[[], None]]:
+) -> Callable[[TaskFunction], Any]:
"""
:param config_path: the config path, a directory relative to the declaring python file.
:param config_name: the name of the config (usually the file name without the .yaml extension)
@@ -20,14 +22,20 @@
def main_decorator(task_function: TaskFunction) -> Callable[[], None]:
@functools.wraps(task_function)
- def decorated_main() -> None:
- run_hydra(
- args_parser=get_args_parser(),
- task_function=task_function,
- config_path=config_path,
- config_name=config_name,
- strict=strict,
- )
+ def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:
+ if cfg_passthrough is not None:
+ return task_function(cfg_passthrough)
+ else:
+ args = get_args_parser()
+ # no return value from run_hydra() as it may sometime actually run the task_function
+ # multiple times (--multirun)
+ run_hydra(
+ args_parser=args,
+ task_function=task_function,
+ config_path=config_path,
+ config_name=config_name,
+ strict=strict,
+ )
return decorated_main
| {"golden_diff": "diff --git a/hydra/main.py b/hydra/main.py\n--- a/hydra/main.py\n+++ b/hydra/main.py\n@@ -1,6 +1,8 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n import functools\n-from typing import Callable, Optional\n+from typing import Any, Callable, Optional\n+\n+from omegaconf import DictConfig\n \n from ._internal.utils import get_args_parser, run_hydra\n from .types import TaskFunction\n@@ -10,7 +12,7 @@\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n-) -> Callable[[TaskFunction], Callable[[], None]]:\n+) -> Callable[[TaskFunction], Any]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n@@ -20,14 +22,20 @@\n \n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n- def decorated_main() -> None:\n- run_hydra(\n- args_parser=get_args_parser(),\n- task_function=task_function,\n- config_path=config_path,\n- config_name=config_name,\n- strict=strict,\n- )\n+ def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:\n+ if cfg_passthrough is not None:\n+ return task_function(cfg_passthrough)\n+ else:\n+ args = get_args_parser()\n+ # no return value from run_hydra() as it may sometime actually run the task_function\n+ # multiple times (--multirun)\n+ run_hydra(\n+ args_parser=args,\n+ task_function=task_function,\n+ config_path=config_path,\n+ config_name=config_name,\n+ strict=strict,\n+ )\n \n return decorated_main\n", "issue": "[Feature Request] Allow @hydra.main() to take a config object and pass it through\n# \ud83d\ude80 Feature Request\r\n\r\nAllow @hydra.main() to take a config and pass it through\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport functools\nfrom typing import Callable, Optional\n\nfrom ._internal.utils import get_args_parser, run_hydra\nfrom .types import TaskFunction\n\n\ndef main(\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n) -> Callable[[TaskFunction], Callable[[], None]]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an\n existing key or if the code is accessing a non existent key\n \"\"\"\n\n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n def decorated_main() -> None:\n run_hydra(\n args_parser=get_args_parser(),\n task_function=task_function,\n config_path=config_path,\n config_name=config_name,\n strict=strict,\n )\n\n return decorated_main\n\n return main_decorator\n", "path": "hydra/main.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nimport functools\nfrom typing import Any, Callable, Optional\n\nfrom omegaconf import DictConfig\n\nfrom ._internal.utils import get_args_parser, run_hydra\nfrom .types import TaskFunction\n\n\ndef main(\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n) -> Callable[[TaskFunction], Any]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an\n existing key or if the code is accessing a non existent key\n \"\"\"\n\n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:\n if cfg_passthrough is not None:\n return task_function(cfg_passthrough)\n else:\n args = get_args_parser()\n # no return value from run_hydra() as it may sometime actually run the task_function\n # multiple times (--multirun)\n run_hydra(\n args_parser=args,\n task_function=task_function,\n config_path=config_path,\n config_name=config_name,\n strict=strict,\n )\n\n return decorated_main\n\n return main_decorator\n", "path": "hydra/main.py"}]} | 627 | 444 |
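For the hydra record above, a usage sketch of the new pass-through path. This assumes a Hydra build that includes the patch plus omegaconf; with `cfg_passthrough` the decorated function both skips CLI parsing and returns the task's value, which is what makes it directly unit-testable.

```python
from omegaconf import OmegaConf
import hydra

@hydra.main(config_name="config")  # the config file is only needed on the CLI path
def train(cfg):
    return cfg.lr * cfg.epochs

if __name__ == "__main__":
    cfg = OmegaConf.create({"lr": 0.1, "epochs": 10})
    print(train(cfg_passthrough=cfg))  # 1.0, no argument parsing involved
```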
gh_patches_debug_354 | rasdani/github-patches | git_diff | sanic-org__sanic-1343 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pin versions for LTS release
I think that (some) dependency versions should be allowed to float, but when we are ready for an LTS release, the versions should be pinned at that time.
@r0fls @ahopkins @seemethere @ashleysommer @yunstanford @ahopkins
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 """
2 Sanic
3 """
4 import codecs
5 import os
6 import re
7 from distutils.errors import DistutilsPlatformError
8 from distutils.util import strtobool
9
10 from setuptools import setup
11
12
13 def open_local(paths, mode='r', encoding='utf8'):
14 path = os.path.join(
15 os.path.abspath(os.path.dirname(__file__)),
16 *paths
17 )
18
19 return codecs.open(path, mode, encoding)
20
21
22 with open_local(['sanic', '__init__.py'], encoding='latin1') as fp:
23 try:
24 version = re.findall(r"^__version__ = '([^']+)'\r?$",
25 fp.read(), re.M)[0]
26 except IndexError:
27 raise RuntimeError('Unable to determine version.')
28
29
30 with open_local(['README.rst']) as rm:
31 long_description = rm.read()
32
33 setup_kwargs = {
34 'name': 'sanic',
35 'version': version,
36 'url': 'http://github.com/channelcat/sanic/',
37 'license': 'MIT',
38 'author': 'Channel Cat',
39 'author_email': '[email protected]',
40 'description': (
41 'A microframework based on uvloop, httptools, and learnings of flask'),
42 'long_description': long_description,
43 'packages': ['sanic'],
44 'platforms': 'any',
45 'classifiers': [
46 'Development Status :: 4 - Beta',
47 'Environment :: Web Environment',
48 'License :: OSI Approved :: MIT License',
49 'Programming Language :: Python :: 3.5',
50 'Programming Language :: Python :: 3.6',
51 ],
52 }
53
54 env_dependency = '; sys_platform != "win32" and implementation_name == "cpython"'
55 ujson = 'ujson>=1.35' + env_dependency
56 uvloop = 'uvloop>=0.5.3' + env_dependency
57
58 requirements = [
59 'httptools>=0.0.9',
60 uvloop,
61 ujson,
62 'aiofiles>=0.3.0',
63 'websockets>=5.0,<6.0',
64 'multidict>=4.0,<5.0',
65 ]
66 if strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
67 print("Installing without uJSON")
68 requirements.remove(ujson)
69
70 # 'nt' means windows OS
71 if strtobool(os.environ.get("SANIC_NO_UVLOOP", "no")):
72 print("Installing without uvLoop")
73 requirements.remove(uvloop)
74
75 setup_kwargs['install_requires'] = requirements
76 setup(**setup_kwargs)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -56,7 +56,7 @@
uvloop = 'uvloop>=0.5.3' + env_dependency
requirements = [
- 'httptools>=0.0.9',
+ 'httptools>=0.0.10',
uvloop,
ujson,
'aiofiles>=0.3.0',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -56,7 +56,7 @@\n uvloop = 'uvloop>=0.5.3' + env_dependency\n \n requirements = [\n- 'httptools>=0.0.9',\n+ 'httptools>=0.0.10',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n", "issue": "Pin versions for LTS release\nI think that versions of (some) should be allowed to float but when we are ready for an LTS release, the versions should be pinned at that time.\r\n\r\n@r0fls @ahopkins @seemethere @ashleysommer @yunstanford @ahopkins \n", "before_files": [{"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nfrom distutils.errors import DistutilsPlatformError\nfrom distutils.util import strtobool\n\nfrom setuptools import setup\n\n\ndef open_local(paths, mode='r', encoding='utf8'):\n path = os.path.join(\n os.path.abspath(os.path.dirname(__file__)),\n *paths\n )\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local(['sanic', '__init__.py'], encoding='latin1') as fp:\n try:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n fp.read(), re.M)[0]\n except IndexError:\n raise RuntimeError('Unable to determine version.')\n\n\nwith open_local(['README.rst']) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n 'name': 'sanic',\n 'version': version,\n 'url': 'http://github.com/channelcat/sanic/',\n 'license': 'MIT',\n 'author': 'Channel Cat',\n 'author_email': '[email protected]',\n 'description': (\n 'A microframework based on uvloop, httptools, and learnings of flask'),\n 'long_description': long_description,\n 'packages': ['sanic'],\n 'platforms': 'any',\n 'classifiers': [\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n}\n\nenv_dependency = '; sys_platform != \"win32\" and implementation_name == \"cpython\"'\nujson = 'ujson>=1.35' + env_dependency\nuvloop = 'uvloop>=0.5.3' + env_dependency\n\nrequirements = [\n 'httptools>=0.0.9',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n 'websockets>=5.0,<6.0',\n 'multidict>=4.0,<5.0',\n]\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n\nsetup_kwargs['install_requires'] = requirements\nsetup(**setup_kwargs)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nfrom distutils.errors import DistutilsPlatformError\nfrom distutils.util import strtobool\n\nfrom setuptools import setup\n\n\ndef open_local(paths, mode='r', encoding='utf8'):\n path = os.path.join(\n os.path.abspath(os.path.dirname(__file__)),\n *paths\n )\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local(['sanic', '__init__.py'], encoding='latin1') as fp:\n try:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n fp.read(), re.M)[0]\n except IndexError:\n raise RuntimeError('Unable to determine version.')\n\n\nwith open_local(['README.rst']) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n 'name': 'sanic',\n 'version': version,\n 'url': 'http://github.com/channelcat/sanic/',\n 'license': 'MIT',\n 'author': 'Channel Cat',\n 'author_email': '[email protected]',\n 'description': (\n 'A microframework based on uvloop, httptools, and learnings of flask'),\n 'long_description': long_description,\n 
'packages': ['sanic'],\n 'platforms': 'any',\n 'classifiers': [\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n}\n\nenv_dependency = '; sys_platform != \"win32\" and implementation_name == \"cpython\"'\nujson = 'ujson>=1.35' + env_dependency\nuvloop = 'uvloop>=0.5.3' + env_dependency\n\nrequirements = [\n 'httptools>=0.0.10',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n 'websockets>=5.0,<6.0',\n 'multidict>=4.0,<5.0',\n]\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n\nsetup_kwargs['install_requires'] = requirements\nsetup(**setup_kwargs)\n", "path": "setup.py"}]} | 1,019 | 99 |
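Beyond the `httptools` bump in the diff, the `setup.py` shown above gates its heavy dependencies behind environment variables. A short sketch of how that toggle logic behaves, assuming the same variable names (the snippet only exercises `strtobool`; it does not run an install):

```python
# Standalone sketch of the SANIC_NO_UJSON / SANIC_NO_UVLOOP toggles.
import os
from distutils.util import strtobool

requirements = ["httptools>=0.0.10", "uvloop>=0.5.3", "ujson>=1.35"]

if strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
    requirements = [r for r in requirements if not r.startswith("ujson")]
if strtobool(os.environ.get("SANIC_NO_UVLOOP", "no")):
    requirements = [r for r in requirements if not r.startswith("uvloop")]

print(requirements)  # e.g. run with SANIC_NO_UJSON=yes -> ujson is dropped
```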
gh_patches_debug_32361 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-3420 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/Chat/coati/models/gpt/gpt_actor.py`
Content:
```
1 from typing import Optional
2
3 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
4 from transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel
5
6 from ..base import Actor
7
8
9 class GPTActor(Actor):
10 """
11 GPT Actor model.
12
13 Args:
14 pretrained (str): Pretrained model name or path.
15 config (GPT2Config): Model config.
16 checkpoint (bool): Enable gradient checkpointing.
17 lora_rank (int): Rank of the LoRa layer.
18 lora_train_bias (str): Bias training strategy for the LoRa layer.
19 """
20
21 def __init__(self,
22 pretrained: Optional[str] = None,
23 config: Optional[GPT2Config] = None,
24 checkpoint: bool = False,
25 lora_rank: int = 0,
26 lora_train_bias: str = 'none') -> None:
27 if pretrained is not None:
28 model = GPT2LMHeadModel.from_pretrained(pretrained)
29 elif config is not None:
30 model = GPT2LMHeadModel(config)
31 else:
32 model = GPT2LMHeadModel(GPT2Config())
33 if checkpoint:
34 model.gradient_checkpointing_enable()
35 super().__init__(model, lora_rank, lora_train_bias)
36
```
Path: `applications/Chat/coati/models/gpt/gpt_critic.py`
Content:
```
1 from typing import Optional
2
3 import torch.nn as nn
4 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
5 from transformers.models.gpt2.modeling_gpt2 import GPT2Model
6
7 from ..base import Critic
8
9
10 class GPTCritic(Critic):
11 """
12 GPT Critic model.
13
14 Args:
15 pretrained (str): Pretrained model name or path.
16 config (GPT2Config): Model config.
17 checkpoint (bool): Enable gradient checkpointing.
18 lora_rank (int): Rank of the LO-RA decomposition.
19 lora_train_bias (str): LoRA bias training mode.
20 """
21
22 def __init__(self,
23 pretrained: Optional[str] = None,
24 config: Optional[GPT2Config] = None,
25 checkpoint: bool = False,
26 lora_rank: int = 0,
27 lora_train_bias: str = 'none') -> None:
28 if pretrained is not None:
29 model = GPT2Model.from_pretrained(pretrained)
30 elif config is not None:
31 model = GPT2Model(config)
32 else:
33 model = GPT2Model(GPT2Config())
34 if checkpoint:
35 model.gradient_checkpointing_enable()
36 value_head = nn.Linear(model.config.n_embd, 1)
37 super().__init__(model, value_head, lora_rank, lora_train_bias)
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/applications/Chat/coati/models/gpt/gpt_actor.py b/applications/Chat/coati/models/gpt/gpt_actor.py
--- a/applications/Chat/coati/models/gpt/gpt_actor.py
+++ b/applications/Chat/coati/models/gpt/gpt_actor.py
@@ -23,7 +23,8 @@
config: Optional[GPT2Config] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kwargs) -> None:
if pretrained is not None:
model = GPT2LMHeadModel.from_pretrained(pretrained)
elif config is not None:
@@ -32,4 +33,4 @@
model = GPT2LMHeadModel(GPT2Config())
if checkpoint:
model.gradient_checkpointing_enable()
- super().__init__(model, lora_rank, lora_train_bias)
+ super().__init__(model, lora_rank, lora_train_bias, **kwargs)
diff --git a/applications/Chat/coati/models/gpt/gpt_critic.py b/applications/Chat/coati/models/gpt/gpt_critic.py
--- a/applications/Chat/coati/models/gpt/gpt_critic.py
+++ b/applications/Chat/coati/models/gpt/gpt_critic.py
@@ -24,7 +24,8 @@
config: Optional[GPT2Config] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kwargs) -> None:
if pretrained is not None:
model = GPT2Model.from_pretrained(pretrained)
elif config is not None:
@@ -34,4 +35,4 @@
if checkpoint:
model.gradient_checkpointing_enable()
value_head = nn.Linear(model.config.n_embd, 1)
- super().__init__(model, value_head, lora_rank, lora_train_bias)
+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
| {"golden_diff": "diff --git a/applications/Chat/coati/models/gpt/gpt_actor.py b/applications/Chat/coati/models/gpt/gpt_actor.py\n--- a/applications/Chat/coati/models/gpt/gpt_actor.py\n+++ b/applications/Chat/coati/models/gpt/gpt_actor.py\n@@ -23,7 +23,8 @@\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n- lora_train_bias: str = 'none') -> None:\n+ lora_train_bias: str = 'none',\n+ **kwargs) -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n@@ -32,4 +33,4 @@\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n- super().__init__(model, lora_rank, lora_train_bias)\n+ super().__init__(model, lora_rank, lora_train_bias, **kwargs)\ndiff --git a/applications/Chat/coati/models/gpt/gpt_critic.py b/applications/Chat/coati/models/gpt/gpt_critic.py\n--- a/applications/Chat/coati/models/gpt/gpt_critic.py\n+++ b/applications/Chat/coati/models/gpt/gpt_critic.py\n@@ -24,7 +24,8 @@\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n- lora_train_bias: str = 'none') -> None:\n+ lora_train_bias: str = 'none',\n+ **kwargs) -> None:\n if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n@@ -34,4 +35,4 @@\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n- super().__init__(model, value_head, lora_rank, lora_train_bias)\n+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from typing import Optional\n\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel\n\nfrom ..base import Actor\n\n\nclass GPTActor(Actor):\n \"\"\"\n GPT Actor model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LoRa layer.\n lora_train_bias (str): Bias training strategy for the LoRa layer.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2LMHeadModel(config)\n else:\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n super().__init__(model, lora_rank, lora_train_bias)\n", "path": "applications/Chat/coati/models/gpt/gpt_actor.py"}, {"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2Model\n\nfrom ..base import Critic\n\n\nclass GPTCritic(Critic):\n \"\"\"\n GPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LO-RA decomposition.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n 
if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2Model(config)\n else:\n model = GPT2Model(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias)\n", "path": "applications/Chat/coati/models/gpt/gpt_critic.py"}], "after_files": [{"content": "from typing import Optional\n\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel\n\nfrom ..base import Actor\n\n\nclass GPTActor(Actor):\n \"\"\"\n GPT Actor model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LoRa layer.\n lora_train_bias (str): Bias training strategy for the LoRa layer.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none',\n **kwargs) -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2LMHeadModel(config)\n else:\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n super().__init__(model, lora_rank, lora_train_bias, **kwargs)\n", "path": "applications/Chat/coati/models/gpt/gpt_actor.py"}, {"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2Model\n\nfrom ..base import Critic\n\n\nclass GPTCritic(Critic):\n \"\"\"\n GPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LO-RA decomposition.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none',\n **kwargs) -> None:\n if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2Model(config)\n else:\n model = GPT2Model(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "path": "applications/Chat/coati/models/gpt/gpt_critic.py"}]} | 1,019 | 495 |
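The fix simply threads `**kwargs` from `GPTActor`/`GPTCritic` down to the `Actor`/`Critic` base classes, so base-class options no longer require widening every subclass signature. A hedged usage sketch (the import path is assumed from the file layout, and the commented-out keyword is hypothetical; whatever the base `__init__` actually accepts now passes through unchanged):

```python
# Sketch: names follow the files above; the trailing kwarg is hypothetical.
from coati.models.gpt import GPTActor, GPTCritic

actor = GPTActor(pretrained="gpt2", lora_rank=8)
critic = GPTCritic(
    pretrained="gpt2",
    lora_rank=8,
    # any additional option defined on the Critic base class is now
    # forwarded via **kwargs rather than raising TypeError:
    # use_action_mask=True,  # hypothetical base-class parameter
)
```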
gh_patches_debug_25257 | rasdani/github-patches | git_diff | ESMCI__cime-4442 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cs.status reset to force rebuild
I would like an additional option to cs.status or perhaps create_test that
would reset all cases in a test suite to the PEND SHAREDLIB_BUILD state so that
all tests are rebuilt before being restarted.
--- END ISSUE ---
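What the request boils down to, given the golden patch further down, is rewriting each test's `SHAREDLIB_BUILD` phase back to `PEND` so the next run rebuilds before restarting. A minimal sketch of that reset using `TestStatus` as a context manager (the directory layout is assumed):

```python
# Sketch: test directories are placeholders; TestStatus usage mirrors the patch.
import glob
import os
from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS

for status_file in glob.glob("/scratch/tests/*/TestStatus"):
    ts = TestStatus(test_dir=os.path.dirname(status_file))
    with ts:  # the context manager persists the change on exit
        ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)
```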
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CIME/cs_status.py`
Content:
```
1 """
2 Implementation of the cs.status script, which prints the status of all
3 of the tests in one or more test suites
4 """
5
6 from __future__ import print_function
7 from CIME.XML.standard_module_setup import *
8 from CIME.XML.expected_fails_file import ExpectedFailsFile
9 from CIME.test_status import TestStatus
10 import os
11 import sys
12 from collections import defaultdict
13
14
15 def cs_status(
16 test_paths,
17 summary=False,
18 fails_only=False,
19 count_fails_phase_list=None,
20 check_throughput=False,
21 check_memory=False,
22 expected_fails_filepath=None,
23 out=sys.stdout,
24 ):
25 """Print the test statuses of all tests in test_paths. The default
26 is to print to stdout, but this can be overridden with the 'out'
27 argument.
28
29 If summary is True, then only the overall status of each test is printed
30
31 If fails_only is True, then only test failures are printed (this
32 includes PENDs as well as FAILs).
33
34 If count_fails_phase_list is provided, it should be a list of phases
35 (from the phases given by test_status.ALL_PHASES). For each phase in
36 this list: do not give line-by-line output; instead, just report the
37 total number of tests that have not PASSed this phase (this includes
38 PENDs and FAILs). (This is typically used with the fails_only
39 option, but it can also be used without that option.)
40
41 If expected_fails_filepath is provided, it should be a string giving
42 the full path to a file listing expected failures for this test
43 suite. Expected failures are then labeled as such in the output.
44 """
45 expect(not (summary and fails_only), "Cannot have both summary and fails_only")
46 expect(
47 not (summary and count_fails_phase_list),
48 "Cannot have both summary and count_fails_phase_list",
49 )
50 if count_fails_phase_list is None:
51 count_fails_phase_list = []
52 non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)
53 xfails = _get_xfails(expected_fails_filepath)
54 test_id_output = defaultdict(str)
55 test_id_counts = defaultdict(int)
56 for test_path in test_paths:
57 test_dir = os.path.dirname(test_path)
58 ts = TestStatus(test_dir=test_dir)
59 test_id = os.path.basename(test_dir).split(".")[-1]
60 if summary:
61 output = _overall_output(
62 ts, " {status} {test_name}\n", check_throughput, check_memory
63 )
64 else:
65 if fails_only:
66 output = ""
67 else:
68 output = _overall_output(
69 ts,
70 " {test_name} (Overall: {status}) details:\n",
71 check_throughput,
72 check_memory,
73 )
74 output += ts.phase_statuses_dump(
75 prefix=" ",
76 skip_passes=fails_only,
77 skip_phase_list=count_fails_phase_list,
78 xfails=xfails.get(ts.get_name()),
79 )
80 if count_fails_phase_list:
81 ts.increment_non_pass_counts(non_pass_counts)
82
83 test_id_output[test_id] += output
84 test_id_counts[test_id] += 1
85
86 for test_id in sorted(test_id_output):
87 count = test_id_counts[test_id]
88 print(
89 "{}: {} test{}".format(test_id, count, "s" if count > 1 else ""), file=out
90 )
91 print(test_id_output[test_id], file=out)
92 print(" ", file=out)
93
94 if count_fails_phase_list:
95 print(72 * "=", file=out)
96 print("Non-PASS results for select phases:", file=out)
97 for phase in count_fails_phase_list:
98 print("{} non-passes: {}".format(phase, non_pass_counts[phase]), file=out)
99
100
101 def _get_xfails(expected_fails_filepath):
102 """Returns a dictionary of ExpectedFails objects, where the keys are test names
103
104 expected_fails_filepath should be either a string giving the path to
105 the file containing expected failures, or None. If None, then this
106 returns an empty dictionary (as if expected_fails_filepath were
107 pointing to a file with no expected failures listed).
108 """
109 if expected_fails_filepath is not None:
110 expected_fails_file = ExpectedFailsFile(expected_fails_filepath)
111 xfails = expected_fails_file.get_expected_fails()
112 else:
113 xfails = {}
114 return xfails
115
116
117 def _overall_output(ts, format_str, check_throughput, check_memory):
118 """Returns a string giving the overall test status
119
120 Args:
121 ts: TestStatus object
122 format_str (string): string giving the format of the output; must
123 contain place-holders for status and test_name
124 """
125 test_name = ts.get_name()
126 status = ts.get_overall_test_status(
127 check_throughput=check_throughput,
128 check_memory=check_memory,
129 )[0]
130 return format_str.format(status=status, test_name=test_name)
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CIME/cs_status.py b/CIME/cs_status.py
--- a/CIME/cs_status.py
+++ b/CIME/cs_status.py
@@ -6,7 +6,7 @@
from __future__ import print_function
from CIME.XML.standard_module_setup import *
from CIME.XML.expected_fails_file import ExpectedFailsFile
-from CIME.test_status import TestStatus
+from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS
import os
import sys
from collections import defaultdict
@@ -20,6 +20,7 @@
check_throughput=False,
check_memory=False,
expected_fails_filepath=None,
+ force_rebuild=False,
out=sys.stdout,
):
"""Print the test statuses of all tests in test_paths. The default
@@ -56,6 +57,11 @@
for test_path in test_paths:
test_dir = os.path.dirname(test_path)
ts = TestStatus(test_dir=test_dir)
+
+ if force_rebuild:
+ with ts:
+ ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)
+
test_id = os.path.basename(test_dir).split(".")[-1]
if summary:
output = _overall_output(
| {"golden_diff": "diff --git a/CIME/cs_status.py b/CIME/cs_status.py\n--- a/CIME/cs_status.py\n+++ b/CIME/cs_status.py\n@@ -6,7 +6,7 @@\n from __future__ import print_function\n from CIME.XML.standard_module_setup import *\n from CIME.XML.expected_fails_file import ExpectedFailsFile\n-from CIME.test_status import TestStatus\n+from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS\n import os\n import sys\n from collections import defaultdict\n@@ -20,6 +20,7 @@\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n+ force_rebuild=False,\n out=sys.stdout,\n ):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n@@ -56,6 +57,11 @@\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n+\n+ if force_rebuild:\n+ with ts:\n+ ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)\n+\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n", "issue": "cs.status reset to force rebuild\nI would like an additional option to cs.status or perhaps create_test that\r\nwould reset all cases in a test suite to the PEND SHAREDLIB_BUILD state so that \r\nall tests are rebuilt before being restarted. \n", "before_files": [{"content": "\"\"\"\nImplementation of the cs.status script, which prints the status of all\nof the tests in one or more test suites\n\"\"\"\n\nfrom __future__ import print_function\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.expected_fails_file import ExpectedFailsFile\nfrom CIME.test_status import TestStatus\nimport os\nimport sys\nfrom collections import defaultdict\n\n\ndef cs_status(\n test_paths,\n summary=False,\n fails_only=False,\n count_fails_phase_list=None,\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n out=sys.stdout,\n):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n is to print to stdout, but this can be overridden with the 'out'\n argument.\n\n If summary is True, then only the overall status of each test is printed\n\n If fails_only is True, then only test failures are printed (this\n includes PENDs as well as FAILs).\n\n If count_fails_phase_list is provided, it should be a list of phases\n (from the phases given by test_status.ALL_PHASES). For each phase in\n this list: do not give line-by-line output; instead, just report the\n total number of tests that have not PASSed this phase (this includes\n PENDs and FAILs). (This is typically used with the fails_only\n option, but it can also be used without that option.)\n\n If expected_fails_filepath is provided, it should be a string giving\n the full path to a file listing expected failures for this test\n suite. 
Expected failures are then labeled as such in the output.\n \"\"\"\n expect(not (summary and fails_only), \"Cannot have both summary and fails_only\")\n expect(\n not (summary and count_fails_phase_list),\n \"Cannot have both summary and count_fails_phase_list\",\n )\n if count_fails_phase_list is None:\n count_fails_phase_list = []\n non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)\n xfails = _get_xfails(expected_fails_filepath)\n test_id_output = defaultdict(str)\n test_id_counts = defaultdict(int)\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n ts, \" {status} {test_name}\\n\", check_throughput, check_memory\n )\n else:\n if fails_only:\n output = \"\"\n else:\n output = _overall_output(\n ts,\n \" {test_name} (Overall: {status}) details:\\n\",\n check_throughput,\n check_memory,\n )\n output += ts.phase_statuses_dump(\n prefix=\" \",\n skip_passes=fails_only,\n skip_phase_list=count_fails_phase_list,\n xfails=xfails.get(ts.get_name()),\n )\n if count_fails_phase_list:\n ts.increment_non_pass_counts(non_pass_counts)\n\n test_id_output[test_id] += output\n test_id_counts[test_id] += 1\n\n for test_id in sorted(test_id_output):\n count = test_id_counts[test_id]\n print(\n \"{}: {} test{}\".format(test_id, count, \"s\" if count > 1 else \"\"), file=out\n )\n print(test_id_output[test_id], file=out)\n print(\" \", file=out)\n\n if count_fails_phase_list:\n print(72 * \"=\", file=out)\n print(\"Non-PASS results for select phases:\", file=out)\n for phase in count_fails_phase_list:\n print(\"{} non-passes: {}\".format(phase, non_pass_counts[phase]), file=out)\n\n\ndef _get_xfails(expected_fails_filepath):\n \"\"\"Returns a dictionary of ExpectedFails objects, where the keys are test names\n\n expected_fails_filepath should be either a string giving the path to\n the file containing expected failures, or None. 
If None, then this\n returns an empty dictionary (as if expected_fails_filepath were\n pointing to a file with no expected failures listed).\n \"\"\"\n if expected_fails_filepath is not None:\n expected_fails_file = ExpectedFailsFile(expected_fails_filepath)\n xfails = expected_fails_file.get_expected_fails()\n else:\n xfails = {}\n return xfails\n\n\ndef _overall_output(ts, format_str, check_throughput, check_memory):\n \"\"\"Returns a string giving the overall test status\n\n Args:\n ts: TestStatus object\n format_str (string): string giving the format of the output; must\n contain place-holders for status and test_name\n \"\"\"\n test_name = ts.get_name()\n status = ts.get_overall_test_status(\n check_throughput=check_throughput,\n check_memory=check_memory,\n )[0]\n return format_str.format(status=status, test_name=test_name)\n", "path": "CIME/cs_status.py"}], "after_files": [{"content": "\"\"\"\nImplementation of the cs.status script, which prints the status of all\nof the tests in one or more test suites\n\"\"\"\n\nfrom __future__ import print_function\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.expected_fails_file import ExpectedFailsFile\nfrom CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS\nimport os\nimport sys\nfrom collections import defaultdict\n\n\ndef cs_status(\n test_paths,\n summary=False,\n fails_only=False,\n count_fails_phase_list=None,\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n force_rebuild=False,\n out=sys.stdout,\n):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n is to print to stdout, but this can be overridden with the 'out'\n argument.\n\n If summary is True, then only the overall status of each test is printed\n\n If fails_only is True, then only test failures are printed (this\n includes PENDs as well as FAILs).\n\n If count_fails_phase_list is provided, it should be a list of phases\n (from the phases given by test_status.ALL_PHASES). For each phase in\n this list: do not give line-by-line output; instead, just report the\n total number of tests that have not PASSed this phase (this includes\n PENDs and FAILs). (This is typically used with the fails_only\n option, but it can also be used without that option.)\n\n If expected_fails_filepath is provided, it should be a string giving\n the full path to a file listing expected failures for this test\n suite. 
Expected failures are then labeled as such in the output.\n \"\"\"\n expect(not (summary and fails_only), \"Cannot have both summary and fails_only\")\n expect(\n not (summary and count_fails_phase_list),\n \"Cannot have both summary and count_fails_phase_list\",\n )\n if count_fails_phase_list is None:\n count_fails_phase_list = []\n non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)\n xfails = _get_xfails(expected_fails_filepath)\n test_id_output = defaultdict(str)\n test_id_counts = defaultdict(int)\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n\n if force_rebuild:\n with ts:\n ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)\n\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n ts, \" {status} {test_name}\\n\", check_throughput, check_memory\n )\n else:\n if fails_only:\n output = \"\"\n else:\n output = _overall_output(\n ts,\n \" {test_name} (Overall: {status}) details:\\n\",\n check_throughput,\n check_memory,\n )\n output += ts.phase_statuses_dump(\n prefix=\" \",\n skip_passes=fails_only,\n skip_phase_list=count_fails_phase_list,\n xfails=xfails.get(ts.get_name()),\n )\n if count_fails_phase_list:\n ts.increment_non_pass_counts(non_pass_counts)\n\n test_id_output[test_id] += output\n test_id_counts[test_id] += 1\n\n for test_id in sorted(test_id_output):\n count = test_id_counts[test_id]\n print(\n \"{}: {} test{}\".format(test_id, count, \"s\" if count > 1 else \"\"), file=out\n )\n print(test_id_output[test_id], file=out)\n print(\" \", file=out)\n\n if count_fails_phase_list:\n print(72 * \"=\", file=out)\n print(\"Non-PASS results for select phases:\", file=out)\n for phase in count_fails_phase_list:\n print(\"{} non-passes: {}\".format(phase, non_pass_counts[phase]), file=out)\n\n\ndef _get_xfails(expected_fails_filepath):\n \"\"\"Returns a dictionary of ExpectedFails objects, where the keys are test names\n\n expected_fails_filepath should be either a string giving the path to\n the file containing expected failures, or None. If None, then this\n returns an empty dictionary (as if expected_fails_filepath were\n pointing to a file with no expected failures listed).\n \"\"\"\n if expected_fails_filepath is not None:\n expected_fails_file = ExpectedFailsFile(expected_fails_filepath)\n xfails = expected_fails_file.get_expected_fails()\n else:\n xfails = {}\n return xfails\n\n\ndef _overall_output(ts, format_str, check_throughput, check_memory):\n \"\"\"Returns a string giving the overall test status\n\n Args:\n ts: TestStatus object\n format_str (string): string giving the format of the output; must\n contain place-holders for status and test_name\n \"\"\"\n test_name = ts.get_name()\n status = ts.get_overall_test_status(\n check_throughput=check_throughput,\n check_memory=check_memory,\n )[0]\n return format_str.format(status=status, test_name=test_name)\n", "path": "CIME/cs_status.py"}]} | 1,671 | 272 |
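With the patch applied, the same reset is exposed as a flag on `cs_status`, so a caller can force rebuilds while printing statuses. A sketch (the test path is a placeholder):

```python
# Sketch: the test_paths value is a placeholder.
from CIME.cs_status import cs_status

cs_status(
    test_paths=["/scratch/tests/SMS.f19_g16.X.20240101_120000/TestStatus"],
    force_rebuild=True,  # new argument: resets SHAREDLIB_BUILD to PEND first
)
```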
gh_patches_debug_13744 | rasdani/github-patches | git_diff | saleor__saleor-1471 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor displaying success messages in the dashboard
The code responsible for displaying success messages in the dashboard lives in [_messages.html](https://github.com/mirumee/saleor/blob/master/templates/dashboard/includes/_messages.html) template and mixes Django's templating language with JS which isn't very elegant. Instead, there should be a function written entirely in JS that would take care of rendering those messages with data passed from backend through `data-*` attributes.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/dashboard/templatetags/utils.py`
Content:
```
1 from urllib.parse import urlencode
2
3 from django import forms
4 from django.template import Library
5 from django_filters.fields import RangeField
6 from versatileimagefield.widgets import VersatileImagePPOIClickWidget
7
8 from ...product.utils import get_margin_for_variant, get_variant_costs_data
9 from ..product.widgets import ImagePreviewWidget
10 from .chips import (
11 handle_default, handle_multiple_choice, handle_multiple_model_choice,
12 handle_nullboolean, handle_range, handle_single_choice,
13 handle_single_model_choice)
14
15 register = Library()
16
17
18 @register.simple_tag(takes_context=True)
19 def construct_get_query(context, **params):
20 request_get = context['request'].GET.dict()
21 if not (request_get or params):
22 return ''
23 all_params = {}
24 all_params.update(request_get)
25 all_params.update(params)
26 all_params.update(context.get('default_pagination_params', {}))
27 return '?' + urlencode(all_params)
28
29
30 @register.filter
31 def is_versatile_image_ppoi_click_widget(field):
32 '''
33 This filter checks if image field widget is used when user wants to edit
34 existing product image.
35 '''
36 return isinstance(field.field.widget, VersatileImagePPOIClickWidget)
37
38
39 @register.filter
40 def is_image_preview_widget(field):
41 '''
42 This filter checks if image field widget is used when user wants to add new
43 product image.
44 '''
45 return isinstance(field.field.widget, ImagePreviewWidget)
46
47
48 @register.inclusion_tag('dashboard/product/product_variant/_image_select.html')
49 def render_image_choice(field):
50 choices = zip(field, field.field.queryset)
51 return {'field': field, 'choices_with_images': choices}
52
53
54 @register.inclusion_tag('dashboard/includes/_pagination.html',
55 takes_context=True)
56 def paginate(context, page_obj, num_of_pages=5):
57 context['page_obj'] = page_obj
58 context['n_forward'] = num_of_pages + 1
59 context['n_backward'] = -num_of_pages - 1
60 context['next_section'] = (2 * num_of_pages) + 1
61 context['previous_section'] = (-2 * num_of_pages) - 1
62 return context
63
64
65 @register.simple_tag
66 def margin_for_variant(stock):
67 return get_margin_for_variant(stock)
68
69
70 @register.simple_tag
71 def margins_for_variant(variant):
72 margins = get_variant_costs_data(variant)['margins']
73 return margins
74
75
76 @register.inclusion_tag('dashboard/includes/_filters.html', takes_context=True)
77 def add_filters(context, filter_set, sort_by_filter_name='sort_by'):
78 chips = []
79 request_get = context['request'].GET.copy()
80 for filter_name in filter_set.form.cleaned_data.keys():
81 if filter_name == sort_by_filter_name:
82 # Skip processing of sort_by filter, as it's rendered differently
83 continue
84
85 field = filter_set.form[filter_name]
86 if field.value() not in ['', None]:
87 if isinstance(field.field, forms.NullBooleanField):
88 items = handle_nullboolean(field, request_get)
89 elif isinstance(field.field, forms.ModelMultipleChoiceField):
90 items = handle_multiple_model_choice(field, request_get)
91 elif isinstance(field.field, forms.MultipleChoiceField):
92 items = handle_multiple_choice(field, request_get)
93 elif isinstance(field.field, forms.ModelChoiceField):
94 items = handle_single_model_choice(field, request_get)
95 elif isinstance(field.field, forms.ChoiceField):
96 items = handle_single_choice(field, request_get)
97 elif isinstance(field.field, RangeField):
98 items = handle_range(field, request_get)
99 else:
100 items = handle_default(field, request_get)
101 chips.extend(items)
102 return {
103 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),
104 'sort_by': request_get.get(sort_by_filter_name, None)}
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/dashboard/templatetags/utils.py b/saleor/dashboard/templatetags/utils.py
--- a/saleor/dashboard/templatetags/utils.py
+++ b/saleor/dashboard/templatetags/utils.py
@@ -1,3 +1,5 @@
+from __future__ import unicode_literals
+from json import dumps
from urllib.parse import urlencode
from django import forms
@@ -102,3 +104,13 @@
return {
'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),
'sort_by': request_get.get(sort_by_filter_name, None)}
+
+
[email protected]_tag(takes_context=True)
+def serialize_messages(context):
+ """Serialize django.contrib.messages to JSON"""
+ messages = context.get('messages', [])
+ data = {}
+ for i, message in enumerate(messages):
+ data[i] = str(message)
+ return dumps(data)
| {"golden_diff": "diff --git a/saleor/dashboard/templatetags/utils.py b/saleor/dashboard/templatetags/utils.py\n--- a/saleor/dashboard/templatetags/utils.py\n+++ b/saleor/dashboard/templatetags/utils.py\n@@ -1,3 +1,5 @@\n+from __future__ import unicode_literals\n+from json import dumps\n from urllib.parse import urlencode\n \n from django import forms\n@@ -102,3 +104,13 @@\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n+\n+\[email protected]_tag(takes_context=True)\n+def serialize_messages(context):\n+ \"\"\"Serialize django.contrib.messages to JSON\"\"\"\n+ messages = context.get('messages', [])\n+ data = {}\n+ for i, message in enumerate(messages):\n+ data[i] = str(message)\n+ return dumps(data)\n", "issue": "Refactor displaying success messages in the dashboard\nThe code responsible for displaying success messages in the dashboard lives in [_messages.html](https://github.com/mirumee/saleor/blob/master/templates/dashboard/includes/_messages.html) template and mixes Django's templating language with JS which isn't very elegant. Instead, there should be a function written entirely in JS that would take care of rendering those messages with data passed from backend through `data-*` attributes.\n", "before_files": [{"content": "from urllib.parse import urlencode\n\nfrom django import forms\nfrom django.template import Library\nfrom django_filters.fields import RangeField\nfrom versatileimagefield.widgets import VersatileImagePPOIClickWidget\n\nfrom ...product.utils import get_margin_for_variant, get_variant_costs_data\nfrom ..product.widgets import ImagePreviewWidget\nfrom .chips import (\n handle_default, handle_multiple_choice, handle_multiple_model_choice,\n handle_nullboolean, handle_range, handle_single_choice,\n handle_single_model_choice)\n\nregister = Library()\n\n\[email protected]_tag(takes_context=True)\ndef construct_get_query(context, **params):\n request_get = context['request'].GET.dict()\n if not (request_get or params):\n return ''\n all_params = {}\n all_params.update(request_get)\n all_params.update(params)\n all_params.update(context.get('default_pagination_params', {}))\n return '?' 
+ urlencode(all_params)\n\n\[email protected]\ndef is_versatile_image_ppoi_click_widget(field):\n '''\n This filter checks if image field widget is used when user wants to edit\n existing product image.\n '''\n return isinstance(field.field.widget, VersatileImagePPOIClickWidget)\n\n\[email protected]\ndef is_image_preview_widget(field):\n '''\n This filter checks if image field widget is used when user wants to add new\n product image.\n '''\n return isinstance(field.field.widget, ImagePreviewWidget)\n\n\[email protected]_tag('dashboard/product/product_variant/_image_select.html')\ndef render_image_choice(field):\n choices = zip(field, field.field.queryset)\n return {'field': field, 'choices_with_images': choices}\n\n\[email protected]_tag('dashboard/includes/_pagination.html',\n takes_context=True)\ndef paginate(context, page_obj, num_of_pages=5):\n context['page_obj'] = page_obj\n context['n_forward'] = num_of_pages + 1\n context['n_backward'] = -num_of_pages - 1\n context['next_section'] = (2 * num_of_pages) + 1\n context['previous_section'] = (-2 * num_of_pages) - 1\n return context\n\n\[email protected]_tag\ndef margin_for_variant(stock):\n return get_margin_for_variant(stock)\n\n\[email protected]_tag\ndef margins_for_variant(variant):\n margins = get_variant_costs_data(variant)['margins']\n return margins\n\n\[email protected]_tag('dashboard/includes/_filters.html', takes_context=True)\ndef add_filters(context, filter_set, sort_by_filter_name='sort_by'):\n chips = []\n request_get = context['request'].GET.copy()\n for filter_name in filter_set.form.cleaned_data.keys():\n if filter_name == sort_by_filter_name:\n # Skip processing of sort_by filter, as it's rendered differently\n continue\n\n field = filter_set.form[filter_name]\n if field.value() not in ['', None]:\n if isinstance(field.field, forms.NullBooleanField):\n items = handle_nullboolean(field, request_get)\n elif isinstance(field.field, forms.ModelMultipleChoiceField):\n items = handle_multiple_model_choice(field, request_get)\n elif isinstance(field.field, forms.MultipleChoiceField):\n items = handle_multiple_choice(field, request_get)\n elif isinstance(field.field, forms.ModelChoiceField):\n items = handle_single_model_choice(field, request_get)\n elif isinstance(field.field, forms.ChoiceField):\n items = handle_single_choice(field, request_get)\n elif isinstance(field.field, RangeField):\n items = handle_range(field, request_get)\n else:\n items = handle_default(field, request_get)\n chips.extend(items)\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n", "path": "saleor/dashboard/templatetags/utils.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom json import dumps\nfrom urllib.parse import urlencode\n\nfrom django import forms\nfrom django.template import Library\nfrom django_filters.fields import RangeField\nfrom versatileimagefield.widgets import VersatileImagePPOIClickWidget\n\nfrom ...product.utils import get_margin_for_variant, get_variant_costs_data\nfrom ..product.widgets import ImagePreviewWidget\nfrom .chips import (\n handle_default, handle_multiple_choice, handle_multiple_model_choice,\n handle_nullboolean, handle_range, handle_single_choice,\n handle_single_model_choice)\n\nregister = Library()\n\n\[email protected]_tag(takes_context=True)\ndef construct_get_query(context, **params):\n request_get = context['request'].GET.dict()\n if not (request_get or params):\n return ''\n all_params 
= {}\n all_params.update(request_get)\n all_params.update(params)\n all_params.update(context.get('default_pagination_params', {}))\n return '?' + urlencode(all_params)\n\n\[email protected]\ndef is_versatile_image_ppoi_click_widget(field):\n '''\n This filter checks if image field widget is used when user wants to edit\n existing product image.\n '''\n return isinstance(field.field.widget, VersatileImagePPOIClickWidget)\n\n\[email protected]\ndef is_image_preview_widget(field):\n '''\n This filter checks if image field widget is used when user wants to add new\n product image.\n '''\n return isinstance(field.field.widget, ImagePreviewWidget)\n\n\[email protected]_tag('dashboard/product/product_variant/_image_select.html')\ndef render_image_choice(field):\n choices = zip(field, field.field.queryset)\n return {'field': field, 'choices_with_images': choices}\n\n\[email protected]_tag('dashboard/includes/_pagination.html',\n takes_context=True)\ndef paginate(context, page_obj, num_of_pages=5):\n context['page_obj'] = page_obj\n context['n_forward'] = num_of_pages + 1\n context['n_backward'] = -num_of_pages - 1\n context['next_section'] = (2 * num_of_pages) + 1\n context['previous_section'] = (-2 * num_of_pages) - 1\n return context\n\n\[email protected]_tag\ndef margin_for_variant(stock):\n return get_margin_for_variant(stock)\n\n\[email protected]_tag\ndef margins_for_variant(variant):\n margins = get_variant_costs_data(variant)['margins']\n return margins\n\n\[email protected]_tag('dashboard/includes/_filters.html', takes_context=True)\ndef add_filters(context, filter_set, sort_by_filter_name='sort_by'):\n chips = []\n request_get = context['request'].GET.copy()\n for filter_name in filter_set.form.cleaned_data.keys():\n if filter_name == sort_by_filter_name:\n # Skip processing of sort_by filter, as it's rendered differently\n continue\n\n field = filter_set.form[filter_name]\n if field.value() not in ['', None]:\n if isinstance(field.field, forms.NullBooleanField):\n items = handle_nullboolean(field, request_get)\n elif isinstance(field.field, forms.ModelMultipleChoiceField):\n items = handle_multiple_model_choice(field, request_get)\n elif isinstance(field.field, forms.MultipleChoiceField):\n items = handle_multiple_choice(field, request_get)\n elif isinstance(field.field, forms.ModelChoiceField):\n items = handle_single_model_choice(field, request_get)\n elif isinstance(field.field, forms.ChoiceField):\n items = handle_single_choice(field, request_get)\n elif isinstance(field.field, RangeField):\n items = handle_range(field, request_get)\n else:\n items = handle_default(field, request_get)\n chips.extend(items)\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n\n\[email protected]_tag(takes_context=True)\ndef serialize_messages(context):\n \"\"\"Serialize django.contrib.messages to JSON\"\"\"\n messages = context.get('messages', [])\n data = {}\n for i, message in enumerate(messages):\n data[i] = str(message)\n return dumps(data)\n", "path": "saleor/dashboard/templatetags/utils.py"}]} | 1,373 | 215 |
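The new `serialize_messages` tag flattens `django.contrib.messages` into an index-keyed JSON object that a template can drop into a `data-*` attribute for frontend JS to parse. A quick sketch of what the function returns; calling it directly with a plain dict stands in for a real template context:

```python
# Sketch: a dict stands in for the template context object.
from saleor.dashboard.templatetags.utils import serialize_messages

print(serialize_messages({"messages": ["Product saved", "Image uploaded"]}))
# -> {"0": "Product saved", "1": "Image uploaded"}
```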
gh_patches_debug_12447 | rasdani/github-patches | git_diff | searxng__searxng-3204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: lingva engine / redirects & Key-Errors
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/return42/searxng
Branch: darmarit.org
Version: 2024.2.3+a6f5d690
**How did you install SearXNG?**
(unmodified fork/brand) from master branch
**What happened?**
With the default config / the "official instance" we have the errors reported below:
https://github.com/searxng/searxng/blob/df1a774003c285866a96b149bf92412037b4932d/searx/settings.yml#L1037-L1041
**How To Reproduce**
```
!lingva en-de convenient
```
**Technical report**
```
Error
* Error: httpx.ReadTimeout
* Percentage: 50
* Parameters: `(None, None, 'lingva.thedaviddelta.com')`
* File name: `searx/search/processors/online.py:118`
* Function: `_send_http_request`
* Code: `response = req(params['url'], **request_args)`
```
```
Error
* Error: 1 redirects, maximum: 0
* Percentage: 50
* Parameters: `('200', 'OK', 'lingva.thedaviddelta.com')`
* File name: `searx/search/processors/online.py:127`
* Function: `_send_http_request`
* Code: `count_error(`
```
```
Error
* Error: KeyError
* Percentage: 50
* Parameters: `()`
* File name: `searx/engines/lingva.py:51`
* Function: `response`
* Code: `infobox += f"<b>{translation['type']}</b>"`
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/lingva.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Lingva (alternative Google Translate frontend)"""
4
5 from json import loads
6
7 about = {
8 "website": 'https://lingva.ml',
9 "wikidata_id": None,
10 "official_api_documentation": 'https://github.com/thedaviddelta/lingva-translate#public-apis',
11 "use_official_api": True,
12 "require_api_key": False,
13 "results": 'JSON',
14 }
15
16 engine_type = 'online_dictionary'
17 categories = ['general']
18
19 url = "https://lingva.thedaviddelta.com/"
20 search_url = "{url}/api/v1/{from_lang}/{to_lang}/{query}"
21
22
23 def request(_query, params):
24 params['url'] = search_url.format(
25 url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']
26 )
27 return params
28
29
30 def response(resp):
31 results = []
32
33 result = loads(resp.text)
34 info = result["info"]
35 from_to_prefix = "%s-%s " % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])
36
37 if "typo" in info:
38 results.append({"suggestion": from_to_prefix + info["typo"]})
39
40 if 'definitions' in info: # pylint: disable=too-many-nested-blocks
41 for definition in info['definitions']:
42 if 'list' in definition:
43 for item in definition['list']:
44 if 'synonyms' in item:
45 for synonym in item['synonyms']:
46 results.append({"suggestion": from_to_prefix + synonym})
47
48 infobox = ""
49
50 for translation in info["extraTranslations"]:
51 infobox += f"<b>{translation['type']}</b>"
52
53 for word in translation["list"]:
54 infobox += f"<dl><dt>{word['word']}</dt>"
55
56 for meaning in word["meanings"]:
57 infobox += f"<dd>{meaning}</dd>"
58
59 infobox += "</dl>"
60
61 results.append(
62 {
63 'infobox': result["translation"],
64 'content': infobox,
65 }
66 )
67
68 return results
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/lingva.py b/searx/engines/lingva.py
--- a/searx/engines/lingva.py
+++ b/searx/engines/lingva.py
@@ -16,7 +16,7 @@
engine_type = 'online_dictionary'
categories = ['general']
-url = "https://lingva.thedaviddelta.com/"
+url = "https://lingva.thedaviddelta.com"
search_url = "{url}/api/v1/{from_lang}/{to_lang}/{query}"
@@ -48,8 +48,6 @@
infobox = ""
for translation in info["extraTranslations"]:
- infobox += f"<b>{translation['type']}</b>"
-
for word in translation["list"]:
infobox += f"<dl><dt>{word['word']}</dt>"
| {"golden_diff": "diff --git a/searx/engines/lingva.py b/searx/engines/lingva.py\n--- a/searx/engines/lingva.py\n+++ b/searx/engines/lingva.py\n@@ -16,7 +16,7 @@\n engine_type = 'online_dictionary'\n categories = ['general']\n \n-url = \"https://lingva.thedaviddelta.com/\"\n+url = \"https://lingva.thedaviddelta.com\"\n search_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n \n \n@@ -48,8 +48,6 @@\n infobox = \"\"\n \n for translation in info[\"extraTranslations\"]:\n- infobox += f\"<b>{translation['type']}</b>\"\n-\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n", "issue": "Bug: lingva engine / redirects & Key-Errors\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n\r\nRepository: https://github.com/return42/searxng\r\nBranch: darmarit.org\r\nVersion: 2024.2.3+a6f5d690\r\n\r\n**How did you install SearXNG?**\r\n\r\n(unmodified fork/brand) from master branch\r\n\r\n**What happened?**\r\n\r\nWith the default config / the \"official instance\" we have the errors reported below:\r\n\r\nhttps://github.com/searxng/searxng/blob/df1a774003c285866a96b149bf92412037b4932d/searx/settings.yml#L1037-L1041\r\n\r\n**How To Reproduce**\r\n\r\n```\r\n!lingva en-de convenient\r\n```\r\n\r\n**Technical report**\r\n\r\n```\r\nError\r\n * Error: httpx.ReadTimeout\r\n * Percentage: 50\r\n * Parameters: `(None, None, 'lingva.thedaviddelta.com')`\r\n * File name: `searx/search/processors/online.py:118`\r\n * Function: `_send_http_request`\r\n * Code: `response = req(params['url'], **request_args)`\r\n```\r\n\r\n```\r\nError\r\n * Error: 1 redirects, maximum: 0\r\n * Percentage: 50\r\n * Parameters: `('200', 'OK', 'lingva.thedaviddelta.com')`\r\n * File name: `searx/search/processors/online.py:127`\r\n * Function: `_send_http_request`\r\n * Code: `count_error(`\r\n```\r\n\r\n```\r\nError\r\n * Error: KeyError\r\n * Percentage: 50\r\n * Parameters: `()`\r\n * File name: `searx/engines/lingva.py:51`\r\n * Function: `response`\r\n * Code: `infobox += f\"<b>{translation['type']}</b>\"`\r\n```\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Lingva (alternative Google Translate frontend)\"\"\"\n\nfrom json import loads\n\nabout = {\n \"website\": 'https://lingva.ml',\n \"wikidata_id\": None,\n \"official_api_documentation\": 'https://github.com/thedaviddelta/lingva-translate#public-apis',\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\nengine_type = 'online_dictionary'\ncategories = ['general']\n\nurl = \"https://lingva.thedaviddelta.com/\"\nsearch_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n\n\ndef request(_query, params):\n params['url'] = search_url.format(\n url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']\n )\n return params\n\n\ndef response(resp):\n results = []\n\n result = loads(resp.text)\n info = result[\"info\"]\n from_to_prefix = \"%s-%s \" % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])\n\n if \"typo\" in info:\n results.append({\"suggestion\": from_to_prefix + info[\"typo\"]})\n\n if 'definitions' in info: # pylint: disable=too-many-nested-blocks\n for definition in info['definitions']:\n if 'list' in definition:\n for item in definition['list']:\n if 'synonyms' in item:\n for synonym in item['synonyms']:\n results.append({\"suggestion\": from_to_prefix + synonym})\n\n infobox = \"\"\n\n for translation in 
info[\"extraTranslations\"]:\n infobox += f\"<b>{translation['type']}</b>\"\n\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n\n for meaning in word[\"meanings\"]:\n infobox += f\"<dd>{meaning}</dd>\"\n\n infobox += \"</dl>\"\n\n results.append(\n {\n 'infobox': result[\"translation\"],\n 'content': infobox,\n }\n )\n\n return results\n", "path": "searx/engines/lingva.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Lingva (alternative Google Translate frontend)\"\"\"\n\nfrom json import loads\n\nabout = {\n \"website\": 'https://lingva.ml',\n \"wikidata_id\": None,\n \"official_api_documentation\": 'https://github.com/thedaviddelta/lingva-translate#public-apis',\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\nengine_type = 'online_dictionary'\ncategories = ['general']\n\nurl = \"https://lingva.thedaviddelta.com\"\nsearch_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n\n\ndef request(_query, params):\n params['url'] = search_url.format(\n url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']\n )\n return params\n\n\ndef response(resp):\n results = []\n\n result = loads(resp.text)\n info = result[\"info\"]\n from_to_prefix = \"%s-%s \" % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])\n\n if \"typo\" in info:\n results.append({\"suggestion\": from_to_prefix + info[\"typo\"]})\n\n if 'definitions' in info: # pylint: disable=too-many-nested-blocks\n for definition in info['definitions']:\n if 'list' in definition:\n for item in definition['list']:\n if 'synonyms' in item:\n for synonym in item['synonyms']:\n results.append({\"suggestion\": from_to_prefix + synonym})\n\n infobox = \"\"\n\n for translation in info[\"extraTranslations\"]:\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n\n for meaning in word[\"meanings\"]:\n infobox += f\"<dd>{meaning}</dd>\"\n\n infobox += \"</dl>\"\n\n results.append(\n {\n 'infobox': result[\"translation\"],\n 'content': infobox,\n }\n )\n\n return results\n", "path": "searx/engines/lingva.py"}]} | 1,350 | 192 |
gh_patches_debug_254 | rasdani/github-patches | git_diff | mindee__doctr-123 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[docs] Enable documentation of multiple versions at once
As of now, the documentation that would be deployed publicly is only the latest version. The better alternative would be:
- having the latest version by default
- having the documentation of each release accessible as well using a displayed selector
Hugging Face Transformers did the following: https://github.com/huggingface/transformers/blob/master/.circleci/deploy.sh
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 import sphinx_rtd_theme
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 import os
16 import sys
17 sys.path.insert(0, os.path.abspath('../..'))
18 import doctr
19
20 # -- Project information -----------------------------------------------------
21
22 master_doc = 'index'
23 project = 'doctr'
24 copyright = '2021, Mindee'
25 author = 'François-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'
26
27 # The full version, including alpha/beta/rc tags
28 version = doctr.__version__
29 release = doctr.__version__ + '-git'
30
31
32 # -- General configuration ---------------------------------------------------
33
34 # Add any Sphinx extension module names here, as strings. They can be
35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 # ones.
37 extensions = [
38 'sphinx.ext.autodoc',
39 'sphinx.ext.napoleon',
40 'sphinx.ext.viewcode',
41 'sphinx.ext.coverage',
42 'sphinx.ext.mathjax',
43 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/
44 'sphinx_copybutton',
45 ]
46
47 napoleon_use_ivar = True
48
49 # Add any paths that contain templates here, relative to this directory.
50 templates_path = ['_templates']
51
52 # List of patterns, relative to source directory, that match files and
53 # directories to ignore when looking for source files.
54 # This pattern also affects html_static_path and html_extra_path.
55 exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']
56
57
58 # The name of the Pygments (syntax highlighting) style to use.
59 pygments_style = 'sphinx'
60 highlight_language = 'python3'
61
62 # -- Options for HTML output -------------------------------------------------
63
64 # The theme to use for HTML and HTML Help pages. See the documentation for
65 # a list of builtin themes.
66 #
67 html_theme = 'sphinx_rtd_theme'
68 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
69
70 # Theme options are theme-specific and customize the look and feel of a theme
71 # further. For a list of options available for each theme, see the
72 # documentation.
73 #
74 html_theme_options = {
75 'collapse_navigation': False,
76 'display_version': True,
77 'logo_only': False,
78 }
79
80 # html_logo = '_static/images/logo.png'
81
82
83 # Add any paths that contain custom static files (such as style sheets) here,
84 # relative to this directory. They are copied after the builtin static files,
85 # so a file named "default.css" will overwrite the builtin "default.css".
86 html_static_path = ['_static']
87
88 # A list of files that should not be packed into the epub file.
89 epub_exclude_files = ['search.html']
90
91 def setup(app):
92 app.add_css_file('css/mindee.css')
93 app.add_js_file('js/custom.js')
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -73,7 +73,7 @@
#
html_theme_options = {
'collapse_navigation': False,
- 'display_version': True,
+ 'display_version': False,
'logo_only': False,
}
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -73,7 +73,7 @@\n #\n html_theme_options = {\n 'collapse_navigation': False,\n- 'display_version': True,\n+ 'display_version': False,\n 'logo_only': False,\n }\n", "issue": "[docs] Enable documentation of multiple versions at once\nAs of now, the documentation that would be deployed publicly is only the latest version. The better alternative would be:\r\n- having the latest version by default\r\n- having the documentation of each release accessible as well using a displayed selector\r\n\r\nHugginface transformers did the following: https://github.com/huggingface/transformers/blob/master/.circleci/deploy.sh\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport sphinx_rtd_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../..'))\nimport doctr\n\n# -- Project information -----------------------------------------------------\n\nmaster_doc = 'index'\nproject = 'doctr'\ncopyright = '2021, Mindee'\nauthor = 'Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'\n\n# The full version, including alpha/beta/rc tags\nversion = doctr.__version__\nrelease = doctr.__version__ + '-git'\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.napoleon',\n\t'sphinx.ext.viewcode',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/\n 'sphinx_copybutton',\n]\n\nnapoleon_use_ivar = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\nhighlight_language = 'python3'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': True,\n 'logo_only': False,\n}\n\n# html_logo = '_static/images/logo.png'\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\ndef setup(app):\n app.add_css_file('css/mindee.css')\n app.add_js_file('js/custom.js')\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport sphinx_rtd_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../..'))\nimport doctr\n\n# -- Project information -----------------------------------------------------\n\nmaster_doc = 'index'\nproject = 'doctr'\ncopyright = '2021, Mindee'\nauthor = 'Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'\n\n# The full version, including alpha/beta/rc tags\nversion = doctr.__version__\nrelease = doctr.__version__ + '-git'\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.napoleon',\n\t'sphinx.ext.viewcode',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/\n 'sphinx_copybutton',\n]\n\nnapoleon_use_ivar = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\nhighlight_language = 'python3'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': False,\n 'logo_only': False,\n}\n\n# html_logo = '_static/images/logo.png'\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\ndef setup(app):\n app.add_css_file('css/mindee.css')\n app.add_js_file('js/custom.js')\n", "path": "docs/source/conf.py"}]} | 1,229 | 77 |
gh_patches_debug_49809 | rasdani/github-patches | git_diff | plotly__plotly.py-699 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
jsonschema.SchemaError when a figure is validated
Here is a minimal example that reproduces the bug: http://nbviewer.jupyter.org/gist/empet/cf922d7c7f4269d6f63432ec67a5d020
The notebook runs OK (with plotly 2.0.1) when I call `plot(fig)`. `iplot(fig)` generates the plot too, but an error box pops up whenever Jupyter tries to save the notebook. The box has the following content:
_The save operation succeeded, but the notebook does not appear to be valid. The validation error was:
Notebook Validation failed_:
`u'data': [{u'colorscale': u'Viridis', u'z': [[2, 27, 105, 100], [87, 14, 121, 102], [26, 121, 73, 34], [44, 105, 111, 127]], u'type': u'heatmap', u'zsmooth': u'best'}], u'layout': {u'width': 400, u'height': 400}}` _is not valid under any of the given schemas_:
`{
"data": [
{
"colorscale": "Viridis",
"z": [
[
2,
27,
105,
100
],
[
87,
14,
121,
102
],
[
26,
121,
73,
34
],
[
44,
105,
111,
127
]
],
"type": "heatmap",
"zsmooth": "best"
}
],
"layout": {
"width": 400,
"height": 400
}
}`
Initially I formulated this issue only for heatmaps, but meanwhile I realized that this behaviour manifests for any type of plot.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 exec (open('plotly/version.py').read())
4
5
6 def readme():
7 with open('README.rst') as f:
8 return f.read()
9
10
11 setup(name='plotly',
12 version=__version__,
13 use_2to3=False,
14 author='Chris P',
15 author_email='[email protected]',
16 maintainer='Chris P',
17 maintainer_email='[email protected]',
18 url='https://plot.ly/python/',
19 description="Python plotting library for collaborative, "
20 "interactive, publication-quality graphs.",
21 long_description=readme(),
22 classifiers=[
23 'Development Status :: 4 - Beta',
24 'Programming Language :: Python :: 2',
25 'Programming Language :: Python :: 2.7',
26 'Programming Language :: Python :: 3',
27 'Programming Language :: Python :: 3.3',
28 'Programming Language :: Python :: 3.4',
29 'Programming Language :: Python :: 3.5',
30 'Topic :: Scientific/Engineering :: Visualization',
31 ],
32 license='MIT',
33 packages=['plotly',
34 'plotly/api',
35 'plotly/api/v1',
36 'plotly/api/v2',
37 'plotly/plotly',
38 'plotly/plotly/chunked_requests',
39 'plotly/figure_factory',
40 'plotly/graph_objs',
41 'plotly/grid_objs',
42 'plotly/widgets',
43 'plotly/offline',
44 'plotly/matplotlylib',
45 'plotly/matplotlylib/mplexporter',
46 'plotly/matplotlylib/mplexporter/renderers'],
47 package_data={'plotly': ['package_data/*']},
48 install_requires=['decorator', 'requests', 'six', 'pytz'],
49 zip_safe=False)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,5 +45,9 @@
'plotly/matplotlylib/mplexporter',
'plotly/matplotlylib/mplexporter/renderers'],
package_data={'plotly': ['package_data/*']},
- install_requires=['decorator', 'requests', 'six', 'pytz'],
+ install_requires=['decorator',
+ 'nbformat>=4.2',
+ 'pytz',
+ 'requests',
+ 'six'],
zip_safe=False)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,5 +45,9 @@\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n- install_requires=['decorator', 'requests', 'six', 'pytz'],\n+ install_requires=['decorator',\n+ 'nbformat>=4.2',\n+ 'pytz',\n+ 'requests',\n+ 'six'],\n zip_safe=False)\n", "issue": "jsonschema.SchemaError when a figure is validated\nHere is a minimal example that reproduces the bug: http://nbviewer.jupyter.org/gist/empet/cf922d7c7f4269d6f63432ec67a5d020\r\n\r\nThe notebook runs OK (with plotly 2.0.1) when I call `plot(fig)`. `iplot(fig)` generates the plot too, but an error box pops up whenever Jupyter tries to save the notebook. The box has the following content:\r\n\r\n_The save operation succeeded, but the notebook does not appear to be valid. The validation error was:\r\nNotebook Validation failed_:\r\n`u'data': [{u'colorscale': u'Viridis', u'z': [[2, 27, 105, 100], [87, 14, 121, 102], [26, 121, 73, 34], [44, 105, 111, 127]], u'type': u'heatmap', u'zsmooth': u'best'}], u'layout': {u'width': 400, u'height': 400}}` _is not valid under any of the given schemas_:\r\n\r\n`{\r\n \"data\": [\r\n {\r\n \"colorscale\": \"Viridis\",\r\n \"z\": [\r\n [\r\n 2,\r\n 27,\r\n 105,\r\n 100\r\n ],\r\n [\r\n 87,\r\n 14,\r\n 121,\r\n 102\r\n ],\r\n [\r\n 26,\r\n 121,\r\n 73,\r\n 34\r\n ],\r\n [\r\n 44,\r\n 105,\r\n 111,\r\n 127\r\n ]\r\n ],\r\n \"type\": \"heatmap\",\r\n \"zsmooth\": \"best\"\r\n }\r\n ],\r\n \"layout\": {\r\n \"width\": 400,\r\n \"height\": 400\r\n }\r\n}`\r\n\r\nInitially I formulated this issue only for heatmaps, but meanwhile I realized that this behaviour manifests for any type of plot.\n", "before_files": [{"content": "from setuptools import setup\n\nexec (open('plotly/version.py').read())\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\n\nsetup(name='plotly',\n version=__version__,\n use_2to3=False,\n author='Chris P',\n author_email='[email protected]',\n maintainer='Chris P',\n maintainer_email='[email protected]',\n url='https://plot.ly/python/',\n description=\"Python plotting library for collaborative, \"\n \"interactive, publication-quality graphs.\",\n long_description=readme(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n license='MIT',\n packages=['plotly',\n 'plotly/api',\n 'plotly/api/v1',\n 'plotly/api/v2',\n 'plotly/plotly',\n 'plotly/plotly/chunked_requests',\n 'plotly/figure_factory',\n 'plotly/graph_objs',\n 'plotly/grid_objs',\n 'plotly/widgets',\n 'plotly/offline',\n 'plotly/matplotlylib',\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n install_requires=['decorator', 'requests', 'six', 'pytz'],\n zip_safe=False)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nexec (open('plotly/version.py').read())\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\n\nsetup(name='plotly',\n version=__version__,\n use_2to3=False,\n author='Chris P',\n author_email='[email protected]',\n maintainer='Chris P',\n maintainer_email='[email protected]',\n 
url='https://plot.ly/python/',\n description=\"Python plotting library for collaborative, \"\n \"interactive, publication-quality graphs.\",\n long_description=readme(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n license='MIT',\n packages=['plotly',\n 'plotly/api',\n 'plotly/api/v1',\n 'plotly/api/v2',\n 'plotly/plotly',\n 'plotly/plotly/chunked_requests',\n 'plotly/figure_factory',\n 'plotly/graph_objs',\n 'plotly/grid_objs',\n 'plotly/widgets',\n 'plotly/offline',\n 'plotly/matplotlylib',\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n install_requires=['decorator',\n 'nbformat>=4.2',\n 'pytz',\n 'requests',\n 'six'],\n zip_safe=False)\n", "path": "setup.py"}]} | 1,213 | 127 |