problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_23016
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-7996
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Killing tasks on Startup fails on Windows (PC)
## Problem Definition
After an update Rotki would not restart on its own.
Closing the interface and restarting got no response at all (twice)
Logs showed an attempt to kill the tasks but recorded the failure.
## Logs
> 2024-04-19T13:56:55.600Z: (info): Checking for update
> 2024-04-19T13:56:57.360Z: (info): Update for version 1.32.2 is not available (latest version: 1.32.2, downgrade is disallowed).
> 2024-04-19T13:57:03.347Z: Restarting rotki-core> >
> 2024-04-19T13:57:03.350Z: Starting windows process termination
> 2024-04-19T13:57:07.274Z: Currently running: 246 tasks
> 2024-04-19T13:57:07.280Z: Detected the following running rotki-core processes: 2880, 7092
> 2024-04-19T13:57:07.282Z: Preparing to call "taskill /f /t /PID 2880 /PID 7092" on the rotki-core processes
> 2024-04-19T13:57:07.752Z: Call to taskkill exited
> 2024-04-19T13:57:07.754Z: The 2 processes are still running. Waiting for 2 seconds
> 2024-04-19T15:18:54.047Z:
Investigation and Remediation
Look for any running instances
> C:\Windows\system32>tasklist | find "rotki"
> rotki.exe 16380 Console 1 17,608 K
> rotki.exe 13204 Console 1 10,436 K
> rotki.exe 14164 Console 1 8,596 K
Kill the running instances:
> C:\Windows\system32>taskkill /f /t /PID 16380 /PID 13204 /PID 14164
> SUCCESS: The process with PID 13204 (child process of PID 16380) has been terminated.
> SUCCESS: The process with PID 14164 (child process of PID 16380) has been terminated.
> SUCCESS: The process with PID 16380 (child process of PID 7124) has been terminated.
Could it be the reason It can't kill the running instances is because there is a typo in the code?
As per the log;
2024-04-19T13:57:07.282Z: Preparing to call "taskill /f /t /PID 2880 /PID 7092" on the rotki-core processes
----------------------------------------------------^^^^------------
The command is taskkill (two k's) NOT taskill (one k).
-->
### System Description
Operating system:
`systeminfo | findstr /C:"OS"`
OS Name: Microsoft Windows 10 Home
OS Version: 10.0.19045 N/A Build 19045
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Workstation
OS Build Type: Multiprocessor Free
BIOS Version: LENOVO DUCN33WW, 2021-08-10
Rotki version: 1.32.2
</issue>
<code>
[start of rotkehlchen/server.py]
1 import logging
2 import os
3 import signal
4
5 import gevent
6
7 from rotkehlchen.api.server import APIServer, RestAPI
8 from rotkehlchen.args import app_args
9 from rotkehlchen.logging import TRACE, RotkehlchenLogsAdapter, add_logging_level, configure_logging
10 from rotkehlchen.rotkehlchen import Rotkehlchen
11
12 logger = logging.getLogger(__name__)
13 log = RotkehlchenLogsAdapter(logger)
14
15
16 class RotkehlchenServer:
17 def __init__(self) -> None:
18 """Initializes the backend server
19 May raise:
20 - SystemPermissionError due to the given args containing a datadir
21 that does not have the correct permissions
22 """
23 arg_parser = app_args(
24 prog='rotki',
25 description=(
26 'rotki, the portfolio tracker and accounting tool that respects your privacy'
27 ),
28 )
29 self.args = arg_parser.parse_args()
30 add_logging_level('TRACE', TRACE)
31 configure_logging(self.args)
32 self.rotkehlchen = Rotkehlchen(self.args)
33 self.stop_event = gevent.event.Event()
34 if ',' in self.args.api_cors:
35 domain_list = [str(domain) for domain in self.args.api_cors.split(',')]
36 else:
37 domain_list = [str(self.args.api_cors)]
38 self.api_server = APIServer(
39 rest_api=RestAPI(rotkehlchen=self.rotkehlchen),
40 ws_notifier=self.rotkehlchen.rotki_notifier,
41 cors_domain_list=domain_list,
42 )
43
44 def shutdown(self) -> None:
45 log.debug('Shutdown initiated')
46 self.api_server.stop()
47 self.stop_event.set()
48
49 def main(self) -> None:
50 # disable printing hub exceptions in stderr. With using the hub to do various
51 # tasks that should raise exceptions and have them handled outside the hub
52 # printing them in stdout is now too much spam (and would worry users too)
53 hub = gevent.hub.get_hub()
54 hub.exception_stream = None
55 # we don't use threadpool much so go to 2 instead of default 10
56 hub.threadpool_size = 2
57 hub.threadpool.maxsize = 2
58 if os.name != 'nt':
59 gevent.hub.signal(signal.SIGQUIT, self.shutdown) # type: ignore[attr-defined,unused-ignore] # pylint: disable=no-member # linters don't understand the os.name check
60 gevent.hub.signal(signal.SIGINT, self.shutdown)
61 gevent.hub.signal(signal.SIGTERM, self.shutdown)
62 # The api server's RestAPI starts rotki main loop
63 self.api_server.start(
64 host=self.args.api_host,
65 rest_port=self.args.rest_api_port,
66 )
67 self.stop_event.wait()
68
[end of rotkehlchen/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rotkehlchen/server.py b/rotkehlchen/server.py
--- a/rotkehlchen/server.py
+++ b/rotkehlchen/server.py
@@ -56,9 +56,17 @@
hub.threadpool_size = 2
hub.threadpool.maxsize = 2
if os.name != 'nt':
- gevent.hub.signal(signal.SIGQUIT, self.shutdown) # type: ignore[attr-defined,unused-ignore] # pylint: disable=no-member # linters don't understand the os.name check
+ gevent.hub.signal(signal.SIGQUIT, self.shutdown)
+ gevent.hub.signal(signal.SIGTERM, self.shutdown)
+ else:
+ # Handle the window control signal as stated here https://pyinstaller.org/en/stable/feature-notes.html#signal-handling-in-console-windows-applications-and-onefile-application-cleanup # noqa: E501
+ # This logic handles the signal sent from the bootloader equivalent to sigterm in
+ # addition to the signals sent by windows's taskkill.
+ # Research documented in https://github.com/yabirgb/rotki-python-research
+ import win32api # pylint: disable=import-outside-toplevel # isort:skip
+ win32api.SetConsoleCtrlHandler(self.shutdown, True)
+
gevent.hub.signal(signal.SIGINT, self.shutdown)
- gevent.hub.signal(signal.SIGTERM, self.shutdown)
# The api server's RestAPI starts rotki main loop
self.api_server.start(
host=self.args.api_host,
|
{"golden_diff": "diff --git a/rotkehlchen/server.py b/rotkehlchen/server.py\n--- a/rotkehlchen/server.py\n+++ b/rotkehlchen/server.py\n@@ -56,9 +56,17 @@\n hub.threadpool_size = 2\n hub.threadpool.maxsize = 2\n if os.name != 'nt':\n- gevent.hub.signal(signal.SIGQUIT, self.shutdown) # type: ignore[attr-defined,unused-ignore] # pylint: disable=no-member # linters don't understand the os.name check\n+ gevent.hub.signal(signal.SIGQUIT, self.shutdown)\n+ gevent.hub.signal(signal.SIGTERM, self.shutdown)\n+ else:\n+ # Handle the window control signal as stated here https://pyinstaller.org/en/stable/feature-notes.html#signal-handling-in-console-windows-applications-and-onefile-application-cleanup # noqa: E501\n+ # This logic handles the signal sent from the bootloader equivalent to sigterm in\n+ # addition to the signals sent by windows's taskkill.\n+ # Research documented in https://github.com/yabirgb/rotki-python-research\n+ import win32api # pylint: disable=import-outside-toplevel # isort:skip\n+ win32api.SetConsoleCtrlHandler(self.shutdown, True)\n+\n gevent.hub.signal(signal.SIGINT, self.shutdown)\n- gevent.hub.signal(signal.SIGTERM, self.shutdown)\n # The api server's RestAPI starts rotki main loop\n self.api_server.start(\n host=self.args.api_host,\n", "issue": "Killing tasks on Startup fails on Windows (PC)\n## Problem Definition\r\n\r\nAfter an update Rotki would not restart on its own.\r\n\r\nClosing the interface and restarting got no response at all (twice)\r\n\r\nLogs showed an attempt to kill the tasks but recorded the failure.\r\n\r\n## Logs\r\n\r\n> 2024-04-19T13:56:55.600Z: (info): Checking for update\r\n> 2024-04-19T13:56:57.360Z: (info): Update for version 1.32.2 is not available (latest version: 1.32.2, downgrade is disallowed).\r\n> 2024-04-19T13:57:03.347Z: Restarting rotki-core> > \r\n> 2024-04-19T13:57:03.350Z: Starting windows process termination\r\n> 2024-04-19T13:57:07.274Z: Currently running: 246 tasks\r\n> 2024-04-19T13:57:07.280Z: Detected the following running rotki-core processes: 2880, 7092\r\n> 2024-04-19T13:57:07.282Z: Preparing to call \"taskill /f /t /PID 2880 /PID 7092\" on the rotki-core processes\r\n> 2024-04-19T13:57:07.752Z: Call to taskkill exited\r\n> 2024-04-19T13:57:07.754Z: The 2 processes are still running. Waiting for 2 seconds\r\n> 2024-04-19T15:18:54.047Z: \r\n\r\nInvestigation and Remediation\r\n\r\nLook for any running instances \r\n\r\n> C:\\Windows\\system32>tasklist | find \"rotki\"\r\n> rotki.exe 16380 Console 1 17,608 K\r\n> rotki.exe 13204 Console 1 10,436 K\r\n> rotki.exe 14164 Console 1 8,596 K\r\n\r\nKill the running instances:\r\n\r\n> C:\\Windows\\system32>taskkill /f /t /PID 16380 /PID 13204 /PID 14164\r\n> SUCCESS: The process with PID 13204 (child process of PID 16380) has been terminated.\r\n> SUCCESS: The process with PID 14164 (child process of PID 16380) has been terminated.\r\n> SUCCESS: The process with PID 16380 (child process of PID 7124) has been terminated.\r\n\r\nCould it be the reason It can't kill the running instances is because there is a typo in the code?\r\n\r\nAs per the log;\r\n\r\n2024-04-19T13:57:07.282Z: Preparing to call \"taskill /f /t /PID 2880 /PID 7092\" on the rotki-core processes\r\n----------------------------------------------------^^^^------------\r\n\r\nThe command is taskkill (two k's) NOT taskill (one k). 
\r\n\r\n\r\n-->\r\n\r\n\r\n### System Description\r\n\r\n\r\nOperating system: \r\n\r\n`systeminfo | findstr /C:\"OS\"`\r\n\r\nOS Name: Microsoft Windows 10 Home\r\nOS Version: 10.0.19045 N/A Build 19045\r\nOS Manufacturer: Microsoft Corporation\r\nOS Configuration: Standalone Workstation\r\nOS Build Type: Multiprocessor Free\r\nBIOS Version: LENOVO DUCN33WW, 2021-08-10\r\n\r\n\r\nRotki version: 1.32.2\r\n\n", "before_files": [{"content": "import logging\nimport os\nimport signal\n\nimport gevent\n\nfrom rotkehlchen.api.server import APIServer, RestAPI\nfrom rotkehlchen.args import app_args\nfrom rotkehlchen.logging import TRACE, RotkehlchenLogsAdapter, add_logging_level, configure_logging\nfrom rotkehlchen.rotkehlchen import Rotkehlchen\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\nclass RotkehlchenServer:\n def __init__(self) -> None:\n \"\"\"Initializes the backend server\n May raise:\n - SystemPermissionError due to the given args containing a datadir\n that does not have the correct permissions\n \"\"\"\n arg_parser = app_args(\n prog='rotki',\n description=(\n 'rotki, the portfolio tracker and accounting tool that respects your privacy'\n ),\n )\n self.args = arg_parser.parse_args()\n add_logging_level('TRACE', TRACE)\n configure_logging(self.args)\n self.rotkehlchen = Rotkehlchen(self.args)\n self.stop_event = gevent.event.Event()\n if ',' in self.args.api_cors:\n domain_list = [str(domain) for domain in self.args.api_cors.split(',')]\n else:\n domain_list = [str(self.args.api_cors)]\n self.api_server = APIServer(\n rest_api=RestAPI(rotkehlchen=self.rotkehlchen),\n ws_notifier=self.rotkehlchen.rotki_notifier,\n cors_domain_list=domain_list,\n )\n\n def shutdown(self) -> None:\n log.debug('Shutdown initiated')\n self.api_server.stop()\n self.stop_event.set()\n\n def main(self) -> None:\n # disable printing hub exceptions in stderr. With using the hub to do various\n # tasks that should raise exceptions and have them handled outside the hub\n # printing them in stdout is now too much spam (and would worry users too)\n hub = gevent.hub.get_hub()\n hub.exception_stream = None\n # we don't use threadpool much so go to 2 instead of default 10\n hub.threadpool_size = 2\n hub.threadpool.maxsize = 2\n if os.name != 'nt':\n gevent.hub.signal(signal.SIGQUIT, self.shutdown) # type: ignore[attr-defined,unused-ignore] # pylint: disable=no-member # linters don't understand the os.name check\n gevent.hub.signal(signal.SIGINT, self.shutdown)\n gevent.hub.signal(signal.SIGTERM, self.shutdown)\n # The api server's RestAPI starts rotki main loop\n self.api_server.start(\n host=self.args.api_host,\n rest_port=self.args.rest_api_port,\n )\n self.stop_event.wait()\n", "path": "rotkehlchen/server.py"}]}
| 2,187 | 351 |
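The golden diff in the record above replaces the POSIX-only signal handlers with a Windows console control handler, since `taskkill` (without `/f`) delivers a console control event rather than a signal. Below is a minimal standalone sketch of that pattern; it is not rotki's actual code and assumes the pywin32 package is available on Windows.

```python
import os
import threading

stop_event = threading.Event()

def _on_console_ctrl(ctrl_type):
    # Windows invokes this on its own thread; just ask the main loop to stop
    # and report the event as handled so the default handler does not run.
    stop_event.set()
    return True

if os.name == 'nt':
    import win32api  # pywin32, assumed installed
    win32api.SetConsoleCtrlHandler(_on_console_ctrl, True)
else:
    import signal
    signal.signal(signal.SIGTERM, lambda signum, frame: stop_event.set())

# A real service would now block on stop_event.wait() and clean up afterwards.
```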
gh_patches_debug_25644
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-3810
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tokens are able to enumerate project names regardless of organization membership.
We respond with 404s when there are missing project names, but 403s when there is an existing project. Probably should just check for organization membership before even going into the project fetching.
# Original HackerOne Report
Title: Project Enumeration of Other Teams
Types: Information Disclosure
Link: https://hackerone.com/reports/154048
Date: 2016-07-26 05:25:05 -0700
By: vagg-a-bond
Details:
Hello,
It is possible for any valid user of @sentry to enumerate project names in other team by comparing errors.
For example, if user user accesses `GET /api/0/projects/sumne/project_sumne_001/` which is not of his/her, then error will be `403 FORBIDDEN` and for `GET /api/0/projects/sumne/project_sumne_001_not_there/` application responds with `404 NOT FOUND`
So an attacker can enumerate all the projects available in a team by automating the process to request `GET /api/0/projects/TARGET_TEAM_NAME/PROJECT_NAME/`.
**Vulnerable Endpoint:**
> GET /api/0/projects/sumne/project_sumne_001/*
**Recommended Fix:**
Respond with `404 NOT FOUND` for both.
</issue>
<code>
[start of src/sentry/api/bases/project.py]
1 from __future__ import absolute_import
2
3 from sentry.auth import access
4 from sentry.api.base import Endpoint
5 from sentry.api.exceptions import ResourceDoesNotExist
6 from sentry.api.permissions import ScopedPermission
7 from sentry.models import Project, ProjectStatus
8 from sentry.models.apikey import ROOT_KEY
9
10
11 class ProjectPermission(ScopedPermission):
12 scope_map = {
13 'GET': ['project:read', 'project:write', 'project:delete'],
14 'POST': ['project:write', 'project:delete'],
15 'PUT': ['project:write', 'project:delete'],
16 'DELETE': ['project:delete'],
17 }
18
19 def has_object_permission(self, request, view, project):
20 if request.user and request.user.is_authenticated() and request.auth:
21 request.access = access.from_request(
22 request, project.organization, scopes=request.auth.get_scopes(),
23 )
24
25 elif request.auth:
26 if request.auth is ROOT_KEY:
27 return True
28 return request.auth.organization_id == project.organization_id
29
30 else:
31 request.access = access.from_request(request, project.organization)
32
33 allowed_scopes = set(self.scope_map.get(request.method, []))
34 return any(
35 request.access.has_team_scope(project.team, s)
36 for s in allowed_scopes
37 )
38
39
40 class ProjectReleasePermission(ProjectPermission):
41 scope_map = {
42 'GET': ['project:read', 'project:write', 'project:delete', 'project:releases'],
43 'POST': ['project:write', 'project:delete', 'project:releases'],
44 'PUT': ['project:write', 'project:delete', 'project:releases'],
45 'DELETE': ['project:delete', 'project:releases'],
46 }
47
48
49 class ProjectEventPermission(ProjectPermission):
50 scope_map = {
51 'GET': ['event:read', 'event:write', 'event:delete'],
52 'POST': ['event:write', 'event:delete'],
53 'PUT': ['event:write', 'event:delete'],
54 'DELETE': ['event:delete'],
55 }
56
57
58 class ProjectEndpoint(Endpoint):
59 permission_classes = (ProjectPermission,)
60
61 def convert_args(self, request, organization_slug, project_slug, *args, **kwargs):
62 try:
63 project = Project.objects.get_from_cache(
64 organization__slug=organization_slug,
65 slug=project_slug,
66 )
67 except Project.DoesNotExist:
68 raise ResourceDoesNotExist
69
70 if project.status != ProjectStatus.VISIBLE:
71 raise ResourceDoesNotExist
72
73 self.check_object_permissions(request, project)
74
75 kwargs['project'] = project
76 return (args, kwargs)
77
[end of src/sentry/api/bases/project.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sentry/api/bases/project.py b/src/sentry/api/bases/project.py
--- a/src/sentry/api/bases/project.py
+++ b/src/sentry/api/bases/project.py
@@ -4,7 +4,7 @@
from sentry.api.base import Endpoint
from sentry.api.exceptions import ResourceDoesNotExist
from sentry.api.permissions import ScopedPermission
-from sentry.models import Project, ProjectStatus
+from sentry.models import Organization, Project, ProjectStatus
from sentry.models.apikey import ROOT_KEY
@@ -59,9 +59,21 @@
permission_classes = (ProjectPermission,)
def convert_args(self, request, organization_slug, project_slug, *args, **kwargs):
+ try:
+ org = Organization.objects.get_from_cache(slug=organization_slug)
+ if request.user:
+ can_access_org = any(access.from_request(request, org).memberships)
+ if request.auth:
+ can_access_org = request.auth.organization_id == org.id
+ if not can_access_org:
+ raise ResourceDoesNotExist
+
+ except Organization.DoesNotExist:
+ raise ResourceDoesNotExist
+
try:
project = Project.objects.get_from_cache(
- organization__slug=organization_slug,
+ organization=org,
slug=project_slug,
)
except Project.DoesNotExist:
|
{"golden_diff": "diff --git a/src/sentry/api/bases/project.py b/src/sentry/api/bases/project.py\n--- a/src/sentry/api/bases/project.py\n+++ b/src/sentry/api/bases/project.py\n@@ -4,7 +4,7 @@\n from sentry.api.base import Endpoint\n from sentry.api.exceptions import ResourceDoesNotExist\n from sentry.api.permissions import ScopedPermission\n-from sentry.models import Project, ProjectStatus\n+from sentry.models import Organization, Project, ProjectStatus\n from sentry.models.apikey import ROOT_KEY\n \n \n@@ -59,9 +59,21 @@\n permission_classes = (ProjectPermission,)\n \n def convert_args(self, request, organization_slug, project_slug, *args, **kwargs):\n+ try:\n+ org = Organization.objects.get_from_cache(slug=organization_slug)\n+ if request.user:\n+ can_access_org = any(access.from_request(request, org).memberships)\n+ if request.auth:\n+ can_access_org = request.auth.organization_id == org.id\n+ if not can_access_org:\n+ raise ResourceDoesNotExist\n+\n+ except Organization.DoesNotExist:\n+ raise ResourceDoesNotExist\n+\n try:\n project = Project.objects.get_from_cache(\n- organization__slug=organization_slug,\n+ organization=org,\n slug=project_slug,\n )\n except Project.DoesNotExist:\n", "issue": "Tokens are able to enumerate project names regardless of organization membership.\nWe respond with 404s when there are missing project names, but 403s when there is an existing project. Probably should just check for organization membership before even going into the project fetching.\n# Original HackerOne Report\n\nTitle: Project Enumeration of Other Teams\nTypes: Information Disclosure\nLink: https://hackerone.com/reports/154048\nDate: 2016-07-26 05:25:05 -0700\nBy: vagg-a-bond\n\nDetails:\nHello,\n\nIt is possible for any valid user of @sentry to enumerate project names in other team by comparing errors. \n\nFor example, if user user accesses `GET /api/0/projects/sumne/project_sumne_001/` which is not of his/her, then error will be `403 FORBIDDEN` and for `GET /api/0/projects/sumne/project_sumne_001_not_there/` application responds with `404 NOT FOUND`\n\nSo an attacker can enumerate all the projects available in a team by automating the process to request `GET /api/0/projects/TARGET_TEAM_NAME/PROJECT_NAME/`. \n\n**Vulnerable Endpoint:**\n\n> GET /api/0/projects/sumne/project_sumne_001/*\n\n**Recommended Fix:**\nRespond with `404 NOT FOUND` for both. 
\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom sentry.auth import access\nfrom sentry.api.base import Endpoint\nfrom sentry.api.exceptions import ResourceDoesNotExist\nfrom sentry.api.permissions import ScopedPermission\nfrom sentry.models import Project, ProjectStatus\nfrom sentry.models.apikey import ROOT_KEY\n\n\nclass ProjectPermission(ScopedPermission):\n scope_map = {\n 'GET': ['project:read', 'project:write', 'project:delete'],\n 'POST': ['project:write', 'project:delete'],\n 'PUT': ['project:write', 'project:delete'],\n 'DELETE': ['project:delete'],\n }\n\n def has_object_permission(self, request, view, project):\n if request.user and request.user.is_authenticated() and request.auth:\n request.access = access.from_request(\n request, project.organization, scopes=request.auth.get_scopes(),\n )\n\n elif request.auth:\n if request.auth is ROOT_KEY:\n return True\n return request.auth.organization_id == project.organization_id\n\n else:\n request.access = access.from_request(request, project.organization)\n\n allowed_scopes = set(self.scope_map.get(request.method, []))\n return any(\n request.access.has_team_scope(project.team, s)\n for s in allowed_scopes\n )\n\n\nclass ProjectReleasePermission(ProjectPermission):\n scope_map = {\n 'GET': ['project:read', 'project:write', 'project:delete', 'project:releases'],\n 'POST': ['project:write', 'project:delete', 'project:releases'],\n 'PUT': ['project:write', 'project:delete', 'project:releases'],\n 'DELETE': ['project:delete', 'project:releases'],\n }\n\n\nclass ProjectEventPermission(ProjectPermission):\n scope_map = {\n 'GET': ['event:read', 'event:write', 'event:delete'],\n 'POST': ['event:write', 'event:delete'],\n 'PUT': ['event:write', 'event:delete'],\n 'DELETE': ['event:delete'],\n }\n\n\nclass ProjectEndpoint(Endpoint):\n permission_classes = (ProjectPermission,)\n\n def convert_args(self, request, organization_slug, project_slug, *args, **kwargs):\n try:\n project = Project.objects.get_from_cache(\n organization__slug=organization_slug,\n slug=project_slug,\n )\n except Project.DoesNotExist:\n raise ResourceDoesNotExist\n\n if project.status != ProjectStatus.VISIBLE:\n raise ResourceDoesNotExist\n\n self.check_object_permissions(request, project)\n\n kwargs['project'] = project\n return (args, kwargs)\n", "path": "src/sentry/api/bases/project.py"}]}
| 1,562 | 288 |
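The patch in the record above closes the enumeration hole by resolving the organization and checking membership before the project lookup, so a missing project and an inaccessible one produce the same 404. A framework-free sketch of that pattern, using hypothetical names rather than Sentry's actual models:

```python
class ResourceDoesNotExist(Exception):
    """Rendered as HTTP 404 by the API layer."""

def resolve_project(member_org_ids, orgs, org_slug, project_slug):
    # Membership is checked first, and a failed check looks identical to a
    # missing project, so callers cannot learn which project slugs exist.
    org = orgs.get(org_slug)
    if org is None or org["id"] not in member_org_ids:
        raise ResourceDoesNotExist
    project = org["projects"].get(project_slug)
    if project is None:
        raise ResourceDoesNotExist
    return project

# A token scoped to organization 2 probing organization 1 sees 404 either way.
orgs = {"sumne": {"id": 1, "projects": {"project_sumne_001": object()}}}
for slug in ("project_sumne_001", "project_sumne_001_not_there"):
    try:
        resolve_project({2}, orgs, "sumne", slug)
    except ResourceDoesNotExist:
        print(slug, "-> 404")
```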
gh_patches_debug_26468
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-modules-extras-101
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
slack 404
I'm getting the same issue as was reported here: https://github.com/ansible/ansible/issues/8669
Error output I'm getting is:
```
failed: [test-domain.com -> 127.0.0.1] => {"failed": true}
msg: failed to send payload={"username": "toby", "text": "stage update has successfully completed", "link_names": 1, "channel": "#deploys", "icon_url": "http://www.ansible.com/favicon.ico"} to https://pixelfusion.slack.com/services/hooks/incoming-webhook?token=[obscured]: HTTP Error 404: Not Found
```
</issue>
<code>
[start of notification/slack.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2014, Ramon de la Fuente <[email protected]>
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20
21 DOCUMENTATION = """
22 module: slack
23 short_description: Send Slack notifications
24 description:
25 - The M(slack) module sends notifications to U(http://slack.com) via the Incoming WebHook integration
26 version_added: 1.6
27 author: Ramon de la Fuente <[email protected]>
28 options:
29 domain:
30 description:
31 - Slack (sub)domain for your environment without protocol.
32 (i.e. C(future500.slack.com))
33 required: true
34 token:
35 description:
36 - Slack integration token
37 required: true
38 msg:
39 description:
40 - Message to send.
41 required: true
42 channel:
43 description:
44 - Channel to send the message to. If absent, the message goes to the channel selected for the I(token).
45 required: false
46 username:
47 description:
48 - This is the sender of the message.
49 required: false
50 default: ansible
51 icon_url:
52 description:
53 - Url for the message sender's icon (default C(http://www.ansible.com/favicon.ico))
54 required: false
55 icon_emoji:
56 description:
57 - Emoji for the message sender. See Slack documentation for options.
58 (if I(icon_emoji) is set, I(icon_url) will not be used)
59 required: false
60 link_names:
61 description:
62 - Automatically create links for channels and usernames in I(msg).
63 required: false
64 default: 1
65 choices:
66 - 1
67 - 0
68 parse:
69 description:
70 - Setting for the message parser at Slack
71 required: false
72 choices:
73 - 'full'
74 - 'none'
75 validate_certs:
76 description:
77 - If C(no), SSL certificates will not be validated. This should only be used
78 on personally controlled sites using self-signed certificates.
79 required: false
80 default: 'yes'
81 choices:
82 - 'yes'
83 - 'no'
84 """
85
86 EXAMPLES = """
87 - name: Send notification message via Slack
88 local_action:
89 module: slack
90 domain: future500.slack.com
91 token: thetokengeneratedbyslack
92 msg: "{{ inventory_hostname }} completed"
93
94 - name: Send notification message via Slack all options
95 local_action:
96 module: slack
97 domain: future500.slack.com
98 token: thetokengeneratedbyslack
99 msg: "{{ inventory_hostname }} completed"
100 channel: "#ansible"
101 username: "Ansible on {{ inventory_hostname }}"
102 icon_url: "http://www.example.com/some-image-file.png"
103 link_names: 0
104 parse: 'none'
105
106 """
107
108
109 SLACK_INCOMING_WEBHOOK = 'https://%s/services/hooks/incoming-webhook?token=%s'
110
111 def build_payload_for_slack(module, text, channel, username, icon_url, icon_emoji, link_names, parse):
112 payload = dict(text=text)
113
114 if channel is not None:
115 payload['channel'] = channel if (channel[0] == '#') else '#'+channel
116 if username is not None:
117 payload['username'] = username
118 if icon_emoji is not None:
119 payload['icon_emoji'] = icon_emoji
120 else:
121 payload['icon_url'] = icon_url
122 if link_names is not None:
123 payload['link_names'] = link_names
124 if parse is not None:
125 payload['parse'] = parse
126
127 payload="payload=" + module.jsonify(payload)
128 return payload
129
130 def do_notify_slack(module, domain, token, payload):
131 slack_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, token)
132
133 response, info = fetch_url(module, slack_incoming_webhook, data=payload)
134 if info['status'] != 200:
135 obscured_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, '[obscured]')
136 module.fail_json(msg=" failed to send %s to %s: %s" % (payload, obscured_incoming_webhook, info['msg']))
137
138 def main():
139 module = AnsibleModule(
140 argument_spec = dict(
141 domain = dict(type='str', required=True),
142 token = dict(type='str', required=True),
143 msg = dict(type='str', required=True),
144 channel = dict(type='str', default=None),
145 username = dict(type='str', default='Ansible'),
146 icon_url = dict(type='str', default='http://www.ansible.com/favicon.ico'),
147 icon_emoji = dict(type='str', default=None),
148 link_names = dict(type='int', default=1, choices=[0,1]),
149 parse = dict(type='str', default=None, choices=['none', 'full']),
150
151 validate_certs = dict(default='yes', type='bool'),
152 )
153 )
154
155 domain = module.params['domain']
156 token = module.params['token']
157 text = module.params['msg']
158 channel = module.params['channel']
159 username = module.params['username']
160 icon_url = module.params['icon_url']
161 icon_emoji = module.params['icon_emoji']
162 link_names = module.params['link_names']
163 parse = module.params['parse']
164
165 payload = build_payload_for_slack(module, text, channel, username, icon_url, icon_emoji, link_names, parse)
166 do_notify_slack(module, domain, token, payload)
167
168 module.exit_json(msg="OK")
169
170 # import module snippets
171 from ansible.module_utils.basic import *
172 from ansible.module_utils.urls import *
173 main()
[end of notification/slack.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/notification/slack.py b/notification/slack.py
--- a/notification/slack.py
+++ b/notification/slack.py
@@ -105,8 +105,7 @@
"""
-
-SLACK_INCOMING_WEBHOOK = 'https://%s/services/hooks/incoming-webhook?token=%s'
+SLACK_INCOMING_WEBHOOK = 'https://hooks.slack.com/services/%s'
def build_payload_for_slack(module, text, channel, username, icon_url, icon_emoji, link_names, parse):
payload = dict(text=text)
@@ -128,11 +127,11 @@
return payload
def do_notify_slack(module, domain, token, payload):
- slack_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, token)
+ slack_incoming_webhook = SLACK_INCOMING_WEBHOOK % (token)
response, info = fetch_url(module, slack_incoming_webhook, data=payload)
if info['status'] != 200:
- obscured_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, '[obscured]')
+ obscured_incoming_webhook = SLACK_INCOMING_WEBHOOK % ('[obscured]')
module.fail_json(msg=" failed to send %s to %s: %s" % (payload, obscured_incoming_webhook, info['msg']))
def main():
@@ -170,4 +169,4 @@
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
-main()
\ No newline at end of file
+main()
|
{"golden_diff": "diff --git a/notification/slack.py b/notification/slack.py\n--- a/notification/slack.py\n+++ b/notification/slack.py\n@@ -105,8 +105,7 @@\n \n \"\"\"\n \n-\n-SLACK_INCOMING_WEBHOOK = 'https://%s/services/hooks/incoming-webhook?token=%s'\n+SLACK_INCOMING_WEBHOOK = 'https://hooks.slack.com/services/%s'\n \n def build_payload_for_slack(module, text, channel, username, icon_url, icon_emoji, link_names, parse):\n payload = dict(text=text)\n@@ -128,11 +127,11 @@\n return payload\n \n def do_notify_slack(module, domain, token, payload):\n- slack_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, token)\n+ slack_incoming_webhook = SLACK_INCOMING_WEBHOOK % (token)\n \n response, info = fetch_url(module, slack_incoming_webhook, data=payload)\n if info['status'] != 200:\n- obscured_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, '[obscured]')\n+ obscured_incoming_webhook = SLACK_INCOMING_WEBHOOK % ('[obscured]')\n module.fail_json(msg=\" failed to send %s to %s: %s\" % (payload, obscured_incoming_webhook, info['msg']))\n \n def main():\n@@ -170,4 +169,4 @@\n # import module snippets\n from ansible.module_utils.basic import *\n from ansible.module_utils.urls import *\n-main()\n\\ No newline at end of file\n+main()\n", "issue": "slack 404\nI'm getting the same issue as was reported here: https://github.com/ansible/ansible/issues/8669\n\nError output I'm getting is:\n\n```\nfailed: [test-domain.com -> 127.0.0.1] => {\"failed\": true}\nmsg: failed to send payload={\"username\": \"toby\", \"text\": \"stage update has successfully completed\", \"link_names\": 1, \"channel\": \"#deploys\", \"icon_url\": \"http://www.ansible.com/favicon.ico\"} to https://pixelfusion.slack.com/services/hooks/incoming-webhook?token=[obscured]: HTTP Error 404: Not Found\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2014, Ramon de la Fuente <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = \"\"\"\nmodule: slack\nshort_description: Send Slack notifications\ndescription:\n - The M(slack) module sends notifications to U(http://slack.com) via the Incoming WebHook integration\nversion_added: 1.6\nauthor: Ramon de la Fuente <[email protected]>\noptions:\n domain:\n description:\n - Slack (sub)domain for your environment without protocol.\n (i.e. C(future500.slack.com))\n required: true\n token:\n description:\n - Slack integration token\n required: true\n msg:\n description:\n - Message to send.\n required: true\n channel:\n description:\n - Channel to send the message to. 
If absent, the message goes to the channel selected for the I(token).\n required: false\n username:\n description:\n - This is the sender of the message.\n required: false\n default: ansible\n icon_url:\n description:\n - Url for the message sender's icon (default C(http://www.ansible.com/favicon.ico))\n required: false\n icon_emoji:\n description:\n - Emoji for the message sender. See Slack documentation for options.\n (if I(icon_emoji) is set, I(icon_url) will not be used)\n required: false\n link_names:\n description:\n - Automatically create links for channels and usernames in I(msg).\n required: false\n default: 1\n choices:\n - 1\n - 0\n parse:\n description:\n - Setting for the message parser at Slack\n required: false\n choices:\n - 'full'\n - 'none'\n validate_certs:\n description:\n - If C(no), SSL certificates will not be validated. This should only be used\n on personally controlled sites using self-signed certificates.\n required: false\n default: 'yes'\n choices:\n - 'yes'\n - 'no'\n\"\"\"\n\nEXAMPLES = \"\"\"\n- name: Send notification message via Slack\n local_action:\n module: slack\n domain: future500.slack.com\n token: thetokengeneratedbyslack\n msg: \"{{ inventory_hostname }} completed\"\n\n- name: Send notification message via Slack all options\n local_action:\n module: slack\n domain: future500.slack.com\n token: thetokengeneratedbyslack\n msg: \"{{ inventory_hostname }} completed\"\n channel: \"#ansible\"\n username: \"Ansible on {{ inventory_hostname }}\"\n icon_url: \"http://www.example.com/some-image-file.png\"\n link_names: 0\n parse: 'none'\n\n\"\"\"\n\n\nSLACK_INCOMING_WEBHOOK = 'https://%s/services/hooks/incoming-webhook?token=%s'\n\ndef build_payload_for_slack(module, text, channel, username, icon_url, icon_emoji, link_names, parse):\n payload = dict(text=text)\n\n if channel is not None:\n payload['channel'] = channel if (channel[0] == '#') else '#'+channel\n if username is not None:\n payload['username'] = username\n if icon_emoji is not None:\n payload['icon_emoji'] = icon_emoji\n else:\n payload['icon_url'] = icon_url\n if link_names is not None:\n payload['link_names'] = link_names\n if parse is not None:\n payload['parse'] = parse\n\n payload=\"payload=\" + module.jsonify(payload)\n return payload\n\ndef do_notify_slack(module, domain, token, payload):\n slack_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, token)\n\n response, info = fetch_url(module, slack_incoming_webhook, data=payload)\n if info['status'] != 200:\n obscured_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, '[obscured]')\n module.fail_json(msg=\" failed to send %s to %s: %s\" % (payload, obscured_incoming_webhook, info['msg']))\n\ndef main():\n module = AnsibleModule(\n argument_spec = dict(\n domain = dict(type='str', required=True),\n token = dict(type='str', required=True),\n msg = dict(type='str', required=True),\n channel = dict(type='str', default=None),\n username = dict(type='str', default='Ansible'),\n icon_url = dict(type='str', default='http://www.ansible.com/favicon.ico'),\n icon_emoji = dict(type='str', default=None),\n link_names = dict(type='int', default=1, choices=[0,1]),\n parse = dict(type='str', default=None, choices=['none', 'full']),\n\n validate_certs = dict(default='yes', type='bool'),\n )\n )\n\n domain = module.params['domain']\n token = module.params['token']\n text = module.params['msg']\n channel = module.params['channel']\n username = module.params['username']\n icon_url = module.params['icon_url']\n icon_emoji = module.params['icon_emoji']\n 
link_names = module.params['link_names']\n parse = module.params['parse']\n\n payload = build_payload_for_slack(module, text, channel, username, icon_url, icon_emoji, link_names, parse)\n do_notify_slack(module, domain, token, payload)\n\n module.exit_json(msg=\"OK\")\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.urls import *\nmain()", "path": "notification/slack.py"}]}
| 2,499 | 357 |
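The fix in the record above swaps the retired per-domain endpoint for the current `https://hooks.slack.com/services/<token>` form, which also drops the domain from the URL. A standalone Python 3 sketch of the corrected call, for illustration only and not part of the Ansible module itself:

```python
import json
from urllib import parse, request

SLACK_INCOMING_WEBHOOK = "https://hooks.slack.com/services/%s"

def notify_slack(token, text, channel=None):
    payload = {"text": text}
    if channel is not None:
        payload["channel"] = channel if channel.startswith("#") else "#" + channel
    data = parse.urlencode({"payload": json.dumps(payload)}).encode("utf-8")
    req = request.Request(SLACK_INCOMING_WEBHOOK % token, data=data)
    with request.urlopen(req) as resp:  # raises HTTPError on 4xx/5xx responses
        return resp.status
```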
gh_patches_debug_1135
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-8219
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pytest is causing error in Jenkins
Example:
https://jenkins.preferred.jp/job/chainer/job/chainer_pr/2162/TEST=CHAINERX_chainer-py3,label=mn1-p100/console
```
14:33:27 + pytest -rfEX --showlocals -m 'not slow and not ideep' /repo/tests/chainer_tests
14:33:28 Traceback (most recent call last):
14:33:28 File "/workspace/conda/envs/testenv/bin/pytest", line 10, in <module>
14:33:28 sys.exit(main())
14:33:28 File "/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 61, in main
14:33:28 config = _prepareconfig(args, plugins)
14:33:28 File "/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 182, in _prepareconfig
14:33:28 config = get_config()
14:33:28 File "/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 156, in get_config
14:33:28 pluginmanager.import_plugin(spec)
14:33:28 File "/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 530, in import_plugin
14:33:28 __import__(importspec)
14:33:28 File "/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/tmpdir.py", line 25, in <module>
14:33:28 class TempPathFactory(object):
14:33:28 File "/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/tmpdir.py", line 35, in TempPathFactory
14:33:28 lambda p: Path(os.path.abspath(six.text_type(p)))
14:33:28 TypeError: attrib() got an unexpected keyword argument 'convert'
```
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import os
4 import pkg_resources
5 import sys
6
7 from setuptools import setup
8
9 import chainerx_build_helper
10
11
12 if sys.version_info[:3] == (3, 5, 0):
13 if not int(os.getenv('CHAINER_PYTHON_350_FORCE', '0')):
14 msg = """
15 Chainer does not work with Python 3.5.0.
16
17 We strongly recommend to use another version of Python.
18 If you want to use Chainer with Python 3.5.0 at your own risk,
19 set CHAINER_PYTHON_350_FORCE environment variable to 1."""
20 print(msg)
21 sys.exit(1)
22
23
24 requirements = {
25 'install': [
26 'setuptools',
27 # typing==3.7.4 causes error "TypeError: Instance and class checks can
28 # only be used with @runtime_checkable protocols" only with Python 2.
29 # https://github.com/chainer/chainer/pull/7562
30 'typing' + ('<=3.6.6' if sys.version_info[0] <= 2 else ''),
31 'typing_extensions' + ('<=3.6.6' if sys.version_info[0] <= 2 else ''),
32 'filelock',
33 'numpy>=1.9.0',
34 # protobuf 3.8.0rc1 causes CI errors.
35 # TODO(niboshi): Probably we should always use pip in CIs for
36 # installing chainer. It avoids pre-release dependencies by default.
37 # See also: https://github.com/pypa/setuptools/issues/855
38 'protobuf>=3.0.0,<3.8.0rc1',
39 'six>=1.9.0',
40 ],
41 'stylecheck': [
42 'autopep8>=1.4.1,<1.5',
43 'flake8>=3.7,<3.8',
44 'pycodestyle>=2.5,<2.6',
45 ],
46 'test': [
47 'pytest<4.2.0', # 4.2.0 is slow collecting tests and times out on CI.
48 'mock',
49 ],
50 'doctest': [
51 'sphinx==1.8.2',
52 'matplotlib',
53 'theano',
54 ],
55 'docs': [
56 'sphinx==1.8.2',
57 'sphinx_rtd_theme',
58 ],
59 'appveyor': [
60 '-r test',
61 # pytest-timeout>=1.3.0 requires pytest>=3.6.
62 # TODO(niboshi): Consider upgrading pytest to >=3.6
63 'pytest-timeout<1.3.0',
64 ],
65 }
66
67
68 def reduce_requirements(key):
69 # Resolve recursive requirements notation (-r)
70 reqs = requirements[key]
71 resolved_reqs = []
72 for req in reqs:
73 if req.startswith('-r'):
74 depend_key = req[2:].lstrip()
75 reduce_requirements(depend_key)
76 resolved_reqs += requirements[depend_key]
77 else:
78 resolved_reqs.append(req)
79 requirements[key] = resolved_reqs
80
81
82 for k in requirements.keys():
83 reduce_requirements(k)
84
85
86 extras_require = {k: v for k, v in requirements.items() if k != 'install'}
87 setup_requires = []
88 install_requires = requirements['install']
89 tests_require = requirements['test']
90
91
92 def find_any_distribution(pkgs):
93 for pkg in pkgs:
94 try:
95 return pkg_resources.get_distribution(pkg)
96 except pkg_resources.DistributionNotFound:
97 pass
98 return None
99
100
101 mn_pkg = find_any_distribution(['chainermn'])
102 if mn_pkg is not None:
103 msg = """
104 We detected that ChainerMN is installed in your environment.
105 ChainerMN has been integrated to Chainer and no separate installation
106 is necessary. Please uninstall the old ChainerMN in advance.
107 """
108 print(msg)
109 exit(1)
110
111 here = os.path.abspath(os.path.dirname(__file__))
112 # Get __version__ variable
113 exec(open(os.path.join(here, 'chainer', '_version.py')).read())
114
115
116 setup_kwargs = dict(
117 name='chainer',
118 version=__version__, # NOQA
119 description='A flexible framework of neural networks',
120 long_description=open('README.md').read(),
121 long_description_content_type='text/markdown',
122 author='Seiya Tokui',
123 author_email='[email protected]',
124 url='https://chainer.org/',
125 license='MIT License',
126 packages=['chainer',
127 'chainer.backends',
128 'chainer.dataset',
129 'chainer.dataset.tabular',
130 'chainer.datasets',
131 'chainer.distributions',
132 'chainer.exporters',
133 'chainer.functions',
134 'chainer.functions.activation',
135 'chainer.functions.array',
136 'chainer.functions.connection',
137 'chainer.functions.evaluation',
138 'chainer.functions.loss',
139 'chainer.functions.math',
140 'chainer.functions.noise',
141 'chainer.functions.normalization',
142 'chainer.functions.pooling',
143 'chainer.functions.rnn',
144 'chainer.functions.theano',
145 'chainer.functions.util',
146 'chainer.function_hooks',
147 'chainer.iterators',
148 'chainer.initializers',
149 'chainer.links',
150 'chainer.links.activation',
151 'chainer.links.caffe',
152 'chainer.links.caffe.protobuf3',
153 'chainer.links.connection',
154 'chainer.links.loss',
155 'chainer.links.model',
156 'chainer.links.model.vision',
157 'chainer.links.normalization',
158 'chainer.links.rnn',
159 'chainer.links.theano',
160 'chainer.link_hooks',
161 'chainer.graph_optimizations',
162 'chainer.optimizers',
163 'chainer.optimizer_hooks',
164 'chainer.serializers',
165 'chainer.testing',
166 'chainer.training',
167 'chainer.training.extensions',
168 'chainer.training.triggers',
169 'chainer.training.updaters',
170 'chainer.utils',
171 'chainermn',
172 'chainermn.communicators',
173 'chainermn.datasets',
174 'chainermn.extensions',
175 'chainermn.functions',
176 'chainermn.iterators',
177 'chainermn.links'],
178 package_data={
179 'chainer': ['py.typed'],
180 },
181 zip_safe=False,
182 setup_requires=setup_requires,
183 install_requires=install_requires,
184 tests_require=tests_require,
185 extras_require=extras_require,
186 )
187
188
189 build_chainerx = 0 != int(os.getenv('CHAINER_BUILD_CHAINERX', '0'))
190 if (os.getenv('READTHEDOCS', None) == 'True'
191 and os.getenv('READTHEDOCS_PROJECT', None) == 'chainer'):
192 os.environ['MAKEFLAGS'] = '-j2'
193 build_chainerx = True
194
195 chainerx_build_helper.config_setup_kwargs(setup_kwargs, build_chainerx)
196
197
198 setup(**setup_kwargs)
199
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,6 +45,7 @@
],
'test': [
'pytest<4.2.0', # 4.2.0 is slow collecting tests and times out on CI.
+ 'attrs<19.2.0', # pytest 4.1.1 does not run with attrs==19.2.0
'mock',
],
'doctest': [
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,6 +45,7 @@\n ],\n 'test': [\n 'pytest<4.2.0', # 4.2.0 is slow collecting tests and times out on CI.\n+ 'attrs<19.2.0', # pytest 4.1.1 does not run with attrs==19.2.0\n 'mock',\n ],\n 'doctest': [\n", "issue": "pytest is causing error in Jenkins\nExample:\r\n\r\nhttps://jenkins.preferred.jp/job/chainer/job/chainer_pr/2162/TEST=CHAINERX_chainer-py3,label=mn1-p100/console\r\n\r\n```\r\n14:33:27 + pytest -rfEX --showlocals -m 'not slow and not ideep' /repo/tests/chainer_tests\r\n14:33:28 Traceback (most recent call last):\r\n14:33:28 File \"/workspace/conda/envs/testenv/bin/pytest\", line 10, in <module>\r\n14:33:28 sys.exit(main())\r\n14:33:28 File \"/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/config/__init__.py\", line 61, in main\r\n14:33:28 config = _prepareconfig(args, plugins)\r\n14:33:28 File \"/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/config/__init__.py\", line 182, in _prepareconfig\r\n14:33:28 config = get_config()\r\n14:33:28 File \"/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/config/__init__.py\", line 156, in get_config\r\n14:33:28 pluginmanager.import_plugin(spec)\r\n14:33:28 File \"/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/config/__init__.py\", line 530, in import_plugin\r\n14:33:28 __import__(importspec)\r\n14:33:28 File \"/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/tmpdir.py\", line 25, in <module>\r\n14:33:28 class TempPathFactory(object):\r\n14:33:28 File \"/workspace/conda/envs/testenv/lib/python3.6/site-packages/_pytest/tmpdir.py\", line 35, in TempPathFactory\r\n14:33:28 lambda p: Path(os.path.abspath(six.text_type(p)))\r\n14:33:28 TypeError: attrib() got an unexpected keyword argument 'convert'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport pkg_resources\nimport sys\n\nfrom setuptools import setup\n\nimport chainerx_build_helper\n\n\nif sys.version_info[:3] == (3, 5, 0):\n if not int(os.getenv('CHAINER_PYTHON_350_FORCE', '0')):\n msg = \"\"\"\nChainer does not work with Python 3.5.0.\n\nWe strongly recommend to use another version of Python.\nIf you want to use Chainer with Python 3.5.0 at your own risk,\nset CHAINER_PYTHON_350_FORCE environment variable to 1.\"\"\"\n print(msg)\n sys.exit(1)\n\n\nrequirements = {\n 'install': [\n 'setuptools',\n # typing==3.7.4 causes error \"TypeError: Instance and class checks can\n # only be used with @runtime_checkable protocols\" only with Python 2.\n # https://github.com/chainer/chainer/pull/7562\n 'typing' + ('<=3.6.6' if sys.version_info[0] <= 2 else ''),\n 'typing_extensions' + ('<=3.6.6' if sys.version_info[0] <= 2 else ''),\n 'filelock',\n 'numpy>=1.9.0',\n # protobuf 3.8.0rc1 causes CI errors.\n # TODO(niboshi): Probably we should always use pip in CIs for\n # installing chainer. 
It avoids pre-release dependencies by default.\n # See also: https://github.com/pypa/setuptools/issues/855\n 'protobuf>=3.0.0,<3.8.0rc1',\n 'six>=1.9.0',\n ],\n 'stylecheck': [\n 'autopep8>=1.4.1,<1.5',\n 'flake8>=3.7,<3.8',\n 'pycodestyle>=2.5,<2.6',\n ],\n 'test': [\n 'pytest<4.2.0', # 4.2.0 is slow collecting tests and times out on CI.\n 'mock',\n ],\n 'doctest': [\n 'sphinx==1.8.2',\n 'matplotlib',\n 'theano',\n ],\n 'docs': [\n 'sphinx==1.8.2',\n 'sphinx_rtd_theme',\n ],\n 'appveyor': [\n '-r test',\n # pytest-timeout>=1.3.0 requires pytest>=3.6.\n # TODO(niboshi): Consider upgrading pytest to >=3.6\n 'pytest-timeout<1.3.0',\n ],\n}\n\n\ndef reduce_requirements(key):\n # Resolve recursive requirements notation (-r)\n reqs = requirements[key]\n resolved_reqs = []\n for req in reqs:\n if req.startswith('-r'):\n depend_key = req[2:].lstrip()\n reduce_requirements(depend_key)\n resolved_reqs += requirements[depend_key]\n else:\n resolved_reqs.append(req)\n requirements[key] = resolved_reqs\n\n\nfor k in requirements.keys():\n reduce_requirements(k)\n\n\nextras_require = {k: v for k, v in requirements.items() if k != 'install'}\nsetup_requires = []\ninstall_requires = requirements['install']\ntests_require = requirements['test']\n\n\ndef find_any_distribution(pkgs):\n for pkg in pkgs:\n try:\n return pkg_resources.get_distribution(pkg)\n except pkg_resources.DistributionNotFound:\n pass\n return None\n\n\nmn_pkg = find_any_distribution(['chainermn'])\nif mn_pkg is not None:\n msg = \"\"\"\nWe detected that ChainerMN is installed in your environment.\nChainerMN has been integrated to Chainer and no separate installation\nis necessary. Please uninstall the old ChainerMN in advance.\n\"\"\"\n print(msg)\n exit(1)\n\nhere = os.path.abspath(os.path.dirname(__file__))\n# Get __version__ variable\nexec(open(os.path.join(here, 'chainer', '_version.py')).read())\n\n\nsetup_kwargs = dict(\n name='chainer',\n version=__version__, # NOQA\n description='A flexible framework of neural networks',\n long_description=open('README.md').read(),\n long_description_content_type='text/markdown',\n author='Seiya Tokui',\n author_email='[email protected]',\n url='https://chainer.org/',\n license='MIT License',\n packages=['chainer',\n 'chainer.backends',\n 'chainer.dataset',\n 'chainer.dataset.tabular',\n 'chainer.datasets',\n 'chainer.distributions',\n 'chainer.exporters',\n 'chainer.functions',\n 'chainer.functions.activation',\n 'chainer.functions.array',\n 'chainer.functions.connection',\n 'chainer.functions.evaluation',\n 'chainer.functions.loss',\n 'chainer.functions.math',\n 'chainer.functions.noise',\n 'chainer.functions.normalization',\n 'chainer.functions.pooling',\n 'chainer.functions.rnn',\n 'chainer.functions.theano',\n 'chainer.functions.util',\n 'chainer.function_hooks',\n 'chainer.iterators',\n 'chainer.initializers',\n 'chainer.links',\n 'chainer.links.activation',\n 'chainer.links.caffe',\n 'chainer.links.caffe.protobuf3',\n 'chainer.links.connection',\n 'chainer.links.loss',\n 'chainer.links.model',\n 'chainer.links.model.vision',\n 'chainer.links.normalization',\n 'chainer.links.rnn',\n 'chainer.links.theano',\n 'chainer.link_hooks',\n 'chainer.graph_optimizations',\n 'chainer.optimizers',\n 'chainer.optimizer_hooks',\n 'chainer.serializers',\n 'chainer.testing',\n 'chainer.training',\n 'chainer.training.extensions',\n 'chainer.training.triggers',\n 'chainer.training.updaters',\n 'chainer.utils',\n 'chainermn',\n 'chainermn.communicators',\n 'chainermn.datasets',\n 'chainermn.extensions',\n 
'chainermn.functions',\n 'chainermn.iterators',\n 'chainermn.links'],\n package_data={\n 'chainer': ['py.typed'],\n },\n zip_safe=False,\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n)\n\n\nbuild_chainerx = 0 != int(os.getenv('CHAINER_BUILD_CHAINERX', '0'))\nif (os.getenv('READTHEDOCS', None) == 'True'\n and os.getenv('READTHEDOCS_PROJECT', None) == 'chainer'):\n os.environ['MAKEFLAGS'] = '-j2'\n build_chainerx = True\n\nchainerx_build_helper.config_setup_kwargs(setup_kwargs, build_chainerx)\n\n\nsetup(**setup_kwargs)\n", "path": "setup.py"}]}
| 3,031 | 113 |
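The pin added in the diff above works around an API break in `attrs`: release 19.2.0 dropped the long-deprecated `convert=` keyword in favor of `converter=`, while the pinned pytest (<4.2.0) still passed the old name, which is exactly the `attrib() got an unexpected keyword argument 'convert'` traceback shown in the issue. A small illustration of the renamed keyword, assuming a current `attrs` is installed:

```python
import attr

@attr.s
class TmpDirFactory:
    # Old spelling, removed in attrs 19.2.0: attr.ib(convert=str)
    basetemp = attr.ib(converter=str, default="")

print(TmpDirFactory(basetemp=123).basetemp)  # prints "123"
```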
gh_patches_debug_19970
|
rasdani/github-patches
|
git_diff
|
mesonbuild__meson-9238
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Value "gnu89" (of type "string") for combo option "C language standard to use" is not one of the choices. Possible choices are (as string): "none", "c89", "c99", "c11", "c17", "gnu99", "gnu11".
Building glib2 2.62.6 with meson 0.59.1 fails:
```
Value "gnu89" (of type "string") for combo option "C language standard to use" is not one of the choices. Possible choices are (as string): "none", "c89", "c99", "c11", "c17", "gnu99", "gnu11".
```
With meson 0.58.1 there was no such error.
See https://trac.macports.org/ticket/63462
In #8766 gnu99 and gnu11 were added to the list of possible choices but for some reason gnu89 was not added? Could you add it, please?
</issue>
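Some context on where the message comes from may help: the choices shown in the error are exactly the list hard-coded in the Objective-C compiler classes below, and the project's requested `c_std` value is validated against that list before anything is compiled. The following sketch imitates that validation step with a plain Python class; it is an illustration only, not Meson's actual `UserComboOption` implementation, and the error wording is approximate.

```python
class ComboOption:
    """Simplified stand-in for a combo option: the value must be one of the choices."""

    def __init__(self, description, choices, default):
        self.description = description
        self.choices = choices
        self.value = default

    def set_value(self, new_value):
        # Values that the compiler class never listed in get_options() are rejected.
        if new_value not in self.choices:
            raise ValueError(
                'Value "{}" for combo option "{}" is not one of the choices. '
                'Possible choices are: {}.'.format(
                    new_value, self.description, ', '.join(self.choices)))
        self.value = new_value


std = ComboOption(
    'C language standard to use',
    ['none', 'c89', 'c99', 'c11', 'c17', 'gnu99', 'gnu11'],  # no 'gnu89' entry
    'none',
)
std.set_value('gnu11')   # accepted
std.set_value('gnu89')   # raises ValueError, mirroring the reported failure
```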
<code>
[start of mesonbuild/compilers/objc.py]
1 # Copyright 2012-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import typing as T
16
17 from .. import coredata
18 from ..mesonlib import MachineChoice, OptionKey
19
20 from .compilers import Compiler
21 from .mixins.clike import CLikeCompiler
22 from .mixins.gnu import GnuCompiler
23 from .mixins.clang import ClangCompiler
24
25 if T.TYPE_CHECKING:
26 from ..programs import ExternalProgram
27 from ..envconfig import MachineInfo
28 from ..environment import Environment
29 from ..linkers import DynamicLinker
30
31
32 class ObjCCompiler(CLikeCompiler, Compiler):
33
34 language = 'objc'
35
36 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,
37 is_cross: bool, info: 'MachineInfo',
38 exe_wrap: T.Optional['ExternalProgram'],
39 linker: T.Optional['DynamicLinker'] = None,
40 full_version: T.Optional[str] = None):
41 Compiler.__init__(self, exelist, version, for_machine, info,
42 is_cross=is_cross, full_version=full_version,
43 linker=linker)
44 CLikeCompiler.__init__(self, exe_wrap)
45
46 @staticmethod
47 def get_display_language() -> str:
48 return 'Objective-C'
49
50 def sanity_check(self, work_dir: str, environment: 'Environment') -> None:
51 code = '#import<stddef.h>\nint main(void) { return 0; }\n'
52 return self._sanity_check_impl(work_dir, environment, 'sanitycheckobjc.m', code)
53
54
55 class GnuObjCCompiler(GnuCompiler, ObjCCompiler):
56 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,
57 is_cross: bool, info: 'MachineInfo',
58 exe_wrapper: T.Optional['ExternalProgram'] = None,
59 defines: T.Optional[T.Dict[str, str]] = None,
60 linker: T.Optional['DynamicLinker'] = None,
61 full_version: T.Optional[str] = None):
62 ObjCCompiler.__init__(self, exelist, version, for_machine, is_cross,
63 info, exe_wrapper, linker=linker, full_version=full_version)
64 GnuCompiler.__init__(self, defines)
65 default_warn_args = ['-Wall', '-Winvalid-pch']
66 self.warn_args = {'0': [],
67 '1': default_warn_args,
68 '2': default_warn_args + ['-Wextra'],
69 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
70
71
72 class ClangObjCCompiler(ClangCompiler, ObjCCompiler):
73 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,
74 is_cross: bool, info: 'MachineInfo',
75 exe_wrapper: T.Optional['ExternalProgram'] = None,
76 defines: T.Optional[T.Dict[str, str]] = None,
77 linker: T.Optional['DynamicLinker'] = None,
78 full_version: T.Optional[str] = None):
79 ObjCCompiler.__init__(self, exelist, version, for_machine, is_cross,
80 info, exe_wrapper, linker=linker, full_version=full_version)
81 ClangCompiler.__init__(self, defines)
82 default_warn_args = ['-Wall', '-Winvalid-pch']
83 self.warn_args = {'0': [],
84 '1': default_warn_args,
85 '2': default_warn_args + ['-Wextra'],
86 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
87
88 def get_options(self) -> 'coredata.KeyedOptionDictType':
89 opts = super().get_options()
90 opts.update({
91 OptionKey('std', machine=self.for_machine, lang='c'): coredata.UserComboOption(
92 'C language standard to use',
93 ['none', 'c89', 'c99', 'c11', 'c17', 'gnu99', 'gnu11'],
94 'none',
95 )
96 })
97 return opts
98
99 def get_option_compile_args(self, options: 'coredata.KeyedOptionDictType') -> T.List[str]:
100 args = []
101 std = options[OptionKey('std', machine=self.for_machine, lang='c')]
102 if std.value != 'none':
103 args.append('-std=' + std.value)
104 return args
105
106 class AppleClangObjCCompiler(ClangObjCCompiler):
107
108 """Handle the differences between Apple's clang and vanilla clang."""
109
[end of mesonbuild/compilers/objc.py]
[start of mesonbuild/compilers/objcpp.py]
1 # Copyright 2012-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import typing as T
16
17 from .. import coredata
18 from ..mesonlib import MachineChoice, OptionKey
19
20 from .mixins.clike import CLikeCompiler
21 from .compilers import Compiler
22 from .mixins.gnu import GnuCompiler
23 from .mixins.clang import ClangCompiler
24
25 if T.TYPE_CHECKING:
26 from ..programs import ExternalProgram
27 from ..envconfig import MachineInfo
28 from ..environment import Environment
29 from ..linkers import DynamicLinker
30
31 class ObjCPPCompiler(CLikeCompiler, Compiler):
32
33 language = 'objcpp'
34
35 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,
36 is_cross: bool, info: 'MachineInfo',
37 exe_wrap: T.Optional['ExternalProgram'],
38 linker: T.Optional['DynamicLinker'] = None,
39 full_version: T.Optional[str] = None):
40 Compiler.__init__(self, exelist, version, for_machine, info,
41 is_cross=is_cross, full_version=full_version,
42 linker=linker)
43 CLikeCompiler.__init__(self, exe_wrap)
44
45 @staticmethod
46 def get_display_language() -> str:
47 return 'Objective-C++'
48
49 def sanity_check(self, work_dir: str, environment: 'Environment') -> None:
50 code = '#import<stdio.h>\nclass MyClass;int main(void) { return 0; }\n'
51 return self._sanity_check_impl(work_dir, environment, 'sanitycheckobjcpp.mm', code)
52
53
54 class GnuObjCPPCompiler(GnuCompiler, ObjCPPCompiler):
55 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,
56 is_cross: bool, info: 'MachineInfo',
57 exe_wrapper: T.Optional['ExternalProgram'] = None,
58 defines: T.Optional[T.Dict[str, str]] = None,
59 linker: T.Optional['DynamicLinker'] = None,
60 full_version: T.Optional[str] = None):
61 ObjCPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
62 info, exe_wrapper, linker=linker, full_version=full_version)
63 GnuCompiler.__init__(self, defines)
64 default_warn_args = ['-Wall', '-Winvalid-pch', '-Wnon-virtual-dtor']
65 self.warn_args = {'0': [],
66 '1': default_warn_args,
67 '2': default_warn_args + ['-Wextra'],
68 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
69
70
71 class ClangObjCPPCompiler(ClangCompiler, ObjCPPCompiler):
72
73 def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,
74 is_cross: bool, info: 'MachineInfo',
75 exe_wrapper: T.Optional['ExternalProgram'] = None,
76 defines: T.Optional[T.Dict[str, str]] = None,
77 linker: T.Optional['DynamicLinker'] = None,
78 full_version: T.Optional[str] = None):
79 ObjCPPCompiler.__init__(self, exelist, version, for_machine, is_cross,
80 info, exe_wrapper, linker=linker, full_version=full_version)
81 ClangCompiler.__init__(self, defines)
82 default_warn_args = ['-Wall', '-Winvalid-pch', '-Wnon-virtual-dtor']
83 self.warn_args = {'0': [],
84 '1': default_warn_args,
85 '2': default_warn_args + ['-Wextra'],
86 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
87
88
89 def get_options(self) -> 'coredata.KeyedOptionDictType':
90 opts = super().get_options()
91 opts.update({
92 OptionKey('std', machine=self.for_machine, lang='cpp'): coredata.UserComboOption(
93 'C++ language standard to use',
94 ['none', 'c++98', 'c++11', 'c++14', 'c++17'],
95 'none',
96 )
97 })
98 return opts
99
100 def get_option_compile_args(self, options: 'coredata.KeyedOptionDictType') -> T.List[str]:
101 args = []
102 std = options[OptionKey('std', machine=self.for_machine, lang='cpp')]
103 if std.value != 'none':
104 args.append('-std=' + std.value)
105 return args
106
107
108 class AppleClangObjCPPCompiler(ClangObjCPPCompiler):
109
110 """Handle the differences between Apple's clang and vanilla clang."""
111
[end of mesonbuild/compilers/objcpp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mesonbuild/compilers/objc.py b/mesonbuild/compilers/objc.py
--- a/mesonbuild/compilers/objc.py
+++ b/mesonbuild/compilers/objc.py
@@ -90,7 +90,7 @@
opts.update({
OptionKey('std', machine=self.for_machine, lang='c'): coredata.UserComboOption(
'C language standard to use',
- ['none', 'c89', 'c99', 'c11', 'c17', 'gnu99', 'gnu11'],
+ ['none', 'c89', 'c99', 'c11', 'c17', 'gnu89', 'gnu99', 'gnu11', 'gnu17'],
'none',
)
})
diff --git a/mesonbuild/compilers/objcpp.py b/mesonbuild/compilers/objcpp.py
--- a/mesonbuild/compilers/objcpp.py
+++ b/mesonbuild/compilers/objcpp.py
@@ -91,7 +91,7 @@
opts.update({
OptionKey('std', machine=self.for_machine, lang='cpp'): coredata.UserComboOption(
'C++ language standard to use',
- ['none', 'c++98', 'c++11', 'c++14', 'c++17'],
+ ['none', 'c++98', 'c++11', 'c++14', 'c++17', 'gnu++98', 'gnu++11', 'gnu++14', 'gnu++17'],
'none',
)
})
|
{"golden_diff": "diff --git a/mesonbuild/compilers/objc.py b/mesonbuild/compilers/objc.py\n--- a/mesonbuild/compilers/objc.py\n+++ b/mesonbuild/compilers/objc.py\n@@ -90,7 +90,7 @@\n opts.update({\n OptionKey('std', machine=self.for_machine, lang='c'): coredata.UserComboOption(\n 'C language standard to use',\n- ['none', 'c89', 'c99', 'c11', 'c17', 'gnu99', 'gnu11'],\n+ ['none', 'c89', 'c99', 'c11', 'c17', 'gnu89', 'gnu99', 'gnu11', 'gnu17'],\n 'none',\n )\n })\ndiff --git a/mesonbuild/compilers/objcpp.py b/mesonbuild/compilers/objcpp.py\n--- a/mesonbuild/compilers/objcpp.py\n+++ b/mesonbuild/compilers/objcpp.py\n@@ -91,7 +91,7 @@\n opts.update({\n OptionKey('std', machine=self.for_machine, lang='cpp'): coredata.UserComboOption(\n 'C++ language standard to use',\n- ['none', 'c++98', 'c++11', 'c++14', 'c++17'],\n+ ['none', 'c++98', 'c++11', 'c++14', 'c++17', 'gnu++98', 'gnu++11', 'gnu++14', 'gnu++17'],\n 'none',\n )\n })\n", "issue": "Value \"gnu89\" (of type \"string\") for combo option \"C language standard to use\" is not one of the choices. Possible choices are (as string): \"none\", \"c89\", \"c99\", \"c11\", \"c17\", \"gnu99\", \"gnu11\".\nBuilding glib2 2.62.6 with meson 0.59.1 fails:\r\n\r\n```\r\nValue \"gnu89\" (of type \"string\") for combo option \"C language standard to use\" is not one of the choices. Possible choices are (as string): \"none\", \"c89\", \"c99\", \"c11\", \"c17\", \"gnu99\", \"gnu11\".\r\n```\r\n\r\nWith meson 0.58.1 there was no such error.\r\n\r\nSee https://trac.macports.org/ticket/63462\r\n\r\nIn #8766 gnu99 and gnu11 were added to the list of possible choices but for some reason gnu89 was not added? Could you add it, please?\n", "before_files": [{"content": "# Copyright 2012-2017 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing as T\n\nfrom .. 
import coredata\nfrom ..mesonlib import MachineChoice, OptionKey\n\nfrom .compilers import Compiler\nfrom .mixins.clike import CLikeCompiler\nfrom .mixins.gnu import GnuCompiler\nfrom .mixins.clang import ClangCompiler\n\nif T.TYPE_CHECKING:\n from ..programs import ExternalProgram\n from ..envconfig import MachineInfo\n from ..environment import Environment\n from ..linkers import DynamicLinker\n\n\nclass ObjCCompiler(CLikeCompiler, Compiler):\n\n language = 'objc'\n\n def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,\n is_cross: bool, info: 'MachineInfo',\n exe_wrap: T.Optional['ExternalProgram'],\n linker: T.Optional['DynamicLinker'] = None,\n full_version: T.Optional[str] = None):\n Compiler.__init__(self, exelist, version, for_machine, info,\n is_cross=is_cross, full_version=full_version,\n linker=linker)\n CLikeCompiler.__init__(self, exe_wrap)\n\n @staticmethod\n def get_display_language() -> str:\n return 'Objective-C'\n\n def sanity_check(self, work_dir: str, environment: 'Environment') -> None:\n code = '#import<stddef.h>\\nint main(void) { return 0; }\\n'\n return self._sanity_check_impl(work_dir, environment, 'sanitycheckobjc.m', code)\n\n\nclass GnuObjCCompiler(GnuCompiler, ObjCCompiler):\n def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,\n is_cross: bool, info: 'MachineInfo',\n exe_wrapper: T.Optional['ExternalProgram'] = None,\n defines: T.Optional[T.Dict[str, str]] = None,\n linker: T.Optional['DynamicLinker'] = None,\n full_version: T.Optional[str] = None):\n ObjCCompiler.__init__(self, exelist, version, for_machine, is_cross,\n info, exe_wrapper, linker=linker, full_version=full_version)\n GnuCompiler.__init__(self, defines)\n default_warn_args = ['-Wall', '-Winvalid-pch']\n self.warn_args = {'0': [],\n '1': default_warn_args,\n '2': default_warn_args + ['-Wextra'],\n '3': default_warn_args + ['-Wextra', '-Wpedantic']}\n\n\nclass ClangObjCCompiler(ClangCompiler, ObjCCompiler):\n def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,\n is_cross: bool, info: 'MachineInfo',\n exe_wrapper: T.Optional['ExternalProgram'] = None,\n defines: T.Optional[T.Dict[str, str]] = None,\n linker: T.Optional['DynamicLinker'] = None,\n full_version: T.Optional[str] = None):\n ObjCCompiler.__init__(self, exelist, version, for_machine, is_cross,\n info, exe_wrapper, linker=linker, full_version=full_version)\n ClangCompiler.__init__(self, defines)\n default_warn_args = ['-Wall', '-Winvalid-pch']\n self.warn_args = {'0': [],\n '1': default_warn_args,\n '2': default_warn_args + ['-Wextra'],\n '3': default_warn_args + ['-Wextra', '-Wpedantic']}\n\n def get_options(self) -> 'coredata.KeyedOptionDictType':\n opts = super().get_options()\n opts.update({\n OptionKey('std', machine=self.for_machine, lang='c'): coredata.UserComboOption(\n 'C language standard to use',\n ['none', 'c89', 'c99', 'c11', 'c17', 'gnu99', 'gnu11'],\n 'none',\n )\n })\n return opts\n\n def get_option_compile_args(self, options: 'coredata.KeyedOptionDictType') -> T.List[str]:\n args = []\n std = options[OptionKey('std', machine=self.for_machine, lang='c')]\n if std.value != 'none':\n args.append('-std=' + std.value)\n return args\n\nclass AppleClangObjCCompiler(ClangObjCCompiler):\n\n \"\"\"Handle the differences between Apple's clang and vanilla clang.\"\"\"\n", "path": "mesonbuild/compilers/objc.py"}, {"content": "# Copyright 2012-2017 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the 
\"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing as T\n\nfrom .. import coredata\nfrom ..mesonlib import MachineChoice, OptionKey\n\nfrom .mixins.clike import CLikeCompiler\nfrom .compilers import Compiler\nfrom .mixins.gnu import GnuCompiler\nfrom .mixins.clang import ClangCompiler\n\nif T.TYPE_CHECKING:\n from ..programs import ExternalProgram\n from ..envconfig import MachineInfo\n from ..environment import Environment\n from ..linkers import DynamicLinker\n\nclass ObjCPPCompiler(CLikeCompiler, Compiler):\n\n language = 'objcpp'\n\n def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,\n is_cross: bool, info: 'MachineInfo',\n exe_wrap: T.Optional['ExternalProgram'],\n linker: T.Optional['DynamicLinker'] = None,\n full_version: T.Optional[str] = None):\n Compiler.__init__(self, exelist, version, for_machine, info,\n is_cross=is_cross, full_version=full_version,\n linker=linker)\n CLikeCompiler.__init__(self, exe_wrap)\n\n @staticmethod\n def get_display_language() -> str:\n return 'Objective-C++'\n\n def sanity_check(self, work_dir: str, environment: 'Environment') -> None:\n code = '#import<stdio.h>\\nclass MyClass;int main(void) { return 0; }\\n'\n return self._sanity_check_impl(work_dir, environment, 'sanitycheckobjcpp.mm', code)\n\n\nclass GnuObjCPPCompiler(GnuCompiler, ObjCPPCompiler):\n def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,\n is_cross: bool, info: 'MachineInfo',\n exe_wrapper: T.Optional['ExternalProgram'] = None,\n defines: T.Optional[T.Dict[str, str]] = None,\n linker: T.Optional['DynamicLinker'] = None,\n full_version: T.Optional[str] = None):\n ObjCPPCompiler.__init__(self, exelist, version, for_machine, is_cross,\n info, exe_wrapper, linker=linker, full_version=full_version)\n GnuCompiler.__init__(self, defines)\n default_warn_args = ['-Wall', '-Winvalid-pch', '-Wnon-virtual-dtor']\n self.warn_args = {'0': [],\n '1': default_warn_args,\n '2': default_warn_args + ['-Wextra'],\n '3': default_warn_args + ['-Wextra', '-Wpedantic']}\n\n\nclass ClangObjCPPCompiler(ClangCompiler, ObjCPPCompiler):\n\n def __init__(self, exelist: T.List[str], version: str, for_machine: MachineChoice,\n is_cross: bool, info: 'MachineInfo',\n exe_wrapper: T.Optional['ExternalProgram'] = None,\n defines: T.Optional[T.Dict[str, str]] = None,\n linker: T.Optional['DynamicLinker'] = None,\n full_version: T.Optional[str] = None):\n ObjCPPCompiler.__init__(self, exelist, version, for_machine, is_cross,\n info, exe_wrapper, linker=linker, full_version=full_version)\n ClangCompiler.__init__(self, defines)\n default_warn_args = ['-Wall', '-Winvalid-pch', '-Wnon-virtual-dtor']\n self.warn_args = {'0': [],\n '1': default_warn_args,\n '2': default_warn_args + ['-Wextra'],\n '3': default_warn_args + ['-Wextra', '-Wpedantic']}\n\n\n def get_options(self) -> 'coredata.KeyedOptionDictType':\n opts = super().get_options()\n opts.update({\n OptionKey('std', machine=self.for_machine, lang='cpp'): coredata.UserComboOption(\n 'C++ language standard to use',\n ['none', 'c++98', 'c++11', 'c++14', 
'c++17'],\n 'none',\n )\n })\n return opts\n\n def get_option_compile_args(self, options: 'coredata.KeyedOptionDictType') -> T.List[str]:\n args = []\n std = options[OptionKey('std', machine=self.for_machine, lang='cpp')]\n if std.value != 'none':\n args.append('-std=' + std.value)\n return args\n\n\nclass AppleClangObjCPPCompiler(ClangObjCPPCompiler):\n\n \"\"\"Handle the differences between Apple's clang and vanilla clang.\"\"\"\n", "path": "mesonbuild/compilers/objcpp.py"}]}
| 3,447 | 369 |
gh_patches_debug_24066
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-3719
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
_resolve_parameters_ changes are not backwards-compatible
**Description of the issue**
External code that defined a `_resolve_parameters_` magic method for Cirq 0.9 is broken by #3546, which changed the function signature for that method in a non-backwards-compatible way.
**How to reproduce the issue**
Construct a parameterized circuit in Cirq v0.10.0-dev, then attempt to run it in the latest version of qsim.
**Cirq version**
Occurs when upgrading from v0.9.* to v0.10.0-dev
</issue>
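To make the breakage concrete: objects written for Cirq 0.9 define the magic method with a single resolver argument, while the protocol in this version (see `resolve_parameters` in the listing below) always calls it with an additional `recursive` flag, so legacy implementations fail with a `TypeError`. The classes and the trimmed-down dispatch helper here are illustrative stand-ins, not real Cirq or qsim objects.

```python
class LegacyGate:
    """External object following the Cirq 0.9 convention."""

    def _resolve_parameters_(self, resolver):              # old one-argument form
        return self                                        # real code would substitute symbols


class UpdatedGate:
    """External object updated for the new convention."""

    def _resolve_parameters_(self, resolver, recursive):   # new two-argument form
        return self


def call_resolve(val, resolver, recursive=True):
    # Trimmed-down version of the call made inside resolve_parameters():
    # the getter is always invoked with both arguments.
    getter = getattr(val, '_resolve_parameters_', None)
    return getter(resolver, recursive)


call_resolve(UpdatedGate(), resolver={})   # works
call_resolve(LegacyGate(), resolver={})    # TypeError: takes 2 positional arguments but 3 were given
```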
<code>
[start of cirq/protocols/resolve_parameters.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import numbers
16 from typing import AbstractSet, Any, TYPE_CHECKING
17
18 import sympy
19 from typing_extensions import Protocol
20
21 from cirq import study
22 from cirq._doc import doc_private
23
24 if TYPE_CHECKING:
25 import cirq
26
27
28 class SupportsParameterization(Protocol):
29 """An object that can be parameterized by Symbols and resolved
30 via a ParamResolver"""
31
32 @doc_private
33 def _is_parameterized_(self: Any) -> bool:
34 """Whether the object is parameterized by any Symbols that require
35 resolution. Returns True if the object has any unresolved Symbols
36 and False otherwise."""
37
38 @doc_private
39 def _parameter_names_(self: Any) -> AbstractSet[str]:
40 """Returns a collection of string names of parameters that require
41 resolution. If _is_parameterized_ is False, the collection is empty.
42 The converse is not necessarily true, because some objects may report
43 that they are parameterized when they contain symbolic constants which
44 need to be evaluated, but no free symbols.
45 """
46
47 @doc_private
48 def _resolve_parameters_(self: Any, param_resolver: 'cirq.ParamResolver', recursive: bool):
49 """Resolve the parameters in the effect."""
50
51
52 def is_parameterized(val: Any) -> bool:
53 """Returns whether the object is parameterized with any Symbols.
54
55 A value is parameterized when it has an `_is_parameterized_` method and
56 that method returns a truthy value, or if the value is an instance of
57 sympy.Basic.
58
59 Returns:
60 True if the gate has any unresolved Symbols
61 and False otherwise. If no implementation of the magic
62 method above exists or if that method returns NotImplemented,
63 this will default to False.
64 """
65 if isinstance(val, sympy.Basic):
66 return True
67 if isinstance(val, numbers.Number):
68 return False
69 if isinstance(val, (list, tuple)):
70 return any(is_parameterized(e) for e in val)
71
72 getter = getattr(val, '_is_parameterized_', None)
73 result = NotImplemented if getter is None else getter()
74
75 if result is not NotImplemented:
76 return result
77
78 return bool(parameter_names(val))
79
80
81 def parameter_names(val: Any) -> AbstractSet[str]:
82 """Returns parameter names for this object.
83
84 Args:
85 val: Object for which to find the parameter names.
86 check_symbols: If true, fall back to calling parameter_symbols.
87
88 Returns:
89 A set of parameter names if the object is parameterized. It the object
90 does not implement the _parameter_names_ magic method or that method
91 returns NotImplemented, returns an empty set.
92 """
93 if isinstance(val, sympy.Basic):
94 return {symbol.name for symbol in val.free_symbols}
95 if isinstance(val, numbers.Number):
96 return set()
97 if isinstance(val, (list, tuple)):
98 return {name for e in val for name in parameter_names(e)}
99
100 getter = getattr(val, '_parameter_names_', None)
101 result = NotImplemented if getter is None else getter()
102 if result is not NotImplemented:
103 return result
104
105 return set()
106
107
108 def parameter_symbols(val: Any) -> AbstractSet[sympy.Symbol]:
109 """Returns parameter symbols for this object.
110
111 Args:
112 val: Object for which to find the parameter symbols.
113
114 Returns:
115 A set of parameter symbols if the object is parameterized. It the object
116 does not implement the _parameter_symbols_ magic method or that method
117 returns NotImplemented, returns an empty set.
118 """
119 return {sympy.Symbol(name) for name in parameter_names(val)}
120
121
122 def resolve_parameters(
123 val: Any, param_resolver: 'cirq.ParamResolverOrSimilarType', recursive: bool = True
124 ):
125 """Resolves symbol parameters in the effect using the param resolver.
126
127 This function will use the `_resolve_parameters_` magic method
128 of `val` to resolve any Symbols with concrete values from the given
129 parameter resolver.
130
131 Args:
132 val: The object to resolve (e.g. the gate, operation, etc)
133 param_resolver: the object to use for resolving all symbols
134 recursive: if True, resolves parameters recursively over the
135 resolver; otherwise performs a single resolution step.
136
137 Returns:
138 a gate or operation of the same type, but with all Symbols
139 replaced with floats or terminal symbols according to the
140 given ParamResolver. If `val` has no `_resolve_parameters_`
141 method or if it returns NotImplemented, `val` itself is returned.
142
143 Raises:
144 RecursionError if the ParamResolver detects a loop in resolution.
145 """
146 if not param_resolver:
147 return val
148
149 # Ensure it is a dictionary wrapped in a ParamResolver.
150 param_resolver = study.ParamResolver(param_resolver)
151 if isinstance(val, sympy.Basic):
152 return param_resolver.value_of(val, recursive)
153 if isinstance(val, (list, tuple)):
154 return type(val)(resolve_parameters(e, param_resolver, recursive) for e in val)
155
156 getter = getattr(val, '_resolve_parameters_', None)
157 result = NotImplemented if getter is None else getter(param_resolver, recursive)
158
159 if result is not NotImplemented:
160 return result
161 else:
162 return val
163
164
165 def resolve_parameters_once(val: Any, param_resolver: 'cirq.ParamResolverOrSimilarType'):
166 """Performs a single parameter resolution step using the param resolver."""
167 return resolve_parameters(val, param_resolver, False)
168
[end of cirq/protocols/resolve_parameters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cirq/protocols/resolve_parameters.py b/cirq/protocols/resolve_parameters.py
--- a/cirq/protocols/resolve_parameters.py
+++ b/cirq/protocols/resolve_parameters.py
@@ -142,6 +142,8 @@
Raises:
RecursionError if the ParamResolver detects a loop in resolution.
+ ValueError if `recursive=False` is passed to an external
+ _resolve_parameters_ method with no `recursive` parameter.
"""
if not param_resolver:
return val
@@ -154,7 +156,19 @@
return type(val)(resolve_parameters(e, param_resolver, recursive) for e in val)
getter = getattr(val, '_resolve_parameters_', None)
- result = NotImplemented if getter is None else getter(param_resolver, recursive)
+ if getter is None:
+ result = NotImplemented
+ # Backwards-compatibility for external _resolve_parameters_ methods.
+ # TODO: remove in Cirq v0.11.0
+ elif 'recursive' in getter.__code__.co_varnames:
+ result = getter(param_resolver, recursive)
+ else:
+ if not recursive:
+ raise ValueError(
+ f'Object type {type(val)} does not support non-recursive parameter resolution.'
+ ' This must be updated before Cirq v0.11.'
+ )
+ result = getter(param_resolver)
if result is not NotImplemented:
return result
|
{"golden_diff": "diff --git a/cirq/protocols/resolve_parameters.py b/cirq/protocols/resolve_parameters.py\n--- a/cirq/protocols/resolve_parameters.py\n+++ b/cirq/protocols/resolve_parameters.py\n@@ -142,6 +142,8 @@\n \n Raises:\n RecursionError if the ParamResolver detects a loop in resolution.\n+ ValueError if `recursive=False` is passed to an external\n+ _resolve_parameters_ method with no `recursive` parameter.\n \"\"\"\n if not param_resolver:\n return val\n@@ -154,7 +156,19 @@\n return type(val)(resolve_parameters(e, param_resolver, recursive) for e in val)\n \n getter = getattr(val, '_resolve_parameters_', None)\n- result = NotImplemented if getter is None else getter(param_resolver, recursive)\n+ if getter is None:\n+ result = NotImplemented\n+ # Backwards-compatibility for external _resolve_parameters_ methods.\n+ # TODO: remove in Cirq v0.11.0\n+ elif 'recursive' in getter.__code__.co_varnames:\n+ result = getter(param_resolver, recursive)\n+ else:\n+ if not recursive:\n+ raise ValueError(\n+ f'Object type {type(val)} does not support non-recursive parameter resolution.'\n+ ' This must be updated before Cirq v0.11.'\n+ )\n+ result = getter(param_resolver)\n \n if result is not NotImplemented:\n return result\n", "issue": "_resolve_parameters_ changes are not backwards-compatible\n**Description of the issue**\r\n\r\nExternal code that defined a `_resolve_parameters_` magic method for Cirq 0.9 is broken by #3546, which changed the function signature for that method in a non-backwards-compatible way.\r\n\r\n**How to reproduce the issue**\r\n\r\nConstruct a parameterized circuit in Cirq v0.10.0-dev, then attempt to run it in the latest version of qsim.\r\n\r\n**Cirq version**\r\n\r\nOccurs when upgrading from v0.9.* to v0.10.0-dev\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numbers\nfrom typing import AbstractSet, Any, TYPE_CHECKING\n\nimport sympy\nfrom typing_extensions import Protocol\n\nfrom cirq import study\nfrom cirq._doc import doc_private\n\nif TYPE_CHECKING:\n import cirq\n\n\nclass SupportsParameterization(Protocol):\n \"\"\"An object that can be parameterized by Symbols and resolved\n via a ParamResolver\"\"\"\n\n @doc_private\n def _is_parameterized_(self: Any) -> bool:\n \"\"\"Whether the object is parameterized by any Symbols that require\n resolution. Returns True if the object has any unresolved Symbols\n and False otherwise.\"\"\"\n\n @doc_private\n def _parameter_names_(self: Any) -> AbstractSet[str]:\n \"\"\"Returns a collection of string names of parameters that require\n resolution. 
If _is_parameterized_ is False, the collection is empty.\n The converse is not necessarily true, because some objects may report\n that they are parameterized when they contain symbolic constants which\n need to be evaluated, but no free symbols.\n \"\"\"\n\n @doc_private\n def _resolve_parameters_(self: Any, param_resolver: 'cirq.ParamResolver', recursive: bool):\n \"\"\"Resolve the parameters in the effect.\"\"\"\n\n\ndef is_parameterized(val: Any) -> bool:\n \"\"\"Returns whether the object is parameterized with any Symbols.\n\n A value is parameterized when it has an `_is_parameterized_` method and\n that method returns a truthy value, or if the value is an instance of\n sympy.Basic.\n\n Returns:\n True if the gate has any unresolved Symbols\n and False otherwise. If no implementation of the magic\n method above exists or if that method returns NotImplemented,\n this will default to False.\n \"\"\"\n if isinstance(val, sympy.Basic):\n return True\n if isinstance(val, numbers.Number):\n return False\n if isinstance(val, (list, tuple)):\n return any(is_parameterized(e) for e in val)\n\n getter = getattr(val, '_is_parameterized_', None)\n result = NotImplemented if getter is None else getter()\n\n if result is not NotImplemented:\n return result\n\n return bool(parameter_names(val))\n\n\ndef parameter_names(val: Any) -> AbstractSet[str]:\n \"\"\"Returns parameter names for this object.\n\n Args:\n val: Object for which to find the parameter names.\n check_symbols: If true, fall back to calling parameter_symbols.\n\n Returns:\n A set of parameter names if the object is parameterized. It the object\n does not implement the _parameter_names_ magic method or that method\n returns NotImplemented, returns an empty set.\n \"\"\"\n if isinstance(val, sympy.Basic):\n return {symbol.name for symbol in val.free_symbols}\n if isinstance(val, numbers.Number):\n return set()\n if isinstance(val, (list, tuple)):\n return {name for e in val for name in parameter_names(e)}\n\n getter = getattr(val, '_parameter_names_', None)\n result = NotImplemented if getter is None else getter()\n if result is not NotImplemented:\n return result\n\n return set()\n\n\ndef parameter_symbols(val: Any) -> AbstractSet[sympy.Symbol]:\n \"\"\"Returns parameter symbols for this object.\n\n Args:\n val: Object for which to find the parameter symbols.\n\n Returns:\n A set of parameter symbols if the object is parameterized. It the object\n does not implement the _parameter_symbols_ magic method or that method\n returns NotImplemented, returns an empty set.\n \"\"\"\n return {sympy.Symbol(name) for name in parameter_names(val)}\n\n\ndef resolve_parameters(\n val: Any, param_resolver: 'cirq.ParamResolverOrSimilarType', recursive: bool = True\n):\n \"\"\"Resolves symbol parameters in the effect using the param resolver.\n\n This function will use the `_resolve_parameters_` magic method\n of `val` to resolve any Symbols with concrete values from the given\n parameter resolver.\n\n Args:\n val: The object to resolve (e.g. the gate, operation, etc)\n param_resolver: the object to use for resolving all symbols\n recursive: if True, resolves parameters recursively over the\n resolver; otherwise performs a single resolution step.\n\n Returns:\n a gate or operation of the same type, but with all Symbols\n replaced with floats or terminal symbols according to the\n given ParamResolver. 
If `val` has no `_resolve_parameters_`\n method or if it returns NotImplemented, `val` itself is returned.\n\n Raises:\n RecursionError if the ParamResolver detects a loop in resolution.\n \"\"\"\n if not param_resolver:\n return val\n\n # Ensure it is a dictionary wrapped in a ParamResolver.\n param_resolver = study.ParamResolver(param_resolver)\n if isinstance(val, sympy.Basic):\n return param_resolver.value_of(val, recursive)\n if isinstance(val, (list, tuple)):\n return type(val)(resolve_parameters(e, param_resolver, recursive) for e in val)\n\n getter = getattr(val, '_resolve_parameters_', None)\n result = NotImplemented if getter is None else getter(param_resolver, recursive)\n\n if result is not NotImplemented:\n return result\n else:\n return val\n\n\ndef resolve_parameters_once(val: Any, param_resolver: 'cirq.ParamResolverOrSimilarType'):\n \"\"\"Performs a single parameter resolution step using the param resolver.\"\"\"\n return resolve_parameters(val, param_resolver, False)\n", "path": "cirq/protocols/resolve_parameters.py"}]}
| 2,361 | 322 |
gh_patches_debug_23151
|
rasdani/github-patches
|
git_diff
|
blaze__blaze-902
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dshape of nunique not the same as the resulting object
``` python
In [14]: fx.date.nunique()
Out[14]: 5825.0
In [15]: fx.date.nunique().dshape
Out[15]: dshape("int32")
```
</issue>
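A short NumPy-only illustration of where the stray float can come from: the NumPy backend's `count` (in `blaze/compute/numpy.py` below) produces its result with `np.ones(x.shape).sum(...)`, and `np.ones` defaults to `float64`, so the tally is a float even though the expression's dshape advertises an integer. This snippet only demonstrates that dtype behaviour; it does not reproduce the exact `fx.date` pipeline from the report.

```python
import numpy as np

x = np.arange(5825)

# Counting elements by summing a ones array: np.ones defaults to float64,
# so the result is a float scalar even though a count is conceptually an int.
float_count = np.ones(x.shape).sum()
print(float_count, float_count.dtype)    # 5825.0 float64

# Passing an integer dtype keeps the result consistent with an integer dshape.
int_count = np.ones(x.shape, dtype='int32').sum()
print(int_count, int_count.dtype)        # 5825, integer dtype (platform default int)
```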
<code>
[start of blaze/compute/numpy.py]
1 from __future__ import absolute_import, division, print_function
2
3 import datetime
4
5 import numpy as np
6 from pandas import DataFrame, Series
7 from datashape import to_numpy
8
9 from ..expr import Reduction, Field, Projection, Broadcast, Selection, ndim
10 from ..expr import Distinct, Sort, Head, Label, ReLabel, Expr, Slice
11 from ..expr import std, var, count, nunique, Summary
12 from ..expr import BinOp, UnaryOp, USub, Not, nelements
13 from ..expr import UTCFromTimestamp, DateTimeTruncate
14
15 from .core import base, compute
16 from ..dispatch import dispatch
17 from into import into
18 import pandas as pd
19
20 __all__ = ['np']
21
22
23 @dispatch(Field, np.ndarray)
24 def compute_up(c, x, **kwargs):
25 if x.dtype.names and c._name in x.dtype.names:
26 return x[c._name]
27 if not x.dtype.names and x.shape[1] == len(c._child.fields):
28 return x[:, c._child.fields.index(c._name)]
29 raise NotImplementedError() # pragma: no cover
30
31
32 @dispatch(Projection, np.ndarray)
33 def compute_up(t, x, **kwargs):
34 if x.dtype.names and all(col in x.dtype.names for col in t.fields):
35 return x[t.fields]
36 if not x.dtype.names and x.shape[1] == len(t._child.fields):
37 return x[:, [t._child.fields.index(col) for col in t.fields]]
38 raise NotImplementedError() # pragma: no cover
39
40
41 @dispatch(Broadcast, np.ndarray)
42 def compute_up(t, x, **kwargs):
43 d = dict((t._child[c]._expr, x[c]) for c in t._child.fields)
44 return compute(t._expr, d)
45
46
47 @dispatch(BinOp, np.ndarray, (np.ndarray, base))
48 def compute_up(t, lhs, rhs, **kwargs):
49 return t.op(lhs, rhs)
50
51
52 @dispatch(BinOp, np.ndarray)
53 def compute_up(t, data, **kwargs):
54 if isinstance(t.lhs, Expr):
55 return t.op(data, t.rhs)
56 else:
57 return t.op(t.lhs, data)
58
59
60 @dispatch(BinOp, base, np.ndarray)
61 def compute_up(t, lhs, rhs, **kwargs):
62 return t.op(lhs, rhs)
63
64
65 @dispatch(UnaryOp, np.ndarray)
66 def compute_up(t, x, **kwargs):
67 return getattr(np, t.symbol)(x)
68
69
70 @dispatch(Not, np.ndarray)
71 def compute_up(t, x, **kwargs):
72 return ~x
73
74
75 @dispatch(USub, np.ndarray)
76 def compute_up(t, x, **kwargs):
77 return -x
78
79
80 @dispatch(count, np.ndarray)
81 def compute_up(t, x, **kwargs):
82 if np.issubdtype(x.dtype, np.float): # scalar dtype
83 return pd.notnull(x).sum(keepdims=t.keepdims, axis=t.axis)
84 else:
85 return np.ones(x.shape).sum(keepdims=t.keepdims, axis=t.axis)
86
87
88 @dispatch(nunique, np.ndarray)
89 def compute_up(t, x, **kwargs):
90 assert t.axis == tuple(range(ndim(t._child)))
91 result = len(np.unique(x))
92 if t.keepdims:
93 result = np.array([result])
94 return result
95
96
97 @dispatch(Reduction, np.ndarray)
98 def compute_up(t, x, **kwargs):
99 return getattr(x, t.symbol)(axis=t.axis, keepdims=t.keepdims)
100
101
102 def axify(expr, axis):
103 """ inject axis argument into expression
104
105 Helper function for compute_up(Summary, np.ndarray)
106
107 >>> from blaze import symbol
108 >>> s = symbol('s', '10 * 10 * int')
109 >>> expr = s.sum()
110 >>> axify(expr, axis=0)
111 sum(s, axis=(0,))
112 """
113 return type(expr)(expr._child, axis=axis)
114
115
116 @dispatch(Summary, np.ndarray)
117 def compute_up(expr, data, **kwargs):
118 shape, dtype = to_numpy(expr.dshape)
119 if shape:
120 result = np.empty(shape=shape, dtype=dtype)
121 for n, v in zip(expr.names, expr.values):
122 result[n] = compute(axify(v, expr.axis), data)
123 return result
124 else:
125 return tuple(compute(axify(v, expr.axis), data) for v in expr.values)
126
127
128 @dispatch((std, var), np.ndarray)
129 def compute_up(t, x, **kwargs):
130 return getattr(x, t.symbol)(ddof=t.unbiased)
131
132
133 @dispatch(Distinct, np.ndarray)
134 def compute_up(t, x, **kwargs):
135 return np.unique(x)
136
137
138 @dispatch(Sort, np.ndarray)
139 def compute_up(t, x, **kwargs):
140 if x.dtype.names is None: # not a struct array
141 result = np.sort(x)
142 elif (t.key in x.dtype.names or # struct array
143 isinstance(t.key, list) and all(k in x.dtype.names for k in t.key)):
144 result = np.sort(x, order=t.key)
145 elif t.key:
146 raise NotImplementedError("Sort key %s not supported" % t.key)
147
148 if not t.ascending:
149 result = result[::-1]
150
151 return result
152
153
154 @dispatch(Head, np.ndarray)
155 def compute_up(t, x, **kwargs):
156 return x[:t.n]
157
158
159 @dispatch(Label, np.ndarray)
160 def compute_up(t, x, **kwargs):
161 return np.array(x, dtype=[(t.label, x.dtype.type)])
162
163
164 @dispatch(ReLabel, np.ndarray)
165 def compute_up(t, x, **kwargs):
166 types = [x.dtype[i] for i in range(len(x.dtype))]
167 return np.array(x, dtype=list(zip(t.fields, types)))
168
169
170 @dispatch(Selection, np.ndarray)
171 def compute_up(sel, x, **kwargs):
172 return x[compute(sel.predicate, {sel._child: x})]
173
174
175 @dispatch(UTCFromTimestamp, np.ndarray)
176 def compute_up(expr, data, **kwargs):
177 return (data * 1e6).astype('M8[us]')
178
179
180 @dispatch(Slice, np.ndarray)
181 def compute_up(expr, x, **kwargs):
182 return x[expr.index]
183
184
185 @dispatch(Expr, np.ndarray)
186 def compute_up(t, x, **kwargs):
187 ds = t._child.dshape
188 if x.ndim > 1 or isinstance(x, np.recarray) or x.dtype.fields is not None:
189 return compute_up(t, into(DataFrame, x, dshape=ds), **kwargs)
190 else:
191 return compute_up(t, into(Series, x, dshape=ds), **kwargs)
192
193
194 @dispatch(nelements, np.ndarray)
195 def compute_up(expr, data, **kwargs):
196 axis = expr.axis
197 if expr.keepdims:
198 shape = tuple(data.shape[i] if i not in axis else 1
199 for i in range(ndim(expr._child)))
200 else:
201 shape = tuple(data.shape[i] for i in range(ndim(expr._child))
202 if i not in axis)
203 value = np.prod([data.shape[i] for i in axis])
204 result = np.empty(shape)
205 result.fill(value)
206 result = result.astype('int64')
207
208 return result
209
210
211
212 # Note the use of 'week': 'M8[D]' here.
213
214 # We truncate week offsets "manually" in the compute_up implementation by first
215 # converting to days then multiplying our measure by 7 this simplifies our code
216 # by only requiring us to calculate the week offset relative to the day of week.
217
218 precision_map = {'year': 'M8[Y]',
219 'month': 'M8[M]',
220 'week': 'M8[D]',
221 'day': 'M8[D]',
222 'hour': 'M8[h]',
223 'minute': 'M8[m]',
224 'second': 'M8[s]',
225 'millisecond': 'M8[ms]',
226 'microsecond': 'M8[us]',
227 'nanosecond': 'M8[ns]'}
228
229
230 # these offsets are integers in units of their representation
231
232 epoch = datetime.datetime(1970, 1, 1)
233 offsets = {
234 'week': epoch.isoweekday(),
235 'day': epoch.toordinal() # number of days since *Python's* epoch (01/01/01)
236 }
237
238
239 @dispatch(DateTimeTruncate, (np.ndarray, np.datetime64))
240 def compute_up(expr, data, **kwargs):
241 np_dtype = precision_map[expr.unit]
242 offset = offsets.get(expr.unit, 0)
243 measure = expr.measure * 7 if expr.unit == 'week' else expr.measure
244 result = (((data.astype(np_dtype)
245 .view('int64')
246 + offset)
247 // measure
248 * measure
249 - offset)
250 .astype(np_dtype))
251 return result
252
253
254 @dispatch(np.ndarray)
255 def chunks(x, chunksize=1024):
256 start = 0
257 n = len(x)
258 while start < n:
259 yield x[start:start + chunksize]
260 start += chunksize
261
[end of blaze/compute/numpy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/blaze/compute/numpy.py b/blaze/compute/numpy.py
--- a/blaze/compute/numpy.py
+++ b/blaze/compute/numpy.py
@@ -4,7 +4,7 @@
import numpy as np
from pandas import DataFrame, Series
-from datashape import to_numpy
+from datashape import to_numpy, to_numpy_dtype
from ..expr import Reduction, Field, Projection, Broadcast, Selection, ndim
from ..expr import Distinct, Sort, Head, Label, ReLabel, Expr, Slice
@@ -77,12 +77,19 @@
return -x
+inat = np.datetime64('NaT').view('int64')
+
+
@dispatch(count, np.ndarray)
def compute_up(t, x, **kwargs):
- if np.issubdtype(x.dtype, np.float): # scalar dtype
+ if issubclass(x.dtype.type, (np.floating, np.object_)):
return pd.notnull(x).sum(keepdims=t.keepdims, axis=t.axis)
+ elif issubclass(x.dtype.type, np.datetime64):
+ return (x.view('int64') != inat).sum(keepdims=t.keepdims, axis=t.axis)
else:
- return np.ones(x.shape).sum(keepdims=t.keepdims, axis=t.axis)
+ return np.ones(x.shape,
+ dtype=to_numpy_dtype(t.dshape)).sum(keepdims=t.keepdims,
+ axis=t.axis)
@dispatch(nunique, np.ndarray)
|
{"golden_diff": "diff --git a/blaze/compute/numpy.py b/blaze/compute/numpy.py\n--- a/blaze/compute/numpy.py\n+++ b/blaze/compute/numpy.py\n@@ -4,7 +4,7 @@\n \n import numpy as np\n from pandas import DataFrame, Series\n-from datashape import to_numpy\n+from datashape import to_numpy, to_numpy_dtype\n \n from ..expr import Reduction, Field, Projection, Broadcast, Selection, ndim\n from ..expr import Distinct, Sort, Head, Label, ReLabel, Expr, Slice\n@@ -77,12 +77,19 @@\n return -x\n \n \n+inat = np.datetime64('NaT').view('int64')\n+\n+\n @dispatch(count, np.ndarray)\n def compute_up(t, x, **kwargs):\n- if np.issubdtype(x.dtype, np.float): # scalar dtype\n+ if issubclass(x.dtype.type, (np.floating, np.object_)):\n return pd.notnull(x).sum(keepdims=t.keepdims, axis=t.axis)\n+ elif issubclass(x.dtype.type, np.datetime64):\n+ return (x.view('int64') != inat).sum(keepdims=t.keepdims, axis=t.axis)\n else:\n- return np.ones(x.shape).sum(keepdims=t.keepdims, axis=t.axis)\n+ return np.ones(x.shape,\n+ dtype=to_numpy_dtype(t.dshape)).sum(keepdims=t.keepdims,\n+ axis=t.axis)\n \n \n @dispatch(nunique, np.ndarray)\n", "issue": "dshape of nunique not the same as the resulting object\n``` python\nIn [14]: fx.date.nunique()\nOut[14]: 5825.0\n\nIn [15]: fx.date.nunique().dshape\nOut[15]: dshape(\"int32\")\n```\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport datetime\n\nimport numpy as np\nfrom pandas import DataFrame, Series\nfrom datashape import to_numpy\n\nfrom ..expr import Reduction, Field, Projection, Broadcast, Selection, ndim\nfrom ..expr import Distinct, Sort, Head, Label, ReLabel, Expr, Slice\nfrom ..expr import std, var, count, nunique, Summary\nfrom ..expr import BinOp, UnaryOp, USub, Not, nelements\nfrom ..expr import UTCFromTimestamp, DateTimeTruncate\n\nfrom .core import base, compute\nfrom ..dispatch import dispatch\nfrom into import into\nimport pandas as pd\n\n__all__ = ['np']\n\n\n@dispatch(Field, np.ndarray)\ndef compute_up(c, x, **kwargs):\n if x.dtype.names and c._name in x.dtype.names:\n return x[c._name]\n if not x.dtype.names and x.shape[1] == len(c._child.fields):\n return x[:, c._child.fields.index(c._name)]\n raise NotImplementedError() # pragma: no cover\n\n\n@dispatch(Projection, np.ndarray)\ndef compute_up(t, x, **kwargs):\n if x.dtype.names and all(col in x.dtype.names for col in t.fields):\n return x[t.fields]\n if not x.dtype.names and x.shape[1] == len(t._child.fields):\n return x[:, [t._child.fields.index(col) for col in t.fields]]\n raise NotImplementedError() # pragma: no cover\n\n\n@dispatch(Broadcast, np.ndarray)\ndef compute_up(t, x, **kwargs):\n d = dict((t._child[c]._expr, x[c]) for c in t._child.fields)\n return compute(t._expr, d)\n\n\n@dispatch(BinOp, np.ndarray, (np.ndarray, base))\ndef compute_up(t, lhs, rhs, **kwargs):\n return t.op(lhs, rhs)\n\n\n@dispatch(BinOp, np.ndarray)\ndef compute_up(t, data, **kwargs):\n if isinstance(t.lhs, Expr):\n return t.op(data, t.rhs)\n else:\n return t.op(t.lhs, data)\n\n\n@dispatch(BinOp, base, np.ndarray)\ndef compute_up(t, lhs, rhs, **kwargs):\n return t.op(lhs, rhs)\n\n\n@dispatch(UnaryOp, np.ndarray)\ndef compute_up(t, x, **kwargs):\n return getattr(np, t.symbol)(x)\n\n\n@dispatch(Not, np.ndarray)\ndef compute_up(t, x, **kwargs):\n return ~x\n\n\n@dispatch(USub, np.ndarray)\ndef compute_up(t, x, **kwargs):\n return -x\n\n\n@dispatch(count, np.ndarray)\ndef compute_up(t, x, **kwargs):\n if np.issubdtype(x.dtype, np.float): # scalar dtype\n return 
pd.notnull(x).sum(keepdims=t.keepdims, axis=t.axis)\n else:\n return np.ones(x.shape).sum(keepdims=t.keepdims, axis=t.axis)\n\n\n@dispatch(nunique, np.ndarray)\ndef compute_up(t, x, **kwargs):\n assert t.axis == tuple(range(ndim(t._child)))\n result = len(np.unique(x))\n if t.keepdims:\n result = np.array([result])\n return result\n\n\n@dispatch(Reduction, np.ndarray)\ndef compute_up(t, x, **kwargs):\n return getattr(x, t.symbol)(axis=t.axis, keepdims=t.keepdims)\n\n\ndef axify(expr, axis):\n \"\"\" inject axis argument into expression\n\n Helper function for compute_up(Summary, np.ndarray)\n\n >>> from blaze import symbol\n >>> s = symbol('s', '10 * 10 * int')\n >>> expr = s.sum()\n >>> axify(expr, axis=0)\n sum(s, axis=(0,))\n \"\"\"\n return type(expr)(expr._child, axis=axis)\n\n\n@dispatch(Summary, np.ndarray)\ndef compute_up(expr, data, **kwargs):\n shape, dtype = to_numpy(expr.dshape)\n if shape:\n result = np.empty(shape=shape, dtype=dtype)\n for n, v in zip(expr.names, expr.values):\n result[n] = compute(axify(v, expr.axis), data)\n return result\n else:\n return tuple(compute(axify(v, expr.axis), data) for v in expr.values)\n\n\n@dispatch((std, var), np.ndarray)\ndef compute_up(t, x, **kwargs):\n return getattr(x, t.symbol)(ddof=t.unbiased)\n\n\n@dispatch(Distinct, np.ndarray)\ndef compute_up(t, x, **kwargs):\n return np.unique(x)\n\n\n@dispatch(Sort, np.ndarray)\ndef compute_up(t, x, **kwargs):\n if x.dtype.names is None: # not a struct array\n result = np.sort(x)\n elif (t.key in x.dtype.names or # struct array\n isinstance(t.key, list) and all(k in x.dtype.names for k in t.key)):\n result = np.sort(x, order=t.key)\n elif t.key:\n raise NotImplementedError(\"Sort key %s not supported\" % t.key)\n\n if not t.ascending:\n result = result[::-1]\n\n return result\n\n\n@dispatch(Head, np.ndarray)\ndef compute_up(t, x, **kwargs):\n return x[:t.n]\n\n\n@dispatch(Label, np.ndarray)\ndef compute_up(t, x, **kwargs):\n return np.array(x, dtype=[(t.label, x.dtype.type)])\n\n\n@dispatch(ReLabel, np.ndarray)\ndef compute_up(t, x, **kwargs):\n types = [x.dtype[i] for i in range(len(x.dtype))]\n return np.array(x, dtype=list(zip(t.fields, types)))\n\n\n@dispatch(Selection, np.ndarray)\ndef compute_up(sel, x, **kwargs):\n return x[compute(sel.predicate, {sel._child: x})]\n\n\n@dispatch(UTCFromTimestamp, np.ndarray)\ndef compute_up(expr, data, **kwargs):\n return (data * 1e6).astype('M8[us]')\n\n\n@dispatch(Slice, np.ndarray)\ndef compute_up(expr, x, **kwargs):\n return x[expr.index]\n\n\n@dispatch(Expr, np.ndarray)\ndef compute_up(t, x, **kwargs):\n ds = t._child.dshape\n if x.ndim > 1 or isinstance(x, np.recarray) or x.dtype.fields is not None:\n return compute_up(t, into(DataFrame, x, dshape=ds), **kwargs)\n else:\n return compute_up(t, into(Series, x, dshape=ds), **kwargs)\n\n\n@dispatch(nelements, np.ndarray)\ndef compute_up(expr, data, **kwargs):\n axis = expr.axis\n if expr.keepdims:\n shape = tuple(data.shape[i] if i not in axis else 1\n for i in range(ndim(expr._child)))\n else:\n shape = tuple(data.shape[i] for i in range(ndim(expr._child))\n if i not in axis)\n value = np.prod([data.shape[i] for i in axis])\n result = np.empty(shape)\n result.fill(value)\n result = result.astype('int64')\n\n return result\n\n\n\n# Note the use of 'week': 'M8[D]' here.\n\n# We truncate week offsets \"manually\" in the compute_up implementation by first\n# converting to days then multiplying our measure by 7 this simplifies our code\n# by only requiring us to calculate the week offset relative to the day 
of week.\n\nprecision_map = {'year': 'M8[Y]',\n 'month': 'M8[M]',\n 'week': 'M8[D]',\n 'day': 'M8[D]',\n 'hour': 'M8[h]',\n 'minute': 'M8[m]',\n 'second': 'M8[s]',\n 'millisecond': 'M8[ms]',\n 'microsecond': 'M8[us]',\n 'nanosecond': 'M8[ns]'}\n\n\n# these offsets are integers in units of their representation\n\nepoch = datetime.datetime(1970, 1, 1)\noffsets = {\n 'week': epoch.isoweekday(),\n 'day': epoch.toordinal() # number of days since *Python's* epoch (01/01/01)\n}\n\n\n@dispatch(DateTimeTruncate, (np.ndarray, np.datetime64))\ndef compute_up(expr, data, **kwargs):\n np_dtype = precision_map[expr.unit]\n offset = offsets.get(expr.unit, 0)\n measure = expr.measure * 7 if expr.unit == 'week' else expr.measure\n result = (((data.astype(np_dtype)\n .view('int64')\n + offset)\n // measure\n * measure\n - offset)\n .astype(np_dtype))\n return result\n\n\n@dispatch(np.ndarray)\ndef chunks(x, chunksize=1024):\n start = 0\n n = len(x)\n while start < n:\n yield x[start:start + chunksize]\n start += chunksize\n", "path": "blaze/compute/numpy.py"}]}
| 3,274 | 339 |
gh_patches_debug_51900
|
rasdani/github-patches
|
git_diff
|
geopandas__geopandas-663
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Raise error when trying to write empty dataframes
Currently, the behavior when writing an empty dataframe is counter-intuitive (`ValueError: Geometry column cannot contain mutiple geometry types when writing to file.`). Would it be possible to add a check and more helpful exception?
</issue>
<code>
[start of geopandas/io/file.py]
1 import os
2
3 import fiona
4 import numpy as np
5 import six
6
7 from geopandas import GeoDataFrame
8
9 # Adapted from pandas.io.common
10 if six.PY3:
11 from urllib.request import urlopen as _urlopen
12 from urllib.parse import urlparse as parse_url
13 from urllib.parse import uses_relative, uses_netloc, uses_params
14 else:
15 from urllib2 import urlopen as _urlopen
16 from urlparse import urlparse as parse_url
17 from urlparse import uses_relative, uses_netloc, uses_params
18
19 _VALID_URLS = set(uses_relative + uses_netloc + uses_params)
20 _VALID_URLS.discard('')
21
22
23 def _is_url(url):
24 """Check to see if *url* has a valid protocol."""
25 try:
26 return parse_url(url).scheme in _VALID_URLS
27 except:
28 return False
29
30
31 def read_file(filename, **kwargs):
32 """
33 Returns a GeoDataFrame from a file or URL.
34
35 Parameters
36 ----------
37 filename: str
38 Either the absolute or relative path to the file or URL to
39 be opened.
40 **kwargs:
41 Keyword args to be passed to the `open` or `BytesCollection` method
42 in the fiona library when opening the file. For more information on
43 possible keywords, type:
44 ``import fiona; help(fiona.open)``
45
46 Examples
47 --------
48 >>> df = geopandas.read_file("nybb.shp")
49
50 Returns
51 -------
52 geodataframe : GeoDataFrame
53 """
54 bbox = kwargs.pop('bbox', None)
55 if _is_url(filename):
56 req = _urlopen(filename)
57 path_or_bytes = req.read()
58 reader = fiona.BytesCollection
59 else:
60 path_or_bytes = filename
61 reader = fiona.open
62
63 with reader(path_or_bytes, **kwargs) as features:
64 crs = features.crs
65 if bbox is not None:
66 assert len(bbox) == 4
67 f_filt = features.filter(bbox=bbox)
68 else:
69 f_filt = features
70
71 columns = list(features.meta["schema"]["properties"]) + ["geometry"]
72 gdf = GeoDataFrame.from_features(f_filt, crs=crs, columns=columns)
73
74 return gdf
75
76
77 def to_file(df, filename, driver="ESRI Shapefile", schema=None,
78 **kwargs):
79 """
80 Write this GeoDataFrame to an OGR data source
81
82 A dictionary of supported OGR providers is available via:
83 >>> import fiona
84 >>> fiona.supported_drivers
85
86 Parameters
87 ----------
88 df : GeoDataFrame to be written
89 filename : string
90 File path or file handle to write to.
91 driver : string, default 'ESRI Shapefile'
92 The OGR format driver used to write the vector file.
93 schema : dict, default None
94 If specified, the schema dictionary is passed to Fiona to
95 better control how the file is written. If None, GeoPandas
96 will determine the schema based on each column's dtype
97
98 The *kwargs* are passed to fiona.open and can be used to write
99 to multi-layer data, store data within archives (zip files), etc.
100 """
101 if schema is None:
102 schema = infer_schema(df)
103 filename = os.path.abspath(os.path.expanduser(filename))
104 with fiona.drivers():
105 with fiona.open(filename, 'w', driver=driver, crs=df.crs,
106 schema=schema, **kwargs) as colxn:
107 colxn.writerecords(df.iterfeatures())
108
109
110 def infer_schema(df):
111 try:
112 from collections import OrderedDict
113 except ImportError:
114 from ordereddict import OrderedDict
115
116 def convert_type(column, in_type):
117 if in_type == object:
118 return 'str'
119 out_type = type(np.asscalar(np.zeros(1, in_type))).__name__
120 if out_type == 'long':
121 out_type = 'int'
122 if out_type == 'bool':
123 raise ValueError('column "{}" is boolean type, '.format(column) +
124 'which is unsupported in file writing. '
125 'Consider casting the column to int type.')
126 return out_type
127
128 properties = OrderedDict([
129 (col, convert_type(col, _type)) for col, _type in
130 zip(df.columns, df.dtypes) if col != df._geometry_column_name
131 ])
132
133 geom_type = _common_geom_type(df)
134 if not geom_type:
135 raise ValueError("Geometry column cannot contain mutiple "
136 "geometry types when writing to file.")
137
138 schema = {'geometry': geom_type, 'properties': properties}
139
140 return schema
141
142
143 def _common_geom_type(df):
144 # Need to check geom_types before we write to file...
145 # Some (most?) providers expect a single geometry type:
146 # Point, LineString, or Polygon
147 geom_types = df.geometry.geom_type.unique()
148
149 from os.path import commonprefix
150 # use reversed geom types and commonprefix to find the common suffix,
151 # then reverse the result to get back to a geom type
152 geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1]
153 if not geom_type:
154 return None
155
156 if df.geometry.has_z.any():
157 geom_type = "3D " + geom_type
158
159 return geom_type
160
[end of geopandas/io/file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/geopandas/io/file.py b/geopandas/io/file.py
--- a/geopandas/io/file.py
+++ b/geopandas/io/file.py
@@ -130,7 +130,11 @@
zip(df.columns, df.dtypes) if col != df._geometry_column_name
])
+ if df.empty:
+ raise ValueError("Cannot write empty DataFrame to file.")
+
geom_type = _common_geom_type(df)
+
if not geom_type:
raise ValueError("Geometry column cannot contain mutiple "
"geometry types when writing to file.")
|
{"golden_diff": "diff --git a/geopandas/io/file.py b/geopandas/io/file.py\n--- a/geopandas/io/file.py\n+++ b/geopandas/io/file.py\n@@ -130,7 +130,11 @@\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n \n+ if df.empty:\n+ raise ValueError(\"Cannot write empty DataFrame to file.\")\n+\n geom_type = _common_geom_type(df)\n+ \n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n", "issue": "Raise error when trying to write empty dataframes\nCurrently, the behavior when writing an empty dataframe is counter-intuitive (`ValueError: Geometry column cannot contain mutiple geometry types when writing to file.`). Would it be possible to add a check and more helpful exception? \n", "before_files": [{"content": "import os\n\nimport fiona\nimport numpy as np\nimport six\n\nfrom geopandas import GeoDataFrame\n\n# Adapted from pandas.io.common\nif six.PY3:\n from urllib.request import urlopen as _urlopen\n from urllib.parse import urlparse as parse_url\n from urllib.parse import uses_relative, uses_netloc, uses_params\nelse:\n from urllib2 import urlopen as _urlopen\n from urlparse import urlparse as parse_url\n from urlparse import uses_relative, uses_netloc, uses_params\n\n_VALID_URLS = set(uses_relative + uses_netloc + uses_params)\n_VALID_URLS.discard('')\n\n\ndef _is_url(url):\n \"\"\"Check to see if *url* has a valid protocol.\"\"\"\n try:\n return parse_url(url).scheme in _VALID_URLS\n except:\n return False\n\n\ndef read_file(filename, **kwargs):\n \"\"\"\n Returns a GeoDataFrame from a file or URL.\n\n Parameters\n ----------\n filename: str\n Either the absolute or relative path to the file or URL to\n be opened.\n **kwargs:\n Keyword args to be passed to the `open` or `BytesCollection` method\n in the fiona library when opening the file. For more information on\n possible keywords, type:\n ``import fiona; help(fiona.open)``\n\n Examples\n --------\n >>> df = geopandas.read_file(\"nybb.shp\")\n\n Returns\n -------\n geodataframe : GeoDataFrame\n \"\"\"\n bbox = kwargs.pop('bbox', None)\n if _is_url(filename):\n req = _urlopen(filename)\n path_or_bytes = req.read()\n reader = fiona.BytesCollection\n else:\n path_or_bytes = filename\n reader = fiona.open\n\n with reader(path_or_bytes, **kwargs) as features:\n crs = features.crs\n if bbox is not None:\n assert len(bbox) == 4\n f_filt = features.filter(bbox=bbox)\n else:\n f_filt = features\n\n columns = list(features.meta[\"schema\"][\"properties\"]) + [\"geometry\"]\n gdf = GeoDataFrame.from_features(f_filt, crs=crs, columns=columns)\n\n return gdf\n\n\ndef to_file(df, filename, driver=\"ESRI Shapefile\", schema=None,\n **kwargs):\n \"\"\"\n Write this GeoDataFrame to an OGR data source\n\n A dictionary of supported OGR providers is available via:\n >>> import fiona\n >>> fiona.supported_drivers\n\n Parameters\n ----------\n df : GeoDataFrame to be written\n filename : string\n File path or file handle to write to.\n driver : string, default 'ESRI Shapefile'\n The OGR format driver used to write the vector file.\n schema : dict, default None\n If specified, the schema dictionary is passed to Fiona to\n better control how the file is written. 
If None, GeoPandas\n will determine the schema based on each column's dtype\n\n The *kwargs* are passed to fiona.open and can be used to write\n to multi-layer data, store data within archives (zip files), etc.\n \"\"\"\n if schema is None:\n schema = infer_schema(df)\n filename = os.path.abspath(os.path.expanduser(filename))\n with fiona.drivers():\n with fiona.open(filename, 'w', driver=driver, crs=df.crs,\n schema=schema, **kwargs) as colxn:\n colxn.writerecords(df.iterfeatures())\n\n\ndef infer_schema(df):\n try:\n from collections import OrderedDict\n except ImportError:\n from ordereddict import OrderedDict\n\n def convert_type(column, in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n if out_type == 'bool':\n raise ValueError('column \"{}\" is boolean type, '.format(column) +\n 'which is unsupported in file writing. '\n 'Consider casting the column to int type.')\n return out_type\n\n properties = OrderedDict([\n (col, convert_type(col, _type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n\n geom_type = _common_geom_type(df)\n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n\n schema = {'geometry': geom_type, 'properties': properties}\n\n return schema\n\n\ndef _common_geom_type(df):\n # Need to check geom_types before we write to file...\n # Some (most?) providers expect a single geometry type:\n # Point, LineString, or Polygon\n geom_types = df.geometry.geom_type.unique()\n\n from os.path import commonprefix\n # use reversed geom types and commonprefix to find the common suffix,\n # then reverse the result to get back to a geom type\n geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1]\n if not geom_type:\n return None\n\n if df.geometry.has_z.any():\n geom_type = \"3D \" + geom_type\n\n return geom_type\n", "path": "geopandas/io/file.py"}]}
| 2,119 | 129 |
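The golden diff for this record adds an emptiness guard inside `infer_schema` before the geometry-type check, so that writing an empty GeoDataFrame fails with an explicit message instead of the misleading "multiple geometry types" error described in the issue. A minimal sketch of the behaviour from the caller's side, assuming the patched `infer_schema` from the diff above (the frame contents and file name are illustrative; construction details can vary by geopandas version):

```python
import geopandas

# Hypothetical empty frame; the column names are illustrative only.
gdf = geopandas.GeoDataFrame({"name": [], "geometry": []})

try:
    gdf.to_file("empty.shp")  # schema inference runs before any file I/O
except ValueError as err:
    # With the patch applied: "Cannot write empty DataFrame to file."
    # Without it, the confusing "mutiple geometry types" error from the report is raised.
    print(err)
```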
gh_patches_debug_38682
|
rasdani/github-patches
|
git_diff
|
DistrictDataLabs__yellowbrick-410
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove visualizer tests in type checking
Many of the type checking utilities, e.g. `is_classifier`, `is_regressor`, etc. have a note to remove lines of code that are unnecessary after #90 is implemented.
For example see: [yellowbrick/utils/types.py#L64](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/utils/types.py#L64):
```python
def is_classifier(estimator):
"""
Returns True if the given estimator is (probably) a classifier.
Parameters
----------
estimator : class or instance
The object to test if it is a Scikit-Learn clusterer, especially a
Scikit-Learn estimator or Yellowbrick visualizer
See also
--------
is_classifier
`sklearn.is_classifier() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L518>`_
"""
# TODO: once we make ScoreVisualizer and ModelVisualizer pass through
# wrappers as in Issue #90, these three lines become unnecessary.
# NOTE: This must be imported here to avoid recursive import.
from yellowbrick.base import Visualizer
if isinstance(estimator, Visualizer):
return is_classifier(estimator.estimator)
# Test the _estimator_type property
return getattr(estimator, "_estimator_type", None) == "classifier"
# Alias for closer name to isinstance and issubclass
isclassifier = is_classifier
```
We should remove these lines of code and **ensure the tests have correct coverage**.
</issue>
<code>
[start of yellowbrick/utils/types.py]
1 # yellowbrick.utils.types
2 # Detection utilities for Scikit-Learn and Numpy types for flexibility
3 #
4 # Author: Benjamin Bengfort <[email protected]>
5 # Created: Fri May 19 10:51:13 2017 -0700
6 #
7 # Copyright (C) 2017 District Data Labs
8 # For license information, see LICENSE.txt
9 #
10 # ID: types.py [79cd8cf] [email protected] $
11
12 """
13 Detection utilities for Scikit-Learn and Numpy types for flexibility
14 """
15
16 ##########################################################################
17 ## Imports
18 ##########################################################################
19
20 import inspect
21 import numpy as np
22
23 from sklearn.base import BaseEstimator
24
25
26 ##########################################################################
27 ## Model Type checking utilities
28 ##########################################################################
29
30 def is_estimator(model):
31 """
32 Determines if a model is an estimator using issubclass and isinstance.
33
34 Parameters
35 ----------
36 estimator : class or instance
37 The object to test if it is a Scikit-Learn clusterer, especially a
38 Scikit-Learn estimator or Yellowbrick visualizer
39 """
40 if inspect.isclass(model):
41 return issubclass(model, BaseEstimator)
42
43 return isinstance(model, BaseEstimator)
44
45 # Alias for closer name to isinstance and issubclass
46 isestimator = is_estimator
47
48
49 def is_classifier(estimator):
50 """
51 Returns True if the given estimator is (probably) a classifier.
52
53 Parameters
54 ----------
55 estimator : class or instance
56 The object to test if it is a Scikit-Learn clusterer, especially a
57 Scikit-Learn estimator or Yellowbrick visualizer
58
59 See also
60 --------
61 is_classifier
62 `sklearn.is_classifier() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L518>`_
63 """
64 # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
65 # wrappers as in Issue #90, these three lines become unnecessary.
66 # NOTE: This must be imported here to avoid recursive import.
67 from yellowbrick.base import Visualizer
68 if isinstance(estimator, Visualizer):
69 return is_classifier(estimator.estimator)
70
71 # Test the _estimator_type property
72 return getattr(estimator, "_estimator_type", None) == "classifier"
73
74 # Alias for closer name to isinstance and issubclass
75 isclassifier = is_classifier
76
77
78 def is_regressor(estimator):
79 """
80 Returns True if the given estimator is (probably) a regressor.
81
82 Parameters
83 ----------
84 estimator : class or instance
85 The object to test if it is a Scikit-Learn clusterer, especially a
86 Scikit-Learn estimator or Yellowbrick visualizer
87
88 See also
89 --------
90 is_regressor
91 `sklearn.is_regressor() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L531>`_
92 """
93 # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
94 # wrappers as in Issue #90, these three lines become unnecessary.
95 # NOTE: This must be imported here to avoid recursive import.
96 from yellowbrick.base import Visualizer
97 if isinstance(estimator, Visualizer):
98 return is_regressor(estimator.estimator)
99
100 # Test the _estimator_type property
101 return getattr(estimator, "_estimator_type", None) == "regressor"
102
103 # Alias for closer name to isinstance and issubclass
104 isregressor = is_regressor
105
106
107 def is_clusterer(estimator):
108 """
109 Returns True if the given estimator is a clusterer.
110
111 Parameters
112 ----------
113 estimator : class or instance
114 The object to test if it is a Scikit-Learn clusterer, especially a
115 Scikit-Learn estimator or Yellowbrick visualizer
116 """
117 # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
118 # wrappers as in Issue #90, these three lines become unnecessary.
119 # NOTE: This must be imported here to avoid recursive import.
120 from yellowbrick.base import Visualizer
121 if isinstance(estimator, Visualizer):
122 return is_clusterer(estimator.estimator)
123
124 # Test the _estimator_type property
125 return getattr(estimator, "_estimator_type", None) == "clusterer"
126
127 # Alias for closer name to isinstance and issubclass
128 isclusterer = is_clusterer
129
130
131 def is_gridsearch(estimator):
132 """
133 Returns True if the given estimator is a clusterer.
134
135 Parameters
136 ----------
137 estimator : class or instance
138 The object to test if it is a Scikit-Learn clusterer, especially a
139 Scikit-Learn estimator or Yellowbrick visualizer
140 """
141 # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
142 # wrappers as in Issue #90, these three lines become unnecessary.
143 # NOTE: This must be imported here to avoid recursive import.
144 from yellowbrick.base import Visualizer
145 if isinstance(estimator, Visualizer):
146 return is_gridsearch(estimator.estimator)
147
148 from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
149
150 if inspect.isclass(estimator):
151 return issubclass(estimator, (GridSearchCV, RandomizedSearchCV))
152
153 return isinstance(estimator, (GridSearchCV, RandomizedSearchCV))
154
155
156 # Alias for closer name to isinstance and issubclass
157 isgridsearch = is_gridsearch
158
159
160 def is_probabilistic(estimator):
161 """
162 Returns True if the given estimator returns a y_score for it's decision
163 function, e.g. has ``predict_proba`` or ``decision_function`` methods.
164
165 Parameters
166 ----------
167 estimator : class or instance
168 The object to test if is probabilistic, especially a Scikit-Learn
169 estimator or Yellowbrick visualizer.
170 """
171 return any([
172 hasattr(estimator, 'predict_proba'),
173 hasattr(estimator, 'decision_function'),
174 ])
175
176 # Alias for closer name to isinstance and issubclass
177 isprobabilistic = is_probabilistic
178
179
180 ##########################################################################
181 ## Data Type checking utilities
182 ##########################################################################
183
184 def is_dataframe(obj):
185 """
186 Returns True if the given object is a Pandas Data Frame.
187
188 Parameters
189 ----------
190 obj: instance
191 The object to test whether or not is a Pandas DataFrame.
192 """
193 try:
194 # This is the best method of type checking
195 from pandas import DataFrame
196 return isinstance(obj, DataFrame)
197 except ImportError:
198 # Pandas is not a dependency, so this is scary
199 return obj.__class__.__name__ == "DataFrame"
200
201 # Alias for closer name to isinstance and issubclass
202 isdataframe = is_dataframe
203
204
205 def is_series(obj):
206 """
207 Returns True if the given object is a Pandas Series.
208
209 Parameters
210 ----------
211 obj: instance
212 The object to test whether or not is a Pandas Series.
213 """
214 try:
215 # This is the best method of type checking
216 from pandas import Series
217 return isinstance(obj, Series)
218 except ImportError:
219 # Pandas is not a dependency, so this is scary
220 return obj.__class__.__name__ == "Series"
221
222 # Alias for closer name to isinstance and issubclass
223 isseries = is_series
224
225
226 def is_structured_array(obj):
227 """
228 Returns True if the given object is a Numpy Structured Array.
229
230 Parameters
231 ----------
232 obj: instance
233 The object to test whether or not is a Numpy Structured Array.
234 """
235 if isinstance(obj, np.ndarray) and hasattr(obj, 'dtype'):
236 if obj.dtype.names is not None:
237 return True
238 return False
239
240
241 # Alias for closer name to isinstance and issubclass
242 isstructuredarray = is_structured_array
243
[end of yellowbrick/utils/types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yellowbrick/utils/types.py b/yellowbrick/utils/types.py
--- a/yellowbrick/utils/types.py
+++ b/yellowbrick/utils/types.py
@@ -61,12 +61,6 @@
is_classifier
`sklearn.is_classifier() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L518>`_
"""
- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
- # wrappers as in Issue #90, these three lines become unnecessary.
- # NOTE: This must be imported here to avoid recursive import.
- from yellowbrick.base import Visualizer
- if isinstance(estimator, Visualizer):
- return is_classifier(estimator.estimator)
# Test the _estimator_type property
return getattr(estimator, "_estimator_type", None) == "classifier"
@@ -90,12 +84,6 @@
is_regressor
`sklearn.is_regressor() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L531>`_
"""
- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
- # wrappers as in Issue #90, these three lines become unnecessary.
- # NOTE: This must be imported here to avoid recursive import.
- from yellowbrick.base import Visualizer
- if isinstance(estimator, Visualizer):
- return is_regressor(estimator.estimator)
# Test the _estimator_type property
return getattr(estimator, "_estimator_type", None) == "regressor"
@@ -114,12 +102,6 @@
The object to test if it is a Scikit-Learn clusterer, especially a
Scikit-Learn estimator or Yellowbrick visualizer
"""
- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
- # wrappers as in Issue #90, these three lines become unnecessary.
- # NOTE: This must be imported here to avoid recursive import.
- from yellowbrick.base import Visualizer
- if isinstance(estimator, Visualizer):
- return is_clusterer(estimator.estimator)
# Test the _estimator_type property
return getattr(estimator, "_estimator_type", None) == "clusterer"
@@ -138,12 +120,6 @@
The object to test if it is a Scikit-Learn clusterer, especially a
Scikit-Learn estimator or Yellowbrick visualizer
"""
- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
- # wrappers as in Issue #90, these three lines become unnecessary.
- # NOTE: This must be imported here to avoid recursive import.
- from yellowbrick.base import Visualizer
- if isinstance(estimator, Visualizer):
- return is_gridsearch(estimator.estimator)
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
|
{"golden_diff": "diff --git a/yellowbrick/utils/types.py b/yellowbrick/utils/types.py\n--- a/yellowbrick/utils/types.py\n+++ b/yellowbrick/utils/types.py\n@@ -61,12 +61,6 @@\n is_classifier\n `sklearn.is_classifier() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L518>`_\n \"\"\"\n- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through\n- # wrappers as in Issue #90, these three lines become unnecessary.\n- # NOTE: This must be imported here to avoid recursive import.\n- from yellowbrick.base import Visualizer\n- if isinstance(estimator, Visualizer):\n- return is_classifier(estimator.estimator)\n \n # Test the _estimator_type property\n return getattr(estimator, \"_estimator_type\", None) == \"classifier\"\n@@ -90,12 +84,6 @@\n is_regressor\n `sklearn.is_regressor() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L531>`_\n \"\"\"\n- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through\n- # wrappers as in Issue #90, these three lines become unnecessary.\n- # NOTE: This must be imported here to avoid recursive import.\n- from yellowbrick.base import Visualizer\n- if isinstance(estimator, Visualizer):\n- return is_regressor(estimator.estimator)\n \n # Test the _estimator_type property\n return getattr(estimator, \"_estimator_type\", None) == \"regressor\"\n@@ -114,12 +102,6 @@\n The object to test if it is a Scikit-Learn clusterer, especially a\n Scikit-Learn estimator or Yellowbrick visualizer\n \"\"\"\n- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through\n- # wrappers as in Issue #90, these three lines become unnecessary.\n- # NOTE: This must be imported here to avoid recursive import.\n- from yellowbrick.base import Visualizer\n- if isinstance(estimator, Visualizer):\n- return is_clusterer(estimator.estimator)\n \n # Test the _estimator_type property\n return getattr(estimator, \"_estimator_type\", None) == \"clusterer\"\n@@ -138,12 +120,6 @@\n The object to test if it is a Scikit-Learn clusterer, especially a\n Scikit-Learn estimator or Yellowbrick visualizer\n \"\"\"\n- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through\n- # wrappers as in Issue #90, these three lines become unnecessary.\n- # NOTE: This must be imported here to avoid recursive import.\n- from yellowbrick.base import Visualizer\n- if isinstance(estimator, Visualizer):\n- return is_gridsearch(estimator.estimator)\n \n from sklearn.model_selection import GridSearchCV, RandomizedSearchCV\n", "issue": "Remove visualizer tests in type checking\nMany of the type checking utilities, e.g. `is_classifier`, `is_regressor`, etc. have a note to remove lines of code that are unnecessary after #90 is implemented. 
\r\n\r\nFor example see: [yellowbrick/utils/types.py#L64](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/utils/types.py#L64):\r\n\r\n```python\r\ndef is_classifier(estimator):\r\n \"\"\"\r\n Returns True if the given estimator is (probably) a classifier.\r\n\r\n Parameters\r\n ----------\r\n estimator : class or instance\r\n The object to test if it is a Scikit-Learn clusterer, especially a\r\n Scikit-Learn estimator or Yellowbrick visualizer\r\n\r\n See also\r\n --------\r\n is_classifier\r\n `sklearn.is_classifier() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L518>`_\r\n \"\"\"\r\n # TODO: once we make ScoreVisualizer and ModelVisualizer pass through\r\n # wrappers as in Issue #90, these three lines become unnecessary.\r\n # NOTE: This must be imported here to avoid recursive import.\r\n from yellowbrick.base import Visualizer\r\n if isinstance(estimator, Visualizer):\r\n return is_classifier(estimator.estimator)\r\n\r\n # Test the _estimator_type property\r\n return getattr(estimator, \"_estimator_type\", None) == \"classifier\"\r\n\r\n# Alias for closer name to isinstance and issubclass\r\nisclassifier = is_classifier\r\n```\r\n\r\nWe should remove these lines of code and **ensure the tests have correct coverage**. \n", "before_files": [{"content": "# yellowbrick.utils.types\n# Detection utilities for Scikit-Learn and Numpy types for flexibility\n#\n# Author: Benjamin Bengfort <[email protected]>\n# Created: Fri May 19 10:51:13 2017 -0700\n#\n# Copyright (C) 2017 District Data Labs\n# For license information, see LICENSE.txt\n#\n# ID: types.py [79cd8cf] [email protected] $\n\n\"\"\"\nDetection utilities for Scikit-Learn and Numpy types for flexibility\n\"\"\"\n\n##########################################################################\n## Imports\n##########################################################################\n\nimport inspect\nimport numpy as np\n\nfrom sklearn.base import BaseEstimator\n\n\n##########################################################################\n## Model Type checking utilities\n##########################################################################\n\ndef is_estimator(model):\n \"\"\"\n Determines if a model is an estimator using issubclass and isinstance.\n\n Parameters\n ----------\n estimator : class or instance\n The object to test if it is a Scikit-Learn clusterer, especially a\n Scikit-Learn estimator or Yellowbrick visualizer\n \"\"\"\n if inspect.isclass(model):\n return issubclass(model, BaseEstimator)\n\n return isinstance(model, BaseEstimator)\n\n# Alias for closer name to isinstance and issubclass\nisestimator = is_estimator\n\n\ndef is_classifier(estimator):\n \"\"\"\n Returns True if the given estimator is (probably) a classifier.\n\n Parameters\n ----------\n estimator : class or instance\n The object to test if it is a Scikit-Learn clusterer, especially a\n Scikit-Learn estimator or Yellowbrick visualizer\n\n See also\n --------\n is_classifier\n `sklearn.is_classifier() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L518>`_\n \"\"\"\n # TODO: once we make ScoreVisualizer and ModelVisualizer pass through\n # wrappers as in Issue #90, these three lines become unnecessary.\n # NOTE: This must be imported here to avoid recursive import.\n from yellowbrick.base import Visualizer\n if isinstance(estimator, Visualizer):\n return is_classifier(estimator.estimator)\n\n # Test the _estimator_type property\n return getattr(estimator, \"_estimator_type\", None) 
== \"classifier\"\n\n# Alias for closer name to isinstance and issubclass\nisclassifier = is_classifier\n\n\ndef is_regressor(estimator):\n \"\"\"\n Returns True if the given estimator is (probably) a regressor.\n\n Parameters\n ----------\n estimator : class or instance\n The object to test if it is a Scikit-Learn clusterer, especially a\n Scikit-Learn estimator or Yellowbrick visualizer\n\n See also\n --------\n is_regressor\n `sklearn.is_regressor() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L531>`_\n \"\"\"\n # TODO: once we make ScoreVisualizer and ModelVisualizer pass through\n # wrappers as in Issue #90, these three lines become unnecessary.\n # NOTE: This must be imported here to avoid recursive import.\n from yellowbrick.base import Visualizer\n if isinstance(estimator, Visualizer):\n return is_regressor(estimator.estimator)\n\n # Test the _estimator_type property\n return getattr(estimator, \"_estimator_type\", None) == \"regressor\"\n\n# Alias for closer name to isinstance and issubclass\nisregressor = is_regressor\n\n\ndef is_clusterer(estimator):\n \"\"\"\n Returns True if the given estimator is a clusterer.\n\n Parameters\n ----------\n estimator : class or instance\n The object to test if it is a Scikit-Learn clusterer, especially a\n Scikit-Learn estimator or Yellowbrick visualizer\n \"\"\"\n # TODO: once we make ScoreVisualizer and ModelVisualizer pass through\n # wrappers as in Issue #90, these three lines become unnecessary.\n # NOTE: This must be imported here to avoid recursive import.\n from yellowbrick.base import Visualizer\n if isinstance(estimator, Visualizer):\n return is_clusterer(estimator.estimator)\n\n # Test the _estimator_type property\n return getattr(estimator, \"_estimator_type\", None) == \"clusterer\"\n\n# Alias for closer name to isinstance and issubclass\nisclusterer = is_clusterer\n\n\ndef is_gridsearch(estimator):\n \"\"\"\n Returns True if the given estimator is a clusterer.\n\n Parameters\n ----------\n estimator : class or instance\n The object to test if it is a Scikit-Learn clusterer, especially a\n Scikit-Learn estimator or Yellowbrick visualizer\n \"\"\"\n # TODO: once we make ScoreVisualizer and ModelVisualizer pass through\n # wrappers as in Issue #90, these three lines become unnecessary.\n # NOTE: This must be imported here to avoid recursive import.\n from yellowbrick.base import Visualizer\n if isinstance(estimator, Visualizer):\n return is_gridsearch(estimator.estimator)\n\n from sklearn.model_selection import GridSearchCV, RandomizedSearchCV\n\n if inspect.isclass(estimator):\n return issubclass(estimator, (GridSearchCV, RandomizedSearchCV))\n\n return isinstance(estimator, (GridSearchCV, RandomizedSearchCV))\n\n\n# Alias for closer name to isinstance and issubclass\nisgridsearch = is_gridsearch\n\n\ndef is_probabilistic(estimator):\n \"\"\"\n Returns True if the given estimator returns a y_score for it's decision\n function, e.g. 
has ``predict_proba`` or ``decision_function`` methods.\n\n Parameters\n ----------\n estimator : class or instance\n The object to test if is probabilistic, especially a Scikit-Learn\n estimator or Yellowbrick visualizer.\n \"\"\"\n return any([\n hasattr(estimator, 'predict_proba'),\n hasattr(estimator, 'decision_function'),\n ])\n\n# Alias for closer name to isinstance and issubclass\nisprobabilistic = is_probabilistic\n\n\n##########################################################################\n## Data Type checking utilities\n##########################################################################\n\ndef is_dataframe(obj):\n \"\"\"\n Returns True if the given object is a Pandas Data Frame.\n\n Parameters\n ----------\n obj: instance\n The object to test whether or not is a Pandas DataFrame.\n \"\"\"\n try:\n # This is the best method of type checking\n from pandas import DataFrame\n return isinstance(obj, DataFrame)\n except ImportError:\n # Pandas is not a dependency, so this is scary\n return obj.__class__.__name__ == \"DataFrame\"\n\n# Alias for closer name to isinstance and issubclass\nisdataframe = is_dataframe\n\n\ndef is_series(obj):\n \"\"\"\n Returns True if the given object is a Pandas Series.\n\n Parameters\n ----------\n obj: instance\n The object to test whether or not is a Pandas Series.\n \"\"\"\n try:\n # This is the best method of type checking\n from pandas import Series\n return isinstance(obj, Series)\n except ImportError:\n # Pandas is not a dependency, so this is scary\n return obj.__class__.__name__ == \"Series\"\n\n# Alias for closer name to isinstance and issubclass\nisseries = is_series\n\n\ndef is_structured_array(obj):\n \"\"\"\n Returns True if the given object is a Numpy Structured Array.\n\n Parameters\n ----------\n obj: instance\n The object to test whether or not is a Numpy Structured Array.\n \"\"\"\n if isinstance(obj, np.ndarray) and hasattr(obj, 'dtype'):\n if obj.dtype.names is not None:\n return True\n return False\n\n\n# Alias for closer name to isinstance and issubclass\nisstructuredarray = is_structured_array\n", "path": "yellowbrick/utils/types.py"}]}
| 3,180 | 666 |
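With the pass-through blocks removed by the diff above, each helper in `yellowbrick/utils/types.py` reduces to a plain `_estimator_type` (or `issubclass`) check. The issue also asks that test coverage stay correct; a hedged sketch of tests for the simplified helpers is below, covering plain scikit-learn estimators only, since the wrapped-Visualizer case depends on the attribute forwarding promised by Issue #90, which is not shown in this record:

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import GridSearchCV

from yellowbrick.utils.types import (
    is_classifier, is_clusterer, is_gridsearch, is_regressor,
)


def test_type_checks_on_plain_estimators():
    # Each assertion relies only on scikit-learn's _estimator_type markers.
    assert is_classifier(LogisticRegression()) and not is_classifier(Ridge())
    assert is_regressor(Ridge()) and not is_regressor(LogisticRegression())
    assert is_clusterer(KMeans()) and not is_clusterer(Ridge())
    assert is_gridsearch(GridSearchCV(Ridge(), {"alpha": [0.1, 1.0]}))
```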
gh_patches_debug_26229
|
rasdani/github-patches
|
git_diff
|
searx__searx-2799
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Updating languages of qwant is broken
The URL used for retrieving available languages has gone:
```
engines_languages = fetch_supported_languages()
File "/home/runner/work/searx/searx/./searx_extra/update/update_languages.py", line 30, in fetch_supported_languages
engines_languages[engine_name] = engines[engine_name].fetch_supported_languages()
File "/home/runner/work/searx/searx/searx/engines/__init__.py", line 141, in <lambda>
lambda: engine._fetch_supported_languages(get(engine.supported_languages_url, headers=headers)))
File "/home/runner/work/searx/searx/searx/poolrequests.py", line 209, in get
return request('get', url, **kwargs)
File "/home/runner/work/searx/searx/searx/poolrequests.py", line 202, in request
raise_for_httperror(response)
File "/home/runner/work/searx/searx/searx/raise_for_httperror.py", line 66, in raise_for_httperror
resp.raise_for_status()
File "/home/runner/work/searx/searx/local/py3/lib/python3.9/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://www.qwant.com/region
```
Full run: https://github.com/searx/searx/runs/2487497774?check_suite_focus=true
@MarcAbonce If you have time could you please look into it? Or at least give some pointers how it can be fixed?
</issue>
<code>
[start of searx/engines/qwant.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 """
3 Qwant (Web, Images, News, Social)
4 """
5
6 from datetime import datetime
7 from json import loads
8 from urllib.parse import urlencode
9 from searx.utils import html_to_text, match_language
10 from searx.exceptions import SearxEngineAPIException, SearxEngineCaptchaException
11 from searx.raise_for_httperror import raise_for_httperror
12
13 # about
14 about = {
15 "website": 'https://www.qwant.com/',
16 "wikidata_id": 'Q14657870',
17 "official_api_documentation": None,
18 "use_official_api": True,
19 "require_api_key": False,
20 "results": 'JSON',
21 }
22
23 # engine dependent config
24 categories = []
25 paging = True
26 supported_languages_url = 'https://qwant.com/region'
27
28 category_to_keyword = {'general': 'web',
29 'images': 'images',
30 'news': 'news'}
31
32 # search-url
33 url = 'https://api.qwant.com/api/search/{keyword}?count=10&offset={offset}&f=&{query}&t={keyword}&uiv=4'
34
35
36 # do search-request
37 def request(query, params):
38 offset = (params['pageno'] - 1) * 10
39
40 if categories[0] and categories[0] in category_to_keyword:
41
42 params['url'] = url.format(keyword=category_to_keyword[categories[0]],
43 query=urlencode({'q': query}),
44 offset=offset)
45 else:
46 params['url'] = url.format(keyword='web',
47 query=urlencode({'q': query}),
48 offset=offset)
49
50 # add language tag
51 if params['language'] != 'all':
52 language = match_language(params['language'], supported_languages, language_aliases)
53 params['url'] += '&locale=' + language.replace('-', '_').lower()
54
55 params['headers']['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64; rv:69.0) Gecko/20100101 Firefox/69.0'
56 params['raise_for_httperror'] = False
57 return params
58
59
60 # get response from search-request
61 def response(resp):
62 results = []
63
64 # According to https://www.qwant.com/js/app.js
65 if resp.status_code == 429:
66 raise SearxEngineCaptchaException()
67
68 # raise for other errors
69 raise_for_httperror(resp)
70
71 # load JSON result
72 search_results = loads(resp.text)
73
74 # check for an API error
75 if search_results.get('status') != 'success':
76 raise SearxEngineAPIException('API error ' + str(search_results.get('error', '')))
77
78 # return empty array if there are no results
79 if 'data' not in search_results:
80 return []
81
82 data = search_results.get('data', {})
83
84 res = data.get('result', {})
85
86 # parse results
87 for result in res.get('items', {}):
88
89 title = html_to_text(result['title'])
90 res_url = result['url']
91 content = html_to_text(result['desc'])
92
93 if category_to_keyword.get(categories[0], '') == 'web':
94 results.append({'title': title,
95 'content': content,
96 'url': res_url})
97
98 elif category_to_keyword.get(categories[0], '') == 'images':
99 thumbnail_src = result['thumbnail']
100 img_src = result['media']
101 results.append({'template': 'images.html',
102 'url': res_url,
103 'title': title,
104 'content': '',
105 'thumbnail_src': thumbnail_src,
106 'img_src': img_src})
107
108 elif category_to_keyword.get(categories[0], '') == 'news':
109 published_date = datetime.fromtimestamp(result['date'], None)
110 media = result.get('media', [])
111 if len(media) > 0:
112 img_src = media[0].get('pict', {}).get('url', None)
113 else:
114 img_src = None
115 results.append({'url': res_url,
116 'title': title,
117 'publishedDate': published_date,
118 'content': content,
119 'img_src': img_src})
120
121 return results
122
123
124 # get supported languages from their site
125 def _fetch_supported_languages(resp):
126 # list of regions is embedded in page as a js object
127 response_text = resp.text
128 response_text = response_text[response_text.find('regionalisation'):]
129 response_text = response_text[response_text.find('{'):response_text.find(');')]
130
131 regions_json = loads(response_text)
132
133 supported_languages = {}
134 for lang in regions_json['languages'].values():
135 for country in lang['countries']:
136 lang_code = "{lang}-{country}".format(lang=lang['code'], country=country)
137 supported_languages[lang_code] = {'name': lang['name']}
138
139 return supported_languages
140
[end of searx/engines/qwant.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/qwant.py b/searx/engines/qwant.py
--- a/searx/engines/qwant.py
+++ b/searx/engines/qwant.py
@@ -23,7 +23,7 @@
# engine dependent config
categories = []
paging = True
-supported_languages_url = 'https://qwant.com/region'
+supported_languages_url = about['website']
category_to_keyword = {'general': 'web',
'images': 'images',
@@ -125,15 +125,15 @@
def _fetch_supported_languages(resp):
# list of regions is embedded in page as a js object
response_text = resp.text
- response_text = response_text[response_text.find('regionalisation'):]
- response_text = response_text[response_text.find('{'):response_text.find(');')]
+ response_text = response_text[response_text.find('INITIAL_PROPS'):]
+ response_text = response_text[response_text.find('{'):response_text.find('</script>')]
regions_json = loads(response_text)
- supported_languages = {}
- for lang in regions_json['languages'].values():
- for country in lang['countries']:
- lang_code = "{lang}-{country}".format(lang=lang['code'], country=country)
- supported_languages[lang_code] = {'name': lang['name']}
+ supported_languages = []
+ for country, langs in regions_json['locales'].items():
+ for lang in langs['langs']:
+ lang_code = "{lang}-{country}".format(lang=lang, country=country)
+ supported_languages.append(lang_code)
return supported_languages
|
{"golden_diff": "diff --git a/searx/engines/qwant.py b/searx/engines/qwant.py\n--- a/searx/engines/qwant.py\n+++ b/searx/engines/qwant.py\n@@ -23,7 +23,7 @@\n # engine dependent config\n categories = []\n paging = True\n-supported_languages_url = 'https://qwant.com/region'\n+supported_languages_url = about['website']\n \n category_to_keyword = {'general': 'web',\n 'images': 'images',\n@@ -125,15 +125,15 @@\n def _fetch_supported_languages(resp):\n # list of regions is embedded in page as a js object\n response_text = resp.text\n- response_text = response_text[response_text.find('regionalisation'):]\n- response_text = response_text[response_text.find('{'):response_text.find(');')]\n+ response_text = response_text[response_text.find('INITIAL_PROPS'):]\n+ response_text = response_text[response_text.find('{'):response_text.find('</script>')]\n \n regions_json = loads(response_text)\n \n- supported_languages = {}\n- for lang in regions_json['languages'].values():\n- for country in lang['countries']:\n- lang_code = \"{lang}-{country}\".format(lang=lang['code'], country=country)\n- supported_languages[lang_code] = {'name': lang['name']}\n+ supported_languages = []\n+ for country, langs in regions_json['locales'].items():\n+ for lang in langs['langs']:\n+ lang_code = \"{lang}-{country}\".format(lang=lang, country=country)\n+ supported_languages.append(lang_code)\n \n return supported_languages\n", "issue": "Updating languages of qwant is broken\nThe URL used for retrieving available languages has gone:\r\n\r\n```\r\n engines_languages = fetch_supported_languages()\r\n File \"/home/runner/work/searx/searx/./searx_extra/update/update_languages.py\", line 30, in fetch_supported_languages\r\n engines_languages[engine_name] = engines[engine_name].fetch_supported_languages()\r\n File \"/home/runner/work/searx/searx/searx/engines/__init__.py\", line 141, in <lambda>\r\n lambda: engine._fetch_supported_languages(get(engine.supported_languages_url, headers=headers)))\r\n File \"/home/runner/work/searx/searx/searx/poolrequests.py\", line 209, in get\r\n return request('get', url, **kwargs)\r\n File \"/home/runner/work/searx/searx/searx/poolrequests.py\", line 202, in request\r\n raise_for_httperror(response)\r\n File \"/home/runner/work/searx/searx/searx/raise_for_httperror.py\", line 66, in raise_for_httperror\r\n resp.raise_for_status()\r\n File \"/home/runner/work/searx/searx/local/py3/lib/python3.9/site-packages/requests/models.py\", line 943, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://www.qwant.com/region\r\n```\r\n\r\nFull run: https://github.com/searx/searx/runs/2487497774?check_suite_focus=true\r\n\r\n@MarcAbonce If you have time could you please look into it? 
Or at least give some pointers how it can be fixed?\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n Qwant (Web, Images, News, Social)\n\"\"\"\n\nfrom datetime import datetime\nfrom json import loads\nfrom urllib.parse import urlencode\nfrom searx.utils import html_to_text, match_language\nfrom searx.exceptions import SearxEngineAPIException, SearxEngineCaptchaException\nfrom searx.raise_for_httperror import raise_for_httperror\n\n# about\nabout = {\n \"website\": 'https://www.qwant.com/',\n \"wikidata_id\": 'Q14657870',\n \"official_api_documentation\": None,\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\n# engine dependent config\ncategories = []\npaging = True\nsupported_languages_url = 'https://qwant.com/region'\n\ncategory_to_keyword = {'general': 'web',\n 'images': 'images',\n 'news': 'news'}\n\n# search-url\nurl = 'https://api.qwant.com/api/search/{keyword}?count=10&offset={offset}&f=&{query}&t={keyword}&uiv=4'\n\n\n# do search-request\ndef request(query, params):\n offset = (params['pageno'] - 1) * 10\n\n if categories[0] and categories[0] in category_to_keyword:\n\n params['url'] = url.format(keyword=category_to_keyword[categories[0]],\n query=urlencode({'q': query}),\n offset=offset)\n else:\n params['url'] = url.format(keyword='web',\n query=urlencode({'q': query}),\n offset=offset)\n\n # add language tag\n if params['language'] != 'all':\n language = match_language(params['language'], supported_languages, language_aliases)\n params['url'] += '&locale=' + language.replace('-', '_').lower()\n\n params['headers']['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64; rv:69.0) Gecko/20100101 Firefox/69.0'\n params['raise_for_httperror'] = False\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n # According to https://www.qwant.com/js/app.js\n if resp.status_code == 429:\n raise SearxEngineCaptchaException()\n\n # raise for other errors\n raise_for_httperror(resp)\n\n # load JSON result\n search_results = loads(resp.text)\n\n # check for an API error\n if search_results.get('status') != 'success':\n raise SearxEngineAPIException('API error ' + str(search_results.get('error', '')))\n\n # return empty array if there are no results\n if 'data' not in search_results:\n return []\n\n data = search_results.get('data', {})\n\n res = data.get('result', {})\n\n # parse results\n for result in res.get('items', {}):\n\n title = html_to_text(result['title'])\n res_url = result['url']\n content = html_to_text(result['desc'])\n\n if category_to_keyword.get(categories[0], '') == 'web':\n results.append({'title': title,\n 'content': content,\n 'url': res_url})\n\n elif category_to_keyword.get(categories[0], '') == 'images':\n thumbnail_src = result['thumbnail']\n img_src = result['media']\n results.append({'template': 'images.html',\n 'url': res_url,\n 'title': title,\n 'content': '',\n 'thumbnail_src': thumbnail_src,\n 'img_src': img_src})\n\n elif category_to_keyword.get(categories[0], '') == 'news':\n published_date = datetime.fromtimestamp(result['date'], None)\n media = result.get('media', [])\n if len(media) > 0:\n img_src = media[0].get('pict', {}).get('url', None)\n else:\n img_src = None\n results.append({'url': res_url,\n 'title': title,\n 'publishedDate': published_date,\n 'content': content,\n 'img_src': img_src})\n\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n # list of regions is embedded in page as a js 
object\n response_text = resp.text\n response_text = response_text[response_text.find('regionalisation'):]\n response_text = response_text[response_text.find('{'):response_text.find(');')]\n\n regions_json = loads(response_text)\n\n supported_languages = {}\n for lang in regions_json['languages'].values():\n for country in lang['countries']:\n lang_code = \"{lang}-{country}\".format(lang=lang['code'], country=country)\n supported_languages[lang_code] = {'name': lang['name']}\n\n return supported_languages\n", "path": "searx/engines/qwant.py"}]}
| 2,321 | 366 |
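The golden diff above points `supported_languages_url` at the Qwant homepage and rewrites `_fetch_supported_languages` to read the embedded `INITIAL_PROPS` JSON, whose `locales` mapping groups languages per country. A standalone sketch of just that parsing step; the HTML snippet below is fabricated to show the expected shape and is not captured from qwant.com:

```python
from json import loads

# Minimal stand-in for the markup the patched engine expects: an INITIAL_PROPS
# object embedded in a <script> tag, containing a "locales" mapping.
page = (
    '<script>window.INITIAL_PROPS = {"locales": {'
    '"CA": {"langs": ["en", "fr"]}, "DE": {"langs": ["de"]}}}</script>'
)

text = page[page.find('INITIAL_PROPS'):]
text = text[text.find('{'):text.find('</script>')]
regions = loads(text)

supported_languages = []
for country, langs in regions['locales'].items():
    for lang in langs['langs']:
        supported_languages.append("{lang}-{country}".format(lang=lang, country=country))

print(supported_languages)   # ['en-CA', 'fr-CA', 'de-DE']
```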
gh_patches_debug_17273
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-5231
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
graphql_jwt.exceptions.PermissionDenied: You do not have permission to perform this action
### I want to deploy the full Saleor store (saleor/saleor-dashboard/saleor-storefront) with Docker, but I caught an exception saying I have no permission to query the GraphQL API, and the storefront can't load the main site.
…
### Steps to reproduce the problem
1. git clone https://github.com/mirumee/saleor.git
2. cd saleor
3. docker-compose build
4. Continue with the remaining steps as described in the documentation

### What I expected to happen
The site to load normally, as it did before.
### Screenshots

**System information**
Operating system:
Ubuntu 18.04
</issue>
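One way to reproduce the error in the issue above, sketched as an unauthenticated GraphQL request; the endpoint URL is an assumption for a local instance, and the `me` query is chosen because `resolve_me` is decorated with `@login_required` in the schema shown below. Any resolver guarded by `@login_required` or a permission decorator raises graphql_jwt's `PermissionDenied` for anonymous requests, which is what the storefront hits here.

```python
import requests

# Hypothetical local endpoint; adjust host/port to your deployment.
resp = requests.post(
    "http://localhost:8000/graphql/",
    json={"query": "{ me { email } }"},  # `me` requires an authenticated user
)
print(resp.json()["errors"][0]["message"])
# Expected: "You do not have permission to perform this action"
```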
<code>
[start of saleor/graphql/account/schema.py]
1 import graphene
2 from graphql_jwt.decorators import login_required
3
4 from ...core.permissions import AccountPermissions
5 from ..core.fields import FilterInputConnectionField
6 from ..core.types import FilterInputObjectType
7 from ..decorators import one_of_permissions_required, permission_required
8 from .bulk_mutations import CustomerBulkDelete, StaffBulkDelete, UserBulkSetActive
9 from .enums import CountryCodeEnum
10 from .filters import (
11 CustomerFilter,
12 PermissionGroupFilter,
13 ServiceAccountFilter,
14 StaffUserFilter,
15 )
16 from .mutations.account import (
17 AccountAddressCreate,
18 AccountAddressDelete,
19 AccountAddressUpdate,
20 AccountDelete,
21 AccountRegister,
22 AccountRequestDeletion,
23 AccountSetDefaultAddress,
24 AccountUpdate,
25 AccountUpdateMeta,
26 ConfirmEmailChange,
27 RequestEmailChange,
28 )
29 from .mutations.base import (
30 ConfirmAccount,
31 PasswordChange,
32 RequestPasswordReset,
33 SetPassword,
34 UserClearMeta,
35 UserUpdateMeta,
36 )
37 from .mutations.permission_group import (
38 PermissionGroupAssignUsers,
39 PermissionGroupCreate,
40 PermissionGroupDelete,
41 PermissionGroupUnassignUsers,
42 PermissionGroupUpdate,
43 )
44 from .mutations.service_account import (
45 ServiceAccountClearPrivateMeta,
46 ServiceAccountCreate,
47 ServiceAccountDelete,
48 ServiceAccountTokenCreate,
49 ServiceAccountTokenDelete,
50 ServiceAccountUpdate,
51 ServiceAccountUpdatePrivateMeta,
52 )
53 from .mutations.staff import (
54 AddressCreate,
55 AddressDelete,
56 AddressSetDefault,
57 AddressUpdate,
58 CustomerCreate,
59 CustomerDelete,
60 CustomerUpdate,
61 StaffCreate,
62 StaffDelete,
63 StaffUpdate,
64 UserAvatarDelete,
65 UserAvatarUpdate,
66 UserClearPrivateMeta,
67 UserUpdatePrivateMeta,
68 )
69 from .resolvers import (
70 resolve_address,
71 resolve_address_validation_rules,
72 resolve_customers,
73 resolve_permission_groups,
74 resolve_service_accounts,
75 resolve_staff_users,
76 resolve_user,
77 )
78 from .sorters import (
79 PermissionGroupSortingInput,
80 ServiceAccountSortingInput,
81 UserSortingInput,
82 )
83 from .types import Address, AddressValidationData, Group, ServiceAccount, User
84
85
86 class CustomerFilterInput(FilterInputObjectType):
87 class Meta:
88 filterset_class = CustomerFilter
89
90
91 class PermissionGroupFilterInput(FilterInputObjectType):
92 class Meta:
93 filterset_class = PermissionGroupFilter
94
95
96 class StaffUserInput(FilterInputObjectType):
97 class Meta:
98 filterset_class = StaffUserFilter
99
100
101 class ServiceAccountFilterInput(FilterInputObjectType):
102 class Meta:
103 filterset_class = ServiceAccountFilter
104
105
106 class AccountQueries(graphene.ObjectType):
107 address_validation_rules = graphene.Field(
108 AddressValidationData,
109 description="Returns address validation rules.",
110 country_code=graphene.Argument(
111 CountryCodeEnum,
112 description="Two-letter ISO 3166-1 country code.",
113 required=True,
114 ),
115 country_area=graphene.Argument(
116 graphene.String, description="Designation of a region, province or state."
117 ),
118 city=graphene.Argument(graphene.String, description="City or a town name."),
119 city_area=graphene.Argument(
120 graphene.String, description="Sublocality like a district."
121 ),
122 )
123 address = graphene.Field(
124 Address,
125 id=graphene.Argument(
126 graphene.ID, description="ID of an address.", required=True
127 ),
128 description="Look up an address by ID.",
129 )
130 customers = FilterInputConnectionField(
131 User,
132 filter=CustomerFilterInput(description="Filtering options for customers."),
133 sort_by=UserSortingInput(description="Sort customers."),
134 description="List of the shop's customers.",
135 )
136 permission_groups = FilterInputConnectionField(
137 Group,
138 filter=PermissionGroupFilterInput(
139 description="Filtering options for permission groups."
140 ),
141 sort_by=PermissionGroupSortingInput(description="Sort permission groups."),
142 description="List of permission groups.",
143 )
144 permission_group = graphene.Field(
145 Group,
146 id=graphene.Argument(
147 graphene.ID, description="ID of the group.", required=True
148 ),
149 description="Look up permission group by ID.",
150 )
151 me = graphene.Field(User, description="Return the currently authenticated user.")
152 staff_users = FilterInputConnectionField(
153 User,
154 filter=StaffUserInput(description="Filtering options for staff users."),
155 sort_by=UserSortingInput(description="Sort staff users."),
156 description="List of the shop's staff users.",
157 )
158 service_accounts = FilterInputConnectionField(
159 ServiceAccount,
160 filter=ServiceAccountFilterInput(
161 description="Filtering options for service accounts."
162 ),
163 sort_by=ServiceAccountSortingInput(description="Sort service accounts."),
164 description="List of the service accounts.",
165 )
166 service_account = graphene.Field(
167 ServiceAccount,
168 id=graphene.Argument(
169 graphene.ID, description="ID of the service account.", required=True
170 ),
171 description="Look up a service account by ID.",
172 )
173
174 user = graphene.Field(
175 User,
176 id=graphene.Argument(graphene.ID, description="ID of the user.", required=True),
177 description="Look up a user by ID.",
178 )
179
180 def resolve_address_validation_rules(
181 self, info, country_code, country_area=None, city=None, city_area=None
182 ):
183 return resolve_address_validation_rules(
184 info,
185 country_code,
186 country_area=country_area,
187 city=city,
188 city_area=city_area,
189 )
190
191 @permission_required(AccountPermissions.MANAGE_SERVICE_ACCOUNTS)
192 def resolve_service_accounts(self, info, **kwargs):
193 return resolve_service_accounts(info, **kwargs)
194
195 @permission_required(AccountPermissions.MANAGE_SERVICE_ACCOUNTS)
196 def resolve_service_account(self, info, id):
197 return graphene.Node.get_node_from_global_id(info, id, ServiceAccount)
198
199 @permission_required(AccountPermissions.MANAGE_USERS)
200 def resolve_customers(self, info, query=None, **kwargs):
201 return resolve_customers(info, query=query, **kwargs)
202
203 @permission_required(AccountPermissions.MANAGE_STAFF)
204 def resolve_permission_groups(self, info, query=None, **kwargs):
205 return resolve_permission_groups(info, query=query, **kwargs)
206
207 @permission_required(AccountPermissions.MANAGE_STAFF)
208 def resolve_permission_group(self, info, id):
209 return graphene.Node.get_node_from_global_id(info, id, Group)
210
211 @login_required
212 def resolve_me(self, info):
213 return info.context.user
214
215 @permission_required(AccountPermissions.MANAGE_STAFF)
216 def resolve_staff_users(self, info, query=None, **kwargs):
217 return resolve_staff_users(info, query=query, **kwargs)
218
219 @one_of_permissions_required(
220 [AccountPermissions.MANAGE_STAFF, AccountPermissions.MANAGE_USERS]
221 )
222 def resolve_user(self, info, id):
223 return resolve_user(info, id)
224
225 def resolve_address(self, info, id):
226 return resolve_address(info, id)
227
228
229 class AccountMutations(graphene.ObjectType):
230 # Base mutations
231 request_password_reset = RequestPasswordReset.Field()
232 confirm_account = ConfirmAccount.Field()
233 set_password = SetPassword.Field()
234 password_change = PasswordChange.Field()
235 request_email_change = RequestEmailChange.Field()
236 confirm_email_change = ConfirmEmailChange.Field()
237
238 # Account mutations
239 account_address_create = AccountAddressCreate.Field()
240 account_address_update = AccountAddressUpdate.Field()
241 account_address_delete = AccountAddressDelete.Field()
242 account_set_default_address = AccountSetDefaultAddress.Field()
243
244 account_register = AccountRegister.Field()
245 account_update = AccountUpdate.Field()
246 account_request_deletion = AccountRequestDeletion.Field()
247 account_delete = AccountDelete.Field()
248
249 account_update_meta = AccountUpdateMeta.Field()
250
251 # Staff mutations
252 address_create = AddressCreate.Field()
253 address_update = AddressUpdate.Field()
254 address_delete = AddressDelete.Field()
255 address_set_default = AddressSetDefault.Field()
256
257 customer_create = CustomerCreate.Field()
258 customer_update = CustomerUpdate.Field()
259 customer_delete = CustomerDelete.Field()
260 customer_bulk_delete = CustomerBulkDelete.Field()
261
262 staff_create = StaffCreate.Field()
263 staff_update = StaffUpdate.Field()
264 staff_delete = StaffDelete.Field()
265 staff_bulk_delete = StaffBulkDelete.Field()
266
267 user_avatar_update = UserAvatarUpdate.Field()
268 user_avatar_delete = UserAvatarDelete.Field()
269 user_bulk_set_active = UserBulkSetActive.Field()
270
271 user_update_metadata = UserUpdateMeta.Field()
272 user_clear_metadata = UserClearMeta.Field()
273
274 user_update_private_metadata = UserUpdatePrivateMeta.Field()
275 user_clear_private_metadata = UserClearPrivateMeta.Field()
276
277 service_account_create = ServiceAccountCreate.Field()
278 service_account_update = ServiceAccountUpdate.Field()
279 service_account_delete = ServiceAccountDelete.Field()
280
281 service_account_update_private_metadata = ServiceAccountUpdatePrivateMeta.Field()
282 service_account_clear_private_metadata = ServiceAccountClearPrivateMeta.Field()
283
284 service_account_token_create = ServiceAccountTokenCreate.Field()
285 service_account_token_delete = ServiceAccountTokenDelete.Field()
286
287 # Permission group mutations
288 permission_group_create = PermissionGroupCreate.Field()
289 permission_group_update = PermissionGroupUpdate.Field()
290 permission_group_delete = PermissionGroupDelete.Field()
291 permission_group_assign_users = PermissionGroupAssignUsers.Field()
292 permission_group_unassign_users = PermissionGroupUnassignUsers.Field()
293
[end of saleor/graphql/account/schema.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/saleor/graphql/account/schema.py b/saleor/graphql/account/schema.py
--- a/saleor/graphql/account/schema.py
+++ b/saleor/graphql/account/schema.py
@@ -1,5 +1,4 @@
import graphene
-from graphql_jwt.decorators import login_required
from ...core.permissions import AccountPermissions
from ..core.fields import FilterInputConnectionField
@@ -208,9 +207,9 @@
def resolve_permission_group(self, info, id):
return graphene.Node.get_node_from_global_id(info, id, Group)
- @login_required
def resolve_me(self, info):
- return info.context.user
+ user = info.context.user
+ return user if user.is_authenticated else None
@permission_required(AccountPermissions.MANAGE_STAFF)
def resolve_staff_users(self, info, query=None, **kwargs):
|
{"golden_diff": "diff --git a/saleor/graphql/account/schema.py b/saleor/graphql/account/schema.py\n--- a/saleor/graphql/account/schema.py\n+++ b/saleor/graphql/account/schema.py\n@@ -1,5 +1,4 @@\n import graphene\n-from graphql_jwt.decorators import login_required\n \n from ...core.permissions import AccountPermissions\n from ..core.fields import FilterInputConnectionField\n@@ -208,9 +207,9 @@\n def resolve_permission_group(self, info, id):\n return graphene.Node.get_node_from_global_id(info, id, Group)\n \n- @login_required\n def resolve_me(self, info):\n- return info.context.user\n+ user = info.context.user\n+ return user if user.is_authenticated else None\n \n @permission_required(AccountPermissions.MANAGE_STAFF)\n def resolve_staff_users(self, info, query=None, **kwargs):\n", "issue": "graphql_jwt.exceptions.PermissionDenied: You do not have permission to perform this action\n### I want to deploy full saleor store(saleor/saleor-dashboard/saleor-storefront) with docker but i catught and expection that means have no permissn to query in graphql api and storefront cant load the main site.\r\n\u2026\r\n\r\n### Steps to reproduce the problem\r\n1. git clone https://github.com/mirumee/saleor.git\r\n2. cd saleor\r\n3. docker-compose build\r\n4. And continue to see in accordance with the documentation\r\n\r\n\r\n### What I expected to happen\r\nload normally the site like before.\r\n\r\n### Screenshots\r\n\r\n\r\n**System information**\r\nOperating system:\r\nubuntu 18:04\n", "before_files": [{"content": "import graphene\nfrom graphql_jwt.decorators import login_required\n\nfrom ...core.permissions import AccountPermissions\nfrom ..core.fields import FilterInputConnectionField\nfrom ..core.types import FilterInputObjectType\nfrom ..decorators import one_of_permissions_required, permission_required\nfrom .bulk_mutations import CustomerBulkDelete, StaffBulkDelete, UserBulkSetActive\nfrom .enums import CountryCodeEnum\nfrom .filters import (\n CustomerFilter,\n PermissionGroupFilter,\n ServiceAccountFilter,\n StaffUserFilter,\n)\nfrom .mutations.account import (\n AccountAddressCreate,\n AccountAddressDelete,\n AccountAddressUpdate,\n AccountDelete,\n AccountRegister,\n AccountRequestDeletion,\n AccountSetDefaultAddress,\n AccountUpdate,\n AccountUpdateMeta,\n ConfirmEmailChange,\n RequestEmailChange,\n)\nfrom .mutations.base import (\n ConfirmAccount,\n PasswordChange,\n RequestPasswordReset,\n SetPassword,\n UserClearMeta,\n UserUpdateMeta,\n)\nfrom .mutations.permission_group import (\n PermissionGroupAssignUsers,\n PermissionGroupCreate,\n PermissionGroupDelete,\n PermissionGroupUnassignUsers,\n PermissionGroupUpdate,\n)\nfrom .mutations.service_account import (\n ServiceAccountClearPrivateMeta,\n ServiceAccountCreate,\n ServiceAccountDelete,\n ServiceAccountTokenCreate,\n ServiceAccountTokenDelete,\n ServiceAccountUpdate,\n ServiceAccountUpdatePrivateMeta,\n)\nfrom .mutations.staff import (\n AddressCreate,\n AddressDelete,\n AddressSetDefault,\n AddressUpdate,\n CustomerCreate,\n CustomerDelete,\n CustomerUpdate,\n StaffCreate,\n StaffDelete,\n StaffUpdate,\n UserAvatarDelete,\n UserAvatarUpdate,\n UserClearPrivateMeta,\n UserUpdatePrivateMeta,\n)\nfrom .resolvers import (\n resolve_address,\n resolve_address_validation_rules,\n resolve_customers,\n resolve_permission_groups,\n resolve_service_accounts,\n resolve_staff_users,\n resolve_user,\n)\nfrom .sorters import (\n PermissionGroupSortingInput,\n ServiceAccountSortingInput,\n UserSortingInput,\n)\nfrom .types import Address, 
AddressValidationData, Group, ServiceAccount, User\n\n\nclass CustomerFilterInput(FilterInputObjectType):\n class Meta:\n filterset_class = CustomerFilter\n\n\nclass PermissionGroupFilterInput(FilterInputObjectType):\n class Meta:\n filterset_class = PermissionGroupFilter\n\n\nclass StaffUserInput(FilterInputObjectType):\n class Meta:\n filterset_class = StaffUserFilter\n\n\nclass ServiceAccountFilterInput(FilterInputObjectType):\n class Meta:\n filterset_class = ServiceAccountFilter\n\n\nclass AccountQueries(graphene.ObjectType):\n address_validation_rules = graphene.Field(\n AddressValidationData,\n description=\"Returns address validation rules.\",\n country_code=graphene.Argument(\n CountryCodeEnum,\n description=\"Two-letter ISO 3166-1 country code.\",\n required=True,\n ),\n country_area=graphene.Argument(\n graphene.String, description=\"Designation of a region, province or state.\"\n ),\n city=graphene.Argument(graphene.String, description=\"City or a town name.\"),\n city_area=graphene.Argument(\n graphene.String, description=\"Sublocality like a district.\"\n ),\n )\n address = graphene.Field(\n Address,\n id=graphene.Argument(\n graphene.ID, description=\"ID of an address.\", required=True\n ),\n description=\"Look up an address by ID.\",\n )\n customers = FilterInputConnectionField(\n User,\n filter=CustomerFilterInput(description=\"Filtering options for customers.\"),\n sort_by=UserSortingInput(description=\"Sort customers.\"),\n description=\"List of the shop's customers.\",\n )\n permission_groups = FilterInputConnectionField(\n Group,\n filter=PermissionGroupFilterInput(\n description=\"Filtering options for permission groups.\"\n ),\n sort_by=PermissionGroupSortingInput(description=\"Sort permission groups.\"),\n description=\"List of permission groups.\",\n )\n permission_group = graphene.Field(\n Group,\n id=graphene.Argument(\n graphene.ID, description=\"ID of the group.\", required=True\n ),\n description=\"Look up permission group by ID.\",\n )\n me = graphene.Field(User, description=\"Return the currently authenticated user.\")\n staff_users = FilterInputConnectionField(\n User,\n filter=StaffUserInput(description=\"Filtering options for staff users.\"),\n sort_by=UserSortingInput(description=\"Sort staff users.\"),\n description=\"List of the shop's staff users.\",\n )\n service_accounts = FilterInputConnectionField(\n ServiceAccount,\n filter=ServiceAccountFilterInput(\n description=\"Filtering options for service accounts.\"\n ),\n sort_by=ServiceAccountSortingInput(description=\"Sort service accounts.\"),\n description=\"List of the service accounts.\",\n )\n service_account = graphene.Field(\n ServiceAccount,\n id=graphene.Argument(\n graphene.ID, description=\"ID of the service account.\", required=True\n ),\n description=\"Look up a service account by ID.\",\n )\n\n user = graphene.Field(\n User,\n id=graphene.Argument(graphene.ID, description=\"ID of the user.\", required=True),\n description=\"Look up a user by ID.\",\n )\n\n def resolve_address_validation_rules(\n self, info, country_code, country_area=None, city=None, city_area=None\n ):\n return resolve_address_validation_rules(\n info,\n country_code,\n country_area=country_area,\n city=city,\n city_area=city_area,\n )\n\n @permission_required(AccountPermissions.MANAGE_SERVICE_ACCOUNTS)\n def resolve_service_accounts(self, info, **kwargs):\n return resolve_service_accounts(info, **kwargs)\n\n @permission_required(AccountPermissions.MANAGE_SERVICE_ACCOUNTS)\n def resolve_service_account(self, info, id):\n 
return graphene.Node.get_node_from_global_id(info, id, ServiceAccount)\n\n @permission_required(AccountPermissions.MANAGE_USERS)\n def resolve_customers(self, info, query=None, **kwargs):\n return resolve_customers(info, query=query, **kwargs)\n\n @permission_required(AccountPermissions.MANAGE_STAFF)\n def resolve_permission_groups(self, info, query=None, **kwargs):\n return resolve_permission_groups(info, query=query, **kwargs)\n\n @permission_required(AccountPermissions.MANAGE_STAFF)\n def resolve_permission_group(self, info, id):\n return graphene.Node.get_node_from_global_id(info, id, Group)\n\n @login_required\n def resolve_me(self, info):\n return info.context.user\n\n @permission_required(AccountPermissions.MANAGE_STAFF)\n def resolve_staff_users(self, info, query=None, **kwargs):\n return resolve_staff_users(info, query=query, **kwargs)\n\n @one_of_permissions_required(\n [AccountPermissions.MANAGE_STAFF, AccountPermissions.MANAGE_USERS]\n )\n def resolve_user(self, info, id):\n return resolve_user(info, id)\n\n def resolve_address(self, info, id):\n return resolve_address(info, id)\n\n\nclass AccountMutations(graphene.ObjectType):\n # Base mutations\n request_password_reset = RequestPasswordReset.Field()\n confirm_account = ConfirmAccount.Field()\n set_password = SetPassword.Field()\n password_change = PasswordChange.Field()\n request_email_change = RequestEmailChange.Field()\n confirm_email_change = ConfirmEmailChange.Field()\n\n # Account mutations\n account_address_create = AccountAddressCreate.Field()\n account_address_update = AccountAddressUpdate.Field()\n account_address_delete = AccountAddressDelete.Field()\n account_set_default_address = AccountSetDefaultAddress.Field()\n\n account_register = AccountRegister.Field()\n account_update = AccountUpdate.Field()\n account_request_deletion = AccountRequestDeletion.Field()\n account_delete = AccountDelete.Field()\n\n account_update_meta = AccountUpdateMeta.Field()\n\n # Staff mutations\n address_create = AddressCreate.Field()\n address_update = AddressUpdate.Field()\n address_delete = AddressDelete.Field()\n address_set_default = AddressSetDefault.Field()\n\n customer_create = CustomerCreate.Field()\n customer_update = CustomerUpdate.Field()\n customer_delete = CustomerDelete.Field()\n customer_bulk_delete = CustomerBulkDelete.Field()\n\n staff_create = StaffCreate.Field()\n staff_update = StaffUpdate.Field()\n staff_delete = StaffDelete.Field()\n staff_bulk_delete = StaffBulkDelete.Field()\n\n user_avatar_update = UserAvatarUpdate.Field()\n user_avatar_delete = UserAvatarDelete.Field()\n user_bulk_set_active = UserBulkSetActive.Field()\n\n user_update_metadata = UserUpdateMeta.Field()\n user_clear_metadata = UserClearMeta.Field()\n\n user_update_private_metadata = UserUpdatePrivateMeta.Field()\n user_clear_private_metadata = UserClearPrivateMeta.Field()\n\n service_account_create = ServiceAccountCreate.Field()\n service_account_update = ServiceAccountUpdate.Field()\n service_account_delete = ServiceAccountDelete.Field()\n\n service_account_update_private_metadata = ServiceAccountUpdatePrivateMeta.Field()\n service_account_clear_private_metadata = ServiceAccountClearPrivateMeta.Field()\n\n service_account_token_create = ServiceAccountTokenCreate.Field()\n service_account_token_delete = ServiceAccountTokenDelete.Field()\n\n # Permission group mutations\n permission_group_create = PermissionGroupCreate.Field()\n permission_group_update = PermissionGroupUpdate.Field()\n permission_group_delete = PermissionGroupDelete.Field()\n 
permission_group_assign_users = PermissionGroupAssignUsers.Field()\n permission_group_unassign_users = PermissionGroupUnassignUsers.Field()\n", "path": "saleor/graphql/account/schema.py"}]}
| 3,581 | 187 |
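The PermissionDenied reported in the record above comes from `graphql_jwt`'s `@login_required` decorator rejecting the anonymous `me` query the storefront issues on first load; the accepted fix drops the decorator and resolves `me` to `None` for unauthenticated visitors. A minimal sketch of that resolver pattern, assuming graphene-django's default behaviour where `info.context` is the Django request:

```python
# Sketch of the pattern from the golden diff (not the full saleor schema):
# AnonymousUser.is_authenticated is False, so anonymous storefront queries
# receive {"me": null} instead of a PermissionDenied error.
def resolve_me(self, info):
    user = info.context.user
    return user if user.is_authenticated else None
```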
gh_patches_debug_15130
|
rasdani/github-patches
|
git_diff
|
xonsh__xonsh-340
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
xonsh hangs on incorrectly indented for loop
```
condamaster%=🐚 for i in range(1):
.............. print(i)
..............
```
Xonsh hangs here. The issue is that the print(i) isn't indented (side-issue, it should indent automatically, just like IPython).
</issue>
<code>
[start of xonsh/execer.py]
1 """Implements the xonsh executer"""
2 from __future__ import print_function, unicode_literals
3 import re
4 import os
5 import types
6 import inspect
7 import builtins
8 from collections import Iterable, Sequence, Mapping
9
10 from xonsh import ast
11 from xonsh.parser import Parser
12 from xonsh.tools import subproc_toks
13 from xonsh.built_ins import load_builtins, unload_builtins
14
15
16 class Execer(object):
17 """Executes xonsh code in a context."""
18
19 def __init__(self,
20 filename='<xonsh-code>',
21 debug_level=0,
22 parser_args=None,
23 unload=True):
24 """Parameters
25 ----------
26 filename : str, optional
27 File we are to execute.
28 debug_level : int, optional
29 Debugging level to use in lexing and parsing.
30 parser_args : dict, optional
31 Arguments to pass down to the parser.
32 unload : bool, optional
33 Whether or not to unload xonsh builtins upon deletion.
34 """
35 parser_args = parser_args or {}
36 self.parser = Parser(**parser_args)
37 self.filename = filename
38 self.debug_level = debug_level
39 self.unload = unload
40 self.ctxtransformer = ast.CtxAwareTransformer(self.parser)
41 load_builtins(execer=self)
42
43 def __del__(self):
44 if self.unload:
45 unload_builtins()
46
47 def parse(self, input, ctx, mode='exec'):
48 """Parses xonsh code in a context-aware fashion. For context-free
49 parsing, please use the Parser class directly.
50 """
51 if ctx is None:
52 ctx = set()
53 elif isinstance(ctx, Mapping):
54 ctx = set(ctx.keys())
55
56 # Parsing actually happens in a couple of phases. The first is a
57 # shortcut for a context-free parser. Normally, all subprocess
58 # lines should be wrapped in $(), to indicate that they are a
59 # subproc. But that would be super annoying. Unfortnately, Python
60 # mode - after indentation - is whitespace agnostic while, using
61 # the Python token, subproc mode is whitespace aware. That is to say,
62 # in Python mode "ls -l", "ls-l", and "ls - l" all parse to the
63 # same AST because whitespace doesn't matter to the minus binary op.
64 # However, these phases all have very different meaning in subproc
65 # mode. The 'right' way to deal with this is to make the entire
66 # grammar whitespace aware, and then ignore all of the whitespace
67 # tokens for all of the Python rules. The lazy way implemented here
68 # is to parse a line a second time with a $() wrapper if it fails
69 # the first time. This is a context-free phase.
70 tree = self._parse_ctx_free(input, mode=mode)
71 if tree is None:
72 return None
73
74 # Now we need to perform context-aware AST transformation. This is
75 # because the "ls -l" is valid Python. The only way that we know
76 # it is not actually Python is by checking to see if the first token
77 # (ls) is part of the execution context. If it isn't, then we will
78 # assume that this line is supposed to be a subprocess line, assuming
79 # it also is valid as a subprocess line.
80 tree = self.ctxtransformer.ctxvisit(tree, input, ctx, mode=mode)
81 return tree
82
83 def compile(self, input, mode='exec', glbs=None, locs=None, stacklevel=2,
84 filename=None):
85 """Compiles xonsh code into a Python code object, which may then
86 be execed or evaled.
87 """
88 if filename is None:
89 filename = self.filename
90 if glbs is None or locs is None:
91 frame = inspect.stack()[stacklevel][0]
92 glbs = frame.f_globals if glbs is None else glbs
93 locs = frame.f_locals if locs is None else locs
94 ctx = set(dir(builtins)) | set(glbs.keys()) | set(locs.keys())
95 tree = self.parse(input, ctx, mode=mode)
96 if tree is None:
97 return None # handles comment only input
98 code = compile(tree, filename, mode)
99 return code
100
101 def eval(self, input, glbs=None, locs=None, stacklevel=2):
102 """Evaluates (and returns) xonsh code."""
103 if isinstance(input, types.CodeType):
104 code = input
105 else:
106 code = self.compile(input=input,
107 glbs=glbs,
108 locs=locs,
109 mode='eval',
110 stacklevel=stacklevel)
111 if code is None:
112 return None # handles comment only input
113 return eval(code, glbs, locs)
114
115 def exec(self, input, mode='exec', glbs=None, locs=None, stacklevel=2):
116 """Execute xonsh code."""
117 if isinstance(input, types.CodeType):
118 code = input
119 else:
120 code = self.compile(input=input,
121 glbs=glbs,
122 locs=locs,
123 mode=mode,
124 stacklevel=stacklevel)
125 if code is None:
126 return None # handles comment only input
127 return exec(code, glbs, locs)
128
129 def _find_next_break(self, line, mincol):
130 if mincol >= 1:
131 line = line[mincol:]
132 if ';' not in line:
133 return None
134 maxcol = None
135 self.parser.lexer.input(line)
136 for tok in self.parser.lexer:
137 if tok.type == 'SEMI':
138 maxcol = tok.lexpos + mincol + 1
139 break
140 return maxcol
141
142 def _parse_ctx_free(self, input, mode='exec'):
143 last_error_line = last_error_col = -1
144 parsed = False
145 original_error = None
146 while not parsed:
147 try:
148 tree = self.parser.parse(input,
149 filename=self.filename,
150 mode=mode,
151 debug_level=self.debug_level)
152 parsed = True
153 except IndentationError as e:
154 if original_error is None:
155 raise e
156 else:
157 raise original_error
158 except SyntaxError as e:
159 if original_error is None:
160 original_error = e
161 if (e.loc is None) or (last_error_line == e.loc.lineno and
162 last_error_col in (e.loc.column + 1,
163 e.loc.column)):
164 raise original_error
165 last_error_col = e.loc.column
166 last_error_line = e.loc.lineno
167 idx = last_error_line - 1
168 lines = input.splitlines()
169 line = lines[idx]
170 if input.endswith('\n'):
171 lines.append('')
172 if len(line.strip()) == 0:
173 # whitespace only lines are not valid syntax in Python's
174 # interactive mode='single', who knew?! Just ignore them.
175 # this might cause actual sytax errors to have bad line
176 # numbers reported, but should only effect interactive mode
177 del lines[idx]
178 last_error_line = last_error_col = -1
179 input = '\n'.join(lines)
180 continue
181 maxcol = self._find_next_break(line, last_error_col)
182 sbpline = subproc_toks(line,
183 returnline=True,
184 maxcol=maxcol,
185 lexer=self.parser.lexer)
186 if sbpline is None:
187 # subprocess line had no valid tokens, likely because
188 # it only contained a comment.
189 del lines[idx]
190 last_error_line = last_error_col = -1
191 input = '\n'.join(lines)
192 continue
193 else:
194 lines[idx] = sbpline
195 last_error_col += 3
196 input = '\n'.join(lines)
197 return tree
198
[end of xonsh/execer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/xonsh/execer.py b/xonsh/execer.py
--- a/xonsh/execer.py
+++ b/xonsh/execer.py
@@ -178,6 +178,12 @@
last_error_line = last_error_col = -1
input = '\n'.join(lines)
continue
+ if last_error_line > 1 and lines[idx-1].rstrip()[-1] == ':':
+ # catch non-indented blocks and raise error.
+ prev_indent = len(lines[idx-1]) - len(lines[idx-1].lstrip())
+ curr_indent = len(lines[idx]) - len(lines[idx].lstrip())
+ if prev_indent == curr_indent:
+ raise original_error
maxcol = self._find_next_break(line, last_error_col)
sbpline = subproc_toks(line,
returnline=True,
|
{"golden_diff": "diff --git a/xonsh/execer.py b/xonsh/execer.py\n--- a/xonsh/execer.py\n+++ b/xonsh/execer.py\n@@ -178,6 +178,12 @@\n last_error_line = last_error_col = -1\n input = '\\n'.join(lines)\n continue\n+ if last_error_line > 1 and lines[idx-1].rstrip()[-1] == ':':\n+ # catch non-indented blocks and raise error.\n+ prev_indent = len(lines[idx-1]) - len(lines[idx-1].lstrip())\n+ curr_indent = len(lines[idx]) - len(lines[idx].lstrip())\n+ if prev_indent == curr_indent:\n+ raise original_error\n maxcol = self._find_next_break(line, last_error_col)\n sbpline = subproc_toks(line,\n returnline=True,\n", "issue": "xonsh hangs on incorrectly indented for loop\n```\ncondamaster%=\ud83d\udc1a for i in range(1):\n.............. print(i)\n..............\n\n```\n\nXonsh hangs here. The issue is that the print(i) isn't indented (side-issue, it should indent automatically, just like IPython). \n\n", "before_files": [{"content": "\"\"\"Implements the xonsh executer\"\"\"\nfrom __future__ import print_function, unicode_literals\nimport re\nimport os\nimport types\nimport inspect\nimport builtins\nfrom collections import Iterable, Sequence, Mapping\n\nfrom xonsh import ast\nfrom xonsh.parser import Parser\nfrom xonsh.tools import subproc_toks\nfrom xonsh.built_ins import load_builtins, unload_builtins\n\n\nclass Execer(object):\n \"\"\"Executes xonsh code in a context.\"\"\"\n\n def __init__(self,\n filename='<xonsh-code>',\n debug_level=0,\n parser_args=None,\n unload=True):\n \"\"\"Parameters\n ----------\n filename : str, optional\n File we are to execute.\n debug_level : int, optional\n Debugging level to use in lexing and parsing.\n parser_args : dict, optional\n Arguments to pass down to the parser.\n unload : bool, optional\n Whether or not to unload xonsh builtins upon deletion.\n \"\"\"\n parser_args = parser_args or {}\n self.parser = Parser(**parser_args)\n self.filename = filename\n self.debug_level = debug_level\n self.unload = unload\n self.ctxtransformer = ast.CtxAwareTransformer(self.parser)\n load_builtins(execer=self)\n\n def __del__(self):\n if self.unload:\n unload_builtins()\n\n def parse(self, input, ctx, mode='exec'):\n \"\"\"Parses xonsh code in a context-aware fashion. For context-free\n parsing, please use the Parser class directly.\n \"\"\"\n if ctx is None:\n ctx = set()\n elif isinstance(ctx, Mapping):\n ctx = set(ctx.keys())\n\n # Parsing actually happens in a couple of phases. The first is a\n # shortcut for a context-free parser. Normally, all subprocess\n # lines should be wrapped in $(), to indicate that they are a\n # subproc. But that would be super annoying. Unfortnately, Python\n # mode - after indentation - is whitespace agnostic while, using\n # the Python token, subproc mode is whitespace aware. That is to say,\n # in Python mode \"ls -l\", \"ls-l\", and \"ls - l\" all parse to the\n # same AST because whitespace doesn't matter to the minus binary op.\n # However, these phases all have very different meaning in subproc\n # mode. The 'right' way to deal with this is to make the entire\n # grammar whitespace aware, and then ignore all of the whitespace\n # tokens for all of the Python rules. The lazy way implemented here\n # is to parse a line a second time with a $() wrapper if it fails\n # the first time. This is a context-free phase.\n tree = self._parse_ctx_free(input, mode=mode)\n if tree is None:\n return None\n\n # Now we need to perform context-aware AST transformation. This is\n # because the \"ls -l\" is valid Python. 
The only way that we know\n # it is not actually Python is by checking to see if the first token\n # (ls) is part of the execution context. If it isn't, then we will\n # assume that this line is supposed to be a subprocess line, assuming\n # it also is valid as a subprocess line.\n tree = self.ctxtransformer.ctxvisit(tree, input, ctx, mode=mode)\n return tree\n\n def compile(self, input, mode='exec', glbs=None, locs=None, stacklevel=2,\n filename=None):\n \"\"\"Compiles xonsh code into a Python code object, which may then\n be execed or evaled.\n \"\"\"\n if filename is None:\n filename = self.filename\n if glbs is None or locs is None:\n frame = inspect.stack()[stacklevel][0]\n glbs = frame.f_globals if glbs is None else glbs\n locs = frame.f_locals if locs is None else locs\n ctx = set(dir(builtins)) | set(glbs.keys()) | set(locs.keys())\n tree = self.parse(input, ctx, mode=mode)\n if tree is None:\n return None # handles comment only input\n code = compile(tree, filename, mode)\n return code\n\n def eval(self, input, glbs=None, locs=None, stacklevel=2):\n \"\"\"Evaluates (and returns) xonsh code.\"\"\"\n if isinstance(input, types.CodeType):\n code = input\n else:\n code = self.compile(input=input,\n glbs=glbs,\n locs=locs,\n mode='eval',\n stacklevel=stacklevel)\n if code is None:\n return None # handles comment only input\n return eval(code, glbs, locs)\n\n def exec(self, input, mode='exec', glbs=None, locs=None, stacklevel=2):\n \"\"\"Execute xonsh code.\"\"\"\n if isinstance(input, types.CodeType):\n code = input\n else:\n code = self.compile(input=input,\n glbs=glbs,\n locs=locs,\n mode=mode,\n stacklevel=stacklevel)\n if code is None:\n return None # handles comment only input\n return exec(code, glbs, locs)\n\n def _find_next_break(self, line, mincol):\n if mincol >= 1:\n line = line[mincol:]\n if ';' not in line:\n return None\n maxcol = None\n self.parser.lexer.input(line)\n for tok in self.parser.lexer:\n if tok.type == 'SEMI':\n maxcol = tok.lexpos + mincol + 1\n break\n return maxcol\n\n def _parse_ctx_free(self, input, mode='exec'):\n last_error_line = last_error_col = -1\n parsed = False\n original_error = None\n while not parsed:\n try:\n tree = self.parser.parse(input,\n filename=self.filename,\n mode=mode,\n debug_level=self.debug_level)\n parsed = True\n except IndentationError as e:\n if original_error is None:\n raise e\n else:\n raise original_error\n except SyntaxError as e:\n if original_error is None:\n original_error = e\n if (e.loc is None) or (last_error_line == e.loc.lineno and\n last_error_col in (e.loc.column + 1,\n e.loc.column)):\n raise original_error\n last_error_col = e.loc.column\n last_error_line = e.loc.lineno\n idx = last_error_line - 1\n lines = input.splitlines()\n line = lines[idx]\n if input.endswith('\\n'):\n lines.append('')\n if len(line.strip()) == 0:\n # whitespace only lines are not valid syntax in Python's\n # interactive mode='single', who knew?! 
Just ignore them.\n # this might cause actual sytax errors to have bad line\n # numbers reported, but should only effect interactive mode\n del lines[idx]\n last_error_line = last_error_col = -1\n input = '\\n'.join(lines)\n continue\n maxcol = self._find_next_break(line, last_error_col)\n sbpline = subproc_toks(line,\n returnline=True,\n maxcol=maxcol,\n lexer=self.parser.lexer)\n if sbpline is None:\n # subprocess line had no valid tokens, likely because\n # it only contained a comment.\n del lines[idx]\n last_error_line = last_error_col = -1\n input = '\\n'.join(lines)\n continue\n else:\n lines[idx] = sbpline\n last_error_col += 3\n input = '\\n'.join(lines)\n return tree\n", "path": "xonsh/execer.py"}]}
| 2,790 | 193 |
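The hang in the record above happens because `_parse_ctx_free` keeps rewrapping the failing line as a subprocess command and re-parsing, so a block header followed by an unindented body never converges. The golden diff breaks that loop by re-raising the original SyntaxError when the previous line opens a block but the current line is not indented any deeper. A minimal sketch of the added guard, using the names from the diff:

```python
# Hedged sketch of the check the patch inserts inside the SyntaxError handler:
# if the line before the error ends with ':' (opens a block) and the failing
# line sits at the same indentation, the body was never indented, so surface
# the original error instead of retrying forever.
if last_error_line > 1 and lines[idx - 1].rstrip()[-1] == ':':
    prev_indent = len(lines[idx - 1]) - len(lines[idx - 1].lstrip())
    curr_indent = len(lines[idx]) - len(lines[idx].lstrip())
    if prev_indent == curr_indent:
        raise original_error
```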
gh_patches_debug_23898
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-15807
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HTMX: Bulk actions in ObjectListView without function
### Deployment Type
Self-hosted
### NetBox Version
v4.0-beta1
### Python Version
3.11
### Steps to Reproduce
1. Set user preferences to HTMX Navigation enabled
2. Create two or more IP addresses
3. Select the IP addresses in list view
4. Click on "Edit Selected" or "Delete Selected"
### Expected Behavior
For "Edit Selected", the bulk edit form opens. For "Delete Selected", the usual "xxx Objects will be deleted" confirmation dialog appears.
### Observed Behavior
Nothing at all happens.
With "HTMX Navigation" set to "disabled", everything works as usual.
</issue>
<code>
[start of netbox/utilities/templatetags/buttons.py]
1 from django import template
2 from django.contrib.contenttypes.models import ContentType
3 from django.urls import NoReverseMatch, reverse
4
5 from core.models import ObjectType
6 from extras.models import Bookmark, ExportTemplate
7 from utilities.querydict import prepare_cloned_fields
8 from utilities.views import get_viewname
9
10 __all__ = (
11 'add_button',
12 'bookmark_button',
13 'bulk_delete_button',
14 'bulk_edit_button',
15 'clone_button',
16 'delete_button',
17 'edit_button',
18 'export_button',
19 'import_button',
20 'sync_button',
21 )
22
23 register = template.Library()
24
25
26 #
27 # Instance buttons
28 #
29
30 @register.inclusion_tag('buttons/bookmark.html', takes_context=True)
31 def bookmark_button(context, instance):
32 # Check if this user has already bookmarked the object
33 content_type = ContentType.objects.get_for_model(instance)
34 bookmark = Bookmark.objects.filter(
35 object_type=content_type,
36 object_id=instance.pk,
37 user=context['request'].user
38 ).first()
39
40 # Compile form URL & data
41 if bookmark:
42 form_url = reverse('extras:bookmark_delete', kwargs={'pk': bookmark.pk})
43 form_data = {
44 'confirm': 'true',
45 }
46 else:
47 form_url = reverse('extras:bookmark_add')
48 form_data = {
49 'object_type': content_type.pk,
50 'object_id': instance.pk,
51 }
52
53 return {
54 'bookmark': bookmark,
55 'form_url': form_url,
56 'form_data': form_data,
57 'return_url': instance.get_absolute_url(),
58 }
59
60
61 @register.inclusion_tag('buttons/clone.html')
62 def clone_button(instance):
63 url = reverse(get_viewname(instance, 'add'))
64
65 # Populate cloned field values
66 param_string = prepare_cloned_fields(instance).urlencode()
67 if param_string:
68 url = f'{url}?{param_string}'
69 else:
70 url = None
71
72 return {
73 'url': url,
74 }
75
76
77 @register.inclusion_tag('buttons/edit.html')
78 def edit_button(instance):
79 viewname = get_viewname(instance, 'edit')
80 url = reverse(viewname, kwargs={'pk': instance.pk})
81
82 return {
83 'url': url,
84 }
85
86
87 @register.inclusion_tag('buttons/delete.html')
88 def delete_button(instance):
89 viewname = get_viewname(instance, 'delete')
90 url = reverse(viewname, kwargs={'pk': instance.pk})
91
92 return {
93 'url': url,
94 }
95
96
97 @register.inclusion_tag('buttons/sync.html')
98 def sync_button(instance):
99 viewname = get_viewname(instance, 'sync')
100 url = reverse(viewname, kwargs={'pk': instance.pk})
101
102 return {
103 'url': url,
104 }
105
106
107 #
108 # List buttons
109 #
110
111 @register.inclusion_tag('buttons/add.html')
112 def add_button(model, action='add'):
113 try:
114 url = reverse(get_viewname(model, action))
115 except NoReverseMatch:
116 url = None
117
118 return {
119 'url': url,
120 }
121
122
123 @register.inclusion_tag('buttons/import.html')
124 def import_button(model, action='import'):
125 try:
126 url = reverse(get_viewname(model, action))
127 except NoReverseMatch:
128 url = None
129
130 return {
131 'url': url,
132 }
133
134
135 @register.inclusion_tag('buttons/export.html', takes_context=True)
136 def export_button(context, model):
137 object_type = ObjectType.objects.get_for_model(model)
138 user = context['request'].user
139
140 # Determine if the "all data" export returns CSV or YAML
141 data_format = 'YAML' if hasattr(object_type.model_class(), 'to_yaml') else 'CSV'
142
143 # Retrieve all export templates for this model
144 export_templates = ExportTemplate.objects.restrict(user, 'view').filter(object_types=object_type)
145
146 return {
147 'perms': context['perms'],
148 'object_type': object_type,
149 'url_params': context['request'].GET.urlencode() if context['request'].GET else '',
150 'export_templates': export_templates,
151 'data_format': data_format,
152 }
153
154
155 @register.inclusion_tag('buttons/bulk_edit.html')
156 def bulk_edit_button(model, action='bulk_edit', query_params=None):
157 try:
158 url = reverse(get_viewname(model, action))
159 if query_params:
160 url = f'{url}?{query_params.urlencode()}'
161 except NoReverseMatch:
162 url = None
163
164 return {
165 'url': url,
166 }
167
168
169 @register.inclusion_tag('buttons/bulk_delete.html')
170 def bulk_delete_button(model, action='bulk_delete', query_params=None):
171 try:
172 url = reverse(get_viewname(model, action))
173 if query_params:
174 url = f'{url}?{query_params.urlencode()}'
175 except NoReverseMatch:
176 url = None
177
178 return {
179 'url': url,
180 }
181
[end of netbox/utilities/templatetags/buttons.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/netbox/utilities/templatetags/buttons.py b/netbox/utilities/templatetags/buttons.py
--- a/netbox/utilities/templatetags/buttons.py
+++ b/netbox/utilities/templatetags/buttons.py
@@ -152,8 +152,8 @@
}
[email protected]_tag('buttons/bulk_edit.html')
-def bulk_edit_button(model, action='bulk_edit', query_params=None):
[email protected]_tag('buttons/bulk_edit.html', takes_context=True)
+def bulk_edit_button(context, model, action='bulk_edit', query_params=None):
try:
url = reverse(get_viewname(model, action))
if query_params:
@@ -162,12 +162,13 @@
url = None
return {
+ 'htmx_navigation': context.get('htmx_navigation'),
'url': url,
}
[email protected]_tag('buttons/bulk_delete.html')
-def bulk_delete_button(model, action='bulk_delete', query_params=None):
[email protected]_tag('buttons/bulk_delete.html', takes_context=True)
+def bulk_delete_button(context, model, action='bulk_delete', query_params=None):
try:
url = reverse(get_viewname(model, action))
if query_params:
@@ -176,5 +177,6 @@
url = None
return {
+ 'htmx_navigation': context.get('htmx_navigation'),
'url': url,
}
|
{"golden_diff": "diff --git a/netbox/utilities/templatetags/buttons.py b/netbox/utilities/templatetags/buttons.py\n--- a/netbox/utilities/templatetags/buttons.py\n+++ b/netbox/utilities/templatetags/buttons.py\n@@ -152,8 +152,8 @@\n }\n \n \[email protected]_tag('buttons/bulk_edit.html')\n-def bulk_edit_button(model, action='bulk_edit', query_params=None):\[email protected]_tag('buttons/bulk_edit.html', takes_context=True)\n+def bulk_edit_button(context, model, action='bulk_edit', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n@@ -162,12 +162,13 @@\n url = None\n \n return {\n+ 'htmx_navigation': context.get('htmx_navigation'),\n 'url': url,\n }\n \n \[email protected]_tag('buttons/bulk_delete.html')\n-def bulk_delete_button(model, action='bulk_delete', query_params=None):\[email protected]_tag('buttons/bulk_delete.html', takes_context=True)\n+def bulk_delete_button(context, model, action='bulk_delete', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n@@ -176,5 +177,6 @@\n url = None\n \n return {\n+ 'htmx_navigation': context.get('htmx_navigation'),\n 'url': url,\n }\n", "issue": "HTMX: Bulk actions in ObjectListView without function\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv4.0-beta1\n\n### Python Version\n\n3.11\n\n### Steps to Reproduce\n\n1. Set user preferences to HTMX Navigation enabled\r\n2. Create two or more IP addresses\r\n3. Select the IP addresses in list view\r\n4. Click on \"Edit Selected\" or \"Delete Selected\"\n\n### Expected Behavior\n\nFor \"Edit Selected\", the bulk edit form opens. For \"Delete Selected\", the usual \"xxx Objects will be deleted\" confirmation dialog appears.\n\n### Observed Behavior\n\nNothing at all happens.\r\n\r\nWith \"HTMX Navigation\" set to \"disabled\", everything works as usual.\n", "before_files": [{"content": "from django import template\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.urls import NoReverseMatch, reverse\n\nfrom core.models import ObjectType\nfrom extras.models import Bookmark, ExportTemplate\nfrom utilities.querydict import prepare_cloned_fields\nfrom utilities.views import get_viewname\n\n__all__ = (\n 'add_button',\n 'bookmark_button',\n 'bulk_delete_button',\n 'bulk_edit_button',\n 'clone_button',\n 'delete_button',\n 'edit_button',\n 'export_button',\n 'import_button',\n 'sync_button',\n)\n\nregister = template.Library()\n\n\n#\n# Instance buttons\n#\n\[email protected]_tag('buttons/bookmark.html', takes_context=True)\ndef bookmark_button(context, instance):\n # Check if this user has already bookmarked the object\n content_type = ContentType.objects.get_for_model(instance)\n bookmark = Bookmark.objects.filter(\n object_type=content_type,\n object_id=instance.pk,\n user=context['request'].user\n ).first()\n\n # Compile form URL & data\n if bookmark:\n form_url = reverse('extras:bookmark_delete', kwargs={'pk': bookmark.pk})\n form_data = {\n 'confirm': 'true',\n }\n else:\n form_url = reverse('extras:bookmark_add')\n form_data = {\n 'object_type': content_type.pk,\n 'object_id': instance.pk,\n }\n\n return {\n 'bookmark': bookmark,\n 'form_url': form_url,\n 'form_data': form_data,\n 'return_url': instance.get_absolute_url(),\n }\n\n\[email protected]_tag('buttons/clone.html')\ndef clone_button(instance):\n url = reverse(get_viewname(instance, 'add'))\n\n # Populate cloned field values\n param_string = prepare_cloned_fields(instance).urlencode()\n if param_string:\n url = 
f'{url}?{param_string}'\n else:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/edit.html')\ndef edit_button(instance):\n viewname = get_viewname(instance, 'edit')\n url = reverse(viewname, kwargs={'pk': instance.pk})\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/delete.html')\ndef delete_button(instance):\n viewname = get_viewname(instance, 'delete')\n url = reverse(viewname, kwargs={'pk': instance.pk})\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/sync.html')\ndef sync_button(instance):\n viewname = get_viewname(instance, 'sync')\n url = reverse(viewname, kwargs={'pk': instance.pk})\n\n return {\n 'url': url,\n }\n\n\n#\n# List buttons\n#\n\[email protected]_tag('buttons/add.html')\ndef add_button(model, action='add'):\n try:\n url = reverse(get_viewname(model, action))\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/import.html')\ndef import_button(model, action='import'):\n try:\n url = reverse(get_viewname(model, action))\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/export.html', takes_context=True)\ndef export_button(context, model):\n object_type = ObjectType.objects.get_for_model(model)\n user = context['request'].user\n\n # Determine if the \"all data\" export returns CSV or YAML\n data_format = 'YAML' if hasattr(object_type.model_class(), 'to_yaml') else 'CSV'\n\n # Retrieve all export templates for this model\n export_templates = ExportTemplate.objects.restrict(user, 'view').filter(object_types=object_type)\n\n return {\n 'perms': context['perms'],\n 'object_type': object_type,\n 'url_params': context['request'].GET.urlencode() if context['request'].GET else '',\n 'export_templates': export_templates,\n 'data_format': data_format,\n }\n\n\[email protected]_tag('buttons/bulk_edit.html')\ndef bulk_edit_button(model, action='bulk_edit', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n url = f'{url}?{query_params.urlencode()}'\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/bulk_delete.html')\ndef bulk_delete_button(model, action='bulk_delete', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n url = f'{url}?{query_params.urlencode()}'\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n", "path": "netbox/utilities/templatetags/buttons.py"}]}
| 2,180 | 328 |
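In the record above, the bulk buttons stop working under HTMX navigation presumably because the button templates need to know whether HTMX navigation is active, but the plain inclusion tags had no access to the request context. The fix switches both tags to `takes_context=True`, a standard Django template-tag option, so the flag can be forwarded. A minimal sketch (error handling for `NoReverseMatch` omitted for brevity):

```python
# Hedged sketch of the pattern applied to bulk_edit_button and bulk_delete_button:
# with takes_context=True, Django passes the template context as the first
# argument, letting the tag forward htmx_navigation to buttons/bulk_edit.html.
@register.inclusion_tag('buttons/bulk_edit.html', takes_context=True)
def bulk_edit_button(context, model, action='bulk_edit', query_params=None):
    url = reverse(get_viewname(model, action))
    return {
        'htmx_navigation': context.get('htmx_navigation'),
        'url': url,
    }
```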
gh_patches_debug_10868
|
rasdani/github-patches
|
git_diff
|
coreruleset__coreruleset-2615
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Regex assembler seems to be broken again.
### Describe the bug
`util/regexp-assemble/regexp-assemble.py --log-level DEBUG update 942521`
```
Processing 942521, chain offset 0
detected processor: <class 'NoneType'>
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found regular input '##!^ ^\n'
Found regular input '##!+ i\n'
Found simple comment
Found simple comment
Found regular input "[^']*?(?:'[^']*?'[^']*?)*?'\\s*(\\w+)\\b\n"
Found regular input '[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"\\s*(\\w+)\\b\n'
Found regular input '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\s*(\\w+)\\b\n'
processor will process: ['##!^ ^\n', '##!+ i\n', "[^']*?(?:'[^']*?'[^']*?)*?'\\s*(\\w+)\\b\n", '[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"\\s*(\\w+)\\b\n', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\s*(\\w+)\\b\n']
preprocessed lines: ['##!^ ^\n', '##!+ i\n', "[^']*?(?:'[^']*?'[^']*?)*?'\\s*(\\w+)\\b\n", '[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"\\s*(\\w+)\\b\n', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\s*(\\w+)\\b\n']
Completing assembly
Running assembler with lines: ['##!^ ^', '##!+ i', "[^']*?(?:'[^']*?'[^']*?)*?'\\s*(\\w+)\\b", '[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"\\s*(\\w+)\\b', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\s*(\\w+)\\b']
Assembler errors: b'', output b'(?i)^(?:[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"|[^\']*?(?:\'[^\']*?\'[^\']*?)*?\'|[^`]*?(?:`[^`]*?`[^`]*?)*?`)\\s*(\\w+)\\b\n'
Completed assembly: (?i)^(?:[^\"]*?(?:\"[^\"]*?\"[^\"]*?)*?\"|[^']*?(?:'[^']*?'[^']*?)*?'|[^`]*?(?:`[^`]*?`[^`]*?)*?`)\s*(\w+)\b
Traceback (most recent call last):
File "util/regexp-assemble/regexp-assemble.py", line 227, in <module>
namespace.func(namespace)
File "util/regexp-assemble/regexp-assemble.py", line 146, in handle_update
updater.run(False)
File "/home/janek/crs-waf/coreruleset/util/regexp-assemble/lib/operators/updater.py", line 8, in run
self.write_updates()
File "/home/janek/crs-waf/coreruleset/util/regexp-assemble/lib/operators/updater.py", line 24, in write_updates
with open(file_path, "w") as handle:
FileNotFoundError: [Errno 2] No such file or directory: 'rules/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf'
```
It seems that there is double `rules/rules` so updating a rule doesn't work.
@theseion
</issue>
<code>
[start of util/regexp-assemble/lib/operators/updater.py]
1 from msc_pyparser import MSCWriter
2 from lib.operators.parser import Parser
3
4
5 class Updater(Parser):
6 def run(self, process_all: bool):
7 self.perform_compare_or_update(process_all, self.update_regex)
8 self.write_updates()
9
10 def update_regex(
11 self,
12 rule_id: str,
13 generated_regex: str,
14 current_regex: str,
15 config: dict,
16 config_key: str,
17 ):
18 config[config_key] = generated_regex
19
20 def write_updates(self):
21 for rule_prefix, parser in self.parsers.items():
22 writer = MSCWriter(parser.configlines)
23 file_path = self.context.rules_directory / self.prefix_to_file_map[rule_prefix]
24 with open(file_path, "w") as handle:
25 writer.generate()
26 # add extra new line at the end of file
27 writer.output.append("")
28 handle.write("\n".join(writer.output))
29
[end of util/regexp-assemble/lib/operators/updater.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/util/regexp-assemble/lib/operators/updater.py b/util/regexp-assemble/lib/operators/updater.py
--- a/util/regexp-assemble/lib/operators/updater.py
+++ b/util/regexp-assemble/lib/operators/updater.py
@@ -20,8 +20,7 @@
def write_updates(self):
for rule_prefix, parser in self.parsers.items():
writer = MSCWriter(parser.configlines)
- file_path = self.context.rules_directory / self.prefix_to_file_map[rule_prefix]
- with open(file_path, "w") as handle:
+ with self.prefix_to_file_map[rule_prefix].open("w") as handle:
writer.generate()
# add extra new line at the end of file
writer.output.append("")
|
{"golden_diff": "diff --git a/util/regexp-assemble/lib/operators/updater.py b/util/regexp-assemble/lib/operators/updater.py\n--- a/util/regexp-assemble/lib/operators/updater.py\n+++ b/util/regexp-assemble/lib/operators/updater.py\n@@ -20,8 +20,7 @@\n def write_updates(self):\n for rule_prefix, parser in self.parsers.items():\n writer = MSCWriter(parser.configlines)\n- file_path = self.context.rules_directory / self.prefix_to_file_map[rule_prefix]\n- with open(file_path, \"w\") as handle:\n+ with self.prefix_to_file_map[rule_prefix].open(\"w\") as handle:\n writer.generate()\n # add extra new line at the end of file\n writer.output.append(\"\")\n", "issue": "Regex assembler seems to be broken again.\n### Describe the bug\r\n`util/regexp-assemble/regexp-assemble.py --log-level DEBUG update 942521`\r\n\r\n```\r\nProcessing 942521, chain offset 0\r\ndetected processor: <class 'NoneType'>\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound regular input '##!^ ^\\n'\r\nFound regular input '##!+ i\\n'\r\nFound simple comment\r\nFound simple comment\r\nFound regular input \"[^']*?(?:'[^']*?'[^']*?)*?'\\\\s*(\\\\w+)\\\\b\\n\"\r\nFound regular input '[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"\\\\s*(\\\\w+)\\\\b\\n'\r\nFound regular input '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\\\s*(\\\\w+)\\\\b\\n'\r\nprocessor will process: ['##!^ ^\\n', '##!+ i\\n', \"[^']*?(?:'[^']*?'[^']*?)*?'\\\\s*(\\\\w+)\\\\b\\n\", '[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"\\\\s*(\\\\w+)\\\\b\\n', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\\\s*(\\\\w+)\\\\b\\n']\r\npreprocessed lines: ['##!^ ^\\n', '##!+ i\\n', \"[^']*?(?:'[^']*?'[^']*?)*?'\\\\s*(\\\\w+)\\\\b\\n\", '[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"\\\\s*(\\\\w+)\\\\b\\n', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\\\s*(\\\\w+)\\\\b\\n']\r\nCompleting assembly\r\nRunning assembler with lines: ['##!^ ^', '##!+ i', \"[^']*?(?:'[^']*?'[^']*?)*?'\\\\s*(\\\\w+)\\\\b\", '[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"\\\\s*(\\\\w+)\\\\b', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\\\s*(\\\\w+)\\\\b']\r\nAssembler errors: b'', output b'(?i)^(?:[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"|[^\\']*?(?:\\'[^\\']*?\\'[^\\']*?)*?\\'|[^`]*?(?:`[^`]*?`[^`]*?)*?`)\\\\s*(\\\\w+)\\\\b\\n'\r\nCompleted assembly: (?i)^(?:[^\\\"]*?(?:\\\"[^\\\"]*?\\\"[^\\\"]*?)*?\\\"|[^']*?(?:'[^']*?'[^']*?)*?'|[^`]*?(?:`[^`]*?`[^`]*?)*?`)\\s*(\\w+)\\b\r\nTraceback (most recent call last):\r\n File \"util/regexp-assemble/regexp-assemble.py\", line 227, in <module>\r\n namespace.func(namespace)\r\n File \"util/regexp-assemble/regexp-assemble.py\", line 146, in handle_update\r\n updater.run(False)\r\n File \"/home/janek/crs-waf/coreruleset/util/regexp-assemble/lib/operators/updater.py\", line 8, in run\r\n self.write_updates()\r\n File \"/home/janek/crs-waf/coreruleset/util/regexp-assemble/lib/operators/updater.py\", line 24, in write_updates\r\n with open(file_path, \"w\") as handle:\r\nFileNotFoundError: [Errno 2] No such file or directory: 
'rules/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf'\r\n```\r\n\r\nIt seems that there is double `rules/rules` so updating a rule doesn't work. \r\n\r\n@theseion \n", "before_files": [{"content": "from msc_pyparser import MSCWriter\nfrom lib.operators.parser import Parser\n\n\nclass Updater(Parser):\n def run(self, process_all: bool):\n self.perform_compare_or_update(process_all, self.update_regex)\n self.write_updates()\n\n def update_regex(\n self,\n rule_id: str,\n generated_regex: str,\n current_regex: str,\n config: dict,\n config_key: str,\n ):\n config[config_key] = generated_regex\n\n def write_updates(self):\n for rule_prefix, parser in self.parsers.items():\n writer = MSCWriter(parser.configlines)\n file_path = self.context.rules_directory / self.prefix_to_file_map[rule_prefix]\n with open(file_path, \"w\") as handle:\n writer.generate()\n # add extra new line at the end of file\n writer.output.append(\"\")\n handle.write(\"\\n\".join(writer.output))\n", "path": "util/regexp-assemble/lib/operators/updater.py"}]}
| 1,736 | 162 |
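The doubled `rules/rules/...` prefix in the traceback above is what a `pathlib` join produces when the mapped path already contains the rules directory; the fix therefore opens the mapped path directly instead of joining it onto `context.rules_directory`. A small illustration of the joining behaviour — the file name is taken from the traceback, while the shape of the map's values is an assumption:

```python
from pathlib import Path

# Assumed shape of the values in prefix_to_file_map: relative paths that already
# include the rules directory. Joining them onto rules_directory duplicates it.
rules_directory = Path("rules")
mapped = Path("rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf")

print(rules_directory / mapped)
# rules/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf  <- the path from the traceback
```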
gh_patches_debug_15905
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-1085
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LibibertyConan: 404: Not Found
```bash
conans.errors.NotFoundException: Not found: https://github.com/gcc-mirror/gcc/archive/gcc-9_1_0-release.tar.gz
```
when i open url above in browser:
```bash
404: Not Found
```
Provide more mirrors, please
Related to https://github.com/conan-io/conan-center-index/blob/master/recipes/libiberty/all/conanfile.py#L40
</issue>
<code>
[start of recipes/libiberty/all/conanfile.py]
1 from conans import ConanFile, tools, AutoToolsBuildEnvironment
2 from conans.errors import ConanInvalidConfiguration
3 import os
4
5
6 class LibibertyConan(ConanFile):
7 name = "libiberty"
8 version = "9.1.0"
9 description = "A collection of subroutines used by various GNU programs"
10 topics = ("conan", "libiberty", "gnu", "gnu-collection")
11 url = "https://github.com/conan-io/conan-center-index"
12 homepage = "https://gcc.gnu.org/onlinedocs/libiberty"
13 license = "LGPL-2.1"
14 settings = "os", "arch", "compiler", "build_type"
15 options = {"fPIC": [True, False]}
16 default_options = {"fPIC": True}
17 _autotools = None
18
19 @property
20 def _source_subfolder(self):
21 return "source_subfolder"
22
23 @property
24 def _libiberty_folder(self):
25 return os.path.join(self._source_subfolder, self.name)
26
27 def config_options(self):
28 if self.settings.os == 'Windows':
29 del self.options.fPIC
30
31 def configure(self):
32 if self.settings.compiler == "Visual Studio":
33 raise ConanInvalidConfiguration("libiberty can not be built by Visual Studio.")
34 del self.settings.compiler.libcxx
35 del self.settings.compiler.cppstd
36
37 def source(self):
38 tools.get(**self.conan_data["sources"][self.version])
39 pkg_version = self.version.replace('.', '_')
40 extracted_dir = "gcc-gcc-{}-release".format(pkg_version)
41 os.rename(extracted_dir, self._source_subfolder)
42 tools.rmdir(os.path.join(self._source_subfolder, 'gcc'))
43 tools.rmdir(os.path.join(self._source_subfolder, 'libstdc++-v3'))
44
45 def _configure_autotools(self):
46 if not self._autotools:
47 args = ["--enable-install-libiberty"]
48 self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
49 self._autotools.configure(args=args, configure_dir=self._libiberty_folder)
50 return self._autotools
51
52 def build(self):
53 autotools = self._configure_autotools()
54 autotools.make()
55
56 def package(self):
57 self.copy(pattern="COPYING.LIB", src=self._libiberty_folder, dst="licenses")
58 autotools = self._configure_autotools()
59 autotools.install()
60 self._package_x86()
61
62 def _package_x86(self):
63 lib32dir = os.path.join(self.package_folder, "lib32")
64 if os.path.exists(lib32dir):
65 libdir = os.path.join(self.package_folder, "lib")
66 tools.rmdir(libdir)
67 os.rename(lib32dir, libdir)
68
69 def package_info(self):
70 self.cpp_info.libs = tools.collect_libs(self)
71
72
[end of recipes/libiberty/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/recipes/libiberty/all/conanfile.py b/recipes/libiberty/all/conanfile.py
--- a/recipes/libiberty/all/conanfile.py
+++ b/recipes/libiberty/all/conanfile.py
@@ -36,8 +36,7 @@
def source(self):
tools.get(**self.conan_data["sources"][self.version])
- pkg_version = self.version.replace('.', '_')
- extracted_dir = "gcc-gcc-{}-release".format(pkg_version)
+ extracted_dir = "gcc-" + self.version
os.rename(extracted_dir, self._source_subfolder)
tools.rmdir(os.path.join(self._source_subfolder, 'gcc'))
tools.rmdir(os.path.join(self._source_subfolder, 'libstdc++-v3'))
@@ -68,4 +67,3 @@
def package_info(self):
self.cpp_info.libs = tools.collect_libs(self)
-
|
{"golden_diff": "diff --git a/recipes/libiberty/all/conanfile.py b/recipes/libiberty/all/conanfile.py\n--- a/recipes/libiberty/all/conanfile.py\n+++ b/recipes/libiberty/all/conanfile.py\n@@ -36,8 +36,7 @@\n \n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n- pkg_version = self.version.replace('.', '_')\n- extracted_dir = \"gcc-gcc-{}-release\".format(pkg_version)\n+ extracted_dir = \"gcc-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n tools.rmdir(os.path.join(self._source_subfolder, 'gcc'))\n tools.rmdir(os.path.join(self._source_subfolder, 'libstdc++-v3'))\n@@ -68,4 +67,3 @@\n \n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n-\n", "issue": "LibibertyConan: 404: Not Found\n```bash\r\nconans.errors.NotFoundException: Not found: https://github.com/gcc-mirror/gcc/archive/gcc-9_1_0-release.tar.gz\r\n```\r\n\r\nwhen i open url above in browser:\r\n\r\n```bash\r\n404: Not Found\r\n```\r\n\r\nProvide more mirrors, please\r\n\r\nRelated to https://github.com/conan-io/conan-center-index/blob/master/recipes/libiberty/all/conanfile.py#L40\n", "before_files": [{"content": "from conans import ConanFile, tools, AutoToolsBuildEnvironment\nfrom conans.errors import ConanInvalidConfiguration\nimport os\n\n\nclass LibibertyConan(ConanFile):\n name = \"libiberty\"\n version = \"9.1.0\"\n description = \"A collection of subroutines used by various GNU programs\"\n topics = (\"conan\", \"libiberty\", \"gnu\", \"gnu-collection\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://gcc.gnu.org/onlinedocs/libiberty\"\n license = \"LGPL-2.1\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"fPIC\": [True, False]}\n default_options = {\"fPIC\": True}\n _autotools = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _libiberty_folder(self):\n return os.path.join(self._source_subfolder, self.name)\n\n def config_options(self):\n if self.settings.os == 'Windows':\n del self.options.fPIC\n\n def configure(self):\n if self.settings.compiler == \"Visual Studio\":\n raise ConanInvalidConfiguration(\"libiberty can not be built by Visual Studio.\")\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n pkg_version = self.version.replace('.', '_')\n extracted_dir = \"gcc-gcc-{}-release\".format(pkg_version)\n os.rename(extracted_dir, self._source_subfolder)\n tools.rmdir(os.path.join(self._source_subfolder, 'gcc'))\n tools.rmdir(os.path.join(self._source_subfolder, 'libstdc++-v3'))\n\n def _configure_autotools(self):\n if not self._autotools:\n args = [\"--enable-install-libiberty\"]\n self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n self._autotools.configure(args=args, configure_dir=self._libiberty_folder)\n return self._autotools\n\n def build(self):\n autotools = self._configure_autotools()\n autotools.make()\n\n def package(self):\n self.copy(pattern=\"COPYING.LIB\", src=self._libiberty_folder, dst=\"licenses\")\n autotools = self._configure_autotools()\n autotools.install()\n self._package_x86()\n\n def _package_x86(self):\n lib32dir = os.path.join(self.package_folder, \"lib32\")\n if os.path.exists(lib32dir):\n libdir = os.path.join(self.package_folder, \"lib\")\n tools.rmdir(libdir)\n os.rename(lib32dir, libdir)\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n\n", "path": 
"recipes/libiberty/all/conanfile.py"}]}
| 1,429 | 208 |
gh_patches_debug_25172 | rasdani/github-patches | git_diff | mindsdb__mindsdb-987 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
~/.local/etc/mindsdb and ~/.local/var/lib/mindsdb have been created in current working directory
**Your Environment**
* Python version: 3.7.6
* Pip version: 19.2.3
* Operating system: MacOS 10.15.7
* Python environment used (e.g. venv, conda): venv
* Mindsdb version you tried to install:2.13.8 (staging)
**Bug description**
~/.local/etc/mindsdb and ~/.local/var/lib/mindsdb have been created in the current working directory instead of .local/etc/mindsdb and .local/var/lib/mindsdb being created in the user's home directory
**To Reproduce**
1. Create the following directories and revoke `write` permission from the current user for all of them:
f'{mindsdb_path}/etc/',
f'{mindsdb_path}/var/'
'/etc/mindsdb',
'/var/lib/mindsdb'
**Expected behavior**
.local/etc/mindsdb and .local/var/lib/mindsdb are created in the user's home directory
**Actual behavior**
~/.local/etc/mindsdb and ~/.local/var/lib/mindsdb are created in the current working directory:
**Additional context**
```
(mindsdb_venv) ~/repos/MindsDB/mindsdb itsyplen$ python -m mindsdb
Athena Datasource is not available by default. If you wish to use it, please install mindsdb_native[extra_data_sources]
SnowflakeDS Datasource is not available by default. If you wish to use it, please install mindsdb_native[snowflake]
Google Cloud Storage Datasource is not available by default. If you wish to use it, please install mindsdb_native[extra_data_sources]
Configuration file:
~/.local/etc/mindsdb/config.json
Storage path:
/Users/itsyplen/repos/MindsDB/mindsdb/~/.local/etc/mindsdb/~/.local/var/lib/mindsdb
Versions:
```
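For context, a likely mechanism (shown as a minimal standalone sketch, not MindsDB code): `Path.mkdir()` does not expand a literal `~`, so a path string starting with `~/` silently becomes a directory named `~` under the current working directory unless it is expanded first.

```python
# Minimal illustration, independent of MindsDB: "~" is not expanded by mkdir().
import os
from pathlib import Path

literal = "~/.local/etc/mindsdb"
print(Path(literal))                        # ~/.local/etc/mindsdb  (a *relative* path)
print(os.path.expanduser(literal))          # /home/<user>/.local/etc/mindsdb
print(Path.home() / ".local/etc/mindsdb")   # same location, built explicitly

# Path(literal).mkdir(parents=True) would therefore create "./~/.local/etc/mindsdb".
```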
</issue>
<code>
[start of mindsdb/utilities/fs.py]
1 import inspect
2 import os
3 import time
4 from pathlib import Path
5 import json
6 import shutil
7 import pickle
8 from distutils.version import LooseVersion
9 import logging
10
11
12 def create_directory(path):
13 path = Path(path)
14 path.mkdir(mode=0o777, exist_ok=True, parents=True)
15
16
17 def get_paths():
18 this_file_path = os.path.abspath(inspect.getfile(inspect.currentframe()))
19 mindsdb_path = os.path.abspath(Path(this_file_path).parent.parent.parent)
20
21 tuples = [
22 (
23 f'{mindsdb_path}/etc/',
24 f'{mindsdb_path}/var/'
25 )
26 ]
27
28 # if windows
29 if os.name == 'nt':
30 tuples.extend([
31 (
32 os.path.join(os.environ['APPDATA'], 'mindsdb'),
33 os.path.join(os.environ['APPDATA'], 'mindsdb'),
34 )
35 ])
36 else:
37 tuples.extend([
38 (
39 '/etc/mindsdb',
40 '/var/lib/mindsdb'
41 ),
42 (
43 '~/.local/etc/mindsdb',
44 '~/.local/var/lib/mindsdb'
45 )
46 ])
47
48 return tuples
49
50
51 def get_or_create_dir_struct():
52 for tup in get_paths():
53 try:
54 for dir in tup:
55 assert(os.path.exists(dir))
56 assert(os.access(dir, os.W_OK) is True)
57
58 config_dir = tup[0]
59 if 'DEV_CONFIG_PATH' in os.environ:
60 config_dir = os.environ['DEV_CONFIG_PATH']
61
62 return config_dir, tup[1]
63 except Exception:
64 pass
65
66 for tup in get_paths():
67 try:
68 for dir in tup:
69 create_directory(dir)
70 assert(os.access(dir, os.W_OK) is True)
71
72 config_dir = tup[0]
73 if 'DEV_CONFIG_PATH' in os.environ:
74 config_dir = os.environ['DEV_CONFIG_PATH']
75
76 return config_dir, tup[1]
77
78 except Exception:
79 pass
80
81 raise Exception('MindsDB storage directory does not exist and could not be created')
82
83
84 def do_init_migration(paths):
85 ''' That initial migration for storage structure. Should be called once after user updates to 2.8.0.
86 When we decide all active users has update (after a month?), this function can be removed.
87 '''
88 # move predictors files by their directories
89 endings = [
90 '_heavy_model_metadata.pickle',
91 '_light_model_metadata.pickle',
92 '_lightwood_data'
93 ]
94 for ending in endings:
95 for p in Path(paths['predictors']).iterdir():
96 if p.is_file() and p.name.endswith(ending):
97 predictor_name = p.name[:-len(ending)]
98 predictor_path = Path(paths['predictors']).joinpath(predictor_name)
99 create_directory(predictor_path)
100 new_file_name = ending[1:]
101 shutil.move(
102 str(p),
103 str(predictor_path.joinpath(new_file_name))
104 )
105 if new_file_name == 'light_model_metadata.pickle':
106 with open(str(predictor_path.joinpath(new_file_name)), 'rb') as fp:
107 lmd = pickle.load(fp)
108
109 if 'ludwig_data' in lmd and 'ludwig_save_path' in lmd['ludwig_data']:
110 lmd['ludwig_data']['ludwig_save_path'] = os.path.join(paths['predictors'], lmd['name'], 'ludwig_data')
111
112 if 'lightwood_data' in lmd and 'save_path' in lmd['lightwood_data']:
113 lmd['lightwood_data']['save_path'] = os.path.join(paths['predictors'], lmd['name'], 'lightwood_data')
114
115 with open(os.path.join(paths['predictors'], lmd['name'], 'light_model_metadata.pickle'), 'wb') as fp:
116 pickle.dump(lmd, fp, protocol=pickle.HIGHEST_PROTOCOL)
117
118 for p in Path(paths['predictors']).iterdir():
119 if p.is_file() and p.name != 'start.mdb_base':
120 p.unlink()
121
122 # mopve each datasource files from ds_name/datasource/{file} to ds_name/{file}
123 for p in Path(paths['datasources']).iterdir():
124 if p.is_dir():
125 datasource_folder = p.joinpath('datasource')
126 if datasource_folder.is_dir():
127 for f in datasource_folder.iterdir():
128 shutil.move(
129 str(f),
130 str(p.joinpath(f.name))
131 )
132 shutil.rmtree(datasource_folder)
133
134
135 def update_versions_file(config, versions):
136 versions_file_path = os.path.join(config.paths['root'], 'versions.json')
137 old_versions = {}
138 if Path(versions_file_path).is_file():
139 try:
140 with open(versions_file_path, 'rt') as f:
141 old_versions = json.loads(f.read())
142 except Exception:
143 pass
144
145 # do here anything for update
146 if len(old_versions) == 0:
147 do_init_migration(config.paths)
148
149 with open(versions_file_path, 'wt') as f:
150 json.dump(versions, f, indent=4, sort_keys=True)
151
152
153 def create_dirs_recursive(path):
154 if isinstance(path, dict):
155 for p in path.values():
156 create_dirs_recursive(p)
157 elif isinstance(path, str):
158 create_directory(path)
159 else:
160 raise ValueError(f'Wrong path: {path}')
161
162
163 def archive_obsolete_predictors(config, old_version):
164 ''' move all predictors trained on mindsdb with version less than
165 old_version to folder for obsolete predictors
166
167 Predictors are outdated in:
168 v2.11.0 - in mindsdb_native added ['data_analysis_v2']['columns']
169 '''
170 obsolete_predictors = []
171 obsolete_predictors_dir = config.paths['obsolete']['predictors']
172 for f in Path(config.paths['predictors']).iterdir():
173 if f.is_dir():
174 if not f.joinpath('versions.json').is_file():
175 obsolete_predictors.append(f.name)
176 else:
177 with open(f.joinpath('versions.json'), 'rt') as vf:
178 versions = json.loads(vf.read())
179 if LooseVersion(versions['mindsdb']) < LooseVersion(old_version):
180 obsolete_predictors.append(f.name)
181 if len(obsolete_predictors) > 0:
182 print('These predictors are outdated and moved to {storage_dir}/obsolete/ folder:')
183 for p in obsolete_predictors:
184 print(f' - {p}')
185 new_path = Path(obsolete_predictors_dir).joinpath(p)
186 if Path(obsolete_predictors_dir).joinpath(p).is_dir():
187 i = 1
188 while Path(obsolete_predictors_dir).joinpath(f'{p}_{i}').is_dir():
189 i += 1
190 new_path = Path(obsolete_predictors_dir).joinpath(f'{p}_{i}')
191
192 shutil.move(
193 Path(config.paths['predictors']).joinpath(p),
194 new_path
195 )
196
197
198 def remove_corrupted_predictors(config, mindsdb_native):
199 ''' Checking that all predictors can be loaded.
200 If not - then move such predictir to {storage_dir}/tmp/corrupted_predictors
201 '''
202 for p in [x for x in Path(config.paths['predictors']).iterdir() if x.is_dir()]:
203 model_name = p.name
204 try:
205 mindsdb_native.get_model_data(model_name)
206 except Exception as e:
207 log = logging.getLogger('mindsdb.main')
208 log.error(f"Error: predictor '{model_name}' corrupted. Move predictor data to '{{storage_dir}}/tmp/corrupted_predictors' dir.")
209 log.error(f"Reason is: {e}")
210 corrupted_predictors_dir = Path(config.paths['tmp']).joinpath('corrupted_predictors')
211 create_directory(corrupted_predictors_dir)
212 shutil.move(
213 str(p),
214 str(corrupted_predictors_dir.joinpath( model_name + str(int(time.time())) ))
215 )
216
[end of mindsdb/utilities/fs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mindsdb/utilities/fs.py b/mindsdb/utilities/fs.py
--- a/mindsdb/utilities/fs.py
+++ b/mindsdb/utilities/fs.py
@@ -40,8 +40,8 @@
'/var/lib/mindsdb'
),
(
- '~/.local/etc/mindsdb',
- '~/.local/var/lib/mindsdb'
+ '{}/.local/etc/mindsdb'.format(Path.home()),
+ '{}/.local/var/lib/mindsdb'.format(Path.home())
)
])
@@ -51,9 +51,9 @@
def get_or_create_dir_struct():
for tup in get_paths():
try:
- for dir in tup:
- assert(os.path.exists(dir))
- assert(os.access(dir, os.W_OK) is True)
+ for _dir in tup:
+ assert os.path.exists(_dir)
+ assert os.access(_dir, os.W_OK) is True
config_dir = tup[0]
if 'DEV_CONFIG_PATH' in os.environ:
@@ -65,9 +65,9 @@
for tup in get_paths():
try:
- for dir in tup:
- create_directory(dir)
- assert(os.access(dir, os.W_OK) is True)
+ for _dir in tup:
+ create_directory(_dir)
+ assert os.access(_dir, os.W_OK) is True
config_dir = tup[0]
if 'DEV_CONFIG_PATH' in os.environ:
|
{"golden_diff": "diff --git a/mindsdb/utilities/fs.py b/mindsdb/utilities/fs.py\n--- a/mindsdb/utilities/fs.py\n+++ b/mindsdb/utilities/fs.py\n@@ -40,8 +40,8 @@\n '/var/lib/mindsdb'\n ),\n (\n- '~/.local/etc/mindsdb',\n- '~/.local/var/lib/mindsdb'\n+ '{}/.local/etc/mindsdb'.format(Path.home()),\n+ '{}/.local/var/lib/mindsdb'.format(Path.home())\n )\n ])\n \n@@ -51,9 +51,9 @@\n def get_or_create_dir_struct():\n for tup in get_paths():\n try:\n- for dir in tup:\n- assert(os.path.exists(dir))\n- assert(os.access(dir, os.W_OK) is True)\n+ for _dir in tup:\n+ assert os.path.exists(_dir)\n+ assert os.access(_dir, os.W_OK) is True\n \n config_dir = tup[0]\n if 'DEV_CONFIG_PATH' in os.environ:\n@@ -65,9 +65,9 @@\n \n for tup in get_paths():\n try:\n- for dir in tup:\n- create_directory(dir)\n- assert(os.access(dir, os.W_OK) is True)\n+ for _dir in tup:\n+ create_directory(_dir)\n+ assert os.access(_dir, os.W_OK) is True\n \n config_dir = tup[0]\n if 'DEV_CONFIG_PATH' in os.environ:\n", "issue": "~/.local/etc/mindsdb and ~/.local/var/lib/mindsdb have been created in current working directory\n**Your Environment**\r\n\r\n* Python version: 3.7.6\r\n* Pip version: 19.2.3\r\n* Operating system: MacOS 10.15.7\r\n* Python environment used (e.g. venv, conda): venv\r\n* Mindsdb version you tried to install:2.13.8 (staging)\r\n\r\n**Bug description**\r\n~/.local/etc/mindsdb and ~/.local/var/lib/mindsdb have been created in current working directory instead of creating\r\n.local/etc/mindsdb and .local/var/lib/mindsdb in user home directory\r\n\r\n\r\n**To Reproduce**\r\n1. Create next directories and revoke `write` permissions from current user for all of these dirs:\r\n f'{mindsdb_path}/etc/',\r\n f'{mindsdb_path}/var/'\r\n '/etc/mindsdb',\r\n '/var/lib/mindsdb'\r\n\r\n**Expected behavior**\r\n.local/etc/mindsdb and .local/var/lib/mindsdb created in user home directory\r\n\r\n**Actual behavior**\r\n~/.local/etc/mindsdb and ~/.local/var/lib/mindsdb created in current working directory:\r\n\r\n\r\n**Additional context**\r\n```\r\nmindsdb_venv) ~/repos/MindsDB/mindsdb itsyplen$ python -m mindsdb\r\nAthena Datasource is not available by default. If you wish to use it, please install mindsdb_native[extra_data_sources]\r\nSnowflakeDS Datasource is not available by default. If you wish to use it, please install mindsdb_native[snowflake]\r\nGoogle Cloud Storage Datasource is not available by default. 
If you wish to use it, please install mindsdb_native[extra_data_sources]\r\nConfiguration file:\r\n ~/.local/etc/mindsdb/config.json\r\nStorage path:\r\n /Users/itsyplen/repos/MindsDB/mindsdb/~/.local/etc/mindsdb/~/.local/var/lib/mindsdb\r\nVersions:\r\n```\r\n\n", "before_files": [{"content": "import inspect\nimport os\nimport time\nfrom pathlib import Path\nimport json\nimport shutil\nimport pickle\nfrom distutils.version import LooseVersion\nimport logging\n\n\ndef create_directory(path):\n path = Path(path)\n path.mkdir(mode=0o777, exist_ok=True, parents=True)\n\n\ndef get_paths():\n this_file_path = os.path.abspath(inspect.getfile(inspect.currentframe()))\n mindsdb_path = os.path.abspath(Path(this_file_path).parent.parent.parent)\n\n tuples = [\n (\n f'{mindsdb_path}/etc/',\n f'{mindsdb_path}/var/'\n )\n ]\n\n # if windows\n if os.name == 'nt':\n tuples.extend([\n (\n os.path.join(os.environ['APPDATA'], 'mindsdb'),\n os.path.join(os.environ['APPDATA'], 'mindsdb'),\n )\n ])\n else:\n tuples.extend([\n (\n '/etc/mindsdb',\n '/var/lib/mindsdb'\n ),\n (\n '~/.local/etc/mindsdb',\n '~/.local/var/lib/mindsdb'\n )\n ])\n\n return tuples\n\n\ndef get_or_create_dir_struct():\n for tup in get_paths():\n try:\n for dir in tup:\n assert(os.path.exists(dir))\n assert(os.access(dir, os.W_OK) is True)\n\n config_dir = tup[0]\n if 'DEV_CONFIG_PATH' in os.environ:\n config_dir = os.environ['DEV_CONFIG_PATH']\n\n return config_dir, tup[1]\n except Exception:\n pass\n\n for tup in get_paths():\n try:\n for dir in tup:\n create_directory(dir)\n assert(os.access(dir, os.W_OK) is True)\n\n config_dir = tup[0]\n if 'DEV_CONFIG_PATH' in os.environ:\n config_dir = os.environ['DEV_CONFIG_PATH']\n\n return config_dir, tup[1]\n\n except Exception:\n pass\n\n raise Exception('MindsDB storage directory does not exist and could not be created')\n\n\ndef do_init_migration(paths):\n ''' That initial migration for storage structure. 
Should be called once after user updates to 2.8.0.\n When we decide all active users has update (after a month?), this function can be removed.\n '''\n # move predictors files by their directories\n endings = [\n '_heavy_model_metadata.pickle',\n '_light_model_metadata.pickle',\n '_lightwood_data'\n ]\n for ending in endings:\n for p in Path(paths['predictors']).iterdir():\n if p.is_file() and p.name.endswith(ending):\n predictor_name = p.name[:-len(ending)]\n predictor_path = Path(paths['predictors']).joinpath(predictor_name)\n create_directory(predictor_path)\n new_file_name = ending[1:]\n shutil.move(\n str(p),\n str(predictor_path.joinpath(new_file_name))\n )\n if new_file_name == 'light_model_metadata.pickle':\n with open(str(predictor_path.joinpath(new_file_name)), 'rb') as fp:\n lmd = pickle.load(fp)\n\n if 'ludwig_data' in lmd and 'ludwig_save_path' in lmd['ludwig_data']:\n lmd['ludwig_data']['ludwig_save_path'] = os.path.join(paths['predictors'], lmd['name'], 'ludwig_data')\n\n if 'lightwood_data' in lmd and 'save_path' in lmd['lightwood_data']:\n lmd['lightwood_data']['save_path'] = os.path.join(paths['predictors'], lmd['name'], 'lightwood_data')\n\n with open(os.path.join(paths['predictors'], lmd['name'], 'light_model_metadata.pickle'), 'wb') as fp:\n pickle.dump(lmd, fp, protocol=pickle.HIGHEST_PROTOCOL)\n\n for p in Path(paths['predictors']).iterdir():\n if p.is_file() and p.name != 'start.mdb_base':\n p.unlink()\n\n # mopve each datasource files from ds_name/datasource/{file} to ds_name/{file}\n for p in Path(paths['datasources']).iterdir():\n if p.is_dir():\n datasource_folder = p.joinpath('datasource')\n if datasource_folder.is_dir():\n for f in datasource_folder.iterdir():\n shutil.move(\n str(f),\n str(p.joinpath(f.name))\n )\n shutil.rmtree(datasource_folder)\n\n\ndef update_versions_file(config, versions):\n versions_file_path = os.path.join(config.paths['root'], 'versions.json')\n old_versions = {}\n if Path(versions_file_path).is_file():\n try:\n with open(versions_file_path, 'rt') as f:\n old_versions = json.loads(f.read())\n except Exception:\n pass\n\n # do here anything for update\n if len(old_versions) == 0:\n do_init_migration(config.paths)\n\n with open(versions_file_path, 'wt') as f:\n json.dump(versions, f, indent=4, sort_keys=True)\n\n\ndef create_dirs_recursive(path):\n if isinstance(path, dict):\n for p in path.values():\n create_dirs_recursive(p)\n elif isinstance(path, str):\n create_directory(path)\n else:\n raise ValueError(f'Wrong path: {path}')\n\n\ndef archive_obsolete_predictors(config, old_version):\n ''' move all predictors trained on mindsdb with version less than\n old_version to folder for obsolete predictors\n\n Predictors are outdated in:\n v2.11.0 - in mindsdb_native added ['data_analysis_v2']['columns']\n '''\n obsolete_predictors = []\n obsolete_predictors_dir = config.paths['obsolete']['predictors']\n for f in Path(config.paths['predictors']).iterdir():\n if f.is_dir():\n if not f.joinpath('versions.json').is_file():\n obsolete_predictors.append(f.name)\n else:\n with open(f.joinpath('versions.json'), 'rt') as vf:\n versions = json.loads(vf.read())\n if LooseVersion(versions['mindsdb']) < LooseVersion(old_version):\n obsolete_predictors.append(f.name)\n if len(obsolete_predictors) > 0:\n print('These predictors are outdated and moved to {storage_dir}/obsolete/ folder:')\n for p in obsolete_predictors:\n print(f' - {p}')\n new_path = Path(obsolete_predictors_dir).joinpath(p)\n if Path(obsolete_predictors_dir).joinpath(p).is_dir():\n i = 
1\n while Path(obsolete_predictors_dir).joinpath(f'{p}_{i}').is_dir():\n i += 1\n new_path = Path(obsolete_predictors_dir).joinpath(f'{p}_{i}')\n\n shutil.move(\n Path(config.paths['predictors']).joinpath(p),\n new_path\n )\n\n\ndef remove_corrupted_predictors(config, mindsdb_native):\n ''' Checking that all predictors can be loaded.\n If not - then move such predictir to {storage_dir}/tmp/corrupted_predictors\n '''\n for p in [x for x in Path(config.paths['predictors']).iterdir() if x.is_dir()]:\n model_name = p.name\n try:\n mindsdb_native.get_model_data(model_name)\n except Exception as e:\n log = logging.getLogger('mindsdb.main')\n log.error(f\"Error: predictor '{model_name}' corrupted. Move predictor data to '{{storage_dir}}/tmp/corrupted_predictors' dir.\")\n log.error(f\"Reason is: {e}\")\n corrupted_predictors_dir = Path(config.paths['tmp']).joinpath('corrupted_predictors')\n create_directory(corrupted_predictors_dir)\n shutil.move(\n str(p),\n str(corrupted_predictors_dir.joinpath( model_name + str(int(time.time())) ))\n )\n", "path": "mindsdb/utilities/fs.py"}]}
| 3,194 | 329 |
gh_patches_debug_9199 | rasdani/github-patches | git_diff | ephios-dev__ephios-166 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Display the calendar URL more nicely
As a user, I want to see a read-only field on my profile page containing the URL of my calendar feed. The link should not be clickable, so that the ics file is not downloaded. Next to the link there should be a button to copy the URL.
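A rough sketch of how such a value could reach the template, assuming a Django context processor; the route name and the token attribute are hypothetical and not taken from ephios.

```python
# Hedged sketch only; the URL name and token attribute are assumptions.
from django.urls import reverse

def calendar_feed_url(request):
    if not request.user.is_authenticated:
        return {}
    path = reverse("user-event-feed")  # hypothetical URL name
    url = request.build_absolute_uri(path)
    return {"calendar_feed_url": f"{url}?token={request.user.calendar_token}"}
```

The template would then render this string in a disabled or read-only input with a copy-to-clipboard button instead of an anchor tag.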
</issue>
<code>
[start of ephios/extra/context.py]
1 import importlib
2 import subprocess
3
4 from django.templatetags.static import static
5 from django.utils.translation import get_language
6
7 from ephios.extra.signals import footer_link
8
9 try:
10 EPHIOS_VERSION = (
11 subprocess.check_output(["git", "rev-parse", "--short", "HEAD"]).decode().strip()
12 )
13 except (subprocess.CalledProcessError, FileNotFoundError):
14 # suggested in https://github.com/python-poetry/poetry/issues/273
15 EPHIOS_VERSION = "v" + importlib.metadata.version("ephios")
16
17
18 def ephios_base_context(request):
19 footer = {}
20 for receiver, result in footer_link.send(None, request=request):
21 for label, url in result.items():
22 footer[label] = url
23
24 datatables_translation_url = None
25 if get_language() == "de-de":
26 datatables_translation_url = static("datatables/german.json")
27
28 return {
29 "footer": footer,
30 "datatables_translation_url": datatables_translation_url,
31 "ephios_version": EPHIOS_VERSION,
32 }
33
[end of ephios/extra/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ephios/extra/context.py b/ephios/extra/context.py
--- a/ephios/extra/context.py
+++ b/ephios/extra/context.py
@@ -5,6 +5,7 @@
from django.utils.translation import get_language
from ephios.extra.signals import footer_link
+from ephios.settings import SITE_URL
try:
EPHIOS_VERSION = (
@@ -29,4 +30,5 @@
"footer": footer,
"datatables_translation_url": datatables_translation_url,
"ephios_version": EPHIOS_VERSION,
+ "SITE_URL": SITE_URL,
}
|
{"golden_diff": "diff --git a/ephios/extra/context.py b/ephios/extra/context.py\n--- a/ephios/extra/context.py\n+++ b/ephios/extra/context.py\n@@ -5,6 +5,7 @@\n from django.utils.translation import get_language\n \n from ephios.extra.signals import footer_link\n+from ephios.settings import SITE_URL\n \n try:\n EPHIOS_VERSION = (\n@@ -29,4 +30,5 @@\n \"footer\": footer,\n \"datatables_translation_url\": datatables_translation_url,\n \"ephios_version\": EPHIOS_VERSION,\n+ \"SITE_URL\": SITE_URL,\n }\n", "issue": "Kalender-URL sch\u00f6ner anzeigen\nAls Nutzer m\u00f6chte ich auf meiner Profilseite ein read-only-Feld sehen, in dem die URL zu meinem Kalender-Feed steht. Der Link soll nicht klickbar sein, damit die ics-Datei nicht heruntergeladen wird. Neben dem Link soll ein Button zum Kopieren der URL vorhanden sein.\n", "before_files": [{"content": "import importlib\nimport subprocess\n\nfrom django.templatetags.static import static\nfrom django.utils.translation import get_language\n\nfrom ephios.extra.signals import footer_link\n\ntry:\n EPHIOS_VERSION = (\n subprocess.check_output([\"git\", \"rev-parse\", \"--short\", \"HEAD\"]).decode().strip()\n )\nexcept (subprocess.CalledProcessError, FileNotFoundError):\n # suggested in https://github.com/python-poetry/poetry/issues/273\n EPHIOS_VERSION = \"v\" + importlib.metadata.version(\"ephios\")\n\n\ndef ephios_base_context(request):\n footer = {}\n for receiver, result in footer_link.send(None, request=request):\n for label, url in result.items():\n footer[label] = url\n\n datatables_translation_url = None\n if get_language() == \"de-de\":\n datatables_translation_url = static(\"datatables/german.json\")\n\n return {\n \"footer\": footer,\n \"datatables_translation_url\": datatables_translation_url,\n \"ephios_version\": EPHIOS_VERSION,\n }\n", "path": "ephios/extra/context.py"}]}
| 905 | 139 |
gh_patches_debug_33040 | rasdani/github-patches | git_diff | webkom__lego-903 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Semesters are missing from interest form e-mails
It seems that the signal to the feed handler is triggered before the semesters are stored on the model? The semesters show up when viewing the interest later, but are not present when the email is generated. Could this have happened after the changes to semesters, @odinuge?
A quick fix would be to send the email during creation, instead of in the feed handler.
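For illustration, with DRF the many-to-many `semesters` are attached only after the instance itself has been created, so anything triggered from the model's create signal runs too early; moving the notification into the serializer's `create()` (as the quick fix suggests) sees the complete data. A hedged sketch, with `notify_interest_group()` as a stand-in for the actual notification call:

```python
# Sketch only; notify_interest_group() is a placeholder, not lego's API.
def create(self, validated_data):
    semesters = validated_data.pop("semesters")
    interest = CompanyInterest.objects.create(**validated_data)  # create signal fires here, M2M still empty
    interest.semesters.add(*semesters)                           # semesters attached only now
    notify_interest_group(interest)                              # safe: relation is populated
    return interest
```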
</issue>
<code>
[start of lego/apps/feed/feed_handlers/company_interest_handler.py]
1 from lego.apps.companies.models import CompanyInterest
2 from lego.apps.companies.notifications import CompanyInterestNotification
3 from lego.apps.feed.activities import Activity
4 from lego.apps.feed.feed_handlers.base_handler import BaseHandler
5 from lego.apps.feed.feed_manager import feed_manager
6 from lego.apps.feed.feeds.notification_feed import NotificationFeed
7 from lego.apps.feed.registry import register_handler
8 from lego.apps.feed.verbs import CompanyInterestVerb
9 from lego.apps.users.models import AbakusGroup
10
11
12 class CompanyInterestHandler(BaseHandler):
13
14 model = CompanyInterest
15 manager = feed_manager
16
17 def handle_create(self, company_interest):
18
19 activity = Activity(
20 actor=company_interest,
21 verb=CompanyInterestVerb,
22 object=company_interest,
23 time=company_interest.created_at,
24 extra_context={}
25 )
26
27 recipients = [
28 member.user for member in AbakusGroup.objects.get(name="Bedkom").memberships.all()
29 ]
30
31 self.manager.add_activity(
32 activity, [recipient.pk for recipient in recipients], [NotificationFeed]
33 )
34
35 for recipient in recipients:
36 notification = CompanyInterestNotification(
37 recipient, company_interest=company_interest
38 )
39 notification.notify()
40
41 def handle_update(self, company_interest):
42 pass
43
44 def handle_delete(self, company_interest):
45 pass
46
47
48 register_handler(CompanyInterestHandler)
49
[end of lego/apps/feed/feed_handlers/company_interest_handler.py]
[start of lego/apps/companies/serializers.py]
1 from rest_framework import serializers
2 from rest_framework.fields import CharField
3
4 from lego.apps.comments.serializers import CommentSerializer
5 from lego.apps.companies.models import (Company, CompanyContact, CompanyFile, CompanyInterest,
6 Semester, SemesterStatus)
7 from lego.apps.files.fields import FileField, ImageField
8 from lego.apps.users.fields import PublicUserField
9 from lego.apps.users.models import User
10 from lego.utils.serializers import BasisModelSerializer
11
12
13 class SemesterSerializer(BasisModelSerializer):
14 class Meta:
15 model = Semester
16 fields = ('id', 'year', 'semester', 'active_interest_form')
17
18
19 class SemesterStatusSerializer(serializers.ModelSerializer):
20
21 class Meta:
22 model = SemesterStatus
23 fields = ('id', 'semester', 'contacted_status')
24
25 def create(self, validated_data):
26 company = Company.objects.get(pk=self.context['view'].kwargs['company_pk'])
27 validated_data['company'] = company
28 return super().create(validated_data)
29
30
31 class SemesterStatusDetailSerializer(SemesterStatusSerializer):
32 contract = FileField(required=False, allow_null=True)
33 statistics = FileField(required=False, allow_null=True)
34 evaluation = FileField(required=False, allow_null=True)
35
36 contract_name = CharField(source='contract_id', read_only=True)
37 statistics_name = CharField(source='statistics_id', read_only=True)
38 evaluation_name = CharField(source='evaluation_id', read_only=True)
39
40 class Meta:
41 model = SemesterStatus
42 fields = ('id', 'semester', 'contacted_status', 'contract', 'statistics',
43 'evaluation', 'contract_name', 'statistics_name', 'evaluation_name')
44
45
46 class CompanyContactSerializer(BasisModelSerializer):
47
48 class Meta:
49 model = CompanyContact
50 fields = ('id', 'name', 'role', 'mail', 'phone', 'mobile')
51
52 def create(self, validated_data):
53 company = Company.objects.get(pk=self.context['view'].kwargs['company_pk'])
54 validated_data['company'] = company
55 return super().create(validated_data)
56
57
58 class CompanyFileSerializer(serializers.ModelSerializer):
59 file = FileField()
60
61 class Meta:
62 model = CompanyFile
63 fields = ('id', 'file')
64
65 def create(self, validated_data):
66 company = Company.objects.get(pk=self.context['view'].kwargs['company_pk'])
67 validated_data['company'] = company
68 return super().create(validated_data)
69
70
71 class CompanyListSerializer(BasisModelSerializer):
72 logo = ImageField(required=False, options={'height': 500})
73 thumbnail = ImageField(
74 source='logo',
75 required=False,
76 options={'height': 500, 'width': 500, 'smart': True}
77 )
78
79 class Meta:
80 model = Company
81 fields = ('id', 'name', 'description', 'website', 'company_type', 'address', 'logo',
82 'thumbnail')
83
84
85 class CompanyAdminListSerializer(BasisModelSerializer):
86 semester_statuses = SemesterStatusSerializer(many=True, read_only=True)
87 student_contact = PublicUserField(required=False, queryset=User.objects.all())
88
89 class Meta:
90 model = Company
91 fields = ('id', 'name', 'semester_statuses', 'student_contact', 'admin_comment',
92 'active')
93
94
95 class CompanyDetailSerializer(BasisModelSerializer):
96 logo = ImageField(required=False, options={'height': 500})
97
98 class Meta:
99 model = Company
100 fields = ('id', 'name', 'description', 'phone', 'company_type', 'website', 'address',
101 'logo')
102
103
104 class CompanyAdminDetailSerializer(BasisModelSerializer):
105 comments = CommentSerializer(read_only=True, many=True)
106 comment_target = CharField(read_only=True)
107
108 student_contact = PublicUserField(required=False, allow_null=True, queryset=User.objects.all())
109 semester_statuses = SemesterStatusDetailSerializer(many=True, read_only=True)
110 company_contacts = CompanyContactSerializer(many=True, read_only=True)
111
112 logo = ImageField(required=False, options={'height': 500})
113 files = CompanyFileSerializer(many=True, read_only=True)
114
115 class Meta:
116 model = Company
117 fields = ('id', 'name', 'student_contact', 'description', 'phone',
118 'company_type', 'website', 'address', 'payment_mail', 'comments',
119 'comment_target', 'semester_statuses', 'active', 'admin_comment',
120 'logo', 'files', 'company_contacts')
121
122
123 class CompanyInterestSerializer(serializers.ModelSerializer):
124 class Meta:
125 model = CompanyInterest
126 fields = ('id', 'company_name', 'contact_person', 'mail', 'semesters', 'events',
127 'other_offers', 'comment')
128
129
130 class CompanyInterestListSerializer(serializers.ModelSerializer):
131 class Meta:
132 model = CompanyInterest
133 fields = ('id', 'company_name', 'contact_person', 'mail')
134
135
136 class CompanySearchSerializer(serializers.ModelSerializer):
137 """
138 Public company information available on search.
139 """
140 class Meta:
141 model = Company
142 fields = ('id', 'name', 'description', 'website', 'company_type', 'address')
143
[end of lego/apps/companies/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lego/apps/companies/serializers.py b/lego/apps/companies/serializers.py
--- a/lego/apps/companies/serializers.py
+++ b/lego/apps/companies/serializers.py
@@ -4,6 +4,7 @@
from lego.apps.comments.serializers import CommentSerializer
from lego.apps.companies.models import (Company, CompanyContact, CompanyFile, CompanyInterest,
Semester, SemesterStatus)
+from lego.apps.feed.registry import get_handler
from lego.apps.files.fields import FileField, ImageField
from lego.apps.users.fields import PublicUserField
from lego.apps.users.models import User
@@ -126,6 +127,15 @@
fields = ('id', 'company_name', 'contact_person', 'mail', 'semesters', 'events',
'other_offers', 'comment')
+ def create(self, validated_data):
+ semesters = validated_data.pop('semesters')
+ company_interest = CompanyInterest.objects.create(**validated_data)
+ company_interest.semesters.add(*semesters)
+ company_interest.save()
+ get_handler(CompanyInterest).handle_interest(company_interest)
+
+ return company_interest
+
class CompanyInterestListSerializer(serializers.ModelSerializer):
class Meta:
diff --git a/lego/apps/feed/feed_handlers/company_interest_handler.py b/lego/apps/feed/feed_handlers/company_interest_handler.py
--- a/lego/apps/feed/feed_handlers/company_interest_handler.py
+++ b/lego/apps/feed/feed_handlers/company_interest_handler.py
@@ -14,7 +14,7 @@
model = CompanyInterest
manager = feed_manager
- def handle_create(self, company_interest):
+ def handle_interest(self, company_interest):
activity = Activity(
actor=company_interest,
@@ -38,6 +38,9 @@
)
notification.notify()
+ def handle_create(self, company_interest):
+ pass
+
def handle_update(self, company_interest):
pass
|
{"golden_diff": "diff --git a/lego/apps/companies/serializers.py b/lego/apps/companies/serializers.py\n--- a/lego/apps/companies/serializers.py\n+++ b/lego/apps/companies/serializers.py\n@@ -4,6 +4,7 @@\n from lego.apps.comments.serializers import CommentSerializer\n from lego.apps.companies.models import (Company, CompanyContact, CompanyFile, CompanyInterest,\n Semester, SemesterStatus)\n+from lego.apps.feed.registry import get_handler\n from lego.apps.files.fields import FileField, ImageField\n from lego.apps.users.fields import PublicUserField\n from lego.apps.users.models import User\n@@ -126,6 +127,15 @@\n fields = ('id', 'company_name', 'contact_person', 'mail', 'semesters', 'events',\n 'other_offers', 'comment')\n \n+ def create(self, validated_data):\n+ semesters = validated_data.pop('semesters')\n+ company_interest = CompanyInterest.objects.create(**validated_data)\n+ company_interest.semesters.add(*semesters)\n+ company_interest.save()\n+ get_handler(CompanyInterest).handle_interest(company_interest)\n+\n+ return company_interest\n+\n \n class CompanyInterestListSerializer(serializers.ModelSerializer):\n class Meta:\ndiff --git a/lego/apps/feed/feed_handlers/company_interest_handler.py b/lego/apps/feed/feed_handlers/company_interest_handler.py\n--- a/lego/apps/feed/feed_handlers/company_interest_handler.py\n+++ b/lego/apps/feed/feed_handlers/company_interest_handler.py\n@@ -14,7 +14,7 @@\n model = CompanyInterest\n manager = feed_manager\n \n- def handle_create(self, company_interest):\n+ def handle_interest(self, company_interest):\n \n activity = Activity(\n actor=company_interest,\n@@ -38,6 +38,9 @@\n )\n notification.notify()\n \n+ def handle_create(self, company_interest):\n+ pass\n+\n def handle_update(self, company_interest):\n pass\n", "issue": "Semesters are missing from interest form e-mails\nIt seems that the signal to the feed handler is triggered before the semesters are stored on the model? The semesters show up when viewing the interest later, but are not present when the email is generated. 
Could this have happened after the changes to semesters, @odinuge?\r\n\r\nA quick fix would be to send the email during creation, instead of in the feed handler.\n", "before_files": [{"content": "from lego.apps.companies.models import CompanyInterest\nfrom lego.apps.companies.notifications import CompanyInterestNotification\nfrom lego.apps.feed.activities import Activity\nfrom lego.apps.feed.feed_handlers.base_handler import BaseHandler\nfrom lego.apps.feed.feed_manager import feed_manager\nfrom lego.apps.feed.feeds.notification_feed import NotificationFeed\nfrom lego.apps.feed.registry import register_handler\nfrom lego.apps.feed.verbs import CompanyInterestVerb\nfrom lego.apps.users.models import AbakusGroup\n\n\nclass CompanyInterestHandler(BaseHandler):\n\n model = CompanyInterest\n manager = feed_manager\n\n def handle_create(self, company_interest):\n\n activity = Activity(\n actor=company_interest,\n verb=CompanyInterestVerb,\n object=company_interest,\n time=company_interest.created_at,\n extra_context={}\n )\n\n recipients = [\n member.user for member in AbakusGroup.objects.get(name=\"Bedkom\").memberships.all()\n ]\n\n self.manager.add_activity(\n activity, [recipient.pk for recipient in recipients], [NotificationFeed]\n )\n\n for recipient in recipients:\n notification = CompanyInterestNotification(\n recipient, company_interest=company_interest\n )\n notification.notify()\n\n def handle_update(self, company_interest):\n pass\n\n def handle_delete(self, company_interest):\n pass\n\n\nregister_handler(CompanyInterestHandler)\n", "path": "lego/apps/feed/feed_handlers/company_interest_handler.py"}, {"content": "from rest_framework import serializers\nfrom rest_framework.fields import CharField\n\nfrom lego.apps.comments.serializers import CommentSerializer\nfrom lego.apps.companies.models import (Company, CompanyContact, CompanyFile, CompanyInterest,\n Semester, SemesterStatus)\nfrom lego.apps.files.fields import FileField, ImageField\nfrom lego.apps.users.fields import PublicUserField\nfrom lego.apps.users.models import User\nfrom lego.utils.serializers import BasisModelSerializer\n\n\nclass SemesterSerializer(BasisModelSerializer):\n class Meta:\n model = Semester\n fields = ('id', 'year', 'semester', 'active_interest_form')\n\n\nclass SemesterStatusSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = SemesterStatus\n fields = ('id', 'semester', 'contacted_status')\n\n def create(self, validated_data):\n company = Company.objects.get(pk=self.context['view'].kwargs['company_pk'])\n validated_data['company'] = company\n return super().create(validated_data)\n\n\nclass SemesterStatusDetailSerializer(SemesterStatusSerializer):\n contract = FileField(required=False, allow_null=True)\n statistics = FileField(required=False, allow_null=True)\n evaluation = FileField(required=False, allow_null=True)\n\n contract_name = CharField(source='contract_id', read_only=True)\n statistics_name = CharField(source='statistics_id', read_only=True)\n evaluation_name = CharField(source='evaluation_id', read_only=True)\n\n class Meta:\n model = SemesterStatus\n fields = ('id', 'semester', 'contacted_status', 'contract', 'statistics',\n 'evaluation', 'contract_name', 'statistics_name', 'evaluation_name')\n\n\nclass CompanyContactSerializer(BasisModelSerializer):\n\n class Meta:\n model = CompanyContact\n fields = ('id', 'name', 'role', 'mail', 'phone', 'mobile')\n\n def create(self, validated_data):\n company = Company.objects.get(pk=self.context['view'].kwargs['company_pk'])\n 
validated_data['company'] = company\n return super().create(validated_data)\n\n\nclass CompanyFileSerializer(serializers.ModelSerializer):\n file = FileField()\n\n class Meta:\n model = CompanyFile\n fields = ('id', 'file')\n\n def create(self, validated_data):\n company = Company.objects.get(pk=self.context['view'].kwargs['company_pk'])\n validated_data['company'] = company\n return super().create(validated_data)\n\n\nclass CompanyListSerializer(BasisModelSerializer):\n logo = ImageField(required=False, options={'height': 500})\n thumbnail = ImageField(\n source='logo',\n required=False,\n options={'height': 500, 'width': 500, 'smart': True}\n )\n\n class Meta:\n model = Company\n fields = ('id', 'name', 'description', 'website', 'company_type', 'address', 'logo',\n 'thumbnail')\n\n\nclass CompanyAdminListSerializer(BasisModelSerializer):\n semester_statuses = SemesterStatusSerializer(many=True, read_only=True)\n student_contact = PublicUserField(required=False, queryset=User.objects.all())\n\n class Meta:\n model = Company\n fields = ('id', 'name', 'semester_statuses', 'student_contact', 'admin_comment',\n 'active')\n\n\nclass CompanyDetailSerializer(BasisModelSerializer):\n logo = ImageField(required=False, options={'height': 500})\n\n class Meta:\n model = Company\n fields = ('id', 'name', 'description', 'phone', 'company_type', 'website', 'address',\n 'logo')\n\n\nclass CompanyAdminDetailSerializer(BasisModelSerializer):\n comments = CommentSerializer(read_only=True, many=True)\n comment_target = CharField(read_only=True)\n\n student_contact = PublicUserField(required=False, allow_null=True, queryset=User.objects.all())\n semester_statuses = SemesterStatusDetailSerializer(many=True, read_only=True)\n company_contacts = CompanyContactSerializer(many=True, read_only=True)\n\n logo = ImageField(required=False, options={'height': 500})\n files = CompanyFileSerializer(many=True, read_only=True)\n\n class Meta:\n model = Company\n fields = ('id', 'name', 'student_contact', 'description', 'phone',\n 'company_type', 'website', 'address', 'payment_mail', 'comments',\n 'comment_target', 'semester_statuses', 'active', 'admin_comment',\n 'logo', 'files', 'company_contacts')\n\n\nclass CompanyInterestSerializer(serializers.ModelSerializer):\n class Meta:\n model = CompanyInterest\n fields = ('id', 'company_name', 'contact_person', 'mail', 'semesters', 'events',\n 'other_offers', 'comment')\n\n\nclass CompanyInterestListSerializer(serializers.ModelSerializer):\n class Meta:\n model = CompanyInterest\n fields = ('id', 'company_name', 'contact_person', 'mail')\n\n\nclass CompanySearchSerializer(serializers.ModelSerializer):\n \"\"\"\n Public company information available on search.\n \"\"\"\n class Meta:\n model = Company\n fields = ('id', 'name', 'description', 'website', 'company_type', 'address')\n", "path": "lego/apps/companies/serializers.py"}]}
| 2,459 | 435 |
gh_patches_debug_2639 | rasdani/github-patches | git_diff | pypa__pip-10009 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update quickstart guide to reflect user research
Updates quickstart guide to reflect most common tasks as discovered in our "buy a feature" user research.
Preview: https://pip--9137.org.readthedocs.build/en/9137/quickstart/
</issue>
<code>
[start of docs/html/conf.py]
1 """Sphinx configuration file for pip's documentation."""
2
3 import glob
4 import os
5 import pathlib
6 import re
7 import sys
8 from typing import List, Tuple
9
10 # Add the docs/ directory to sys.path, because pip_sphinxext.py is there.
11 docs_dir = os.path.dirname(os.path.dirname(__file__))
12 sys.path.insert(0, docs_dir)
13
14 # -- General configuration ------------------------------------------------------------
15
16 extensions = [
17 # first-party extensions
18 "sphinx.ext.autodoc",
19 "sphinx.ext.todo",
20 "sphinx.ext.extlinks",
21 "sphinx.ext.intersphinx",
22 # our extensions
23 "pip_sphinxext",
24 # third-party extensions
25 "myst_parser",
26 "sphinx_copybutton",
27 "sphinx_inline_tabs",
28 "sphinxcontrib.towncrier",
29 ]
30
31 # General information about the project.
32 project = "pip"
33 copyright = "2008-2020, PyPA"
34
35 # Find the version and release information.
36 # We have a single source of truth for our version number: pip's __init__.py file.
37 # This next bit of code reads from it.
38 file_with_version = os.path.join(docs_dir, "..", "src", "pip", "__init__.py")
39 with open(file_with_version) as f:
40 for line in f:
41 m = re.match(r'__version__ = "(.*)"', line)
42 if m:
43 __version__ = m.group(1)
44 # The short X.Y version.
45 version = ".".join(__version__.split(".")[:2])
46 # The full version, including alpha/beta/rc tags.
47 release = __version__
48 break
49 else: # AKA no-break
50 version = release = "dev"
51
52 print("pip version:", version)
53 print("pip release:", release)
54
55 # -- Options for smartquotes ----------------------------------------------------------
56
57 # Disable the conversion of dashes so that long options like "--find-links" won't
58 # render as "-find-links" if included in the text.The default of "qDe" converts normal
59 # quote characters ('"' and "'"), en and em dashes ("--" and "---"), and ellipses "..."
60 smartquotes_action = "qe"
61
62 # -- Options for intersphinx ----------------------------------------------------------
63
64 intersphinx_mapping = {
65 "python": ("https://docs.python.org/3", None),
66 "pypug": ("https://packaging.python.org", None),
67 }
68
69 # -- Options for extlinks -------------------------------------------------------------
70
71 extlinks = {
72 "issue": ("https://github.com/pypa/pip/issues/%s", "#"),
73 "pull": ("https://github.com/pypa/pip/pull/%s", "PR #"),
74 "pypi": ("https://pypi.org/project/%s/", ""),
75 }
76
77 # -- Options for towncrier_draft extension --------------------------------------------
78
79 towncrier_draft_autoversion_mode = "draft" # or: 'sphinx-release', 'sphinx-version'
80 towncrier_draft_include_empty = True
81 towncrier_draft_working_directory = pathlib.Path(docs_dir).parent
82 # Not yet supported: towncrier_draft_config_path = 'pyproject.toml' # relative to cwd
83
84 # -- Options for HTML -----------------------------------------------------------------
85
86 html_theme = "furo"
87 html_title = f"{project} documentation v{release}"
88
89 # Disable the generation of the various indexes
90 html_use_modindex = False
91 html_use_index = False
92
93 # -- Options for Manual Pages ---------------------------------------------------------
94
95
96 # List of manual pages generated
97 def determine_man_pages() -> List[Tuple[str, str, str, str, int]]:
98 """Determine which man pages need to be generated."""
99
100 def to_document_name(path: str, base_dir: str) -> str:
101 """Convert a provided path to a Sphinx "document name"."""
102 relative_path = os.path.relpath(path, base_dir)
103 root, _ = os.path.splitext(relative_path)
104 return root.replace(os.sep, "/")
105
106 # Crawl the entire man/commands/ directory and list every file with appropriate
107 # name and details.
108 man_dir = os.path.join(docs_dir, "man")
109 raw_subcommands = glob.glob(os.path.join(man_dir, "commands/*.rst"))
110 if not raw_subcommands:
111 raise FileNotFoundError(
112 "The individual subcommand manpages could not be found!"
113 )
114
115 retval = [
116 ("index", "pip", "package manager for Python packages", "pip developers", 1),
117 ]
118 for fname in raw_subcommands:
119 fname_base = to_document_name(fname, man_dir)
120 outname = "pip-" + fname_base.split("/")[1]
121 description = "description of {} command".format(outname.replace("-", " "))
122
123 retval.append((fname_base, outname, description, "pip developers", 1))
124
125 return retval
126
127
128 man_pages = determine_man_pages()
129
[end of docs/html/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/html/conf.py b/docs/html/conf.py
--- a/docs/html/conf.py
+++ b/docs/html/conf.py
@@ -30,7 +30,7 @@
# General information about the project.
project = "pip"
-copyright = "2008-2020, PyPA"
+copyright = "The pip developers"
# Find the version and release information.
# We have a single source of truth for our version number: pip's __init__.py file.
|
{"golden_diff": "diff --git a/docs/html/conf.py b/docs/html/conf.py\n--- a/docs/html/conf.py\n+++ b/docs/html/conf.py\n@@ -30,7 +30,7 @@\n \n # General information about the project.\n project = \"pip\"\n-copyright = \"2008-2020, PyPA\"\n+copyright = \"The pip developers\"\n \n # Find the version and release information.\n # We have a single source of truth for our version number: pip's __init__.py file.\n", "issue": "Update quickstart guide to reflect user research\nUpdates quickstart guide to reflect most common tasks as discovered in our \"buy a feature\" user research.\r\n\r\nPreview: https://pip--9137.org.readthedocs.build/en/9137/quickstart/\n", "before_files": [{"content": "\"\"\"Sphinx configuration file for pip's documentation.\"\"\"\n\nimport glob\nimport os\nimport pathlib\nimport re\nimport sys\nfrom typing import List, Tuple\n\n# Add the docs/ directory to sys.path, because pip_sphinxext.py is there.\ndocs_dir = os.path.dirname(os.path.dirname(__file__))\nsys.path.insert(0, docs_dir)\n\n# -- General configuration ------------------------------------------------------------\n\nextensions = [\n # first-party extensions\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.extlinks\",\n \"sphinx.ext.intersphinx\",\n # our extensions\n \"pip_sphinxext\",\n # third-party extensions\n \"myst_parser\",\n \"sphinx_copybutton\",\n \"sphinx_inline_tabs\",\n \"sphinxcontrib.towncrier\",\n]\n\n# General information about the project.\nproject = \"pip\"\ncopyright = \"2008-2020, PyPA\"\n\n# Find the version and release information.\n# We have a single source of truth for our version number: pip's __init__.py file.\n# This next bit of code reads from it.\nfile_with_version = os.path.join(docs_dir, \"..\", \"src\", \"pip\", \"__init__.py\")\nwith open(file_with_version) as f:\n for line in f:\n m = re.match(r'__version__ = \"(.*)\"', line)\n if m:\n __version__ = m.group(1)\n # The short X.Y version.\n version = \".\".join(__version__.split(\".\")[:2])\n # The full version, including alpha/beta/rc tags.\n release = __version__\n break\n else: # AKA no-break\n version = release = \"dev\"\n\nprint(\"pip version:\", version)\nprint(\"pip release:\", release)\n\n# -- Options for smartquotes ----------------------------------------------------------\n\n# Disable the conversion of dashes so that long options like \"--find-links\" won't\n# render as \"-find-links\" if included in the text.The default of \"qDe\" converts normal\n# quote characters ('\"' and \"'\"), en and em dashes (\"--\" and \"---\"), and ellipses \"...\"\nsmartquotes_action = \"qe\"\n\n# -- Options for intersphinx ----------------------------------------------------------\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"pypug\": (\"https://packaging.python.org\", None),\n}\n\n# -- Options for extlinks -------------------------------------------------------------\n\nextlinks = {\n \"issue\": (\"https://github.com/pypa/pip/issues/%s\", \"#\"),\n \"pull\": (\"https://github.com/pypa/pip/pull/%s\", \"PR #\"),\n \"pypi\": (\"https://pypi.org/project/%s/\", \"\"),\n}\n\n# -- Options for towncrier_draft extension --------------------------------------------\n\ntowncrier_draft_autoversion_mode = \"draft\" # or: 'sphinx-release', 'sphinx-version'\ntowncrier_draft_include_empty = True\ntowncrier_draft_working_directory = pathlib.Path(docs_dir).parent\n# Not yet supported: towncrier_draft_config_path = 'pyproject.toml' # relative to cwd\n\n# -- Options for HTML 
-----------------------------------------------------------------\n\nhtml_theme = \"furo\"\nhtml_title = f\"{project} documentation v{release}\"\n\n# Disable the generation of the various indexes\nhtml_use_modindex = False\nhtml_use_index = False\n\n# -- Options for Manual Pages ---------------------------------------------------------\n\n\n# List of manual pages generated\ndef determine_man_pages() -> List[Tuple[str, str, str, str, int]]:\n \"\"\"Determine which man pages need to be generated.\"\"\"\n\n def to_document_name(path: str, base_dir: str) -> str:\n \"\"\"Convert a provided path to a Sphinx \"document name\".\"\"\"\n relative_path = os.path.relpath(path, base_dir)\n root, _ = os.path.splitext(relative_path)\n return root.replace(os.sep, \"/\")\n\n # Crawl the entire man/commands/ directory and list every file with appropriate\n # name and details.\n man_dir = os.path.join(docs_dir, \"man\")\n raw_subcommands = glob.glob(os.path.join(man_dir, \"commands/*.rst\"))\n if not raw_subcommands:\n raise FileNotFoundError(\n \"The individual subcommand manpages could not be found!\"\n )\n\n retval = [\n (\"index\", \"pip\", \"package manager for Python packages\", \"pip developers\", 1),\n ]\n for fname in raw_subcommands:\n fname_base = to_document_name(fname, man_dir)\n outname = \"pip-\" + fname_base.split(\"/\")[1]\n description = \"description of {} command\".format(outname.replace(\"-\", \" \"))\n\n retval.append((fname_base, outname, description, \"pip developers\", 1))\n\n return retval\n\n\nman_pages = determine_man_pages()\n", "path": "docs/html/conf.py"}]}
| 1,902 | 108 |
gh_patches_debug_31734
|
rasdani/github-patches
|
git_diff
|
psf__black-4189
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SyntaxWarning on regexp on first run of black
When running `black` on the following code:
```
text = re.sub(
"([_a-zA-Z0-9-+]+)(\.[_a-zA-Z0-9-+]+)*"
"@([a-zA-Z0-9-]+)(\.[a-zA-Z0-9-]+)*(\.[a-zA-Z]{2,4})",
'<a href="mailto:\g<0>">\g<0></a>',
text,
)
text = re.sub(
"(ftp|http|https):\/\/(\w+:{0,1}\w*@)?"
"(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?",
'<a href="\g<0>">\g<0></a>',
text,
)
```
I get the following warnings (written twice):
```
<unknown>:2: SyntaxWarning: invalid escape sequence '\.'
<unknown>:3: SyntaxWarning: invalid escape sequence '\.'
<unknown>:4: SyntaxWarning: invalid escape sequence '\g'
<unknown>:8: SyntaxWarning: invalid escape sequence '\/'
<unknown>:9: SyntaxWarning: invalid escape sequence '\S'
<unknown>:10: SyntaxWarning: invalid escape sequence '\g'
<unknown>:2: SyntaxWarning: invalid escape sequence '\.'
<unknown>:3: SyntaxWarning: invalid escape sequence '\.'
<unknown>:4: SyntaxWarning: invalid escape sequence '\g'
<unknown>:8: SyntaxWarning: invalid escape sequence '\/'
<unknown>:9: SyntaxWarning: invalid escape sequence '\S'
<unknown>:10: SyntaxWarning: invalid escape sequence '\g'
```
When re-running `black` on the same file, the warnings are not shown again. I have to modify the lines (adding a space for instance) to see the warnings again.
Are these warnings normal? (The syntax is normally correct according to the documentation of the `re` module.)
If they are normal, should they really appear twice in the output? And why don't they appear again when running `black` a second time?
</issue>
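Editor's note (added commentary, not part of the original report): the warnings come from CPython itself. An unrecognized escape such as `\.` inside a non-raw string literal triggers a warning when the source is compiled, which is what happens when black runs `ast.parse` for its AST safety check. On Python 3.12+ this is a `SyntaxWarning`; older versions emit a `DeprecationWarning`, which is why the patch recorded below silences both. A minimal, standalone sketch:

```python
import ast
import warnings

# Source text containing a non-raw string literal with the escape "\."
# (the same situation as the re.sub patterns in the report).
SRC = 'pattern = "([_a-zA-Z0-9-+]+)(\\.[_a-zA-Z0-9-+]+)*"\n'

ast.parse(SRC)  # may print: SyntaxWarning: invalid escape sequence '\.'

# Silencing the warning around the parse call, the approach the fix below takes:
with warnings.catch_warnings():
    warnings.simplefilter("ignore", SyntaxWarning)
    warnings.simplefilter("ignore", DeprecationWarning)
    tree = ast.parse(SRC)

# Writing the pattern as a raw string avoids the warning at the source:
ast.parse('pattern = r"([_a-zA-Z0-9-+]+)(\\.[_a-zA-Z0-9-+]+)*"\n')
```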
<code>
[start of src/black/parsing.py]
1 """
2 Parse Python code and perform AST validation.
3 """
4
5 import ast
6 import sys
7 from typing import Iterable, Iterator, List, Set, Tuple
8
9 from black.mode import VERSION_TO_FEATURES, Feature, TargetVersion, supports_feature
10 from black.nodes import syms
11 from blib2to3 import pygram
12 from blib2to3.pgen2 import driver
13 from blib2to3.pgen2.grammar import Grammar
14 from blib2to3.pgen2.parse import ParseError
15 from blib2to3.pgen2.tokenize import TokenError
16 from blib2to3.pytree import Leaf, Node
17
18
19 class InvalidInput(ValueError):
20 """Raised when input source code fails all parse attempts."""
21
22
23 def get_grammars(target_versions: Set[TargetVersion]) -> List[Grammar]:
24 if not target_versions:
25 # No target_version specified, so try all grammars.
26 return [
27 # Python 3.7-3.9
28 pygram.python_grammar_async_keywords,
29 # Python 3.0-3.6
30 pygram.python_grammar,
31 # Python 3.10+
32 pygram.python_grammar_soft_keywords,
33 ]
34
35 grammars = []
36 # If we have to parse both, try to parse async as a keyword first
37 if not supports_feature(
38 target_versions, Feature.ASYNC_IDENTIFIERS
39 ) and not supports_feature(target_versions, Feature.PATTERN_MATCHING):
40 # Python 3.7-3.9
41 grammars.append(pygram.python_grammar_async_keywords)
42 if not supports_feature(target_versions, Feature.ASYNC_KEYWORDS):
43 # Python 3.0-3.6
44 grammars.append(pygram.python_grammar)
45 if any(Feature.PATTERN_MATCHING in VERSION_TO_FEATURES[v] for v in target_versions):
46 # Python 3.10+
47 grammars.append(pygram.python_grammar_soft_keywords)
48
49 # At least one of the above branches must have been taken, because every Python
50 # version has exactly one of the two 'ASYNC_*' flags
51 return grammars
52
53
54 def lib2to3_parse(src_txt: str, target_versions: Iterable[TargetVersion] = ()) -> Node:
55 """Given a string with source, return the lib2to3 Node."""
56 if not src_txt.endswith("\n"):
57 src_txt += "\n"
58
59 grammars = get_grammars(set(target_versions))
60 errors = {}
61 for grammar in grammars:
62 drv = driver.Driver(grammar)
63 try:
64 result = drv.parse_string(src_txt, True)
65 break
66
67 except ParseError as pe:
68 lineno, column = pe.context[1]
69 lines = src_txt.splitlines()
70 try:
71 faulty_line = lines[lineno - 1]
72 except IndexError:
73 faulty_line = "<line number missing in source>"
74 errors[grammar.version] = InvalidInput(
75 f"Cannot parse: {lineno}:{column}: {faulty_line}"
76 )
77
78 except TokenError as te:
79 # In edge cases these are raised; and typically don't have a "faulty_line".
80 lineno, column = te.args[1]
81 errors[grammar.version] = InvalidInput(
82 f"Cannot parse: {lineno}:{column}: {te.args[0]}"
83 )
84
85 else:
86 # Choose the latest version when raising the actual parsing error.
87 assert len(errors) >= 1
88 exc = errors[max(errors)]
89 raise exc from None
90
91 if isinstance(result, Leaf):
92 result = Node(syms.file_input, [result])
93 return result
94
95
96 def matches_grammar(src_txt: str, grammar: Grammar) -> bool:
97 drv = driver.Driver(grammar)
98 try:
99 drv.parse_string(src_txt, True)
100 except (ParseError, TokenError, IndentationError):
101 return False
102 else:
103 return True
104
105
106 def lib2to3_unparse(node: Node) -> str:
107 """Given a lib2to3 node, return its string representation."""
108 code = str(node)
109 return code
110
111
112 def parse_single_version(
113 src: str, version: Tuple[int, int], *, type_comments: bool
114 ) -> ast.AST:
115 filename = "<unknown>"
116 return ast.parse(
117 src, filename, feature_version=version, type_comments=type_comments
118 )
119
120
121 def parse_ast(src: str) -> ast.AST:
122 # TODO: support Python 4+ ;)
123 versions = [(3, minor) for minor in range(3, sys.version_info[1] + 1)]
124
125 first_error = ""
126 for version in sorted(versions, reverse=True):
127 try:
128 return parse_single_version(src, version, type_comments=True)
129 except SyntaxError as e:
130 if not first_error:
131 first_error = str(e)
132
133 # Try to parse without type comments
134 for version in sorted(versions, reverse=True):
135 try:
136 return parse_single_version(src, version, type_comments=False)
137 except SyntaxError:
138 pass
139
140 raise SyntaxError(first_error)
141
142
143 def _normalize(lineend: str, value: str) -> str:
144 # To normalize, we strip any leading and trailing space from
145 # each line...
146 stripped: List[str] = [i.strip() for i in value.splitlines()]
147 normalized = lineend.join(stripped)
148 # ...and remove any blank lines at the beginning and end of
149 # the whole string
150 return normalized.strip()
151
152
153 def stringify_ast(node: ast.AST, depth: int = 0) -> Iterator[str]:
154 """Simple visitor generating strings to compare ASTs by content."""
155
156 if (
157 isinstance(node, ast.Constant)
158 and isinstance(node.value, str)
159 and node.kind == "u"
160 ):
161 # It's a quirk of history that we strip the u prefix over here. We used to
162 # rewrite the AST nodes for Python version compatibility and we never copied
163 # over the kind
164 node.kind = None
165
166 yield f"{' ' * depth}{node.__class__.__name__}("
167
168 for field in sorted(node._fields): # noqa: F402
169 # TypeIgnore has only one field 'lineno' which breaks this comparison
170 if isinstance(node, ast.TypeIgnore):
171 break
172
173 try:
174 value: object = getattr(node, field)
175 except AttributeError:
176 continue
177
178 yield f"{' ' * (depth + 1)}{field}="
179
180 if isinstance(value, list):
181 for item in value:
182 # Ignore nested tuples within del statements, because we may insert
183 # parentheses and they change the AST.
184 if (
185 field == "targets"
186 and isinstance(node, ast.Delete)
187 and isinstance(item, ast.Tuple)
188 ):
189 for elt in item.elts:
190 yield from stringify_ast(elt, depth + 2)
191
192 elif isinstance(item, ast.AST):
193 yield from stringify_ast(item, depth + 2)
194
195 elif isinstance(value, ast.AST):
196 yield from stringify_ast(value, depth + 2)
197
198 else:
199 normalized: object
200 if (
201 isinstance(node, ast.Constant)
202 and field == "value"
203 and isinstance(value, str)
204 ):
205 # Constant strings may be indented across newlines, if they are
206 # docstrings; fold spaces after newlines when comparing. Similarly,
207 # trailing and leading space may be removed.
208 normalized = _normalize("\n", value)
209 elif field == "type_comment" and isinstance(value, str):
210 # Trailing whitespace in type comments is removed.
211 normalized = value.rstrip()
212 else:
213 normalized = value
214 yield f"{' ' * (depth + 2)}{normalized!r}, # {value.__class__.__name__}"
215
216 yield f"{' ' * depth}) # /{node.__class__.__name__}"
217
[end of src/black/parsing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/black/parsing.py b/src/black/parsing.py
--- a/src/black/parsing.py
+++ b/src/black/parsing.py
@@ -4,6 +4,7 @@
import ast
import sys
+import warnings
from typing import Iterable, Iterator, List, Set, Tuple
from black.mode import VERSION_TO_FEATURES, Feature, TargetVersion, supports_feature
@@ -109,13 +110,16 @@
return code
-def parse_single_version(
+def _parse_single_version(
src: str, version: Tuple[int, int], *, type_comments: bool
) -> ast.AST:
filename = "<unknown>"
- return ast.parse(
- src, filename, feature_version=version, type_comments=type_comments
- )
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore", SyntaxWarning)
+ warnings.simplefilter("ignore", DeprecationWarning)
+ return ast.parse(
+ src, filename, feature_version=version, type_comments=type_comments
+ )
def parse_ast(src: str) -> ast.AST:
@@ -125,7 +129,7 @@
first_error = ""
for version in sorted(versions, reverse=True):
try:
- return parse_single_version(src, version, type_comments=True)
+ return _parse_single_version(src, version, type_comments=True)
except SyntaxError as e:
if not first_error:
first_error = str(e)
@@ -133,7 +137,7 @@
# Try to parse without type comments
for version in sorted(versions, reverse=True):
try:
- return parse_single_version(src, version, type_comments=False)
+ return _parse_single_version(src, version, type_comments=False)
except SyntaxError:
pass
|
{"golden_diff": "diff --git a/src/black/parsing.py b/src/black/parsing.py\n--- a/src/black/parsing.py\n+++ b/src/black/parsing.py\n@@ -4,6 +4,7 @@\n \n import ast\n import sys\n+import warnings\n from typing import Iterable, Iterator, List, Set, Tuple\n \n from black.mode import VERSION_TO_FEATURES, Feature, TargetVersion, supports_feature\n@@ -109,13 +110,16 @@\n return code\n \n \n-def parse_single_version(\n+def _parse_single_version(\n src: str, version: Tuple[int, int], *, type_comments: bool\n ) -> ast.AST:\n filename = \"<unknown>\"\n- return ast.parse(\n- src, filename, feature_version=version, type_comments=type_comments\n- )\n+ with warnings.catch_warnings():\n+ warnings.simplefilter(\"ignore\", SyntaxWarning)\n+ warnings.simplefilter(\"ignore\", DeprecationWarning)\n+ return ast.parse(\n+ src, filename, feature_version=version, type_comments=type_comments\n+ )\n \n \n def parse_ast(src: str) -> ast.AST:\n@@ -125,7 +129,7 @@\n first_error = \"\"\n for version in sorted(versions, reverse=True):\n try:\n- return parse_single_version(src, version, type_comments=True)\n+ return _parse_single_version(src, version, type_comments=True)\n except SyntaxError as e:\n if not first_error:\n first_error = str(e)\n@@ -133,7 +137,7 @@\n # Try to parse without type comments\n for version in sorted(versions, reverse=True):\n try:\n- return parse_single_version(src, version, type_comments=False)\n+ return _parse_single_version(src, version, type_comments=False)\n except SyntaxError:\n pass\n", "issue": "SyntaxWarning on regexp on first run of black\nWhen running `black` on the following code:\r\n\r\n```\r\ntext = re.sub(\r\n \"([_a-zA-Z0-9-+]+)(\\.[_a-zA-Z0-9-+]+)*\"\r\n \"@([a-zA-Z0-9-]+)(\\.[a-zA-Z0-9-]+)*(\\.[a-zA-Z]{2,4})\",\r\n '<a href=\"mailto:\\g<0>\">\\g<0></a>',\r\n text,\r\n)\r\ntext = re.sub(\r\n \"(ftp|http|https):\\/\\/(\\w+:{0,1}\\w*@)?\"\r\n \"(\\S+)(:[0-9]+)?(\\/|\\/([\\w#!:.?+=&%@!\\-\\/]))?\",\r\n '<a href=\"\\g<0>\">\\g<0></a>',\r\n text,\r\n)\r\n```\r\n\r\nI get the following warnings (written twice):\r\n\r\n```\r\n<unknown>:2: SyntaxWarning: invalid escape sequence '\\.'\r\n<unknown>:3: SyntaxWarning: invalid escape sequence '\\.'\r\n<unknown>:4: SyntaxWarning: invalid escape sequence '\\g'\r\n<unknown>:8: SyntaxWarning: invalid escape sequence '\\/'\r\n<unknown>:9: SyntaxWarning: invalid escape sequence '\\S'\r\n<unknown>:10: SyntaxWarning: invalid escape sequence '\\g'\r\n<unknown>:2: SyntaxWarning: invalid escape sequence '\\.'\r\n<unknown>:3: SyntaxWarning: invalid escape sequence '\\.'\r\n<unknown>:4: SyntaxWarning: invalid escape sequence '\\g'\r\n<unknown>:8: SyntaxWarning: invalid escape sequence '\\/'\r\n<unknown>:9: SyntaxWarning: invalid escape sequence '\\S'\r\n<unknown>:10: SyntaxWarning: invalid escape sequence '\\g'\r\n```\r\n\r\nWhen re-running `black` on the same file, the warnings are not shown again. I have to modify the lines (adding a space for instance) to see the warnings again.\r\n\r\nAre these warnings normal? (The syntax is normally correct according to the documentation of the `re` module.)\r\nIf they are normal, should they really appear twice in the output? 
And why don't they appear again when running a second time `black`?\n", "before_files": [{"content": "\"\"\"\nParse Python code and perform AST validation.\n\"\"\"\n\nimport ast\nimport sys\nfrom typing import Iterable, Iterator, List, Set, Tuple\n\nfrom black.mode import VERSION_TO_FEATURES, Feature, TargetVersion, supports_feature\nfrom black.nodes import syms\nfrom blib2to3 import pygram\nfrom blib2to3.pgen2 import driver\nfrom blib2to3.pgen2.grammar import Grammar\nfrom blib2to3.pgen2.parse import ParseError\nfrom blib2to3.pgen2.tokenize import TokenError\nfrom blib2to3.pytree import Leaf, Node\n\n\nclass InvalidInput(ValueError):\n \"\"\"Raised when input source code fails all parse attempts.\"\"\"\n\n\ndef get_grammars(target_versions: Set[TargetVersion]) -> List[Grammar]:\n if not target_versions:\n # No target_version specified, so try all grammars.\n return [\n # Python 3.7-3.9\n pygram.python_grammar_async_keywords,\n # Python 3.0-3.6\n pygram.python_grammar,\n # Python 3.10+\n pygram.python_grammar_soft_keywords,\n ]\n\n grammars = []\n # If we have to parse both, try to parse async as a keyword first\n if not supports_feature(\n target_versions, Feature.ASYNC_IDENTIFIERS\n ) and not supports_feature(target_versions, Feature.PATTERN_MATCHING):\n # Python 3.7-3.9\n grammars.append(pygram.python_grammar_async_keywords)\n if not supports_feature(target_versions, Feature.ASYNC_KEYWORDS):\n # Python 3.0-3.6\n grammars.append(pygram.python_grammar)\n if any(Feature.PATTERN_MATCHING in VERSION_TO_FEATURES[v] for v in target_versions):\n # Python 3.10+\n grammars.append(pygram.python_grammar_soft_keywords)\n\n # At least one of the above branches must have been taken, because every Python\n # version has exactly one of the two 'ASYNC_*' flags\n return grammars\n\n\ndef lib2to3_parse(src_txt: str, target_versions: Iterable[TargetVersion] = ()) -> Node:\n \"\"\"Given a string with source, return the lib2to3 Node.\"\"\"\n if not src_txt.endswith(\"\\n\"):\n src_txt += \"\\n\"\n\n grammars = get_grammars(set(target_versions))\n errors = {}\n for grammar in grammars:\n drv = driver.Driver(grammar)\n try:\n result = drv.parse_string(src_txt, True)\n break\n\n except ParseError as pe:\n lineno, column = pe.context[1]\n lines = src_txt.splitlines()\n try:\n faulty_line = lines[lineno - 1]\n except IndexError:\n faulty_line = \"<line number missing in source>\"\n errors[grammar.version] = InvalidInput(\n f\"Cannot parse: {lineno}:{column}: {faulty_line}\"\n )\n\n except TokenError as te:\n # In edge cases these are raised; and typically don't have a \"faulty_line\".\n lineno, column = te.args[1]\n errors[grammar.version] = InvalidInput(\n f\"Cannot parse: {lineno}:{column}: {te.args[0]}\"\n )\n\n else:\n # Choose the latest version when raising the actual parsing error.\n assert len(errors) >= 1\n exc = errors[max(errors)]\n raise exc from None\n\n if isinstance(result, Leaf):\n result = Node(syms.file_input, [result])\n return result\n\n\ndef matches_grammar(src_txt: str, grammar: Grammar) -> bool:\n drv = driver.Driver(grammar)\n try:\n drv.parse_string(src_txt, True)\n except (ParseError, TokenError, IndentationError):\n return False\n else:\n return True\n\n\ndef lib2to3_unparse(node: Node) -> str:\n \"\"\"Given a lib2to3 node, return its string representation.\"\"\"\n code = str(node)\n return code\n\n\ndef parse_single_version(\n src: str, version: Tuple[int, int], *, type_comments: bool\n) -> ast.AST:\n filename = \"<unknown>\"\n return ast.parse(\n src, filename, feature_version=version, 
type_comments=type_comments\n )\n\n\ndef parse_ast(src: str) -> ast.AST:\n # TODO: support Python 4+ ;)\n versions = [(3, minor) for minor in range(3, sys.version_info[1] + 1)]\n\n first_error = \"\"\n for version in sorted(versions, reverse=True):\n try:\n return parse_single_version(src, version, type_comments=True)\n except SyntaxError as e:\n if not first_error:\n first_error = str(e)\n\n # Try to parse without type comments\n for version in sorted(versions, reverse=True):\n try:\n return parse_single_version(src, version, type_comments=False)\n except SyntaxError:\n pass\n\n raise SyntaxError(first_error)\n\n\ndef _normalize(lineend: str, value: str) -> str:\n # To normalize, we strip any leading and trailing space from\n # each line...\n stripped: List[str] = [i.strip() for i in value.splitlines()]\n normalized = lineend.join(stripped)\n # ...and remove any blank lines at the beginning and end of\n # the whole string\n return normalized.strip()\n\n\ndef stringify_ast(node: ast.AST, depth: int = 0) -> Iterator[str]:\n \"\"\"Simple visitor generating strings to compare ASTs by content.\"\"\"\n\n if (\n isinstance(node, ast.Constant)\n and isinstance(node.value, str)\n and node.kind == \"u\"\n ):\n # It's a quirk of history that we strip the u prefix over here. We used to\n # rewrite the AST nodes for Python version compatibility and we never copied\n # over the kind\n node.kind = None\n\n yield f\"{' ' * depth}{node.__class__.__name__}(\"\n\n for field in sorted(node._fields): # noqa: F402\n # TypeIgnore has only one field 'lineno' which breaks this comparison\n if isinstance(node, ast.TypeIgnore):\n break\n\n try:\n value: object = getattr(node, field)\n except AttributeError:\n continue\n\n yield f\"{' ' * (depth + 1)}{field}=\"\n\n if isinstance(value, list):\n for item in value:\n # Ignore nested tuples within del statements, because we may insert\n # parentheses and they change the AST.\n if (\n field == \"targets\"\n and isinstance(node, ast.Delete)\n and isinstance(item, ast.Tuple)\n ):\n for elt in item.elts:\n yield from stringify_ast(elt, depth + 2)\n\n elif isinstance(item, ast.AST):\n yield from stringify_ast(item, depth + 2)\n\n elif isinstance(value, ast.AST):\n yield from stringify_ast(value, depth + 2)\n\n else:\n normalized: object\n if (\n isinstance(node, ast.Constant)\n and field == \"value\"\n and isinstance(value, str)\n ):\n # Constant strings may be indented across newlines, if they are\n # docstrings; fold spaces after newlines when comparing. Similarly,\n # trailing and leading space may be removed.\n normalized = _normalize(\"\\n\", value)\n elif field == \"type_comment\" and isinstance(value, str):\n # Trailing whitespace in type comments is removed.\n normalized = value.rstrip()\n else:\n normalized = value\n yield f\"{' ' * (depth + 2)}{normalized!r}, # {value.__class__.__name__}\"\n\n yield f\"{' ' * depth}) # /{node.__class__.__name__}\"\n", "path": "src/black/parsing.py"}]}
| 3,250 | 398 |
gh_patches_debug_3962
|
rasdani/github-patches
|
git_diff
|
mampfes__hacs_waste_collection_schedule-1874
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: fetch failed for source Stavanger Kommune
### I Have A Problem With:
A specific source
### What's Your Problem
Unable to fetch data for Stavanger, Norway. It has worked flawlessly for a while, but now there is no data.
### Source (if relevant)
stavanger_no
### Logs
```Shell
This error originated from a custom integration.
Logger: waste_collection_schedule.source_shell
Source: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136
Integration: waste_collection_schedule (documentation)
First occurred: 11:08:21 (2 occurrences)
Last logged: 11:13:26
fetch failed for source Stavanger Kommune: Traceback (most recent call last): File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py", line 49, in fetch r.raise_for_status() File "/usr/local/lib/python3.12/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show?id=###removed###
```
### Relevant Configuration
_No response_
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
</issue>
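Editor's note (added commentary, not part of the original report): the traceback is a plain HTTP 500 from the kommune endpoint, and the accepted fix recorded further down simply sends the address id under an additional `ids` key next to the existing `id`. Below is a small, untested probe sketch that reuses the documented test-case values from the source listing; it is only meant for checking the endpoint by hand.

```python
import requests

URL = (
    "https://www.stavanger.kommune.no/renovasjon-og-miljo/"
    "tommekalender/finn-kalender/show"
)
HEADERS = {"referer": "https://www.stavanger.kommune.no"}

# Values taken from TEST_CASES in stavanger_no.py (see the listing below).
params = {
    "id": "57bf9d36-722e-400b-ae93-d80f8e354724",
    "municipality": "Stavanger",
    "gnumber": "57",
    "bnumber": "922",
    "snumber": "0",
}

r = requests.get(URL, params=params, headers=HEADERS, timeout=30)
print("id only:", r.status_code)   # should show the 500 from the report

params["ids"] = params["id"]       # the extra key the accepted fix adds
r = requests.get(URL, params=params, headers=HEADERS, timeout=30)
print("id + ids:", r.status_code)  # expected to succeed once "ids" is sent
```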
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py]
1 from datetime import datetime
2
3 import requests
4 from bs4 import BeautifulSoup
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6
7 TITLE = "Stavanger Kommune"
8 DESCRIPTION = "Source for Stavanger Kommune, Norway"
9 URL = "https://www.stavanger.kommune.no/"
10 TEST_CASES = {
11 "TestcaseI": {
12 "id": "57bf9d36-722e-400b-ae93-d80f8e354724",
13 "municipality": "Stavanger",
14 "gnumber": "57",
15 "bnumber": "922",
16 "snumber": "0",
17 },
18 }
19
20 ICON_MAP = {
21 "Restavfall": "mdi:trash-can",
22 "Papp/papir": "mdi:recycle",
23 "Bio": "mdi:leaf",
24 "Juletre": "mdi:pine-tree",
25 }
26
27
28 class Source:
29 def __init__(self, id, municipality, gnumber, bnumber, snumber):
30 self._id = id
31 self._municipality = municipality
32 self._gnumber = gnumber
33 self._bnumber = bnumber
34 self._snumber = snumber
35
36 def fetch(self):
37 url = "https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show"
38 headers = {"referer": "https://www.stavanger.kommune.no"}
39
40 params = {
41 "id": self._id,
42 "municipality": self._municipality,
43 "gnumber": self._gnumber,
44 "bnumber": self._bnumber,
45 "snumber": self._snumber,
46 }
47
48 r = requests.get(url, params=params, headers=headers)
49 r.raise_for_status()
50
51 soup = BeautifulSoup(r.text, "html.parser")
52
53 tag = soup.find_all("option")
54 entries = []
55 for tag in soup.find_all("tr", {"class": "waste-calendar__item"}):
56 if tag.text.strip() == "Dato og dag\nAvfallstype":
57 continue
58
59 year = tag.parent.attrs["data-month"].split("-")[1]
60 date = tag.text.strip().split(" - ")
61 date = datetime.strptime(date[0] + "." + year, "%d.%m.%Y").date()
62
63 for img in tag.find_all("img"):
64 waste_type = img.get("title")
65 entries.append(
66 Collection(date, waste_type, icon=ICON_MAP.get(waste_type))
67 )
68
69 return entries
70
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py
@@ -38,6 +38,7 @@
headers = {"referer": "https://www.stavanger.kommune.no"}
params = {
+ "ids": self._id,
"id": self._id,
"municipality": self._municipality,
"gnumber": self._gnumber,
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\n@@ -38,6 +38,7 @@\n headers = {\"referer\": \"https://www.stavanger.kommune.no\"}\n \n params = {\n+ \"ids\": self._id,\n \"id\": self._id,\n \"municipality\": self._municipality,\n \"gnumber\": self._gnumber,\n", "issue": "[Bug]: fetch failed for source Stavanger Kommune\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nUnable to fetch data for Stavanger Norway. Have worked flawless a while, but no there is no data.\n\n### Source (if relevant)\n\nstavanger_no\n\n### Logs\n\n```Shell\nThis error originated from a custom integration.\r\n\r\nLogger: waste_collection_schedule.source_shell\r\nSource: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136\r\nIntegration: waste_collection_schedule (documentation)\r\nFirst occurred: 11:08:21 (2 occurrences)\r\nLast logged: 11:13:26\r\n\r\nfetch failed for source Stavanger Kommune: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\", line 49, in fetch r.raise_for_status() File \"/usr/local/lib/python3.12/site-packages/requests/models.py\", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show?id=###removed###\n```\n\n\n### Relevant Configuration\n\n_No response_\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Stavanger Kommune\"\nDESCRIPTION = \"Source for Stavanger Kommune, Norway\"\nURL = \"https://www.stavanger.kommune.no/\"\nTEST_CASES = {\n \"TestcaseI\": {\n \"id\": \"57bf9d36-722e-400b-ae93-d80f8e354724\",\n \"municipality\": \"Stavanger\",\n \"gnumber\": \"57\",\n \"bnumber\": \"922\",\n \"snumber\": \"0\",\n 
},\n}\n\nICON_MAP = {\n \"Restavfall\": \"mdi:trash-can\",\n \"Papp/papir\": \"mdi:recycle\",\n \"Bio\": \"mdi:leaf\",\n \"Juletre\": \"mdi:pine-tree\",\n}\n\n\nclass Source:\n def __init__(self, id, municipality, gnumber, bnumber, snumber):\n self._id = id\n self._municipality = municipality\n self._gnumber = gnumber\n self._bnumber = bnumber\n self._snumber = snumber\n\n def fetch(self):\n url = \"https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show\"\n headers = {\"referer\": \"https://www.stavanger.kommune.no\"}\n\n params = {\n \"id\": self._id,\n \"municipality\": self._municipality,\n \"gnumber\": self._gnumber,\n \"bnumber\": self._bnumber,\n \"snumber\": self._snumber,\n }\n\n r = requests.get(url, params=params, headers=headers)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n tag = soup.find_all(\"option\")\n entries = []\n for tag in soup.find_all(\"tr\", {\"class\": \"waste-calendar__item\"}):\n if tag.text.strip() == \"Dato og dag\\nAvfallstype\":\n continue\n\n year = tag.parent.attrs[\"data-month\"].split(\"-\")[1]\n date = tag.text.strip().split(\" - \")\n date = datetime.strptime(date[0] + \".\" + year, \"%d.%m.%Y\").date()\n\n for img in tag.find_all(\"img\"):\n waste_type = img.get(\"title\")\n entries.append(\n Collection(date, waste_type, icon=ICON_MAP.get(waste_type))\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py"}]}
| 1,809 | 150 |
gh_patches_debug_14847
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmsegmentation-261
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
an unexpected keyword argument 'drop_out_ratio' when using config file ocrnet_r50-d8.py
**Describe the bug**
It seems that there is a small error in the config file:
https://github.com/open-mmlab/mmsegmentation/blob/381eacb9a5e0e8eb475e456845f1d4c55f3c0339/configs/_base_/models/ocrnet_r50-d8.py#L26
and
https://github.com/open-mmlab/mmsegmentation/blob/381eacb9a5e0e8eb475e456845f1d4c55f3c0339/configs/_base_/models/ocrnet_r50-d8.py#L38
**Error traceback**
```
Traceback (most recent call last):
File "./tools/train.py", line 161, in <module>
main()
File "./tools/train.py", line 131, in main
cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
File "/root/userfolder/mmsegmentation/mmseg/models/builder.py", line 56, in build_segmentor
return build(cfg, SEGMENTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "/root/userfolder/mmsegmentation/mmseg/models/builder.py", line 31, in build
return build_from_cfg(cfg, registry, default_args)
File "/root/userfolder/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 171, in build_from_cfg
return obj_cls(**args)
File "/root/userfolder/mmsegmentation/mmseg/models/segmentors/cascade_encoder_decoder.py", line 36, in __init__
pretrained=pretrained)
File "/root/userfolder/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py", line 34, in __init__
self._init_decode_head(decode_head)
File "/root/userfolder/mmsegmentation/mmseg/models/segmentors/cascade_encoder_decoder.py", line 44, in _init_decode_head
self.decode_head.append(builder.build_head(decode_head[i]))
File "/root/userfolder/mmsegmentation/mmseg/models/builder.py", line 46, in build_head
return build(cfg, HEADS)
File "/root/userfolder/mmsegmentation/mmseg/models/builder.py", line 31, in build
return build_from_cfg(cfg, registry, default_args)
File "/root/userfolder/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 171, in build_from_cfg
return obj_cls(**args)
File "/root/userfolder/mmsegmentation/mmseg/models/decode_heads/fcn_head.py", line 31, in __init__
super(FCNHead, self).__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'drop_out_ratio'
```
**Bug fix**
Changing the keyword `drop_out_ratio=0.1` to `dropout_ratio=0.1` will fix this.
</issue>
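Editor's note (added commentary): the reporter's own remedy is just a keyword rename, so a corrected fragment of the first decode head is enough to illustrate it. Everything except the `dropout_ratio` key matches the listing below.

```python
# Corrected fragment of configs/_base_/models/ocrnet_r50-d8.py; only the
# keyword name changes, since the decode heads expect `dropout_ratio`.
norm_cfg = dict(type='SyncBN', requires_grad=True)
fcn_head = dict(
    type='FCNHead',
    in_channels=1024,
    in_index=2,
    channels=256,
    num_convs=1,
    concat_input=False,
    dropout_ratio=0.1,   # was: drop_out_ratio=0.1
    num_classes=19,
    norm_cfg=norm_cfg,
    align_corners=False,
    loss_decode=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4),
)
```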
<code>
[start of configs/_base_/models/ocrnet_r50-d8.py]
1 # model settings
2 norm_cfg = dict(type='SyncBN', requires_grad=True)
3 model = dict(
4 type='CascadeEncoderDecoder',
5 num_stages=2,
6 pretrained='open-mmlab://resnet50_v1c',
7 backbone=dict(
8 type='ResNetV1c',
9 depth=50,
10 num_stages=4,
11 out_indices=(0, 1, 2, 3),
12 dilations=(1, 1, 2, 4),
13 strides=(1, 2, 1, 1),
14 norm_cfg=norm_cfg,
15 norm_eval=False,
16 style='pytorch',
17 contract_dilation=True),
18 decode_head=[
19 dict(
20 type='FCNHead',
21 in_channels=1024,
22 in_index=2,
23 channels=256,
24 num_convs=1,
25 concat_input=False,
26 drop_out_ratio=0.1,
27 num_classes=19,
28 norm_cfg=norm_cfg,
29 align_corners=False,
30 loss_decode=dict(
31 type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
32 dict(
33 type='OCRHead',
34 in_channels=2048,
35 in_index=3,
36 channels=512,
37 ocr_channels=256,
38 drop_out_ratio=0.1,
39 num_classes=19,
40 norm_cfg=norm_cfg,
41 align_corners=False,
42 loss_decode=dict(
43 type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))
44 ])
45 # model training and testing settings
46 train_cfg = dict()
47 test_cfg = dict(mode='whole')
48
[end of configs/_base_/models/ocrnet_r50-d8.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/configs/_base_/models/ocrnet_r50-d8.py b/configs/_base_/models/ocrnet_r50-d8.py
--- a/configs/_base_/models/ocrnet_r50-d8.py
+++ b/configs/_base_/models/ocrnet_r50-d8.py
@@ -23,7 +23,7 @@
channels=256,
num_convs=1,
concat_input=False,
- drop_out_ratio=0.1,
+ dropout_ratio=0.1,
num_classes=19,
norm_cfg=norm_cfg,
align_corners=False,
@@ -35,7 +35,7 @@
in_index=3,
channels=512,
ocr_channels=256,
- drop_out_ratio=0.1,
+ dropout_ratio=0.1,
num_classes=19,
norm_cfg=norm_cfg,
align_corners=False,
|
{"golden_diff": "diff --git a/configs/_base_/models/ocrnet_r50-d8.py b/configs/_base_/models/ocrnet_r50-d8.py\n--- a/configs/_base_/models/ocrnet_r50-d8.py\n+++ b/configs/_base_/models/ocrnet_r50-d8.py\n@@ -23,7 +23,7 @@\n channels=256,\n num_convs=1,\n concat_input=False,\n- drop_out_ratio=0.1,\n+ dropout_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n@@ -35,7 +35,7 @@\n in_index=3,\n channels=512,\n ocr_channels=256,\n- drop_out_ratio=0.1,\n+ dropout_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n", "issue": "an unexpected keyword argument 'drop_out_ratio' when using config file ocrnet_r50-d8.py\n**Describe the bug**\r\n\r\nIt seems that there is a small error in the config file: \r\nhttps://github.com/open-mmlab/mmsegmentation/blob/381eacb9a5e0e8eb475e456845f1d4c55f3c0339/configs/_base_/models/ocrnet_r50-d8.py#L26\r\nand\r\nhttps://github.com/open-mmlab/mmsegmentation/blob/381eacb9a5e0e8eb475e456845f1d4c55f3c0339/configs/_base_/models/ocrnet_r50-d8.py#L38\r\n\r\n**Error traceback**\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"./tools/train.py\", line 161, in <module>\r\n main()\r\n File \"./tools/train.py\", line 131, in main\r\n cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/builder.py\", line 56, in build_segmentor\r\n return build(cfg, SEGMENTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/builder.py\", line 31, in build\r\n return build_from_cfg(cfg, registry, default_args)\r\n File \"/root/userfolder/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py\", line 171, in build_from_cfg\r\n return obj_cls(**args)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/segmentors/cascade_encoder_decoder.py\", line 36, in __init__\r\n pretrained=pretrained)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py\", line 34, in __init__\r\n self._init_decode_head(decode_head)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/segmentors/cascade_encoder_decoder.py\", line 44, in _init_decode_head\r\n self.decode_head.append(builder.build_head(decode_head[i]))\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/builder.py\", line 46, in build_head\r\n return build(cfg, HEADS)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/builder.py\", line 31, in build\r\n return build_from_cfg(cfg, registry, default_args)\r\n File \"/root/userfolder/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py\", line 171, in build_from_cfg\r\n return obj_cls(**args)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/decode_heads/fcn_head.py\", line 31, in __init__\r\n super(FCNHead, self).__init__(**kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'drop_out_ratio'\r\n```\r\n\r\n**Bug fix**\r\nchange the keyword `drop_out_ratio=0.1` to `dropout_ratio=0.1` will fix this.\r\n\r\n\n", "before_files": [{"content": "# model settings\nnorm_cfg = dict(type='SyncBN', requires_grad=True)\nmodel = dict(\n type='CascadeEncoderDecoder',\n num_stages=2,\n pretrained='open-mmlab://resnet50_v1c',\n backbone=dict(\n type='ResNetV1c',\n depth=50,\n num_stages=4,\n out_indices=(0, 1, 2, 3),\n dilations=(1, 1, 2, 4),\n strides=(1, 2, 1, 1),\n norm_cfg=norm_cfg,\n norm_eval=False,\n style='pytorch',\n contract_dilation=True),\n decode_head=[\n dict(\n type='FCNHead',\n in_channels=1024,\n in_index=2,\n channels=256,\n 
num_convs=1,\n concat_input=False,\n drop_out_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n loss_decode=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),\n dict(\n type='OCRHead',\n in_channels=2048,\n in_index=3,\n channels=512,\n ocr_channels=256,\n drop_out_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n loss_decode=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))\n ])\n# model training and testing settings\ntrain_cfg = dict()\ntest_cfg = dict(mode='whole')\n", "path": "configs/_base_/models/ocrnet_r50-d8.py"}]}
| 1,713 | 213 |
gh_patches_debug_19626
|
rasdani/github-patches
|
git_diff
|
Flexget__Flexget-2391
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tasks failing after migration from cfscraper to cloudscraper
Since #2381 one of my tasks stopped returning new entries.
After checking the log I noticed `cloudscraper` returns a ReCaptcha related error during the `rmz` automatic url-rewrite (a problem `cfscrape` doesn't have).
Upon further investigation, it seems that `cloudscraper` has this issue when importing an existing requests session generated by Flexget, since removing the last `task.requests` (for testing purposes) from https://github.com/Flexget/Flexget/blob/a4037da07f7d7707650596670c467ab3b7e748c9/flexget/plugins/operate/cfscraper.py#L39 results in a successful processing of the task.
Another finding: Disabling `urlrewriting` also did not trigger a ReCaptcha error (therefore completing the task) even without meddling with any of Flexget's .py files; however, it also doesn't return a download URL if [rmz.py](https://github.com/Flexget/Flexget/blob/master/flexget/components/sites/sites/rmz.py) is not processed.
### Expected behaviour:
Task returning proper entries with populated urls
### Actual behaviour:
Task failing due to `cloudscraper` bumping into ReCaptcha during `rmz`url-rewrite
### Steps to reproduce:
- Step 1:
#### Config:
```
tasks:
cftask:
disable:
- seen
- seen_info_hash
- retry_failed
# - urlrewriting
cfscraper: yes
rss: http://rmz.cr/feed
series:
# Random entry from the feed below#
- Days of our lives
```
#### Log:
<details>
<summary>(click to expand)</summary>
```
$ flexget execute --tasks cftask
2019-05-09 00:34 VERBOSE task_queue There are 1 tasks to execute. Shutdown will commence when they have completed.
2019-05-09 00:34 VERBOSE details cftask Produced 100 entries.
2019-05-09 00:34 VERBOSE series.db cftask identified by is currently on `auto` for Days of our lives. Multiple id types may be accepted until it locks in on the appropriate type.
2019-05-09 00:34 VERBOSE task cftask ACCEPTED: `[RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB)` by series plugin because choosing first acceptable match
2019-05-09 00:35 WARNING urlrewriter cftask URL rewriting rmz failed: Captcha
2019-05-09 00:35 ERROR entry cftask Failed [RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB) (None)
2019-05-09 00:35 VERBOSE task cftask FAILED: `[RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB)` by urlrewriting plugin
2019-05-09 00:35 VERBOSE details cftask Summary - Accepted: 0 (Rejected: 0 Undecided: 99 Failed: 1)
```
</details>
### Additional information:
- FlexGet version: 2.20.26
- Python version: 2.7.14
- Installation method: pip
- Using daemon (yes/no): yes
- OS and version: Windows 10 (1809)
</issue>
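Editor's note (added commentary): the report boils down to a difference between a fresh scraper and one wrapped around Flexget's pre-configured session. The sketch below assumes `cloudscraper` is installed and that `create_scraper` accepts an existing session, which is what the plugin code further down relies on. The accepted fix recorded below keeps the wrapping but rebuilds the session headers into a browser-like, ordered set first.

```python
import cloudscraper
import requests

# Path 1: a fresh scraper, the way cloudscraper is normally used; per the
# report this path clears the Cloudflare check without hitting a ReCaptcha.
fresh = cloudscraper.create_scraper()

# Path 2: wrapping an existing session, which is what the plugin does with
# task.requests.  The session's own headers (and their ordering) come along
# for the ride, which is what the fix below addresses by rebuilding the
# header dict before calling create_scraper.
existing = requests.Session()
wrapped = cloudscraper.create_scraper(sess=existing)
```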
<code>
[start of flexget/plugins/operate/cfscraper.py]
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3
4 import logging
5
6 from flexget import plugin
7 from flexget.event import event
8 from flexget.utils.requests import Session
9
10 log = logging.getLogger('cfscraper')
11
12
13 class CFScraper(object):
14 """
15 Plugin that enables scraping of cloudflare protected sites.
16
17 Example::
18 cfscraper: yes
19 """
20
21 schema = {'type': 'boolean'}
22
23 @plugin.priority(253)
24 def on_task_start(self, task, config):
25 try:
26 import cloudscraper
27 except ImportError as e:
28 log.debug('Error importing cloudscraper: %s' % e)
29 raise plugin.DependencyError(
30 'cfscraper', 'cloudscraper', 'cloudscraper module required. ImportError: %s' % e
31 )
32
33 class CFScrapeWrapper(Session, cloudscraper.CloudScraper):
34 """
35 This class allows the FlexGet session to inherit from CloudScraper instead of the requests.Session directly.
36 """
37
38 if config is True:
39 task.requests = CFScrapeWrapper.create_scraper(task.requests)
40
41
42 @event('plugin.register')
43 def register_plugin():
44 plugin.register(CFScraper, 'cfscraper', api_ver=2)
45
[end of flexget/plugins/operate/cfscraper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/flexget/plugins/operate/cfscraper.py b/flexget/plugins/operate/cfscraper.py
--- a/flexget/plugins/operate/cfscraper.py
+++ b/flexget/plugins/operate/cfscraper.py
@@ -6,6 +6,7 @@
from flexget import plugin
from flexget.event import event
from flexget.utils.requests import Session
+from collections import OrderedDict
log = logging.getLogger('cfscraper')
@@ -36,6 +37,18 @@
"""
if config is True:
+ task.requests.headers = (
+ OrderedDict(
+ [
+ ('User-Agent', task.requests.headers['User-Agent']),
+ ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
+ ('Accept-Language', 'en-US,en;q=0.5'),
+ ('Accept-Encoding', 'gzip, deflate'),
+ ('Connection', 'close'),
+ ('Upgrade-Insecure-Requests', '1')
+ ]
+ )
+ )
task.requests = CFScrapeWrapper.create_scraper(task.requests)
|
{"golden_diff": "diff --git a/flexget/plugins/operate/cfscraper.py b/flexget/plugins/operate/cfscraper.py\n--- a/flexget/plugins/operate/cfscraper.py\n+++ b/flexget/plugins/operate/cfscraper.py\n@@ -6,6 +6,7 @@\n from flexget import plugin\n from flexget.event import event\n from flexget.utils.requests import Session\n+from collections import OrderedDict\n \n log = logging.getLogger('cfscraper')\n \n@@ -36,6 +37,18 @@\n \"\"\"\n \n if config is True:\n+ task.requests.headers = (\n+ OrderedDict(\n+ [\n+ ('User-Agent', task.requests.headers['User-Agent']),\n+ ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),\n+ ('Accept-Language', 'en-US,en;q=0.5'),\n+ ('Accept-Encoding', 'gzip, deflate'),\n+ ('Connection', 'close'),\n+ ('Upgrade-Insecure-Requests', '1')\n+ ]\n+ )\n+ )\n task.requests = CFScrapeWrapper.create_scraper(task.requests)\n", "issue": "Tasks failing after migration from cfscraper to cloudscraper\nSince #2381 one of my tasks stopped returning new entries.\r\nAfter checking the log I noticed `cloudscraper` returns a ReCaptcha related error during the `rmz` automatic url-rewrite (a problem `cfscrape` doesn't have).\r\nUpon further investigation, it seems that `cloudscraper` has this issue when importing an existing requests session generated by Flexget, since removing the last `task.requests` (for testing purposes) from https://github.com/Flexget/Flexget/blob/a4037da07f7d7707650596670c467ab3b7e748c9/flexget/plugins/operate/cfscraper.py#L39 results in a successful processing of the task.\r\n\r\nAnother finding: Disabling `urlrewriting` also did not trigger a ReCaptcha error (therefore completing the task) even without meddling with any of Flexget's .py's, however it also doesn't return a download URL if [rmz.py](https://github.com/Flexget/Flexget/blob/master/flexget/components/sites/sites/rmz.py) is not processed.\r\n### Expected behaviour:\r\nTask returning proper entries with populated urls\r\n\r\n### Actual behaviour:\r\nTask failing due to `cloudscraper` bumping into ReCaptcha during `rmz`url-rewrite\r\n### Steps to reproduce:\r\n- Step 1: \r\n#### Config:\r\n```\r\ntasks:\r\n cftask:\r\n disable:\r\n - seen\r\n - seen_info_hash\r\n - retry_failed\r\n # - urlrewriting\r\n cfscraper: yes\r\n rss: http://rmz.cr/feed\r\n series:\r\n # Random entry from the feed below#\r\n - Days of our lives\r\n```\r\n \r\n#### Log:\r\n<details>\r\n <summary>(click to expand)</summary>\r\n\r\n```\r\n$ flexget execute --tasks cftask\r\n2019-05-09 00:34 VERBOSE task_queue There are 1 tasks to execute. Shutdown will commence when they have completed.\r\n2019-05-09 00:34 VERBOSE details cftask Produced 100 entries.\r\n2019-05-09 00:34 VERBOSE series.db cftask identified by is currently on `auto` for Days of our lives. 
Multiple id types may be accepted until it locks in on the appropriate type.\r\n2019-05-09 00:34 VERBOSE task cftask ACCEPTED: `[RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB)` by series plugin because choosing first acceptable match\r\n2019-05-09 00:35 WARNING urlrewriter cftask URL rewriting rmz failed: Captcha\r\n2019-05-09 00:35 ERROR entry cftask Failed [RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB) (None)\r\n2019-05-09 00:35 VERBOSE task cftask FAILED: `[RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB)` by urlrewriting plugin\r\n2019-05-09 00:35 VERBOSE details cftask Summary - Accepted: 0 (Rejected: 0 Undecided: 99 Failed: 1)\r\n```\r\n</details>\r\n\r\n### Additional information:\r\n\r\n- FlexGet version: 2.20.26\r\n- Python version: 2.7.14\r\n- Installation method: pip\r\n- Using daemon (yes/no): yes\r\n- OS and version: Windows 10 (1809)\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport logging\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.requests import Session\n\nlog = logging.getLogger('cfscraper')\n\n\nclass CFScraper(object):\n \"\"\"\n Plugin that enables scraping of cloudflare protected sites.\n\n Example::\n cfscraper: yes\n \"\"\"\n\n schema = {'type': 'boolean'}\n\n @plugin.priority(253)\n def on_task_start(self, task, config):\n try:\n import cloudscraper\n except ImportError as e:\n log.debug('Error importing cloudscraper: %s' % e)\n raise plugin.DependencyError(\n 'cfscraper', 'cloudscraper', 'cloudscraper module required. ImportError: %s' % e\n )\n\n class CFScrapeWrapper(Session, cloudscraper.CloudScraper):\n \"\"\"\n This class allows the FlexGet session to inherit from CloudScraper instead of the requests.Session directly.\n \"\"\"\n\n if config is True:\n task.requests = CFScrapeWrapper.create_scraper(task.requests)\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(CFScraper, 'cfscraper', api_ver=2)\n", "path": "flexget/plugins/operate/cfscraper.py"}]}
| 1,828 | 250 |
gh_patches_debug_13097
|
rasdani/github-patches
|
git_diff
|
obspy__obspy-555
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
stream deepcopy issue
taken from http://lists.obspy.org/pipermail/obspy-users/2013-April/000451.html
##
hello all,
i have some issues with deep copies of stream objects.
it looks like format-specific dicts are "linked" in deep copy operations instead of being duplicated.
this happened after some obspy upgrades, since i'm pretty sure my code using deep copy functions worked correctly (and was not changed) until some months ago.
here's a sample session.
``` python
$ ipython
Python 2.6.6 (r266:84292, Dec 26 2010, 22:31:48)
Type "copyright", "credits" or "license" for more information.
IPython 0.10 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]: from obspy import read
In [2]: from obspy.arclink.client import Client
In [3]: from obspy.core import UTCDateTime
In [4]: client = Client(user="[email protected]")
In [5]: t = UTCDateTime("2009-08-24 00:20:03")
# let's grab some data first:
In [6]: st = client.getWaveform('BW', 'RJOB', '', 'EHZ', t, t + 30,
route=False)
In [7]: st[0].stats
Out[7]: Stats({'network': 'BW', '_format': 'MSEED', 'mseed': AttribDict({'dataquality': 'D', 'record_length': 512, 'byteorder': '>', 'encoding': 'STEIM1'}), 'sampling_rate': 200.0, 'station': 'RJOB', 'location': '', 'starttime': UTCDateTime(2009, 8, 24, 0, 20, 3), 'delta': 0.0050000000000000001, 'calib': 1.0, 'npts': 6001, 'endtime': UTCDateTime(2009, 8, 24, 0, 20, 33), 'channel': 'EHZ'})
# here comes the weird stuff:
In [8]: from copy import deepcopy as copy_deepcopy
In [9]: ct = copy_deepcopy(st)
In [10]: st[0].stats.mseed.dataquality = 'X'
In [11]: st[0].stats.mseed.dataquality
Out[11]: 'X'
In [12]: ct[0].stats.mseed.dataquality
Out[12]: 'X'
# BUT, dict data in common header is deep-copied as expected:
In [13]: st[0].stats.network = 'XX'
In [14]: st[0].stats.network
Out[14]: 'XX'
In [15]: ct[0].stats.network
Out[15]: 'BW'
```
</issue>
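Editor's note (added commentary): the behaviour the poster describes follows from `__deepcopy__` in the listing below, which creates a new `AttribDict` but passes the same value objects through `update(self.__dict__)`, so nested mutable members such as `stats.mseed` stay shared between the copies. A toy sketch of the usual remedy (deep-copying each value) follows; it is not necessarily the exact patch that was merged.

```python
import copy


class AttribDictSketch(dict):
    """Minimal stand-in for obspy's AttribDict, only to illustrate the fix."""

    def __deepcopy__(self, memo=None):
        ad = self.__class__()
        for key, value in self.items():
            # Deep-copy every value instead of re-using it, so nested dicts
            # (e.g. the mseed sub-dict) are duplicated rather than shared.
            ad[key] = copy.deepcopy(value, memo)
        return ad


stats = AttribDictSketch(network="BW", mseed={"dataquality": "D"})
clone = copy.deepcopy(stats)
stats["mseed"]["dataquality"] = "X"
assert clone["mseed"]["dataquality"] == "D"  # no longer shared
```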
<code>
[start of obspy/core/util/attribdict.py]
1 # -*- coding: utf-8 -*-
2 """
3 AttribDict class for ObsPy.
4
5 :copyright:
6 The ObsPy Development Team ([email protected])
7 :license:
8 GNU Lesser General Public License, Version 3
9 (http://www.gnu.org/copyleft/lesser.html)
10 """
11
12 import collections
13
14
15 class AttribDict(collections.MutableMapping):
16 """
17 A class which behaves like a dictionary.
18
19 :type data: dict, optional
20 :param data: Dictionary with initial keywords.
21
22 .. rubric:: Basic Usage
23
24 You may use the following syntax to change or access data in this class.
25
26 >>> stats = AttribDict()
27 >>> stats.network = 'BW'
28 >>> stats['station'] = 'ROTZ'
29 >>> stats.get('network')
30 'BW'
31 >>> stats['network']
32 'BW'
33 >>> stats.station
34 'ROTZ'
35 >>> x = stats.keys()
36 >>> x = sorted(x)
37 >>> x[0:3]
38 ['network', 'station']
39 """
40 defaults = {}
41 readonly = []
42
43 def __init__(self, *args, **kwargs):
44 """
45 An AttribDict can be initialized in two ways. It can be given an
46 existing dictionary as a simple argument or alternatively all keyword
47 arguments will become (key, value) pairs.
48
49 >>> attrib_dict_1 = AttribDict({"a":1, "b":2})
50 >>> attrib_dict_2 = AttribDict(a=1, b=2)
51 >>> print attrib_dict_1
52 AttribDict({'a': 1, 'b': 2})
53 >>> assert(attrib_dict_1 == attrib_dict_2)
54 """
55 # set default values directly
56 self.__dict__.update(self.defaults)
57 # use overwritable update method to set arguments
58 self.update(dict(*args, **kwargs))
59
60 def __repr__(self):
61 return "%s(%s)" % (self.__class__.__name__, self.__dict__)
62
63 def __getitem__(self, name, default=None):
64 try:
65 return self.__dict__[name]
66 except KeyError:
67 # check if we got any default value given at class level
68 if name in self.defaults:
69 return self.defaults[name]
70 # if both are missing check for a given default value
71 if default is None:
72 raise
73 return default
74
75 def __setitem__(self, key, value):
76 if key in self.readonly:
77 msg = 'Attribute "%s" in %s object is read only!'
78 raise AttributeError(msg % (key, self.__class__.__name__))
79 self.__dict__[key] = value
80
81 def __delitem__(self, name):
82 del self.__dict__[name]
83
84 def __getstate__(self):
85 return self.__dict__
86
87 def __setstate__(self, adict):
88 # set default values
89 self.__dict__.update(self.defaults)
90 # update with pickle dictionary
91 self.update(adict)
92
93 __getattr__ = __getitem__
94 __setattr__ = __setitem__
95 __delattr__ = __delitem__
96
97 def copy(self):
98 return self.__class__(self.__dict__.copy())
99
100 def __deepcopy__(self, *args, **kwargs): # @UnusedVariable
101 ad = self.__class__()
102 ad.update(self.__dict__)
103 return ad
104
105 def update(self, adict={}):
106 for (key, value) in adict.iteritems():
107 if key in self.readonly:
108 continue
109 self.__setitem__(key, value)
110
111 def _pretty_str(self, priorized_keys=[], min_label_length=16):
112 """
113 Return better readable string representation of AttribDict object.
114
115 :type priorized_keys: List of str, optional
116 :param priorized_keys: Keywords of current AttribtDict which will be
117 shown before all other keywords. Those keywords must exists
118 otherwise an exception will be raised. Defaults to empty list.
119 :type min_label_length: int, optional
120 :param min_label_length: Minimum label length for keywords. Defaults
121 to ``16``.
122 :return: String representation of current AttribDict object.
123 """
124 keys = self.keys()
125 # determine longest key name for alignment of all items
126 try:
127 i = max(max([len(k) for k in keys]), min_label_length)
128 except ValueError:
129 # no keys
130 return ""
131 pattern = "%%%ds: %%s" % (i)
132 # check if keys exist
133 other_keys = [k for k in keys if k not in priorized_keys]
134 # priorized keys first + all other keys
135 keys = priorized_keys + sorted(other_keys)
136 head = [pattern % (k, self.__dict__[k]) for k in keys]
137 return "\n".join(head)
138
139 def __iter__(self):
140 return iter(self.__dict__)
141
142 def __len__(self):
143 return len(self.__dict__)
144
145
146 if __name__ == '__main__':
147 import doctest
148 doctest.testmod(exclude_empty=True)
149
[end of obspy/core/util/attribdict.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/obspy/core/util/attribdict.py b/obspy/core/util/attribdict.py
--- a/obspy/core/util/attribdict.py
+++ b/obspy/core/util/attribdict.py
@@ -10,6 +10,7 @@
"""
import collections
+import copy
class AttribDict(collections.MutableMapping):
@@ -95,11 +96,11 @@
__delattr__ = __delitem__
def copy(self):
- return self.__class__(self.__dict__.copy())
+ return copy.deepcopy(self)
def __deepcopy__(self, *args, **kwargs): # @UnusedVariable
ad = self.__class__()
- ad.update(self.__dict__)
+ ad.update(copy.deepcopy(self.__dict__))
return ad
def update(self, adict={}):
|
{"golden_diff": "diff --git a/obspy/core/util/attribdict.py b/obspy/core/util/attribdict.py\n--- a/obspy/core/util/attribdict.py\n+++ b/obspy/core/util/attribdict.py\n@@ -10,6 +10,7 @@\n \"\"\"\n \n import collections\n+import copy\n \n \n class AttribDict(collections.MutableMapping):\n@@ -95,11 +96,11 @@\n __delattr__ = __delitem__\n \n def copy(self):\n- return self.__class__(self.__dict__.copy())\n+ return copy.deepcopy(self)\n \n def __deepcopy__(self, *args, **kwargs): # @UnusedVariable\n ad = self.__class__()\n- ad.update(self.__dict__)\n+ ad.update(copy.deepcopy(self.__dict__))\n return ad\n \n def update(self, adict={}):\n", "issue": "stream deepcopy issue\ntaken from http://lists.obspy.org/pipermail/obspy-users/2013-April/000451.html\n## \n\nhello all,\n\ni have some issues with deep copies of stream objects.\n\nit looks like format-specific dicts are \"linked\" in deep copy operations instead of being duplicated.\n\nthis happened after some obspy upgrades, since i'm pretty sure my code using deep copy functions worked correctly (and was not changed) until some months ago.\n\nhere's a sample session.\n\n``` python\n$ ipython\nPython 2.6.6 (r266:84292, Dec 26 2010, 22:31:48)\nType \"copyright\", \"credits\" or \"license\" for more information.\n\nIPython 0.10 -- An enhanced Interactive Python.\n? -> Introduction and overview of IPython's features.\n%quickref -> Quick reference.\nhelp -> Python's own help system.\nobject? -> Details about 'object'. ?object also works, ?? prints more.\n\nIn [1]: from obspy import read\nIn [2]: from obspy.arclink.client import Client\nIn [3]: from obspy.core import UTCDateTime\nIn [4]: client = Client(user=\"[email protected]\")\nIn [5]: t = UTCDateTime(\"2009-08-24 00:20:03\")\n\n# let's grab some data first:\nIn [6]: st = client.getWaveform('BW', 'RJOB', '', 'EHZ', t, t + 30,\nroute=False)\nIn [7]: st[0].stats\nOut[7]: Stats({'network': 'BW', '_format': 'MSEED', 'mseed': AttribDict({'dataquality': 'D', 'record_length': 512, 'byteorder': '>', 'encoding': 'STEIM1'}), 'sampling_rate': 200.0, 'station': 'RJOB', 'location': '', 'starttime': UTCDateTime(2009, 8, 24, 0, 20, 3), 'delta': 0.0050000000000000001, 'calib': 1.0, 'npts': 6001, 'endtime': UTCDateTime(2009, 8, 24, 0, 20, 33), 'channel': 'EHZ'})\n\n# here comes the weird stuff:\nIn [8]: from copy import deepcopy as copy_deepcopy\nIn [9]: ct = copy_deepcopy(st)\nIn [10]: st[0].stats.mseed.dataquality = 'X'\nIn [11]: st[0].stats.mseed.dataquality\nOut[11]: 'X'\nIn [12]: ct[0].stats.mseed.dataquality\nOut[12]: 'X'\n\n# BUT, dict data in common header is deep-copied as expected:\nIn [13]: st[0].stats.network = 'XX'\nIn [14]: st[0].stats.network\nOut[14]: 'XX'\nIn [15]: ct[0].stats.network\nOut[15]: 'BW'\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nAttribDict class for ObsPy.\n\n:copyright:\n The ObsPy Development Team ([email protected])\n:license:\n GNU Lesser General Public License, Version 3\n (http://www.gnu.org/copyleft/lesser.html)\n\"\"\"\n\nimport collections\n\n\nclass AttribDict(collections.MutableMapping):\n \"\"\"\n A class which behaves like a dictionary.\n\n :type data: dict, optional\n :param data: Dictionary with initial keywords.\n\n .. 
rubric:: Basic Usage\n\n You may use the following syntax to change or access data in this class.\n\n >>> stats = AttribDict()\n >>> stats.network = 'BW'\n >>> stats['station'] = 'ROTZ'\n >>> stats.get('network')\n 'BW'\n >>> stats['network']\n 'BW'\n >>> stats.station\n 'ROTZ'\n >>> x = stats.keys()\n >>> x = sorted(x)\n >>> x[0:3]\n ['network', 'station']\n \"\"\"\n defaults = {}\n readonly = []\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n An AttribDict can be initialized in two ways. It can be given an\n existing dictionary as a simple argument or alternatively all keyword\n arguments will become (key, value) pairs.\n\n >>> attrib_dict_1 = AttribDict({\"a\":1, \"b\":2})\n >>> attrib_dict_2 = AttribDict(a=1, b=2)\n >>> print attrib_dict_1\n AttribDict({'a': 1, 'b': 2})\n >>> assert(attrib_dict_1 == attrib_dict_2)\n \"\"\"\n # set default values directly\n self.__dict__.update(self.defaults)\n # use overwritable update method to set arguments\n self.update(dict(*args, **kwargs))\n\n def __repr__(self):\n return \"%s(%s)\" % (self.__class__.__name__, self.__dict__)\n\n def __getitem__(self, name, default=None):\n try:\n return self.__dict__[name]\n except KeyError:\n # check if we got any default value given at class level\n if name in self.defaults:\n return self.defaults[name]\n # if both are missing check for a given default value\n if default is None:\n raise\n return default\n\n def __setitem__(self, key, value):\n if key in self.readonly:\n msg = 'Attribute \"%s\" in %s object is read only!'\n raise AttributeError(msg % (key, self.__class__.__name__))\n self.__dict__[key] = value\n\n def __delitem__(self, name):\n del self.__dict__[name]\n\n def __getstate__(self):\n return self.__dict__\n\n def __setstate__(self, adict):\n # set default values\n self.__dict__.update(self.defaults)\n # update with pickle dictionary\n self.update(adict)\n\n __getattr__ = __getitem__\n __setattr__ = __setitem__\n __delattr__ = __delitem__\n\n def copy(self):\n return self.__class__(self.__dict__.copy())\n\n def __deepcopy__(self, *args, **kwargs): # @UnusedVariable\n ad = self.__class__()\n ad.update(self.__dict__)\n return ad\n\n def update(self, adict={}):\n for (key, value) in adict.iteritems():\n if key in self.readonly:\n continue\n self.__setitem__(key, value)\n\n def _pretty_str(self, priorized_keys=[], min_label_length=16):\n \"\"\"\n Return better readable string representation of AttribDict object.\n\n :type priorized_keys: List of str, optional\n :param priorized_keys: Keywords of current AttribtDict which will be\n shown before all other keywords. Those keywords must exists\n otherwise an exception will be raised. Defaults to empty list.\n :type min_label_length: int, optional\n :param min_label_length: Minimum label length for keywords. 
Defaults\n to ``16``.\n :return: String representation of current AttribDict object.\n \"\"\"\n keys = self.keys()\n # determine longest key name for alignment of all items\n try:\n i = max(max([len(k) for k in keys]), min_label_length)\n except ValueError:\n # no keys\n return \"\"\n pattern = \"%%%ds: %%s\" % (i)\n # check if keys exist\n other_keys = [k for k in keys if k not in priorized_keys]\n # priorized keys first + all other keys\n keys = priorized_keys + sorted(other_keys)\n head = [pattern % (k, self.__dict__[k]) for k in keys]\n return \"\\n\".join(head)\n\n def __iter__(self):\n return iter(self.__dict__)\n\n def __len__(self):\n return len(self.__dict__)\n\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod(exclude_empty=True)\n", "path": "obspy/core/util/attribdict.py"}]}
| 2,706 | 188 |
gh_patches_debug_3581
|
rasdani/github-patches
|
git_diff
|
napari__napari-2537
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Using escape key permanently hides preferences window
## 🐛 Bug
Related to #2527 ... but perhaps different enough for a different issue:
If you use the escape key when the preferences window is open, it will indeed close the window. However, using command-W or File > Preferences will not re-show the window.
</issue>
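One detail worth spelling out before the code: in Qt, pressing Escape inside a `QDialog` goes through `reject()`, which hides the dialog without generating a close event. Anything wired to `closeEvent` — such as a `closed` signal the owning window uses to reset its "dialog is open" state — therefore never fires, and the owner still believes the dialog is showing. The sketch below uses bare `qtpy` widgets rather than napari's real classes, and the bodies are illustrative, but it shows the usual remedy (and the one the patch at the end of this entry applies): route `reject()` through `close()` so the close path runs in both cases.

```python
from qtpy.QtCore import Signal
from qtpy.QtWidgets import QDialog


class PreferencesLikeDialog(QDialog):
    closed = Signal()

    def closeEvent(self, event):
        # Runs for the title-bar close button and for explicit close() calls,
        # but NOT when Escape triggers reject() on its own.
        self.closed.emit()
        super().closeEvent(event)

    def reject(self):
        # Escape (and programmatic reject) lands here; forward it to close()
        # so closeEvent fires and the 'closed' signal is emitted as well.
        super().reject()
        self.close()
```

Handling it in `reject()` keeps the fix local to the dialog instead of asking every caller to special-case the Escape path.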
<code>
[start of napari/_qt/dialogs/preferences_dialog.py]
1 import json
2
3 from qtpy.QtCore import QSize, Signal
4 from qtpy.QtWidgets import (
5 QDialog,
6 QHBoxLayout,
7 QLabel,
8 QListWidget,
9 QPushButton,
10 QStackedWidget,
11 QVBoxLayout,
12 QWidget,
13 )
14
15 from ..._vendor.qt_json_builder.qt_jsonschema_form import WidgetBuilder
16 from ...utils.settings import SETTINGS
17 from ...utils.translations import trans
18
19
20 class PreferencesDialog(QDialog):
21 """Preferences Dialog for Napari user settings."""
22
23 ui_schema = {
24 "call_order": {"ui:widget": "plugins"},
25 }
26
27 resized = Signal(QSize)
28 closed = Signal()
29
30 def __init__(self, parent=None):
31 super().__init__(parent)
32
33 self._list = QListWidget(self)
34 self._stack = QStackedWidget(self)
35
36 self._list.setObjectName("Preferences")
37
38 # Set up buttons
39 self._button_cancel = QPushButton(trans._("Cancel"))
40 self._button_ok = QPushButton(trans._("OK"))
41 self._default_restore = QPushButton(trans._("Restore defaults"))
42
43 # Setup
44 self.setWindowTitle(trans._("Preferences"))
45
46 # Layout
47 left_layout = QVBoxLayout()
48 left_layout.addWidget(self._list)
49 left_layout.addStretch()
50 left_layout.addWidget(self._default_restore)
51 left_layout.addWidget(self._button_cancel)
52 left_layout.addWidget(self._button_ok)
53
54 main_layout = QHBoxLayout()
55 main_layout.addLayout(left_layout, 1)
56 main_layout.addWidget(self._stack, 3)
57
58 self.setLayout(main_layout)
59
60 # Signals
61
62 self._list.currentRowChanged.connect(
63 lambda index: self._stack.setCurrentIndex(index)
64 )
65 self._button_cancel.clicked.connect(self.on_click_cancel)
66 self._button_ok.clicked.connect(self.on_click_ok)
67 self._default_restore.clicked.connect(self.restore_defaults)
68
69 # Make widget
70
71 self.make_dialog()
72 self._list.setCurrentRow(0)
73
74 def closeEvent(self, event):
75 """Override to emit signal."""
76 self.closed.emit()
77 super().closeEvent(event)
78
79 def resizeEvent(self, event):
80 """Override to emit signal."""
81 self.resized.emit(event.size())
82 super().resizeEvent(event)
83
84 def make_dialog(self):
85 """Removes settings not to be exposed to user and creates dialog pages."""
86
87 # Because there are multiple pages, need to keep a dictionary of values dicts.
88 # One set of keywords are for each page, then in each entry for a page, there are dicts
89 # of setting and its value.
90
91 self._values_orig_dict = {}
92 self._values_dict = {}
93 self._setting_changed_dict = {}
94
95 for page, setting in SETTINGS.schemas().items():
96 schema, values, properties = self.get_page_dict(setting)
97
98 self._setting_changed_dict[page] = {}
99 self._values_orig_dict[page] = values
100 self._values_dict[page] = values
101
102 # Only add pages if there are any properties to add.
103 if properties:
104 self.add_page(schema, values)
105
106 def get_page_dict(self, setting):
107 """Provides the schema, set of values for each setting, and the properties
108 for each setting.
109
110 Parameters
111 ----------
112 setting : dict
113 Dictionary of settings for a page within the settings manager.
114
115 Returns
116 -------
117 schema : dict
118 Json schema of the setting page.
119 values : dict
120 Dictionary of values currently set for each parameter in the settings.
121 properties : dict
122 Dictionary of properties within the json schema.
123
124 """
125
126 schema = json.loads(setting['json_schema'])
127 # Need to remove certain properties that will not be displayed on the GUI
128 properties = schema.pop('properties')
129 model = setting['model']
130 values = model.dict()
131 napari_config = getattr(model, "NapariConfig", None)
132 if napari_config is not None:
133 for val in napari_config.preferences_exclude:
134 properties.pop(val)
135 values.pop(val)
136
137 schema['properties'] = properties
138
139 return schema, values, properties
140
141 def restore_defaults(self):
142 """Launches dialog to confirm restore settings choice."""
143
144 widget = ConfirmDialog(
145 parent=self,
146 text=trans._("Are you sure you want to restore default settings?"),
147 )
148 widget.valueChanged.connect(self._reset_widgets)
149 widget.exec_()
150
151 def _reset_widgets(self):
152 """Deletes the widgets and rebuilds with defaults."""
153 self.close()
154 self._list.clear()
155
156 for n in range(self._stack.count()):
157 widget = self._stack.removeWidget(self._stack.currentWidget())
158 del widget
159
160 self.make_dialog()
161 self._list.setCurrentRow(0)
162 self.show()
163
164 def on_click_ok(self):
165 """Keeps the selected preferences saved to SETTINGS."""
166 self.close()
167
168 def on_click_cancel(self):
169 """Restores the settings in place when dialog was launched."""
170 # Need to check differences for each page.
171 for n in range(self._stack.count()):
172 # Must set the current row so that the proper list is updated
173 # in check differences.
174 self._list.setCurrentRow(n)
175 page = self._list.currentItem().text().split(" ")[0].lower()
176 # get new values for settings. If they were changed from values at beginning
177 # of preference dialog session, change them back.
178 # Using the settings value seems to be the best way to get the checkboxes right
179 # on the plugin call order widget.
180 setting = SETTINGS.schemas()[page]
181 schema, new_values, properties = self.get_page_dict(setting)
182 self.check_differences(self._values_orig_dict[page], new_values)
183
184 self._list.setCurrentRow(0)
185 self.close()
186
187 def add_page(self, schema, values):
188 """Creates a new page for each section in dialog.
189
190 Parameters
191 ----------
192 schema : dict
193 Json schema including all information to build each page in the
194 preferences dialog.
195 values : dict
196 Dictionary of current values set in preferences.
197 """
198 widget = self.build_page_dialog(schema, values)
199 self._list.addItem(schema["title"])
200 self._stack.addWidget(widget)
201
202 def build_page_dialog(self, schema, values):
203 """Builds the preferences widget using the json schema builder.
204
205 Parameters
206 ----------
207 schema : dict
208 Json schema including all information to build each page in the
209 preferences dialog.
210 values : dict
211 Dictionary of current values set in preferences.
212 """
213
214 builder = WidgetBuilder()
215 form = builder.create_form(schema, self.ui_schema)
216 # set state values for widget
217 form.widget.state = values
218 form.widget.on_changed.connect(
219 lambda d: self.check_differences(
220 d,
221 self._values_dict[schema["title"].lower()],
222 )
223 )
224
225 return form
226
227 def _values_changed(self, page, new_dict, old_dict):
228 """Loops through each setting in a page to determine if it changed.
229
230 Parameters
231 ----------
232 new_dict : dict
233 Dict that has the most recent changes by user. Each key is a setting value
234 and each item is the value.
235 old_dict : dict
236 Dict wtih values set at the begining of preferences dialog session.
237
238 """
239 for setting_name, value in new_dict.items():
240 if value != old_dict[setting_name]:
241 self._setting_changed_dict[page][setting_name] = value
242 elif (
243 value == old_dict[setting_name]
244 and setting_name in self._setting_changed_dict[page]
245 ):
246 self._setting_changed_dict[page].pop(setting_name)
247
248 def check_differences(self, new_dict, old_dict):
249 """Changes settings in settings manager with changes from dialog.
250
251 Parameters
252 ----------
253 new_dict : dict
254 Dict that has the most recent changes by user. Each key is a setting parameter
255 and each item is the value.
256 old_dict : dict
257 Dict wtih values set at the beginning of the preferences dialog session.
258 """
259 page = self._list.currentItem().text().split(" ")[0].lower()
260 self._values_changed(page, new_dict, old_dict)
261 different_values = self._setting_changed_dict[page]
262
263 if len(different_values) > 0:
264 # change the values in SETTINGS
265 for setting_name, value in different_values.items():
266 try:
267 setattr(SETTINGS._settings[page], setting_name, value)
268 self._values_dict[page] = new_dict
269 except: # noqa: E722
270 continue
271
272
273 class ConfirmDialog(QDialog):
274 """Dialog to confirms a user's choice to restore default settings."""
275
276 valueChanged = Signal(bool)
277
278 def __init__(
279 self,
280 parent: QWidget = None,
281 text: str = "",
282 ):
283 super().__init__(parent)
284
285 # Set up components
286 self._question = QLabel(self)
287 self._button_restore = QPushButton(trans._("Restore"))
288 self._button_cancel = QPushButton(trans._("Cancel"))
289
290 # Widget set up
291 self._question.setText(text)
292
293 # Layout
294 button_layout = QHBoxLayout()
295 button_layout.addWidget(self._button_cancel)
296 button_layout.addWidget(self._button_restore)
297
298 main_layout = QVBoxLayout()
299 main_layout.addWidget(self._question)
300 main_layout.addLayout(button_layout)
301
302 self.setLayout(main_layout)
303
304 # Signals
305 self._button_cancel.clicked.connect(self.on_click_cancel)
306 self._button_restore.clicked.connect(self.on_click_restore)
307
308 def on_click_cancel(self):
309 """Do not restore defaults and close window."""
310 self.close()
311
312 def on_click_restore(self):
313 """Restore defaults and close window."""
314 SETTINGS.reset()
315 self.valueChanged.emit(True)
316 self.close()
317
[end of napari/_qt/dialogs/preferences_dialog.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/napari/_qt/dialogs/preferences_dialog.py b/napari/_qt/dialogs/preferences_dialog.py
--- a/napari/_qt/dialogs/preferences_dialog.py
+++ b/napari/_qt/dialogs/preferences_dialog.py
@@ -76,6 +76,11 @@
self.closed.emit()
super().closeEvent(event)
+ def reject(self):
+ """Override to handle Escape."""
+ super().reject()
+ self.close()
+
def resizeEvent(self, event):
"""Override to emit signal."""
self.resized.emit(event.size())
|
{"golden_diff": "diff --git a/napari/_qt/dialogs/preferences_dialog.py b/napari/_qt/dialogs/preferences_dialog.py\n--- a/napari/_qt/dialogs/preferences_dialog.py\n+++ b/napari/_qt/dialogs/preferences_dialog.py\n@@ -76,6 +76,11 @@\n self.closed.emit()\n super().closeEvent(event)\n \n+ def reject(self):\n+ \"\"\"Override to handle Escape.\"\"\"\n+ super().reject()\n+ self.close()\n+\n def resizeEvent(self, event):\n \"\"\"Override to emit signal.\"\"\"\n self.resized.emit(event.size())\n", "issue": "Using escape key permanently hides preferences window\n## \ud83d\udc1b Bug\r\nRelated to #2527 ... but perhaps different enough for a different issue:\r\n\r\nIf you use the escape key when the preferences window is open, it will indeed close the window. However, using command-W or File > Preferences will not re-show the window.\n", "before_files": [{"content": "import json\n\nfrom qtpy.QtCore import QSize, Signal\nfrom qtpy.QtWidgets import (\n QDialog,\n QHBoxLayout,\n QLabel,\n QListWidget,\n QPushButton,\n QStackedWidget,\n QVBoxLayout,\n QWidget,\n)\n\nfrom ..._vendor.qt_json_builder.qt_jsonschema_form import WidgetBuilder\nfrom ...utils.settings import SETTINGS\nfrom ...utils.translations import trans\n\n\nclass PreferencesDialog(QDialog):\n \"\"\"Preferences Dialog for Napari user settings.\"\"\"\n\n ui_schema = {\n \"call_order\": {\"ui:widget\": \"plugins\"},\n }\n\n resized = Signal(QSize)\n closed = Signal()\n\n def __init__(self, parent=None):\n super().__init__(parent)\n\n self._list = QListWidget(self)\n self._stack = QStackedWidget(self)\n\n self._list.setObjectName(\"Preferences\")\n\n # Set up buttons\n self._button_cancel = QPushButton(trans._(\"Cancel\"))\n self._button_ok = QPushButton(trans._(\"OK\"))\n self._default_restore = QPushButton(trans._(\"Restore defaults\"))\n\n # Setup\n self.setWindowTitle(trans._(\"Preferences\"))\n\n # Layout\n left_layout = QVBoxLayout()\n left_layout.addWidget(self._list)\n left_layout.addStretch()\n left_layout.addWidget(self._default_restore)\n left_layout.addWidget(self._button_cancel)\n left_layout.addWidget(self._button_ok)\n\n main_layout = QHBoxLayout()\n main_layout.addLayout(left_layout, 1)\n main_layout.addWidget(self._stack, 3)\n\n self.setLayout(main_layout)\n\n # Signals\n\n self._list.currentRowChanged.connect(\n lambda index: self._stack.setCurrentIndex(index)\n )\n self._button_cancel.clicked.connect(self.on_click_cancel)\n self._button_ok.clicked.connect(self.on_click_ok)\n self._default_restore.clicked.connect(self.restore_defaults)\n\n # Make widget\n\n self.make_dialog()\n self._list.setCurrentRow(0)\n\n def closeEvent(self, event):\n \"\"\"Override to emit signal.\"\"\"\n self.closed.emit()\n super().closeEvent(event)\n\n def resizeEvent(self, event):\n \"\"\"Override to emit signal.\"\"\"\n self.resized.emit(event.size())\n super().resizeEvent(event)\n\n def make_dialog(self):\n \"\"\"Removes settings not to be exposed to user and creates dialog pages.\"\"\"\n\n # Because there are multiple pages, need to keep a dictionary of values dicts.\n # One set of keywords are for each page, then in each entry for a page, there are dicts\n # of setting and its value.\n\n self._values_orig_dict = {}\n self._values_dict = {}\n self._setting_changed_dict = {}\n\n for page, setting in SETTINGS.schemas().items():\n schema, values, properties = self.get_page_dict(setting)\n\n self._setting_changed_dict[page] = {}\n self._values_orig_dict[page] = values\n self._values_dict[page] = values\n\n # Only add pages if there are any properties to 
add.\n if properties:\n self.add_page(schema, values)\n\n def get_page_dict(self, setting):\n \"\"\"Provides the schema, set of values for each setting, and the properties\n for each setting.\n\n Parameters\n ----------\n setting : dict\n Dictionary of settings for a page within the settings manager.\n\n Returns\n -------\n schema : dict\n Json schema of the setting page.\n values : dict\n Dictionary of values currently set for each parameter in the settings.\n properties : dict\n Dictionary of properties within the json schema.\n\n \"\"\"\n\n schema = json.loads(setting['json_schema'])\n # Need to remove certain properties that will not be displayed on the GUI\n properties = schema.pop('properties')\n model = setting['model']\n values = model.dict()\n napari_config = getattr(model, \"NapariConfig\", None)\n if napari_config is not None:\n for val in napari_config.preferences_exclude:\n properties.pop(val)\n values.pop(val)\n\n schema['properties'] = properties\n\n return schema, values, properties\n\n def restore_defaults(self):\n \"\"\"Launches dialog to confirm restore settings choice.\"\"\"\n\n widget = ConfirmDialog(\n parent=self,\n text=trans._(\"Are you sure you want to restore default settings?\"),\n )\n widget.valueChanged.connect(self._reset_widgets)\n widget.exec_()\n\n def _reset_widgets(self):\n \"\"\"Deletes the widgets and rebuilds with defaults.\"\"\"\n self.close()\n self._list.clear()\n\n for n in range(self._stack.count()):\n widget = self._stack.removeWidget(self._stack.currentWidget())\n del widget\n\n self.make_dialog()\n self._list.setCurrentRow(0)\n self.show()\n\n def on_click_ok(self):\n \"\"\"Keeps the selected preferences saved to SETTINGS.\"\"\"\n self.close()\n\n def on_click_cancel(self):\n \"\"\"Restores the settings in place when dialog was launched.\"\"\"\n # Need to check differences for each page.\n for n in range(self._stack.count()):\n # Must set the current row so that the proper list is updated\n # in check differences.\n self._list.setCurrentRow(n)\n page = self._list.currentItem().text().split(\" \")[0].lower()\n # get new values for settings. 
If they were changed from values at beginning\n # of preference dialog session, change them back.\n # Using the settings value seems to be the best way to get the checkboxes right\n # on the plugin call order widget.\n setting = SETTINGS.schemas()[page]\n schema, new_values, properties = self.get_page_dict(setting)\n self.check_differences(self._values_orig_dict[page], new_values)\n\n self._list.setCurrentRow(0)\n self.close()\n\n def add_page(self, schema, values):\n \"\"\"Creates a new page for each section in dialog.\n\n Parameters\n ----------\n schema : dict\n Json schema including all information to build each page in the\n preferences dialog.\n values : dict\n Dictionary of current values set in preferences.\n \"\"\"\n widget = self.build_page_dialog(schema, values)\n self._list.addItem(schema[\"title\"])\n self._stack.addWidget(widget)\n\n def build_page_dialog(self, schema, values):\n \"\"\"Builds the preferences widget using the json schema builder.\n\n Parameters\n ----------\n schema : dict\n Json schema including all information to build each page in the\n preferences dialog.\n values : dict\n Dictionary of current values set in preferences.\n \"\"\"\n\n builder = WidgetBuilder()\n form = builder.create_form(schema, self.ui_schema)\n # set state values for widget\n form.widget.state = values\n form.widget.on_changed.connect(\n lambda d: self.check_differences(\n d,\n self._values_dict[schema[\"title\"].lower()],\n )\n )\n\n return form\n\n def _values_changed(self, page, new_dict, old_dict):\n \"\"\"Loops through each setting in a page to determine if it changed.\n\n Parameters\n ----------\n new_dict : dict\n Dict that has the most recent changes by user. Each key is a setting value\n and each item is the value.\n old_dict : dict\n Dict wtih values set at the begining of preferences dialog session.\n\n \"\"\"\n for setting_name, value in new_dict.items():\n if value != old_dict[setting_name]:\n self._setting_changed_dict[page][setting_name] = value\n elif (\n value == old_dict[setting_name]\n and setting_name in self._setting_changed_dict[page]\n ):\n self._setting_changed_dict[page].pop(setting_name)\n\n def check_differences(self, new_dict, old_dict):\n \"\"\"Changes settings in settings manager with changes from dialog.\n\n Parameters\n ----------\n new_dict : dict\n Dict that has the most recent changes by user. 
Each key is a setting parameter\n and each item is the value.\n old_dict : dict\n Dict wtih values set at the beginning of the preferences dialog session.\n \"\"\"\n page = self._list.currentItem().text().split(\" \")[0].lower()\n self._values_changed(page, new_dict, old_dict)\n different_values = self._setting_changed_dict[page]\n\n if len(different_values) > 0:\n # change the values in SETTINGS\n for setting_name, value in different_values.items():\n try:\n setattr(SETTINGS._settings[page], setting_name, value)\n self._values_dict[page] = new_dict\n except: # noqa: E722\n continue\n\n\nclass ConfirmDialog(QDialog):\n \"\"\"Dialog to confirms a user's choice to restore default settings.\"\"\"\n\n valueChanged = Signal(bool)\n\n def __init__(\n self,\n parent: QWidget = None,\n text: str = \"\",\n ):\n super().__init__(parent)\n\n # Set up components\n self._question = QLabel(self)\n self._button_restore = QPushButton(trans._(\"Restore\"))\n self._button_cancel = QPushButton(trans._(\"Cancel\"))\n\n # Widget set up\n self._question.setText(text)\n\n # Layout\n button_layout = QHBoxLayout()\n button_layout.addWidget(self._button_cancel)\n button_layout.addWidget(self._button_restore)\n\n main_layout = QVBoxLayout()\n main_layout.addWidget(self._question)\n main_layout.addLayout(button_layout)\n\n self.setLayout(main_layout)\n\n # Signals\n self._button_cancel.clicked.connect(self.on_click_cancel)\n self._button_restore.clicked.connect(self.on_click_restore)\n\n def on_click_cancel(self):\n \"\"\"Do not restore defaults and close window.\"\"\"\n self.close()\n\n def on_click_restore(self):\n \"\"\"Restore defaults and close window.\"\"\"\n SETTINGS.reset()\n self.valueChanged.emit(True)\n self.close()\n", "path": "napari/_qt/dialogs/preferences_dialog.py"}]}
| 3,554 | 127 |
gh_patches_debug_27677
|
rasdani/github-patches
|
git_diff
|
microsoft__AzureTRE-1685
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create basic, no-op `pipeline: {}` block in new `dev-vm` resource
This task is to enable future work on the `pipeline: {}` block without interfering with existing functionality.
- [x] Copy the `guacamole-azure-linuxvm` to `guacamole-dev-vm`
- [x] Add a basic 3-step `pipeline: {}` to the schema to update the `display_name` of the firewall shared service, run main, then update the firewall name again.
- [x] Make sure the `pipeline: {}` block is ignored/removed during schema validation.
- [x] Start a doc to describe what the `pipeline:{}` is for. This will be updated as features are added in subsequent tasks.
</issue>
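In API terms the change is small: the template model needs to tolerate an optional `pipeline` block, and the registration path needs to carry it through instead of dropping it. The snippet below is a trimmed-down, hypothetical stand-in for the real `ResourceTemplate`/registration code — the field name `pipeline` matches the issue, but the step layout and resource names inside it are purely illustrative of a "no-op" three-step pipeline.

```python
from typing import Optional

from pydantic import BaseModel, Field


class TemplateSketch(BaseModel):
    """Hypothetical, trimmed-down stand-in for ResourceTemplate."""

    name: str
    version: str
    properties: dict = {}
    pipeline: Optional[dict] = Field(
        default=None,
        title="Template pipeline to define updates to other resources",
    )


# A schema as it might be submitted at registration time; the pipeline block
# just renames the firewall shared service, runs the main deployment, then
# renames it again.  Step ids and names here are placeholders.
incoming_schema = {
    "name": "guacamole-dev-vm",
    "version": "0.1.0",
    "properties": {},
    "pipeline": {
        "install": [
            {"stepId": "pre-run-rename-firewall"},
            {"stepId": "main"},
            {"stepId": "post-run-rename-firewall"},
        ]
    },
}

template = TemplateSketch(**incoming_schema)
print(template.pipeline is not None)  # True -- the block survives parsing
```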
<code>
[start of api_app/db/repositories/resource_templates.py]
1 import uuid
2 from typing import List, Union
3
4 from azure.cosmos import CosmosClient
5 from pydantic import parse_obj_as
6
7 from core import config
8 from db.errors import DuplicateEntity, EntityDoesNotExist, EntityVersionExist
9 from db.repositories.base import BaseRepository
10 from models.domain.resource import ResourceType
11 from models.domain.resource_template import ResourceTemplate
12 from models.domain.user_resource_template import UserResourceTemplate
13 from models.schemas.resource_template import ResourceTemplateInCreate, ResourceTemplateInformation
14 from services.schema_service import enrich_shared_service_template, enrich_workspace_template, enrich_workspace_service_template, enrich_user_resource_template
15
16
17 class ResourceTemplateRepository(BaseRepository):
18 def __init__(self, client: CosmosClient):
19 super().__init__(client, config.STATE_STORE_RESOURCE_TEMPLATES_CONTAINER)
20
21 @staticmethod
22 def _template_by_name_query(name: str, resource_type: ResourceType) -> str:
23 return f'SELECT * FROM c WHERE c.resourceType = "{resource_type}" AND c.name = "{name}"'
24
25 @staticmethod
26 def enrich_template(template: ResourceTemplate, is_update: bool = False) -> dict:
27 if template.resourceType == ResourceType.Workspace:
28 return enrich_workspace_template(template, is_update=is_update)
29 elif template.resourceType == ResourceType.WorkspaceService:
30 return enrich_workspace_service_template(template, is_update=is_update)
31 elif template.resourceType == ResourceType.SharedService:
32 return enrich_shared_service_template(template, is_update=is_update)
33 else:
34 return enrich_user_resource_template(template, is_update=is_update)
35
36 def get_templates_information(self, resource_type: ResourceType, parent_service_name: str = "") -> List[ResourceTemplateInformation]:
37 """
38 Returns name/title/description for all current resource_type templates
39 """
40 query = f'SELECT c.name, c.title, c.description FROM c WHERE c.resourceType = "{resource_type}" AND c.current = true'
41 if resource_type == ResourceType.UserResource:
42 query += f' AND c.parentWorkspaceService = "{parent_service_name}"'
43 template_infos = self.query(query=query)
44 return [parse_obj_as(ResourceTemplateInformation, info) for info in template_infos]
45
46 def get_current_template(self, template_name: str, resource_type: ResourceType, parent_service_name: str = "") -> Union[ResourceTemplate, UserResourceTemplate]:
47 """
48 Returns full template for the current version of the 'template_name' template
49 """
50 query = self._template_by_name_query(template_name, resource_type) + ' AND c.current = true'
51 if resource_type == ResourceType.UserResource:
52 query += f' AND c.parentWorkspaceService = "{parent_service_name}"'
53 templates = self.query(query=query)
54 if len(templates) == 0:
55 raise EntityDoesNotExist
56 if len(templates) > 1:
57 raise DuplicateEntity
58 if resource_type == ResourceType.UserResource:
59 return parse_obj_as(UserResourceTemplate, templates[0])
60 else:
61 return parse_obj_as(ResourceTemplate, templates[0])
62
63 def get_template_by_name_and_version(self, name: str, version: str, resource_type: ResourceType, parent_service_name: str = None) -> Union[ResourceTemplate, UserResourceTemplate]:
64 """
65 Returns full template for the 'resource_type' template defined by 'template_name' and 'version'
66
67 For UserResource templates, you also need to pass in 'parent_service_name' as a parameter
68 """
69 query = self._template_by_name_query(name, resource_type) + f' AND c.version = "{version}"'
70
71 # If querying for a user resource, we also need to add the parentWorkspaceService (name) to the query
72 if resource_type == ResourceType.UserResource:
73 if parent_service_name:
74 query += f' AND c.parentWorkspaceService = "{parent_service_name}"'
75 else:
76 raise Exception("When getting a UserResource template, you must pass in a 'parent_service_name'")
77
78 # Execute the query and handle results
79 templates = self.query(query=query)
80 if len(templates) != 1:
81 raise EntityDoesNotExist
82 if resource_type == ResourceType.UserResource:
83 return parse_obj_as(UserResourceTemplate, templates[0])
84 else:
85 return parse_obj_as(ResourceTemplate, templates[0])
86
87 def create_template(self, template_input: ResourceTemplateInCreate, resource_type: ResourceType, parent_service_name: str = "") -> Union[ResourceTemplate, UserResourceTemplate]:
88 """
89 creates a template based on the input (workspace and workspace-services template)
90 """
91 template = {
92 "id": str(uuid.uuid4()),
93 "name": template_input.name,
94 "title": template_input.json_schema["title"],
95 "description": template_input.json_schema["description"],
96 "version": template_input.version,
97 "resourceType": resource_type,
98 "current": template_input.current,
99 "required": template_input.json_schema["required"],
100 "properties": template_input.json_schema["properties"],
101 "customActions": template_input.customActions
102 }
103
104 if resource_type == ResourceType.UserResource:
105 template["parentWorkspaceService"] = parent_service_name
106 template = parse_obj_as(UserResourceTemplate, template)
107 else:
108 template = parse_obj_as(ResourceTemplate, template)
109
110 self.save_item(template)
111 return template
112
113 def create_and_validate_template(self, template_input: ResourceTemplateInCreate, resource_type: ResourceType, workspace_service_template_name: str = "") -> dict:
114 """
115 Validates that we don't have a version conflict
116 Updates the current version for the template
117 Saves to the database and returns the enriched template
118 """
119 try:
120 template = self.get_template_by_name_and_version(template_input.name, template_input.version, resource_type, workspace_service_template_name)
121 if template:
122 raise EntityVersionExist
123 except EntityDoesNotExist:
124 try:
125 template = self.get_current_template(template_input.name, resource_type, workspace_service_template_name)
126 if template_input.current:
127 template.current = False
128 self.update_item(template)
129 except EntityDoesNotExist:
130 # first registration
131 template_input.current = True # For first time registration, template is always marked current
132 created_template = self.create_template(template_input, resource_type, workspace_service_template_name)
133 return self.enrich_template(created_template)
134
[end of api_app/db/repositories/resource_templates.py]
[start of api_app/models/domain/resource_template.py]
1 from typing import Dict, Any, List, Optional
2
3 from pydantic import Field
4
5 from models.domain.azuretremodel import AzureTREModel
6 from models.domain.resource import ResourceType
7
8
9 class Property(AzureTREModel):
10 type: str = Field(title="Property type")
11 title: str = Field("", title="Property description")
12 description: str = Field("", title="Property description")
13 default: Any = Field(None, title="Default value for the property")
14 enum: Optional[List[str]] = Field(None, title="Enum values")
15 const: Optional[Any] = Field(None, title="Constant value")
16 multipleOf: Optional[float] = Field(None, title="Multiple of")
17 maximum: Optional[float] = Field(None, title="Maximum value")
18 exclusiveMaximum: Optional[float] = Field(None, title="Exclusive maximum value")
19 minimum: Optional[float] = Field(None, title="Minimum value")
20 exclusiveMinimum: Optional[float] = Field(None, title="Exclusive minimum value")
21 maxLength: Optional[int] = Field(None, title="Maximum length")
22 minLength: Optional[int] = Field(None, title="Minimum length")
23 pattern: Optional[str] = Field(None, title="Pattern")
24 updateable: Optional[bool] = Field(None, title="Indicates that the field can be updated")
25 readOnly: Optional[bool] = Field(None, title="Indicates the field is read-only")
26
27
28 class CustomAction(AzureTREModel):
29 name: str = Field(None, title="Custom action name")
30 description: str = Field("", title="Action description")
31
32
33 class ResourceTemplate(AzureTREModel):
34 id: str
35 name: str = Field(title="Unique template name")
36 title: str = Field("", title="Template title or friendly name")
37 description: str = Field(title="Template description")
38 version: str = Field(title="Template version")
39 resourceType: ResourceType = Field(title="Type of resource this template is for (workspace/service)")
40 current: bool = Field(title="Is this the current version of this template")
41 type: str = "object"
42 required: List[str] = Field(title="List of properties which must be provided")
43 properties: Dict[str, Property] = Field(title="Template properties")
44 actions: List[CustomAction] = Field(default=[], title="Template actions")
45 customActions: List[CustomAction] = Field(default=[], title="Template custom actions")
46
47 # setting this to false means if extra, unexpected fields are supplied, the request is invalidated
48 additionalProperties: bool = Field(default=False, title="Prevent unspecified properties being applied")
49
[end of api_app/models/domain/resource_template.py]
[start of api_app/_version.py]
1 __version__ = "0.2.12"
2
[end of api_app/_version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/api_app/_version.py b/api_app/_version.py
--- a/api_app/_version.py
+++ b/api_app/_version.py
@@ -1 +1 @@
-__version__ = "0.2.12"
+__version__ = "0.2.13"
diff --git a/api_app/db/repositories/resource_templates.py b/api_app/db/repositories/resource_templates.py
--- a/api_app/db/repositories/resource_templates.py
+++ b/api_app/db/repositories/resource_templates.py
@@ -101,6 +101,9 @@
"customActions": template_input.customActions
}
+ if "pipeline" in template_input.json_schema:
+ template["pipeline"] = template_input.json_schema["pipeline"]
+
if resource_type == ResourceType.UserResource:
template["parentWorkspaceService"] = parent_service_name
template = parse_obj_as(UserResourceTemplate, template)
diff --git a/api_app/models/domain/resource_template.py b/api_app/models/domain/resource_template.py
--- a/api_app/models/domain/resource_template.py
+++ b/api_app/models/domain/resource_template.py
@@ -43,6 +43,7 @@
properties: Dict[str, Property] = Field(title="Template properties")
actions: List[CustomAction] = Field(default=[], title="Template actions")
customActions: List[CustomAction] = Field(default=[], title="Template custom actions")
+ pipeline: Optional[dict] = Field(default=None, title="Template pipeline to define updates to other resources")
# setting this to false means if extra, unexpected fields are supplied, the request is invalidated
additionalProperties: bool = Field(default=False, title="Prevent unspecified properties being applied")
|
{"golden_diff": "diff --git a/api_app/_version.py b/api_app/_version.py\n--- a/api_app/_version.py\n+++ b/api_app/_version.py\n@@ -1 +1 @@\n-__version__ = \"0.2.12\"\n+__version__ = \"0.2.13\"\ndiff --git a/api_app/db/repositories/resource_templates.py b/api_app/db/repositories/resource_templates.py\n--- a/api_app/db/repositories/resource_templates.py\n+++ b/api_app/db/repositories/resource_templates.py\n@@ -101,6 +101,9 @@\n \"customActions\": template_input.customActions\n }\n \n+ if \"pipeline\" in template_input.json_schema:\n+ template[\"pipeline\"] = template_input.json_schema[\"pipeline\"]\n+\n if resource_type == ResourceType.UserResource:\n template[\"parentWorkspaceService\"] = parent_service_name\n template = parse_obj_as(UserResourceTemplate, template)\ndiff --git a/api_app/models/domain/resource_template.py b/api_app/models/domain/resource_template.py\n--- a/api_app/models/domain/resource_template.py\n+++ b/api_app/models/domain/resource_template.py\n@@ -43,6 +43,7 @@\n properties: Dict[str, Property] = Field(title=\"Template properties\")\n actions: List[CustomAction] = Field(default=[], title=\"Template actions\")\n customActions: List[CustomAction] = Field(default=[], title=\"Template custom actions\")\n+ pipeline: Optional[dict] = Field(default=None, title=\"Template pipeline to define updates to other resources\")\n \n # setting this to false means if extra, unexpected fields are supplied, the request is invalidated\n additionalProperties: bool = Field(default=False, title=\"Prevent unspecified properties being applied\")\n", "issue": "Create basic, no-op `pipeline: {}` block in new `dev-vm` resource\nThis task is to enable future work on the `pipeline: {}` without interfering with existing functionality.\r\n\r\n- [x] Copy the `guacamole-azure-linuxvm` to `guacamole-dev-vm`\r\n- [x] Add a basic 3-step `pipeline: {}` to the schema to update the `display_name` of the firewall shared service, run main, then update the firewall name again.\r\n- [x] Make sure the `pipeline: {}` block is ignored/removed during schema validation.\r\n- [x] Start a doc to describe what the `pipeline:{}` is for. 
This will be updated as features are added in subsequent tasks.\r\n\r\n\r\n\n", "before_files": [{"content": "import uuid\nfrom typing import List, Union\n\nfrom azure.cosmos import CosmosClient\nfrom pydantic import parse_obj_as\n\nfrom core import config\nfrom db.errors import DuplicateEntity, EntityDoesNotExist, EntityVersionExist\nfrom db.repositories.base import BaseRepository\nfrom models.domain.resource import ResourceType\nfrom models.domain.resource_template import ResourceTemplate\nfrom models.domain.user_resource_template import UserResourceTemplate\nfrom models.schemas.resource_template import ResourceTemplateInCreate, ResourceTemplateInformation\nfrom services.schema_service import enrich_shared_service_template, enrich_workspace_template, enrich_workspace_service_template, enrich_user_resource_template\n\n\nclass ResourceTemplateRepository(BaseRepository):\n def __init__(self, client: CosmosClient):\n super().__init__(client, config.STATE_STORE_RESOURCE_TEMPLATES_CONTAINER)\n\n @staticmethod\n def _template_by_name_query(name: str, resource_type: ResourceType) -> str:\n return f'SELECT * FROM c WHERE c.resourceType = \"{resource_type}\" AND c.name = \"{name}\"'\n\n @staticmethod\n def enrich_template(template: ResourceTemplate, is_update: bool = False) -> dict:\n if template.resourceType == ResourceType.Workspace:\n return enrich_workspace_template(template, is_update=is_update)\n elif template.resourceType == ResourceType.WorkspaceService:\n return enrich_workspace_service_template(template, is_update=is_update)\n elif template.resourceType == ResourceType.SharedService:\n return enrich_shared_service_template(template, is_update=is_update)\n else:\n return enrich_user_resource_template(template, is_update=is_update)\n\n def get_templates_information(self, resource_type: ResourceType, parent_service_name: str = \"\") -> List[ResourceTemplateInformation]:\n \"\"\"\n Returns name/title/description for all current resource_type templates\n \"\"\"\n query = f'SELECT c.name, c.title, c.description FROM c WHERE c.resourceType = \"{resource_type}\" AND c.current = true'\n if resource_type == ResourceType.UserResource:\n query += f' AND c.parentWorkspaceService = \"{parent_service_name}\"'\n template_infos = self.query(query=query)\n return [parse_obj_as(ResourceTemplateInformation, info) for info in template_infos]\n\n def get_current_template(self, template_name: str, resource_type: ResourceType, parent_service_name: str = \"\") -> Union[ResourceTemplate, UserResourceTemplate]:\n \"\"\"\n Returns full template for the current version of the 'template_name' template\n \"\"\"\n query = self._template_by_name_query(template_name, resource_type) + ' AND c.current = true'\n if resource_type == ResourceType.UserResource:\n query += f' AND c.parentWorkspaceService = \"{parent_service_name}\"'\n templates = self.query(query=query)\n if len(templates) == 0:\n raise EntityDoesNotExist\n if len(templates) > 1:\n raise DuplicateEntity\n if resource_type == ResourceType.UserResource:\n return parse_obj_as(UserResourceTemplate, templates[0])\n else:\n return parse_obj_as(ResourceTemplate, templates[0])\n\n def get_template_by_name_and_version(self, name: str, version: str, resource_type: ResourceType, parent_service_name: str = None) -> Union[ResourceTemplate, UserResourceTemplate]:\n \"\"\"\n Returns full template for the 'resource_type' template defined by 'template_name' and 'version'\n\n For UserResource templates, you also need to pass in 'parent_service_name' as a parameter\n \"\"\"\n 
query = self._template_by_name_query(name, resource_type) + f' AND c.version = \"{version}\"'\n\n # If querying for a user resource, we also need to add the parentWorkspaceService (name) to the query\n if resource_type == ResourceType.UserResource:\n if parent_service_name:\n query += f' AND c.parentWorkspaceService = \"{parent_service_name}\"'\n else:\n raise Exception(\"When getting a UserResource template, you must pass in a 'parent_service_name'\")\n\n # Execute the query and handle results\n templates = self.query(query=query)\n if len(templates) != 1:\n raise EntityDoesNotExist\n if resource_type == ResourceType.UserResource:\n return parse_obj_as(UserResourceTemplate, templates[0])\n else:\n return parse_obj_as(ResourceTemplate, templates[0])\n\n def create_template(self, template_input: ResourceTemplateInCreate, resource_type: ResourceType, parent_service_name: str = \"\") -> Union[ResourceTemplate, UserResourceTemplate]:\n \"\"\"\n creates a template based on the input (workspace and workspace-services template)\n \"\"\"\n template = {\n \"id\": str(uuid.uuid4()),\n \"name\": template_input.name,\n \"title\": template_input.json_schema[\"title\"],\n \"description\": template_input.json_schema[\"description\"],\n \"version\": template_input.version,\n \"resourceType\": resource_type,\n \"current\": template_input.current,\n \"required\": template_input.json_schema[\"required\"],\n \"properties\": template_input.json_schema[\"properties\"],\n \"customActions\": template_input.customActions\n }\n\n if resource_type == ResourceType.UserResource:\n template[\"parentWorkspaceService\"] = parent_service_name\n template = parse_obj_as(UserResourceTemplate, template)\n else:\n template = parse_obj_as(ResourceTemplate, template)\n\n self.save_item(template)\n return template\n\n def create_and_validate_template(self, template_input: ResourceTemplateInCreate, resource_type: ResourceType, workspace_service_template_name: str = \"\") -> dict:\n \"\"\"\n Validates that we don't have a version conflict\n Updates the current version for the template\n Saves to the database and returns the enriched template\n \"\"\"\n try:\n template = self.get_template_by_name_and_version(template_input.name, template_input.version, resource_type, workspace_service_template_name)\n if template:\n raise EntityVersionExist\n except EntityDoesNotExist:\n try:\n template = self.get_current_template(template_input.name, resource_type, workspace_service_template_name)\n if template_input.current:\n template.current = False\n self.update_item(template)\n except EntityDoesNotExist:\n # first registration\n template_input.current = True # For first time registration, template is always marked current\n created_template = self.create_template(template_input, resource_type, workspace_service_template_name)\n return self.enrich_template(created_template)\n", "path": "api_app/db/repositories/resource_templates.py"}, {"content": "from typing import Dict, Any, List, Optional\n\nfrom pydantic import Field\n\nfrom models.domain.azuretremodel import AzureTREModel\nfrom models.domain.resource import ResourceType\n\n\nclass Property(AzureTREModel):\n type: str = Field(title=\"Property type\")\n title: str = Field(\"\", title=\"Property description\")\n description: str = Field(\"\", title=\"Property description\")\n default: Any = Field(None, title=\"Default value for the property\")\n enum: Optional[List[str]] = Field(None, title=\"Enum values\")\n const: Optional[Any] = Field(None, title=\"Constant value\")\n multipleOf: 
Optional[float] = Field(None, title=\"Multiple of\")\n maximum: Optional[float] = Field(None, title=\"Maximum value\")\n exclusiveMaximum: Optional[float] = Field(None, title=\"Exclusive maximum value\")\n minimum: Optional[float] = Field(None, title=\"Minimum value\")\n exclusiveMinimum: Optional[float] = Field(None, title=\"Exclusive minimum value\")\n maxLength: Optional[int] = Field(None, title=\"Maximum length\")\n minLength: Optional[int] = Field(None, title=\"Minimum length\")\n pattern: Optional[str] = Field(None, title=\"Pattern\")\n updateable: Optional[bool] = Field(None, title=\"Indicates that the field can be updated\")\n readOnly: Optional[bool] = Field(None, title=\"Indicates the field is read-only\")\n\n\nclass CustomAction(AzureTREModel):\n name: str = Field(None, title=\"Custom action name\")\n description: str = Field(\"\", title=\"Action description\")\n\n\nclass ResourceTemplate(AzureTREModel):\n id: str\n name: str = Field(title=\"Unique template name\")\n title: str = Field(\"\", title=\"Template title or friendly name\")\n description: str = Field(title=\"Template description\")\n version: str = Field(title=\"Template version\")\n resourceType: ResourceType = Field(title=\"Type of resource this template is for (workspace/service)\")\n current: bool = Field(title=\"Is this the current version of this template\")\n type: str = \"object\"\n required: List[str] = Field(title=\"List of properties which must be provided\")\n properties: Dict[str, Property] = Field(title=\"Template properties\")\n actions: List[CustomAction] = Field(default=[], title=\"Template actions\")\n customActions: List[CustomAction] = Field(default=[], title=\"Template custom actions\")\n\n # setting this to false means if extra, unexpected fields are supplied, the request is invalidated\n additionalProperties: bool = Field(default=False, title=\"Prevent unspecified properties being applied\")\n", "path": "api_app/models/domain/resource_template.py"}, {"content": "__version__ = \"0.2.12\"\n", "path": "api_app/_version.py"}]}
| 3,012 | 362 |
gh_patches_debug_8244
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-4218
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[beta][v23] 500 error via opensearch
Hello,
**Offending link**: https://beta.zestedesavoir.com/rechercher/opensearch.xml
**Problem**: I get a 500 error when visiting this URL.
</issue>
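For context on the report above: the view behind that URL refers to a template path that the patch below corrects, and a missing template makes Django's `render()` raise `TemplateDoesNotExist`, which the client sees as a 500. A minimal regression-test sketch against the reported URL (only the URL comes from the issue; the test class and assertions are illustrative, not taken from the zds test suite):

```python
# Hypothetical regression test; only the URL path is taken from the issue report.
from django.test import TestCase


class OpenSearchDescriptionTest(TestCase):
    def test_opensearch_xml_renders(self):
        response = self.client.get('/rechercher/opensearch.xml')
        # A missing template would surface here as a 500 instead of a 200.
        self.assertEqual(response.status_code, 200)
        self.assertIn('opensearchdescription', response['Content-Type'])
```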
<code>
[start of zds/searchv2/views.py]
1 # coding: utf-8
2 import operator
3
4 from elasticsearch_dsl import Search
5 from elasticsearch_dsl.query import Match, MultiMatch, FunctionScore, Term, Terms, Range
6
7 from django.conf import settings
8 from django.core.exceptions import PermissionDenied
9 from django.contrib import messages
10 from django.utils.translation import ugettext_lazy as _
11 from django.shortcuts import render
12 from django.core.urlresolvers import reverse
13
14 from zds.searchv2.forms import SearchForm
15 from zds.searchv2.models import ESIndexManager
16 from zds.utils.paginator import ZdSPagingListView
17 from zds.forum.models import Forum
18
19
20 class SearchView(ZdSPagingListView):
21 """Research view
22 """
23
24 template_name = 'searchv2/search.html'
25 paginate_by = settings.ZDS_APP['search']['results_per_page']
26
27 search_form_class = SearchForm
28 search_form = None
29 search_query = None
30
31 authorized_forums = ''
32
33 index_manager = None
34
35 def __init__(self, **kwargs):
36 """Overridden because index manager must NOT be initialized elsewhere
37 """
38
39 super(SearchView, self).__init__(**kwargs)
40 self.index_manager = ESIndexManager(**settings.ES_SEARCH_INDEX)
41
42 def get(self, request, *args, **kwargs):
43 """Overridden to catch the request and fill the form.
44 """
45
46 if 'q' in request.GET:
47 self.search_query = ''.join(request.GET['q'])
48
49 self.search_form = self.search_form_class(data=self.request.GET)
50
51 if self.search_query and not self.search_form.is_valid():
52 raise PermissionDenied('research form is invalid')
53
54 return super(SearchView, self).get(request, *args, **kwargs)
55
56 def get_queryset(self):
57 if not self.index_manager.connected_to_es:
58 messages.warning(self.request, _(u'Impossible de se connecter à Elasticsearch'))
59 return []
60
61 if self.search_query:
62
63 # find forums the user is allowed to visit
64 user = self.request.user
65
66 forums_pub = Forum.objects.filter(group__isnull=True).all()
67 if user and user.is_authenticated():
68 forums_private = Forum \
69 .objects \
70 .filter(group__isnull=False, group__in=user.groups.all()) \
71 .all()
72 list_forums = list(forums_pub | forums_private)
73 else:
74 list_forums = list(forums_pub)
75
76 self.authorized_forums = [f.pk for f in list_forums]
77
78 search_queryset = Search()
79
80 # setting the different querysets (according to the selected models, if any)
81 part_querysets = []
82 chosen_groups = self.search_form.cleaned_data['models']
83
84 if chosen_groups:
85 models = []
86 for group in chosen_groups:
87 if group in settings.ZDS_APP['search']['search_groups']:
88 models.append(settings.ZDS_APP['search']['search_groups'][group][1])
89 else:
90 models = [v[1] for k, v in settings.ZDS_APP['search']['search_groups'].iteritems()]
91
92 models = reduce(operator.concat, models)
93
94 for model in models:
95 part_querysets.append(getattr(self, 'get_queryset_{}s'.format(model))())
96
97 queryset = part_querysets[0]
98 for query in part_querysets[1:]:
99 queryset |= query
100
101 # weighting:
102 weight_functions = []
103 for _type, weights in settings.ZDS_APP['search']['boosts'].items():
104 if _type in models:
105 weight_functions.append({'filter': Match(_type=_type), 'weight': weights['global']})
106
107 scored_queryset = FunctionScore(query=queryset, boost_mode='multiply', functions=weight_functions)
108 search_queryset = search_queryset.query(scored_queryset)
109
110 # highlighting:
111 search_queryset = search_queryset.highlight_options(
112 fragment_size=150, number_of_fragments=5, pre_tags=['[hl]'], post_tags=['[/hl]'])
113 search_queryset = search_queryset.highlight('text').highlight('text_html')
114
115 # executing:
116 return self.index_manager.setup_search(search_queryset)
117
118 return []
119
120 def get_queryset_publishedcontents(self):
121 """Find in PublishedContents.
122 """
123
124 query = Match(_type='publishedcontent') \
125 & MultiMatch(query=self.search_query, fields=['title', 'description', 'categories', 'tags', 'text'])
126
127 functions_score = [
128 {
129 'filter': Match(content_type='TUTORIAL'),
130 'weight': settings.ZDS_APP['search']['boosts']['publishedcontent']['if_tutorial']
131 },
132 {
133 'filter': Match(content_type='ARTICLE'),
134 'weight': settings.ZDS_APP['search']['boosts']['publishedcontent']['if_article']
135 },
136 ]
137
138 scored_query = FunctionScore(query=query, boost_mode='multiply', functions=functions_score)
139
140 return scored_query
141
142 def get_queryset_chapters(self):
143 """Find in chapters.
144 """
145
146 query = Match(_type='chapter') \
147 & MultiMatch(query=self.search_query, fields=['title', 'text'])
148
149 return query
150
151 def get_queryset_topics(self):
152 """Find in topics, and remove result if the forum is not allowed for the user.
153
154 Score is modified if :
155
156 + topic is solved ;
157 + Topic is sticky ;
158 + Topic is locked.
159
160 """
161
162 query = Match(_type='topic') \
163 & Terms(forum_pk=self.authorized_forums) \
164 & MultiMatch(query=self.search_query, fields=['title', 'subtitle', 'tags'])
165
166 functions_score = [
167 {'filter': Match(is_solved=True), 'weight': settings.ZDS_APP['search']['boosts']['topic']['if_solved']},
168 {'filter': Match(is_sticky=True), 'weight': settings.ZDS_APP['search']['boosts']['topic']['if_sticky']},
169 {'filter': Match(is_locked=True), 'weight': settings.ZDS_APP['search']['boosts']['topic']['if_locked']}
170 ]
171
172 scored_query = FunctionScore(query=query, boost_mode='multiply', functions=functions_score)
173
174 return scored_query
175
176 def get_queryset_posts(self):
177 """Find in posts, and remove result if the forum is not allowed for the user or if the message is invisible.
178
179 Score is modified if :
180
181 + Post is the first one in a topic ;
182 + Post is marked as "useful" ;
183 + Post has a like/dislike ratio above (more like than dislike) or below (the other way around) 1.0.
184 """
185
186 query = Match(_type='post') \
187 & Terms(forum_pk=self.authorized_forums) \
188 & Term(is_visible=True) \
189 & MultiMatch(query=self.search_query, fields=['text_html'])
190
191 functions_score = [
192 {'filter': Match(position=1), 'weight': settings.ZDS_APP['search']['boosts']['post']['if_first']},
193 {'filter': Match(is_useful=True), 'weight': settings.ZDS_APP['search']['boosts']['post']['if_useful']},
194 {
195 'filter': Range(like_dislike_ratio={'gt': 1}),
196 'weight': settings.ZDS_APP['search']['boosts']['post']['ld_ratio_above_1']
197 },
198 {
199 'filter': Range(like_dislike_ratio={'lt': 1}),
200 'weight': settings.ZDS_APP['search']['boosts']['post']['ld_ratio_below_1']
201 }
202 ]
203
204 scored_query = FunctionScore(query=query, boost_mode='multiply', functions=functions_score)
205
206 return scored_query
207
208 def get_context_data(self, **kwargs):
209 context = super(SearchView, self).get_context_data(**kwargs)
210 context['form'] = self.search_form
211 context['query'] = self.search_query is not None
212
213 return context
214
215
216 def opensearch(request):
217 """Generate OpenSearch Description file"""
218
219 return render(request, 'search/opensearch.xml', {
220 'site_name': settings.ZDS_APP['site']['litteral_name'],
221 'site_url': settings.ZDS_APP['site']['url'],
222 'email_contact': settings.ZDS_APP['site']['email_contact'],
223 'language': settings.LANGUAGE_CODE,
224 'search_url': settings.ZDS_APP['site']['url'] + reverse('search:query')
225 }, content_type='application/opensearchdescription+xml')
226
[end of zds/searchv2/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zds/searchv2/views.py b/zds/searchv2/views.py
--- a/zds/searchv2/views.py
+++ b/zds/searchv2/views.py
@@ -216,7 +216,7 @@
def opensearch(request):
"""Generate OpenSearch Description file"""
- return render(request, 'search/opensearch.xml', {
+ return render(request, 'searchv2/opensearch.xml', {
'site_name': settings.ZDS_APP['site']['litteral_name'],
'site_url': settings.ZDS_APP['site']['url'],
'email_contact': settings.ZDS_APP['site']['email_contact'],
|
{"golden_diff": "diff --git a/zds/searchv2/views.py b/zds/searchv2/views.py\n--- a/zds/searchv2/views.py\n+++ b/zds/searchv2/views.py\n@@ -216,7 +216,7 @@\n def opensearch(request):\n \"\"\"Generate OpenSearch Description file\"\"\"\n \n- return render(request, 'search/opensearch.xml', {\n+ return render(request, 'searchv2/opensearch.xml', {\n 'site_name': settings.ZDS_APP['site']['litteral_name'],\n 'site_url': settings.ZDS_APP['site']['url'],\n 'email_contact': settings.ZDS_APP['site']['email_contact'],\n", "issue": "[beta][v23] Erreur 500 via opensearch\nBonjour,\r\n\r\n**Lien incrimin\u00e9** : https://beta.zestedesavoir.com/rechercher/opensearch.xml\r\n\r\n**Probl\u00e8me** : j'ai une erreur 500 en allant sur cette url.\n", "before_files": [{"content": "# coding: utf-8\nimport operator\n\nfrom elasticsearch_dsl import Search\nfrom elasticsearch_dsl.query import Match, MultiMatch, FunctionScore, Term, Terms, Range\n\nfrom django.conf import settings\nfrom django.core.exceptions import PermissionDenied\nfrom django.contrib import messages\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.shortcuts import render\nfrom django.core.urlresolvers import reverse\n\nfrom zds.searchv2.forms import SearchForm\nfrom zds.searchv2.models import ESIndexManager\nfrom zds.utils.paginator import ZdSPagingListView\nfrom zds.forum.models import Forum\n\n\nclass SearchView(ZdSPagingListView):\n \"\"\"Research view\n \"\"\"\n\n template_name = 'searchv2/search.html'\n paginate_by = settings.ZDS_APP['search']['results_per_page']\n\n search_form_class = SearchForm\n search_form = None\n search_query = None\n\n authorized_forums = ''\n\n index_manager = None\n\n def __init__(self, **kwargs):\n \"\"\"Overridden because index manager must NOT be initialized elsewhere\n \"\"\"\n\n super(SearchView, self).__init__(**kwargs)\n self.index_manager = ESIndexManager(**settings.ES_SEARCH_INDEX)\n\n def get(self, request, *args, **kwargs):\n \"\"\"Overridden to catch the request and fill the form.\n \"\"\"\n\n if 'q' in request.GET:\n self.search_query = ''.join(request.GET['q'])\n\n self.search_form = self.search_form_class(data=self.request.GET)\n\n if self.search_query and not self.search_form.is_valid():\n raise PermissionDenied('research form is invalid')\n\n return super(SearchView, self).get(request, *args, **kwargs)\n\n def get_queryset(self):\n if not self.index_manager.connected_to_es:\n messages.warning(self.request, _(u'Impossible de se connecter \u00e0 Elasticsearch'))\n return []\n\n if self.search_query:\n\n # find forums the user is allowed to visit\n user = self.request.user\n\n forums_pub = Forum.objects.filter(group__isnull=True).all()\n if user and user.is_authenticated():\n forums_private = Forum \\\n .objects \\\n .filter(group__isnull=False, group__in=user.groups.all()) \\\n .all()\n list_forums = list(forums_pub | forums_private)\n else:\n list_forums = list(forums_pub)\n\n self.authorized_forums = [f.pk for f in list_forums]\n\n search_queryset = Search()\n\n # setting the different querysets (according to the selected models, if any)\n part_querysets = []\n chosen_groups = self.search_form.cleaned_data['models']\n\n if chosen_groups:\n models = []\n for group in chosen_groups:\n if group in settings.ZDS_APP['search']['search_groups']:\n models.append(settings.ZDS_APP['search']['search_groups'][group][1])\n else:\n models = [v[1] for k, v in settings.ZDS_APP['search']['search_groups'].iteritems()]\n\n models = reduce(operator.concat, models)\n\n for model in models:\n 
part_querysets.append(getattr(self, 'get_queryset_{}s'.format(model))())\n\n queryset = part_querysets[0]\n for query in part_querysets[1:]:\n queryset |= query\n\n # weighting:\n weight_functions = []\n for _type, weights in settings.ZDS_APP['search']['boosts'].items():\n if _type in models:\n weight_functions.append({'filter': Match(_type=_type), 'weight': weights['global']})\n\n scored_queryset = FunctionScore(query=queryset, boost_mode='multiply', functions=weight_functions)\n search_queryset = search_queryset.query(scored_queryset)\n\n # highlighting:\n search_queryset = search_queryset.highlight_options(\n fragment_size=150, number_of_fragments=5, pre_tags=['[hl]'], post_tags=['[/hl]'])\n search_queryset = search_queryset.highlight('text').highlight('text_html')\n\n # executing:\n return self.index_manager.setup_search(search_queryset)\n\n return []\n\n def get_queryset_publishedcontents(self):\n \"\"\"Find in PublishedContents.\n \"\"\"\n\n query = Match(_type='publishedcontent') \\\n & MultiMatch(query=self.search_query, fields=['title', 'description', 'categories', 'tags', 'text'])\n\n functions_score = [\n {\n 'filter': Match(content_type='TUTORIAL'),\n 'weight': settings.ZDS_APP['search']['boosts']['publishedcontent']['if_tutorial']\n },\n {\n 'filter': Match(content_type='ARTICLE'),\n 'weight': settings.ZDS_APP['search']['boosts']['publishedcontent']['if_article']\n },\n ]\n\n scored_query = FunctionScore(query=query, boost_mode='multiply', functions=functions_score)\n\n return scored_query\n\n def get_queryset_chapters(self):\n \"\"\"Find in chapters.\n \"\"\"\n\n query = Match(_type='chapter') \\\n & MultiMatch(query=self.search_query, fields=['title', 'text'])\n\n return query\n\n def get_queryset_topics(self):\n \"\"\"Find in topics, and remove result if the forum is not allowed for the user.\n\n Score is modified if :\n\n + topic is solved ;\n + Topic is sticky ;\n + Topic is locked.\n\n \"\"\"\n\n query = Match(_type='topic') \\\n & Terms(forum_pk=self.authorized_forums) \\\n & MultiMatch(query=self.search_query, fields=['title', 'subtitle', 'tags'])\n\n functions_score = [\n {'filter': Match(is_solved=True), 'weight': settings.ZDS_APP['search']['boosts']['topic']['if_solved']},\n {'filter': Match(is_sticky=True), 'weight': settings.ZDS_APP['search']['boosts']['topic']['if_sticky']},\n {'filter': Match(is_locked=True), 'weight': settings.ZDS_APP['search']['boosts']['topic']['if_locked']}\n ]\n\n scored_query = FunctionScore(query=query, boost_mode='multiply', functions=functions_score)\n\n return scored_query\n\n def get_queryset_posts(self):\n \"\"\"Find in posts, and remove result if the forum is not allowed for the user or if the message is invisible.\n\n Score is modified if :\n\n + Post is the first one in a topic ;\n + Post is marked as \"useful\" ;\n + Post has a like/dislike ratio above (more like than dislike) or below (the other way around) 1.0.\n \"\"\"\n\n query = Match(_type='post') \\\n & Terms(forum_pk=self.authorized_forums) \\\n & Term(is_visible=True) \\\n & MultiMatch(query=self.search_query, fields=['text_html'])\n\n functions_score = [\n {'filter': Match(position=1), 'weight': settings.ZDS_APP['search']['boosts']['post']['if_first']},\n {'filter': Match(is_useful=True), 'weight': settings.ZDS_APP['search']['boosts']['post']['if_useful']},\n {\n 'filter': Range(like_dislike_ratio={'gt': 1}),\n 'weight': settings.ZDS_APP['search']['boosts']['post']['ld_ratio_above_1']\n },\n {\n 'filter': Range(like_dislike_ratio={'lt': 1}),\n 'weight': 
settings.ZDS_APP['search']['boosts']['post']['ld_ratio_below_1']\n }\n ]\n\n scored_query = FunctionScore(query=query, boost_mode='multiply', functions=functions_score)\n\n return scored_query\n\n def get_context_data(self, **kwargs):\n context = super(SearchView, self).get_context_data(**kwargs)\n context['form'] = self.search_form\n context['query'] = self.search_query is not None\n\n return context\n\n\ndef opensearch(request):\n \"\"\"Generate OpenSearch Description file\"\"\"\n\n return render(request, 'search/opensearch.xml', {\n 'site_name': settings.ZDS_APP['site']['litteral_name'],\n 'site_url': settings.ZDS_APP['site']['url'],\n 'email_contact': settings.ZDS_APP['site']['email_contact'],\n 'language': settings.LANGUAGE_CODE,\n 'search_url': settings.ZDS_APP['site']['url'] + reverse('search:query')\n }, content_type='application/opensearchdescription+xml')\n", "path": "zds/searchv2/views.py"}]}
| 2,973 | 145 |
gh_patches_debug_40311
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2643
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider ace_hardware is broken
During the global build at 2021-05-21-20-28-08, spider **ace_hardware** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/logs/ace_hardware.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/ace_hardware.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/ace_hardware.geojson))
</issue>
<code>
[start of locations/settings.py]
1 # -*- coding: utf-8 -*-
2 import os
3
4 # Scrapy settings for locations project
5 #
6 # For simplicity, this file contains only settings considered important or
7 # commonly used. You can find more settings consulting the documentation:
8 #
9 # http://doc.scrapy.org/en/latest/topics/settings.html
10 # http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
11 # http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
12
13 BOT_NAME = 'locations'
14
15 SPIDER_MODULES = ['locations.spiders']
16 NEWSPIDER_MODULE = 'locations.spiders'
17
18
19
20 # Obey robots.txt rules
21 ROBOTSTXT_OBEY = False
22
23 FEED_URI = os.environ.get('FEED_URI')
24 FEED_FORMAT = os.environ.get('FEED_FORMAT')
25 FEED_EXPORTERS = {
26 'geojson': 'locations.exporters.GeoJsonExporter',
27 'ndgeojson': 'locations.exporters.LineDelimitedGeoJsonExporter',
28 }
29
30 # Configure maximum concurrent requests performed by Scrapy (default: 16)
31 #CONCURRENT_REQUESTS = 32
32
33 # Configure a delay for requests for the same website (default: 0)
34 # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
35 # See also autothrottle settings and docs
36 #DOWNLOAD_DELAY = 3
37 # The download delay setting will honor only one of:
38 #CONCURRENT_REQUESTS_PER_DOMAIN = 16
39 #CONCURRENT_REQUESTS_PER_IP = 16
40
41 # Disable cookies (enabled by default)
42 #COOKIES_ENABLED = False
43
44 # Disable Telnet Console (enabled by default)
45 TELNETCONSOLE_ENABLED = False
46
47 # Override the default request headers:
48 #DEFAULT_REQUEST_HEADERS = {
49 # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
50 # 'Accept-Language': 'en',
51 #}
52
53 # Enable or disable spider middlewares
54 # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
55 #SPIDER_MIDDLEWARES = {
56 # 'locations.middlewares.MyCustomSpiderMiddleware': 543,
57 #}
58
59 # Enable or disable downloader middlewares
60 # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
61 DOWNLOADER_MIDDLEWARES = {
62 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
63 'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
64 'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
65 'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,
66 }
67
68 FAKEUSERAGENT_PROVIDERS = [
69 'scrapy_fake_useragent.providers.FakeUserAgentProvider',
70 'scrapy_fake_useragent.providers.FakerProvider',
71 'scrapy_fake_useragent.providers.FixedUserAgentProvider',
72 ]
73 USER_AGENT = 'Mozilla/5.0 (Linux; rv:1.0) https://github.com/alltheplaces/alltheplaces'
74
75 # Enable or disable extensions
76 # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
77 #EXTENSIONS = {
78 # 'scrapy.extensions.telnet.TelnetConsole': None,
79 #}
80
81 EXTENSIONS = {
82 'locations.extensions.LogStatsExtension': 101,
83 }
84
85 # Configure item pipelines
86 # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
87 ITEM_PIPELINES = {
88 'locations.pipelines.DuplicatesPipeline': 200,
89 'locations.pipelines.ApplySpiderNamePipeline': 250,
90 'locations.pipelines.ApplySpiderLevelAttributesPipeline': 300
91 }
92
93
94
95 # Enable and configure the AutoThrottle extension (disabled by default)
96 # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
97 #AUTOTHROTTLE_ENABLED = True
98 # The initial download delay
99 #AUTOTHROTTLE_START_DELAY = 5
100 # The maximum download delay to be set in case of high latencies
101 #AUTOTHROTTLE_MAX_DELAY = 60
102 # The average number of requests Scrapy should be sending in parallel to
103 # each remote server
104 #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
105 # Enable showing throttling stats for every response received:
106 #AUTOTHROTTLE_DEBUG = False
107
108 # Enable and configure HTTP caching (disabled by default)
109 # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
110 # HTTPCACHE_ENABLED = True
111 #HTTPCACHE_EXPIRATION_SECS = 0
112 #HTTPCACHE_DIR = 'httpcache'
113 #HTTPCACHE_IGNORE_HTTP_CODES = []
114 #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
115
[end of locations/settings.py]
[start of locations/spiders/ace_hardware.py]
1 import scrapy
2 import re
3 import json
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7 DAY_MAPPING = ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']
8
9
10 class AceHardwareSpider(scrapy.Spider):
11 name = "ace_hardware"
12 item_attributes = { 'brand': "Ace Hardware" }
13 allowed_domains = ["www.acehardware.com"]
14 download_delay = 0.1
15 start_urls = (
16 'https://www.acehardware.com/store-directory',
17 )
18
19 def parse_hours(self, lis):
20 o = OpeningHours()
21
22 for day in DAY_MAPPING:
23 d = day.title()[:2]
24
25 label = lis[day]['label']
26 if label == '0000 - 0000':
27 continue
28
29 start, end = label.split(' - ')
30 start = '%s:%s' % (start[:2], start[2:])
31 end = '%s:%s' % (end[:2], end[2:])
32
33 o.add_range(d, start, end)
34 return o.as_opening_hours()
35
36 def parse_store(self, response):
37 store_data = response.xpath('//script[@id="data-mz-preload-store"]/text()').extract_first()
38
39 if not store_data:
40 return
41
42 store_data = json.loads(store_data)
43
44 properties = {
45 'name': store_data['StoreName'],
46 'phone': store_data['Phone'],
47 'addr_full': store_data['StoreAddressLn1'],
48 'city': store_data['StoreCityNm'],
49 'state': store_data['StoreStateCd'],
50 'postcode': store_data['StoreZipCd'],
51 'ref': store_data['StoreNumber'],
52 'website': response.url,
53 'lat': float(store_data['Latitude']),
54 'lon': float(store_data['Longitude']),
55 }
56
57 hours = self.parse_hours(store_data['RegularHours'])
58 if hours:
59 properties['opening_hours'] = hours
60
61 yield GeojsonPointItem(**properties)
62
63 def parse(self, response):
64 for store_url in response.css('div.store-directory-list-item').xpath('div/a/@href').extract():
65 yield scrapy.Request(
66 response.urljoin(store_url),
67 callback=self.parse_store,
68 )
69
[end of locations/spiders/ace_hardware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/settings.py b/locations/settings.py
--- a/locations/settings.py
+++ b/locations/settings.py
@@ -16,6 +16,8 @@
NEWSPIDER_MODULE = 'locations.spiders'
+# Crawl responsibly by identifying yourself (and your website) on the user-agent
+USER_AGENT = 'Mozilla/5.0 (Linux; rv:1.0) https://github.com/alltheplaces/alltheplaces'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
@@ -58,19 +60,9 @@
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
-DOWNLOADER_MIDDLEWARES = {
- 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
- 'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
- 'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
- 'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,
-}
-
-FAKEUSERAGENT_PROVIDERS = [
- 'scrapy_fake_useragent.providers.FakeUserAgentProvider',
- 'scrapy_fake_useragent.providers.FakerProvider',
- 'scrapy_fake_useragent.providers.FixedUserAgentProvider',
-]
-USER_AGENT = 'Mozilla/5.0 (Linux; rv:1.0) https://github.com/alltheplaces/alltheplaces'
+#DOWNLOADER_MIDDLEWARES = {
+# 'locations.middlewares.MyCustomDownloaderMiddleware': 543,
+#}
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
diff --git a/locations/spiders/ace_hardware.py b/locations/spiders/ace_hardware.py
--- a/locations/spiders/ace_hardware.py
+++ b/locations/spiders/ace_hardware.py
@@ -9,12 +9,13 @@
class AceHardwareSpider(scrapy.Spider):
name = "ace_hardware"
- item_attributes = { 'brand': "Ace Hardware" }
+ item_attributes = {'brand': "Ace Hardware", 'brand_wikidata': 'Q4672981'}
allowed_domains = ["www.acehardware.com"]
- download_delay = 0.1
+ download_delay = 0.7
start_urls = (
'https://www.acehardware.com/store-directory',
)
+ user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
def parse_hours(self, lis):
o = OpeningHours()
@@ -22,15 +23,10 @@
for day in DAY_MAPPING:
d = day.title()[:2]
- label = lis[day]['label']
- if label == '0000 - 0000':
+ if lis[day]['label'] == '0000 - 0000':
continue
- start, end = label.split(' - ')
- start = '%s:%s' % (start[:2], start[2:])
- end = '%s:%s' % (end[:2], end[2:])
-
- o.add_range(d, start, end)
+ o.add_range(d, lis[day]['openTime'], lis[day]['closeTime'])
return o.as_opening_hours()
def parse_store(self, response):
@@ -50,8 +46,8 @@
'postcode': store_data['StoreZipCd'],
'ref': store_data['StoreNumber'],
'website': response.url,
- 'lat': float(store_data['Latitude']),
- 'lon': float(store_data['Longitude']),
+ 'lat': store_data['Latitude'],
+ 'lon': store_data['Longitude'],
}
hours = self.parse_hours(store_data['RegularHours'])
|
{"golden_diff": "diff --git a/locations/settings.py b/locations/settings.py\n--- a/locations/settings.py\n+++ b/locations/settings.py\n@@ -16,6 +16,8 @@\n NEWSPIDER_MODULE = 'locations.spiders'\n \n \n+# Crawl responsibly by identifying yourself (and your website) on the user-agent\n+USER_AGENT = 'Mozilla/5.0 (Linux; rv:1.0) https://github.com/alltheplaces/alltheplaces'\n \n # Obey robots.txt rules\n ROBOTSTXT_OBEY = False\n@@ -58,19 +60,9 @@\n \n # Enable or disable downloader middlewares\n # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html\n-DOWNLOADER_MIDDLEWARES = {\n- 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,\n- 'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,\n- 'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,\n- 'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,\n-}\n-\n-FAKEUSERAGENT_PROVIDERS = [\n- 'scrapy_fake_useragent.providers.FakeUserAgentProvider',\n- 'scrapy_fake_useragent.providers.FakerProvider',\n- 'scrapy_fake_useragent.providers.FixedUserAgentProvider',\n-]\n-USER_AGENT = 'Mozilla/5.0 (Linux; rv:1.0) https://github.com/alltheplaces/alltheplaces'\n+#DOWNLOADER_MIDDLEWARES = {\n+# 'locations.middlewares.MyCustomDownloaderMiddleware': 543,\n+#}\n \n # Enable or disable extensions\n # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html\ndiff --git a/locations/spiders/ace_hardware.py b/locations/spiders/ace_hardware.py\n--- a/locations/spiders/ace_hardware.py\n+++ b/locations/spiders/ace_hardware.py\n@@ -9,12 +9,13 @@\n \n class AceHardwareSpider(scrapy.Spider):\n name = \"ace_hardware\"\n- item_attributes = { 'brand': \"Ace Hardware\" }\n+ item_attributes = {'brand': \"Ace Hardware\", 'brand_wikidata': 'Q4672981'}\n allowed_domains = [\"www.acehardware.com\"]\n- download_delay = 0.1\n+ download_delay = 0.7\n start_urls = (\n 'https://www.acehardware.com/store-directory',\n )\n+ user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'\n \n def parse_hours(self, lis):\n o = OpeningHours()\n@@ -22,15 +23,10 @@\n for day in DAY_MAPPING:\n d = day.title()[:2]\n \n- label = lis[day]['label']\n- if label == '0000 - 0000':\n+ if lis[day]['label'] == '0000 - 0000':\n continue\n \n- start, end = label.split(' - ')\n- start = '%s:%s' % (start[:2], start[2:])\n- end = '%s:%s' % (end[:2], end[2:])\n-\n- o.add_range(d, start, end)\n+ o.add_range(d, lis[day]['openTime'], lis[day]['closeTime'])\n return o.as_opening_hours()\n \n def parse_store(self, response):\n@@ -50,8 +46,8 @@\n 'postcode': store_data['StoreZipCd'],\n 'ref': store_data['StoreNumber'],\n 'website': response.url,\n- 'lat': float(store_data['Latitude']),\n- 'lon': float(store_data['Longitude']),\n+ 'lat': store_data['Latitude'],\n+ 'lon': store_data['Longitude'],\n }\n \n hours = self.parse_hours(store_data['RegularHours'])\n", "issue": "Spider ace_hardware is broken\nDuring the global build at 2021-05-21-20-28-08, spider **ace_hardware** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/logs/ace_hardware.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/ace_hardware.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/ace_hardware.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\n\n# Scrapy settings for locations 
project\n#\n# For simplicity, this file contains only settings considered important or\n# commonly used. You can find more settings consulting the documentation:\n#\n# http://doc.scrapy.org/en/latest/topics/settings.html\n# http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html\n# http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html\n\nBOT_NAME = 'locations'\n\nSPIDER_MODULES = ['locations.spiders']\nNEWSPIDER_MODULE = 'locations.spiders'\n\n\n\n# Obey robots.txt rules\nROBOTSTXT_OBEY = False\n\nFEED_URI = os.environ.get('FEED_URI')\nFEED_FORMAT = os.environ.get('FEED_FORMAT')\nFEED_EXPORTERS = {\n 'geojson': 'locations.exporters.GeoJsonExporter',\n 'ndgeojson': 'locations.exporters.LineDelimitedGeoJsonExporter',\n}\n\n# Configure maximum concurrent requests performed by Scrapy (default: 16)\n#CONCURRENT_REQUESTS = 32\n\n# Configure a delay for requests for the same website (default: 0)\n# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay\n# See also autothrottle settings and docs\n#DOWNLOAD_DELAY = 3\n# The download delay setting will honor only one of:\n#CONCURRENT_REQUESTS_PER_DOMAIN = 16\n#CONCURRENT_REQUESTS_PER_IP = 16\n\n# Disable cookies (enabled by default)\n#COOKIES_ENABLED = False\n\n# Disable Telnet Console (enabled by default)\nTELNETCONSOLE_ENABLED = False\n\n# Override the default request headers:\n#DEFAULT_REQUEST_HEADERS = {\n# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n# 'Accept-Language': 'en',\n#}\n\n# Enable or disable spider middlewares\n# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html\n#SPIDER_MIDDLEWARES = {\n# 'locations.middlewares.MyCustomSpiderMiddleware': 543,\n#}\n\n# Enable or disable downloader middlewares\n# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html\nDOWNLOADER_MIDDLEWARES = {\n 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,\n 'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,\n 'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,\n 'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,\n}\n\nFAKEUSERAGENT_PROVIDERS = [\n 'scrapy_fake_useragent.providers.FakeUserAgentProvider',\n 'scrapy_fake_useragent.providers.FakerProvider',\n 'scrapy_fake_useragent.providers.FixedUserAgentProvider',\n]\nUSER_AGENT = 'Mozilla/5.0 (Linux; rv:1.0) https://github.com/alltheplaces/alltheplaces'\n\n# Enable or disable extensions\n# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html\n#EXTENSIONS = {\n# 'scrapy.extensions.telnet.TelnetConsole': None,\n#}\n\nEXTENSIONS = {\n 'locations.extensions.LogStatsExtension': 101,\n}\n\n# Configure item pipelines\n# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html\nITEM_PIPELINES = {\n 'locations.pipelines.DuplicatesPipeline': 200,\n 'locations.pipelines.ApplySpiderNamePipeline': 250,\n 'locations.pipelines.ApplySpiderLevelAttributesPipeline': 300\n}\n\n\n\n# Enable and configure the AutoThrottle extension (disabled by default)\n# See http://doc.scrapy.org/en/latest/topics/autothrottle.html\n#AUTOTHROTTLE_ENABLED = True\n# The initial download delay\n#AUTOTHROTTLE_START_DELAY = 5\n# The maximum download delay to be set in case of high latencies\n#AUTOTHROTTLE_MAX_DELAY = 60\n# The average number of requests Scrapy should be sending in parallel to\n# each remote server\n#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0\n# Enable showing throttling stats for every response 
received:\n#AUTOTHROTTLE_DEBUG = False\n\n# Enable and configure HTTP caching (disabled by default)\n# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings\n# HTTPCACHE_ENABLED = True\n#HTTPCACHE_EXPIRATION_SECS = 0\n#HTTPCACHE_DIR = 'httpcache'\n#HTTPCACHE_IGNORE_HTTP_CODES = []\n#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'\n", "path": "locations/settings.py"}, {"content": "import scrapy\nimport re\nimport json\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAY_MAPPING = ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']\n\n\nclass AceHardwareSpider(scrapy.Spider):\n name = \"ace_hardware\"\n item_attributes = { 'brand': \"Ace Hardware\" }\n allowed_domains = [\"www.acehardware.com\"]\n download_delay = 0.1\n start_urls = (\n 'https://www.acehardware.com/store-directory',\n )\n\n def parse_hours(self, lis):\n o = OpeningHours()\n\n for day in DAY_MAPPING:\n d = day.title()[:2]\n\n label = lis[day]['label']\n if label == '0000 - 0000':\n continue\n\n start, end = label.split(' - ')\n start = '%s:%s' % (start[:2], start[2:])\n end = '%s:%s' % (end[:2], end[2:])\n\n o.add_range(d, start, end)\n return o.as_opening_hours()\n\n def parse_store(self, response):\n store_data = response.xpath('//script[@id=\"data-mz-preload-store\"]/text()').extract_first()\n\n if not store_data:\n return\n\n store_data = json.loads(store_data)\n\n properties = {\n 'name': store_data['StoreName'],\n 'phone': store_data['Phone'],\n 'addr_full': store_data['StoreAddressLn1'],\n 'city': store_data['StoreCityNm'],\n 'state': store_data['StoreStateCd'],\n 'postcode': store_data['StoreZipCd'],\n 'ref': store_data['StoreNumber'],\n 'website': response.url,\n 'lat': float(store_data['Latitude']),\n 'lon': float(store_data['Longitude']),\n }\n\n hours = self.parse_hours(store_data['RegularHours'])\n if hours:\n properties['opening_hours'] = hours\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n for store_url in response.css('div.store-directory-list-item').xpath('div/a/@href').extract():\n yield scrapy.Request(\n response.urljoin(store_url),\n callback=self.parse_store,\n )\n", "path": "locations/spiders/ace_hardware.py"}]}
| 2,635 | 902 |
gh_patches_debug_11241
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-1942
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use string form for type checking until Python 3.7 dropped
What the if here? The string form is fine until you drop Pythons that don’t support it.
_Originally posted by @henryiii in https://github.com/scikit-hep/pyhf/pull/1909#discussion_r944456765_
This was in reference to
```python
if T.TYPE_CHECKING:
PathOrStr = T.Union[str, os.PathLike[str]]
else:
PathOrStr = T.Union[str, "os.PathLike[str]"]
```
in PR #1909, now in
https://github.com/scikit-hep/pyhf/blob/ad1dd86f1d7c1bcbf737805b6821e07c4ef75fca/src/pyhf/typing.py#L30-L33
So until Python 3.7 is dropped (I think this is the right version; I am not very up to date with my type checking knowledge) we could instead just drop the `if TYPE_CHECKING` and use
```python
#TODO: Switch to os.PathLike[str] once Python 3.7 dropped
PathOrStr = Union[str, "os.PathLike[str]"]
```
This would also allow for reverting PR #1937.
</issue>
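A minimal sketch of the runtime behaviour the issue describes, assuming Python 3.8 or earlier (the snippet is illustrative and not part of pyhf):

```python
import os
from typing import Union

# On Python 3.8 and earlier, os.PathLike is not subscriptable at runtime, so
# evaluating the annotation eagerly fails:
#   PathOrStr = Union[str, os.PathLike[str]]   # TypeError at import time

# Quoting the subscripted part turns it into a forward reference that type
# checkers resolve lazily, so the same line works on every supported Python:
PathOrStr = Union[str, "os.PathLike[str]"]
```

The `if TYPE_CHECKING` branch in the current code exists only to hand type checkers the unquoted form; the string form alone is sufficient, which is what the issue proposes.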
<code>
[start of src/pyhf/typing.py]
1 import os
2 import sys
3 from typing import TYPE_CHECKING, MutableSequence, Sequence, Union
4
5 if sys.version_info >= (3, 8):
6 from typing import Literal, TypedDict
7 else:
8 from typing_extensions import Literal, TypedDict
9
10 __all__ = (
11 "PathOrStr",
12 "ParameterBase",
13 "Parameter",
14 "Measurement",
15 "ModifierBase",
16 "NormSys",
17 "NormFactor",
18 "HistoSys",
19 "StatError",
20 "ShapeSys",
21 "ShapeFactor",
22 "LumiSys",
23 "Modifier",
24 "Sample",
25 "Channel",
26 "Observation",
27 "Workspace",
28 )
29
30 if TYPE_CHECKING:
31 PathOrStr = Union[str, os.PathLike[str]]
32 else:
33 PathOrStr = Union[str, "os.PathLike[str]"]
34
35
36 class ParameterBase(TypedDict, total=False):
37 auxdata: Sequence[float]
38 bounds: Sequence[Sequence[float]]
39 inits: Sequence[float]
40 sigmas: Sequence[float]
41 fixed: bool
42
43
44 class Parameter(ParameterBase):
45 name: str
46
47
48 class Config(TypedDict):
49 poi: str
50 parameters: MutableSequence[Parameter]
51
52
53 class Measurement(TypedDict):
54 name: str
55 config: Config
56
57
58 class ModifierBase(TypedDict):
59 name: str
60
61
62 class NormSysData(TypedDict):
63 lo: float
64 hi: float
65
66
67 class NormSys(ModifierBase):
68 type: Literal['normsys']
69 data: NormSysData
70
71
72 class NormFactor(ModifierBase):
73 type: Literal['normfactor']
74 data: None
75
76
77 class HistoSysData(TypedDict):
78 lo_data: Sequence[float]
79 hi_data: Sequence[float]
80
81
82 class HistoSys(ModifierBase):
83 type: Literal['histosys']
84 data: HistoSysData
85
86
87 class StatError(ModifierBase):
88 type: Literal['staterror']
89 data: Sequence[float]
90
91
92 class ShapeSys(ModifierBase):
93 type: Literal['shapesys']
94 data: Sequence[float]
95
96
97 class ShapeFactor(ModifierBase):
98 type: Literal['shapefactor']
99 data: None
100
101
102 class LumiSys(TypedDict):
103 name: Literal['lumi']
104 type: Literal['lumi']
105 data: None
106
107
108 Modifier = Union[
109 NormSys, NormFactor, HistoSys, StatError, ShapeSys, ShapeFactor, LumiSys
110 ]
111
112
113 class SampleBase(TypedDict, total=False):
114 parameter_configs: Sequence[Parameter]
115
116
117 class Sample(SampleBase):
118 name: str
119 data: Sequence[float]
120 modifiers: Sequence[Modifier]
121
122
123 class Channel(TypedDict):
124 name: str
125 samples: Sequence[Sample]
126
127
128 class Observation(TypedDict):
129 name: str
130 data: Sequence[float]
131
132
133 class Workspace(TypedDict):
134 measurements: Sequence[Measurement]
135 channels: Sequence[Channel]
136 observations: Sequence[Observation]
137
[end of src/pyhf/typing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pyhf/typing.py b/src/pyhf/typing.py
--- a/src/pyhf/typing.py
+++ b/src/pyhf/typing.py
@@ -1,6 +1,6 @@
import os
import sys
-from typing import TYPE_CHECKING, MutableSequence, Sequence, Union
+from typing import MutableSequence, Sequence, Union
if sys.version_info >= (3, 8):
from typing import Literal, TypedDict
@@ -27,10 +27,8 @@
"Workspace",
)
-if TYPE_CHECKING:
- PathOrStr = Union[str, os.PathLike[str]]
-else:
- PathOrStr = Union[str, "os.PathLike[str]"]
+# TODO: Switch to os.PathLike[str] once Python 3.8 support dropped
+PathOrStr = Union[str, "os.PathLike[str]"]
class ParameterBase(TypedDict, total=False):
|
{"golden_diff": "diff --git a/src/pyhf/typing.py b/src/pyhf/typing.py\n--- a/src/pyhf/typing.py\n+++ b/src/pyhf/typing.py\n@@ -1,6 +1,6 @@\n import os\n import sys\n-from typing import TYPE_CHECKING, MutableSequence, Sequence, Union\n+from typing import MutableSequence, Sequence, Union\n \n if sys.version_info >= (3, 8):\n from typing import Literal, TypedDict\n@@ -27,10 +27,8 @@\n \"Workspace\",\n )\n \n-if TYPE_CHECKING:\n- PathOrStr = Union[str, os.PathLike[str]]\n-else:\n- PathOrStr = Union[str, \"os.PathLike[str]\"]\n+# TODO: Switch to os.PathLike[str] once Python 3.8 support dropped\n+PathOrStr = Union[str, \"os.PathLike[str]\"]\n \n \n class ParameterBase(TypedDict, total=False):\n", "issue": "Use string form for type checking until Python 3.7 dropped\nWhat the if here? The string form is fine until you drop Pythons that don\u2019t support it.\r\n\r\n_Originally posted by @henryiii in https://github.com/scikit-hep/pyhf/pull/1909#discussion_r944456765_\r\n\r\nThis was in reference to \r\n\r\n```python\r\nif T.TYPE_CHECKING:\r\n PathOrStr = T.Union[str, os.PathLike[str]]\r\nelse:\r\n PathOrStr = T.Union[str, \"os.PathLike[str]\"]\r\n```\r\n\r\nin PR #190 now in \r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/ad1dd86f1d7c1bcbf737805b6821e07c4ef75fca/src/pyhf/typing.py#L30-L33\r\n\r\nSo until Python 3.7 is dropped (I think this is the version I am not very up to date with my type checking knowledge) we could instead just drop the `if TYPE_CHECKING` and use\r\n\r\n```python\r\n#TODO: Switch to os.PathLike[str] once Python 3.7 dropped\r\nPathOrStr = Union[str, \"os.PathLike[str]\"]\r\n```\r\n\r\nThis would also allow for reverting PR #1937.\n", "before_files": [{"content": "import os\nimport sys\nfrom typing import TYPE_CHECKING, MutableSequence, Sequence, Union\n\nif sys.version_info >= (3, 8):\n from typing import Literal, TypedDict\nelse:\n from typing_extensions import Literal, TypedDict\n\n__all__ = (\n \"PathOrStr\",\n \"ParameterBase\",\n \"Parameter\",\n \"Measurement\",\n \"ModifierBase\",\n \"NormSys\",\n \"NormFactor\",\n \"HistoSys\",\n \"StatError\",\n \"ShapeSys\",\n \"ShapeFactor\",\n \"LumiSys\",\n \"Modifier\",\n \"Sample\",\n \"Channel\",\n \"Observation\",\n \"Workspace\",\n)\n\nif TYPE_CHECKING:\n PathOrStr = Union[str, os.PathLike[str]]\nelse:\n PathOrStr = Union[str, \"os.PathLike[str]\"]\n\n\nclass ParameterBase(TypedDict, total=False):\n auxdata: Sequence[float]\n bounds: Sequence[Sequence[float]]\n inits: Sequence[float]\n sigmas: Sequence[float]\n fixed: bool\n\n\nclass Parameter(ParameterBase):\n name: str\n\n\nclass Config(TypedDict):\n poi: str\n parameters: MutableSequence[Parameter]\n\n\nclass Measurement(TypedDict):\n name: str\n config: Config\n\n\nclass ModifierBase(TypedDict):\n name: str\n\n\nclass NormSysData(TypedDict):\n lo: float\n hi: float\n\n\nclass NormSys(ModifierBase):\n type: Literal['normsys']\n data: NormSysData\n\n\nclass NormFactor(ModifierBase):\n type: Literal['normfactor']\n data: None\n\n\nclass HistoSysData(TypedDict):\n lo_data: Sequence[float]\n hi_data: Sequence[float]\n\n\nclass HistoSys(ModifierBase):\n type: Literal['histosys']\n data: HistoSysData\n\n\nclass StatError(ModifierBase):\n type: Literal['staterror']\n data: Sequence[float]\n\n\nclass ShapeSys(ModifierBase):\n type: Literal['shapesys']\n data: Sequence[float]\n\n\nclass ShapeFactor(ModifierBase):\n type: Literal['shapefactor']\n data: None\n\n\nclass LumiSys(TypedDict):\n name: Literal['lumi']\n type: Literal['lumi']\n data: None\n\n\nModifier = Union[\n NormSys, 
NormFactor, HistoSys, StatError, ShapeSys, ShapeFactor, LumiSys\n]\n\n\nclass SampleBase(TypedDict, total=False):\n parameter_configs: Sequence[Parameter]\n\n\nclass Sample(SampleBase):\n name: str\n data: Sequence[float]\n modifiers: Sequence[Modifier]\n\n\nclass Channel(TypedDict):\n name: str\n samples: Sequence[Sample]\n\n\nclass Observation(TypedDict):\n name: str\n data: Sequence[float]\n\n\nclass Workspace(TypedDict):\n measurements: Sequence[Measurement]\n channels: Sequence[Channel]\n observations: Sequence[Observation]\n", "path": "src/pyhf/typing.py"}]}
| 1,782 | 199 |
gh_patches_debug_31813
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-206
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Capture Celery Time-in-queue
</issue>
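The issue body is only a title, but capturing time-in-queue generally means stamping a timestamp onto the task when it is published and subtracting it when the worker starts the task. A rough sketch of that idea with Celery signals (the header name and the print stand-in are placeholders, not Scout's API):

```python
# Illustrative only; the real integration belongs in scout_apm.celery.
import time

from celery.signals import before_task_publish, task_prerun


@before_task_publish.connect
def stamp_enqueue_time(headers=None, **kwargs):
    if headers is not None:
        headers.setdefault("x_enqueued_at", time.time())


@task_prerun.connect
def record_queue_time(task=None, **kwargs):
    enqueued_at = getattr(task.request, "x_enqueued_at", None)
    if enqueued_at is not None:
        queue_time = time.time() - enqueued_at
        print("time in queue: %.3fs" % queue_time)  # stand-in for tagging the request
```

Custom message headers set at publish time show up as attributes on `task.request` in the worker, which is what makes the subtraction possible without touching the task body.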
<code>
[start of src/scout_apm/celery.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from celery.signals import task_postrun, task_prerun
5
6 import scout_apm.core
7 from scout_apm.core.tracked_request import TrackedRequest
8
9
10 def prerun_callback(task=None, **kwargs):
11 tracked_request = TrackedRequest.instance()
12 tracked_request.mark_real_request()
13
14 delivery_info = task.request.delivery_info
15 tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
16 tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
17 tracked_request.tag("routing_key", delivery_info.get("routing_key", "unknown"))
18 tracked_request.tag("queue", delivery_info.get("queue", "unknown"))
19
20 tracked_request.start_span(operation=("Job/" + task.name))
21
22
23 def postrun_callback(task=None, **kwargs):
24 tracked_request = TrackedRequest.instance()
25 tracked_request.stop_span()
26
27
28 def install():
29 installed = scout_apm.core.install()
30 if not installed:
31 return
32
33 task_prerun.connect(prerun_callback)
34 task_postrun.connect(postrun_callback)
35
36
37 def uninstall():
38 task_prerun.disconnect(prerun_callback)
39 task_postrun.disconnect(postrun_callback)
40
[end of src/scout_apm/celery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py
--- a/src/scout_apm/celery.py
+++ b/src/scout_apm/celery.py
@@ -1,16 +1,34 @@
# coding=utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
-from celery.signals import task_postrun, task_prerun
+import datetime as dt
+
+from celery.signals import before_task_publish, task_postrun, task_prerun
import scout_apm.core
+from scout_apm.compat import datetime_to_timestamp
from scout_apm.core.tracked_request import TrackedRequest
+def before_publish_callback(headers=None, properties=None, **kwargs):
+ if "scout_task_start" not in headers:
+ headers["scout_task_start"] = datetime_to_timestamp(dt.datetime.utcnow())
+
+
def prerun_callback(task=None, **kwargs):
tracked_request = TrackedRequest.instance()
tracked_request.mark_real_request()
+ start = getattr(task.request, "scout_task_start", None)
+ if start is not None:
+ now = datetime_to_timestamp(dt.datetime.utcnow())
+ try:
+ queue_time = now - start
+ except TypeError:
+ pass
+ else:
+ tracked_request.tag("queue_time", queue_time)
+
delivery_info = task.request.delivery_info
tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
@@ -30,10 +48,12 @@
if not installed:
return
+ before_task_publish.connect(before_publish_callback)
task_prerun.connect(prerun_callback)
task_postrun.connect(postrun_callback)
def uninstall():
+ before_task_publish.disconnect(before_publish_callback)
task_prerun.disconnect(prerun_callback)
task_postrun.disconnect(postrun_callback)
|
{"golden_diff": "diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py\n--- a/src/scout_apm/celery.py\n+++ b/src/scout_apm/celery.py\n@@ -1,16 +1,34 @@\n # coding=utf-8\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n-from celery.signals import task_postrun, task_prerun\n+import datetime as dt\n+\n+from celery.signals import before_task_publish, task_postrun, task_prerun\n \n import scout_apm.core\n+from scout_apm.compat import datetime_to_timestamp\n from scout_apm.core.tracked_request import TrackedRequest\n \n \n+def before_publish_callback(headers=None, properties=None, **kwargs):\n+ if \"scout_task_start\" not in headers:\n+ headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n+\n+\n def prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.mark_real_request()\n \n+ start = getattr(task.request, \"scout_task_start\", None)\n+ if start is not None:\n+ now = datetime_to_timestamp(dt.datetime.utcnow())\n+ try:\n+ queue_time = now - start\n+ except TypeError:\n+ pass\n+ else:\n+ tracked_request.tag(\"queue_time\", queue_time)\n+\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n@@ -30,10 +48,12 @@\n if not installed:\n return\n \n+ before_task_publish.connect(before_publish_callback)\n task_prerun.connect(prerun_callback)\n task_postrun.connect(postrun_callback)\n \n \n def uninstall():\n+ before_task_publish.disconnect(before_publish_callback)\n task_prerun.disconnect(prerun_callback)\n task_postrun.disconnect(postrun_callback)\n", "issue": "Capture Celery Time-in-queue\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom celery.signals import task_postrun, task_prerun\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.mark_real_request()\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef install():\n installed = scout_apm.core.install()\n if not installed:\n return\n\n task_prerun.connect(prerun_callback)\n task_postrun.connect(postrun_callback)\n\n\ndef uninstall():\n task_prerun.disconnect(prerun_callback)\n task_postrun.disconnect(postrun_callback)\n", "path": "src/scout_apm/celery.py"}]}
| 900 | 434 |
gh_patches_debug_18846
|
rasdani/github-patches
|
git_diff
|
xonsh__xonsh-877
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tab completion on __xonsh_shell__.shell. fails
[See this mailing list discussion.](https://groups.google.com/d/msgid/xonsh/eb9b3b50-33c0-4652-949b-f042ec160f78%40googlegroups.com?utm_medium=email&utm_source=footer) I think that this is caused by the coloring code being updated and something in the completions not taking this into account.
</issue>
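For orientation, attribute completion of this kind normally evaluates the expression to the left of the final dot and then lists matching attribute names; a generic sketch of that pattern (not xonsh's actual completer, which is where the interaction with the new coloring code is suspected to break):

```python
# Generic dot-attribute completion sketch; names and behaviour are illustrative.
import builtins


def complete_attr(prefix, ctx):
    expr, dot, attr = prefix.rpartition('.')
    if not dot:
        return set()
    try:
        obj = eval(expr, {'__builtins__': builtins}, ctx)  # sketch only
    except Exception:
        return set()
    return {'{}.{}'.format(expr, name)
            for name in dir(obj) if name.startswith(attr)}


# e.g. complete_attr('__xonsh_shell__.shell.', {'__xonsh_shell__': shell})
```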
<code>
[start of xonsh/base_shell.py]
1 # -*- coding: utf-8 -*-
2 """The base class for xonsh shell"""
3 import io
4 import os
5 import sys
6 import time
7 import builtins
8
9 from xonsh.tools import XonshError, escape_windows_cmd_string, ON_WINDOWS, \
10 print_exception, HAVE_PYGMENTS
11 from xonsh.codecache import (should_use_cache, code_cache_name,
12 code_cache_check, get_cache_filename,
13 update_cache, run_compiled_code)
14 from xonsh.completer import Completer
15 from xonsh.environ import multiline_prompt, format_prompt
16 if HAVE_PYGMENTS:
17 from xonsh.pyghooks import XonshStyle
18
19
20 class _TeeOut(object):
21 """Tees stdout into the original sys.stdout and another buffer."""
22
23 def __init__(self, buf):
24 self.buffer = buf
25 self.stdout = sys.stdout
26 self.encoding = self.stdout.encoding
27 sys.stdout = self
28
29 def __del__(self):
30 sys.stdout = self.stdout
31
32 def close(self):
33 """Restores the original stdout."""
34 sys.stdout = self.stdout
35
36 def write(self, data):
37 """Writes data to the original stdout and the buffer."""
38 #data = data.replace('\001', '').replace('\002', '')
39 self.stdout.write(data)
40 self.buffer.write(data)
41
42 def flush(self):
43 """Flushes both the original stdout and the buffer."""
44 self.stdout.flush()
45 self.buffer.flush()
46
47 def fileno(self):
48 """Tunnel fileno() calls."""
49 _ = self
50 return sys.stdout.fileno()
51
52
53 class _TeeErr(object):
54 """Tees stderr into the original sys.stdout and another buffer."""
55
56 def __init__(self, buf):
57 self.buffer = buf
58 self.stderr = sys.stderr
59 self.encoding = self.stderr.encoding
60 sys.stderr = self
61
62 def __del__(self):
63 sys.stderr = self.stderr
64
65 def close(self):
66 """Restores the original stderr."""
67 sys.stderr = self.stderr
68
69 def write(self, data):
70 """Writes data to the original stderr and the buffer."""
71 #data = data.replace('\001', '').replace('\002', '')
72 self.stderr.write(data)
73 self.buffer.write(data)
74
75 def flush(self):
76 """Flushes both the original stderr and the buffer."""
77 self.stderr.flush()
78 self.buffer.flush()
79
80 def fileno(self):
81 """Tunnel fileno() calls."""
82 _ = self
83 return sys.stderr.fileno()
84
85
86 class Tee(io.StringIO):
87 """Class that merges tee'd stdout and stderr into a single buffer.
88
89 This represents what a user would actually see on the command line.
90 """
91 # pylint is a stupid about counting public methods when using inheritance.
92 # pylint: disable=too-few-public-methods
93
94 def __init__(self, *args, **kwargs):
95 super().__init__(*args, **kwargs)
96 self.stdout = _TeeOut(self)
97 self.stderr = _TeeErr(self)
98
99 def __del__(self):
100 del self.stdout, self.stderr
101 super().__del__()
102
103 def close(self):
104 """Closes the buffer as well as the stdout and stderr tees."""
105 self.stdout.close()
106 self.stderr.close()
107 super().close()
108
109
110 class BaseShell(object):
111 """The xonsh shell."""
112
113 def __init__(self, execer, ctx, **kwargs):
114 super().__init__()
115 self.execer = execer
116 self.ctx = ctx
117 if kwargs.get('completer', True):
118 self.completer = Completer()
119 self.buffer = []
120 self.need_more_lines = False
121 self.mlprompt = None
122 if HAVE_PYGMENTS:
123 env = builtins.__xonsh_env__
124 self.styler = XonshStyle(env.get('XONSH_COLOR_STYLE'))
125 else:
126 self.styler = None
127
128 def emptyline(self):
129 """Called when an empty line has been entered."""
130 self.need_more_lines = False
131 self.default('')
132
133 def singleline(self, **kwargs):
134 """Reads a single line of input from the shell."""
135 msg = '{0} has not implemented singleline().'
136 raise RuntimeError(msg.format(self.__class__.__name__))
137
138 def precmd(self, line):
139 """Called just before execution of line."""
140 return line if self.need_more_lines else line.lstrip()
141
142 def default(self, line):
143 """Implements code execution."""
144 line = line if line.endswith('\n') else line + '\n'
145 src, code = self.push(line)
146 if code is None:
147 return
148 hist = builtins.__xonsh_history__ # pylint: disable=no-member
149 ts1 = None
150 store_stdout = builtins.__xonsh_env__.get('XONSH_STORE_STDOUT') # pylint: disable=no-member
151 tee = Tee() if store_stdout else io.StringIO()
152 try:
153 ts0 = time.time()
154 run_compiled_code(code, self.ctx, None, 'single')
155 ts1 = time.time()
156 if hist.last_cmd_rtn is None:
157 hist.last_cmd_rtn = 0 # returncode for success
158 except XonshError as e:
159 print(e.args[0], file=sys.stderr)
160 if hist.last_cmd_rtn is None:
161 hist.last_cmd_rtn = 1 # return code for failure
162 except Exception: # pylint: disable=broad-except
163 print_exception()
164 if hist.last_cmd_rtn is None:
165 hist.last_cmd_rtn = 1 # return code for failure
166 finally:
167 ts1 = ts1 or time.time()
168 self._append_history(inp=src, ts=[ts0, ts1], tee_out=tee.getvalue())
169 tee.close()
170 if builtins.__xonsh_exit__: # pylint: disable=no-member
171 return True
172
173 def push(self, line):
174 """Pushes a line onto the buffer and compiles the code in a way that
175 enables multiline input.
176 """
177 code = None
178 self.buffer.append(line)
179 if self.need_more_lines:
180 return None, code
181 src = ''.join(self.buffer)
182 _cache = should_use_cache(self.execer, 'single')
183 if _cache:
184 codefname = code_cache_name(src)
185 cachefname = get_cache_filename(codefname, code=True)
186 usecache, code = code_cache_check(cachefname)
187 if usecache:
188 self.reset_buffer()
189 return src, code
190 try:
191 code = self.execer.compile(src,
192 mode='single',
193 glbs=self.ctx,
194 locs=None)
195 if _cache:
196 update_cache(code, cachefname)
197 self.reset_buffer()
198 except SyntaxError:
199 if line == '\n':
200 self.reset_buffer()
201 print_exception()
202 return src, None
203 self.need_more_lines = True
204 except Exception: # pylint: disable=broad-except
205 self.reset_buffer()
206 print_exception()
207 return src, None
208 return src, code
209
210 def reset_buffer(self):
211 """Resets the line buffer."""
212 self.buffer.clear()
213 self.need_more_lines = False
214 self.mlprompt = None
215
216 def settitle(self):
217 """Sets terminal title."""
218 _ = self
219 env = builtins.__xonsh_env__ # pylint: disable=no-member
220 term = env.get('TERM', None)
221 # Shells running in emacs sets TERM to "dumb" or "eterm-color".
222 # Do not set title for these to avoid garbled prompt.
223 if term is None or term in ['dumb', 'eterm-color', 'linux']:
224 return
225 t = env.get('TITLE')
226 if t is None:
227 return
228 t = format_prompt(t)
229 if ON_WINDOWS and 'ANSICON' not in env:
230 t = escape_windows_cmd_string(t)
231 os.system('title {}'.format(t))
232 else:
233 os.write(1, "\x1b]2;{0}\x07".format(t).encode())
234
235 @property
236 def prompt(self):
237 """Obtains the current prompt string."""
238 if self.need_more_lines:
239 if self.mlprompt is None:
240 try:
241 self.mlprompt = multiline_prompt()
242 except Exception: # pylint: disable=broad-except
243 print_exception()
244 self.mlprompt = '<multiline prompt error> '
245 return self.mlprompt
246 env = builtins.__xonsh_env__ # pylint: disable=no-member
247 p = env.get('PROMPT')
248 try:
249 p = format_prompt(p)
250 except Exception: # pylint: disable=broad-except
251 print_exception()
252 self.settitle()
253 return p
254
255 def _append_history(self, tee_out=None, **info):
256 """Append information about the command to the history."""
257 _ = self
258 hist = builtins.__xonsh_history__ # pylint: disable=no-member
259 info['rtn'] = hist.last_cmd_rtn
260 tee_out = tee_out or None
261 last_out = hist.last_cmd_out or None
262 if last_out is None and tee_out is None:
263 pass
264 elif last_out is None and tee_out is not None:
265 info['out'] = tee_out
266 elif last_out is not None and tee_out is None:
267 info['out'] = last_out
268 else:
269 info['out'] = tee_out + '\n' + last_out
270 hist.append(info)
271 hist.last_cmd_rtn = hist.last_cmd_out = None
272
273 def format_color(self, string, **kwargs):
274 """Formats the colors in a string. This base implmentation does not
275 actually do any coloring, but just returns the string directly.
276 """
277 return string
278
279 def print_color(self, string, **kwargs):
280 """Prints a string in color. This base implmentation does not actually
281 do any coloring, but just prints the string directly.
282 """
283 if not isinstance(string, str):
284 string = ''.join([x for _, x in string])
285 print(string, **kwargs)
286
287 def color_style_names(self):
288 """Returns an iterable of all available style names."""
289 return ()
290
291 def color_style(self):
292 """Returns the current color map."""
293 return {}
294
[end of xonsh/base_shell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/xonsh/base_shell.py b/xonsh/base_shell.py
--- a/xonsh/base_shell.py
+++ b/xonsh/base_shell.py
@@ -12,7 +12,7 @@
code_cache_check, get_cache_filename,
update_cache, run_compiled_code)
from xonsh.completer import Completer
-from xonsh.environ import multiline_prompt, format_prompt
+from xonsh.environ import multiline_prompt, format_prompt, partial_format_prompt
if HAVE_PYGMENTS:
from xonsh.pyghooks import XonshStyle
@@ -246,7 +246,7 @@
env = builtins.__xonsh_env__ # pylint: disable=no-member
p = env.get('PROMPT')
try:
- p = format_prompt(p)
+ p = partial_format_prompt(p)
except Exception: # pylint: disable=broad-except
print_exception()
self.settitle()
|
{"golden_diff": "diff --git a/xonsh/base_shell.py b/xonsh/base_shell.py\n--- a/xonsh/base_shell.py\n+++ b/xonsh/base_shell.py\n@@ -12,7 +12,7 @@\n code_cache_check, get_cache_filename,\n update_cache, run_compiled_code)\n from xonsh.completer import Completer\n-from xonsh.environ import multiline_prompt, format_prompt\n+from xonsh.environ import multiline_prompt, format_prompt, partial_format_prompt\n if HAVE_PYGMENTS:\n from xonsh.pyghooks import XonshStyle\n \n@@ -246,7 +246,7 @@\n env = builtins.__xonsh_env__ # pylint: disable=no-member\n p = env.get('PROMPT')\n try:\n- p = format_prompt(p)\n+ p = partial_format_prompt(p)\n except Exception: # pylint: disable=broad-except\n print_exception()\n self.settitle()\n", "issue": "Tab completion on __xonsh_shell__.shell. fails\n[See this mailing list discussion.](https://groups.google.com/d/msgid/xonsh/eb9b3b50-33c0-4652-949b-f042ec160f78%40googlegroups.com?utm_medium=email&utm_source=footer) I think that this is caused by the coloring code being updated and something in the completions not taking this into account.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"The base class for xonsh shell\"\"\"\nimport io\nimport os\nimport sys\nimport time\nimport builtins\n\nfrom xonsh.tools import XonshError, escape_windows_cmd_string, ON_WINDOWS, \\\n print_exception, HAVE_PYGMENTS\nfrom xonsh.codecache import (should_use_cache, code_cache_name,\n code_cache_check, get_cache_filename,\n update_cache, run_compiled_code)\nfrom xonsh.completer import Completer\nfrom xonsh.environ import multiline_prompt, format_prompt\nif HAVE_PYGMENTS:\n from xonsh.pyghooks import XonshStyle\n\n\nclass _TeeOut(object):\n \"\"\"Tees stdout into the original sys.stdout and another buffer.\"\"\"\n\n def __init__(self, buf):\n self.buffer = buf\n self.stdout = sys.stdout\n self.encoding = self.stdout.encoding\n sys.stdout = self\n\n def __del__(self):\n sys.stdout = self.stdout\n\n def close(self):\n \"\"\"Restores the original stdout.\"\"\"\n sys.stdout = self.stdout\n\n def write(self, data):\n \"\"\"Writes data to the original stdout and the buffer.\"\"\"\n #data = data.replace('\\001', '').replace('\\002', '')\n self.stdout.write(data)\n self.buffer.write(data)\n\n def flush(self):\n \"\"\"Flushes both the original stdout and the buffer.\"\"\"\n self.stdout.flush()\n self.buffer.flush()\n\n def fileno(self):\n \"\"\"Tunnel fileno() calls.\"\"\"\n _ = self\n return sys.stdout.fileno()\n\n\nclass _TeeErr(object):\n \"\"\"Tees stderr into the original sys.stdout and another buffer.\"\"\"\n\n def __init__(self, buf):\n self.buffer = buf\n self.stderr = sys.stderr\n self.encoding = self.stderr.encoding\n sys.stderr = self\n\n def __del__(self):\n sys.stderr = self.stderr\n\n def close(self):\n \"\"\"Restores the original stderr.\"\"\"\n sys.stderr = self.stderr\n\n def write(self, data):\n \"\"\"Writes data to the original stderr and the buffer.\"\"\"\n #data = data.replace('\\001', '').replace('\\002', '')\n self.stderr.write(data)\n self.buffer.write(data)\n\n def flush(self):\n \"\"\"Flushes both the original stderr and the buffer.\"\"\"\n self.stderr.flush()\n self.buffer.flush()\n\n def fileno(self):\n \"\"\"Tunnel fileno() calls.\"\"\"\n _ = self\n return sys.stderr.fileno()\n\n\nclass Tee(io.StringIO):\n \"\"\"Class that merges tee'd stdout and stderr into a single buffer.\n\n This represents what a user would actually see on the command line.\n \"\"\"\n # pylint is a stupid about counting public methods when using inheritance.\n # pylint: 
disable=too-few-public-methods\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.stdout = _TeeOut(self)\n self.stderr = _TeeErr(self)\n\n def __del__(self):\n del self.stdout, self.stderr\n super().__del__()\n\n def close(self):\n \"\"\"Closes the buffer as well as the stdout and stderr tees.\"\"\"\n self.stdout.close()\n self.stderr.close()\n super().close()\n\n\nclass BaseShell(object):\n \"\"\"The xonsh shell.\"\"\"\n\n def __init__(self, execer, ctx, **kwargs):\n super().__init__()\n self.execer = execer\n self.ctx = ctx\n if kwargs.get('completer', True):\n self.completer = Completer()\n self.buffer = []\n self.need_more_lines = False\n self.mlprompt = None\n if HAVE_PYGMENTS:\n env = builtins.__xonsh_env__\n self.styler = XonshStyle(env.get('XONSH_COLOR_STYLE'))\n else:\n self.styler = None\n\n def emptyline(self):\n \"\"\"Called when an empty line has been entered.\"\"\"\n self.need_more_lines = False\n self.default('')\n\n def singleline(self, **kwargs):\n \"\"\"Reads a single line of input from the shell.\"\"\"\n msg = '{0} has not implemented singleline().'\n raise RuntimeError(msg.format(self.__class__.__name__))\n\n def precmd(self, line):\n \"\"\"Called just before execution of line.\"\"\"\n return line if self.need_more_lines else line.lstrip()\n\n def default(self, line):\n \"\"\"Implements code execution.\"\"\"\n line = line if line.endswith('\\n') else line + '\\n'\n src, code = self.push(line)\n if code is None:\n return\n hist = builtins.__xonsh_history__ # pylint: disable=no-member\n ts1 = None\n store_stdout = builtins.__xonsh_env__.get('XONSH_STORE_STDOUT') # pylint: disable=no-member\n tee = Tee() if store_stdout else io.StringIO()\n try:\n ts0 = time.time()\n run_compiled_code(code, self.ctx, None, 'single')\n ts1 = time.time()\n if hist.last_cmd_rtn is None:\n hist.last_cmd_rtn = 0 # returncode for success\n except XonshError as e:\n print(e.args[0], file=sys.stderr)\n if hist.last_cmd_rtn is None:\n hist.last_cmd_rtn = 1 # return code for failure\n except Exception: # pylint: disable=broad-except\n print_exception()\n if hist.last_cmd_rtn is None:\n hist.last_cmd_rtn = 1 # return code for failure\n finally:\n ts1 = ts1 or time.time()\n self._append_history(inp=src, ts=[ts0, ts1], tee_out=tee.getvalue())\n tee.close()\n if builtins.__xonsh_exit__: # pylint: disable=no-member\n return True\n\n def push(self, line):\n \"\"\"Pushes a line onto the buffer and compiles the code in a way that\n enables multiline input.\n \"\"\"\n code = None\n self.buffer.append(line)\n if self.need_more_lines:\n return None, code\n src = ''.join(self.buffer)\n _cache = should_use_cache(self.execer, 'single')\n if _cache:\n codefname = code_cache_name(src)\n cachefname = get_cache_filename(codefname, code=True)\n usecache, code = code_cache_check(cachefname)\n if usecache:\n self.reset_buffer()\n return src, code\n try:\n code = self.execer.compile(src,\n mode='single',\n glbs=self.ctx,\n locs=None)\n if _cache:\n update_cache(code, cachefname)\n self.reset_buffer()\n except SyntaxError:\n if line == '\\n':\n self.reset_buffer()\n print_exception()\n return src, None\n self.need_more_lines = True\n except Exception: # pylint: disable=broad-except\n self.reset_buffer()\n print_exception()\n return src, None\n return src, code\n\n def reset_buffer(self):\n \"\"\"Resets the line buffer.\"\"\"\n self.buffer.clear()\n self.need_more_lines = False\n self.mlprompt = None\n\n def settitle(self):\n \"\"\"Sets terminal title.\"\"\"\n _ = self\n env = 
builtins.__xonsh_env__ # pylint: disable=no-member\n term = env.get('TERM', None)\n # Shells running in emacs sets TERM to \"dumb\" or \"eterm-color\".\n # Do not set title for these to avoid garbled prompt.\n if term is None or term in ['dumb', 'eterm-color', 'linux']:\n return\n t = env.get('TITLE')\n if t is None:\n return\n t = format_prompt(t)\n if ON_WINDOWS and 'ANSICON' not in env:\n t = escape_windows_cmd_string(t)\n os.system('title {}'.format(t))\n else:\n os.write(1, \"\\x1b]2;{0}\\x07\".format(t).encode())\n\n @property\n def prompt(self):\n \"\"\"Obtains the current prompt string.\"\"\"\n if self.need_more_lines:\n if self.mlprompt is None:\n try:\n self.mlprompt = multiline_prompt()\n except Exception: # pylint: disable=broad-except\n print_exception()\n self.mlprompt = '<multiline prompt error> '\n return self.mlprompt\n env = builtins.__xonsh_env__ # pylint: disable=no-member\n p = env.get('PROMPT')\n try:\n p = format_prompt(p)\n except Exception: # pylint: disable=broad-except\n print_exception()\n self.settitle()\n return p\n\n def _append_history(self, tee_out=None, **info):\n \"\"\"Append information about the command to the history.\"\"\"\n _ = self\n hist = builtins.__xonsh_history__ # pylint: disable=no-member\n info['rtn'] = hist.last_cmd_rtn\n tee_out = tee_out or None\n last_out = hist.last_cmd_out or None\n if last_out is None and tee_out is None:\n pass\n elif last_out is None and tee_out is not None:\n info['out'] = tee_out\n elif last_out is not None and tee_out is None:\n info['out'] = last_out\n else:\n info['out'] = tee_out + '\\n' + last_out\n hist.append(info)\n hist.last_cmd_rtn = hist.last_cmd_out = None\n\n def format_color(self, string, **kwargs):\n \"\"\"Formats the colors in a string. This base implmentation does not\n actually do any coloring, but just returns the string directly.\n \"\"\"\n return string\n\n def print_color(self, string, **kwargs):\n \"\"\"Prints a string in color. This base implmentation does not actually\n do any coloring, but just prints the string directly.\n \"\"\"\n if not isinstance(string, str):\n string = ''.join([x for _, x in string])\n print(string, **kwargs)\n\n def color_style_names(self):\n \"\"\"Returns an iterable of all available style names.\"\"\"\n return ()\n\n def color_style(self):\n \"\"\"Returns the current color map.\"\"\"\n return {}\n", "path": "xonsh/base_shell.py"}]}
| 3,672 | 211 |
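
The one-line fix swaps `format_prompt` for `partial_format_prompt` in `BaseShell.prompt`, deferring part of the prompt expansion (color handling in particular) to a later stage. As a rough illustration of what a "partial" formatting pass can look like in plain Python, here is a sketch; it is not xonsh's actual helper, just the general idea of filling in the fields that are known and leaving the rest literal.

```
import string


class PartialFormatter(string.Formatter):
    """Fill in the fields we can; leave unknown ones (e.g. color tokens) literal."""

    def get_value(self, key, args, kwargs):
        try:
            return super().get_value(key, args, kwargs)
        except (KeyError, IndexError):
            return "{" + str(key) + "}"


template = "{BOLD_GREEN}{user}@{hostname}{NO_COLOR} $ "
# Only 'user' and 'hostname' are known here; the color fields survive untouched
# so a later colorizing pass (or none at all) can deal with them.
print(PartialFormatter().format(template, user="alice", hostname="box"))
```
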
gh_patches_debug_19538 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2418 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Script Reloader does not unload old script
##### Steps to reproduce the problem:
1. mitmdump -s script.py
2. Modify script on disk.
##### System information
Mitmproxy version: 3.0.0 (2.0.0dev0407-0x315daa0)
Python version: 3.5.2
Platform: Linux-4.4.0-43-Microsoft-x86_64-with-Ubuntu-16.04-xenial
SSL version: OpenSSL 1.0.2g 1 Mar 2016
Linux distro: Ubuntu 16.04 xenial
</issue>
<code>
[start of mitmproxy/addons/script.py]
1 import os
2 import importlib
3 import time
4 import sys
5 import typing
6
7 from mitmproxy import addonmanager
8 from mitmproxy import exceptions
9 from mitmproxy import flow
10 from mitmproxy import command
11 from mitmproxy import eventsequence
12 from mitmproxy import ctx
13
14
15 def load_script(actx, path):
16 if not os.path.exists(path):
17 ctx.log.info("No such file: %s" % path)
18 return
19 loader = importlib.machinery.SourceFileLoader(os.path.basename(path), path)
20 try:
21 oldpath = sys.path
22 sys.path.insert(0, os.path.dirname(path))
23 with addonmanager.safecall():
24 m = loader.load_module()
25 if not getattr(m, "name", None):
26 m.name = path
27 return m
28 finally:
29 sys.path[:] = oldpath
30
31
32 class Script:
33 """
34 An addon that manages a single script.
35 """
36 ReloadInterval = 2
37
38 def __init__(self, path):
39 self.name = "scriptmanager:" + path
40 self.path = path
41 self.fullpath = os.path.expanduser(path)
42 self.ns = None
43
44 self.last_load = 0
45 self.last_mtime = 0
46 if not os.path.isfile(self.fullpath):
47 raise exceptions.OptionsError("No such script: %s" % path)
48
49 @property
50 def addons(self):
51 return [self.ns] if self.ns else []
52
53 def tick(self):
54 if time.time() - self.last_load > self.ReloadInterval:
55 mtime = os.stat(self.fullpath).st_mtime
56 if mtime > self.last_mtime:
57 ctx.log.info("Loading script: %s" % self.path)
58 if self.ns:
59 ctx.master.addons.remove(self.ns)
60 self.ns = load_script(ctx, self.fullpath)
61 if self.ns:
62 # We're already running, so we have to explicitly register and
63 # configure the addon
64 ctx.master.addons.register(self.ns)
65 ctx.master.addons.invoke_addon(self.ns, "running")
66 ctx.master.addons.invoke_addon(
67 self.ns,
68 "configure",
69 ctx.options.keys()
70 )
71 self.last_load = time.time()
72 self.last_mtime = mtime
73
74
75 class ScriptLoader:
76 """
77 An addon that manages loading scripts from options.
78 """
79 def __init__(self):
80 self.is_running = False
81 self.addons = []
82
83 def running(self):
84 self.is_running = True
85
86 @command.command("script.run")
87 def script_run(self, flows: typing.Sequence[flow.Flow], path: str) -> None:
88 """
89 Run a script on the specified flows. The script is loaded with
90 default options, and all lifecycle events for each flow are
91 simulated.
92 """
93 try:
94 s = Script(path)
95 l = addonmanager.Loader(ctx.master)
96 ctx.master.addons.invoke_addon(s, "load", l)
97 ctx.master.addons.invoke_addon(s, "configure", ctx.options.keys())
98 # Script is loaded on the first tick
99 ctx.master.addons.invoke_addon(s, "tick")
100 for f in flows:
101 for evt, arg in eventsequence.iterate(f):
102 ctx.master.addons.invoke_addon(s, evt, arg)
103 except exceptions.OptionsError as e:
104 raise exceptions.CommandError("Error running script: %s" % e) from e
105
106 def configure(self, updated):
107 if "scripts" in updated:
108 for s in ctx.options.scripts:
109 if ctx.options.scripts.count(s) > 1:
110 raise exceptions.OptionsError("Duplicate script: %s" % s)
111
112 for a in self.addons[:]:
113 if a.path not in ctx.options.scripts:
114 ctx.log.info("Un-loading script: %s" % a.name)
115 ctx.master.addons.remove(a)
116 self.addons.remove(a)
117
118 # The machinations below are to ensure that:
119 # - Scripts remain in the same order
120 # - Scripts are not initialized un-necessarily. If only a
121 # script's order in the script list has changed, it is just
122 # moved.
123
124 current = {}
125 for a in self.addons:
126 current[a.path] = a
127
128 ordered = []
129 newscripts = []
130 for s in ctx.options.scripts:
131 if s in current:
132 ordered.append(current[s])
133 else:
134 sc = Script(s)
135 ordered.append(sc)
136 newscripts.append(sc)
137
138 self.addons = ordered
139
140 for s in newscripts:
141 ctx.master.addons.register(s)
142 if self.is_running:
143 # If we're already running, we configure and tell the addon
144 # we're up and running.
145 ctx.master.addons.invoke_addon(s, "running")
146
[end of mitmproxy/addons/script.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mitmproxy/addons/script.py b/mitmproxy/addons/script.py
--- a/mitmproxy/addons/script.py
+++ b/mitmproxy/addons/script.py
@@ -52,11 +52,19 @@
def tick(self):
if time.time() - self.last_load > self.ReloadInterval:
- mtime = os.stat(self.fullpath).st_mtime
+ try:
+ mtime = os.stat(self.fullpath).st_mtime
+ except FileNotFoundError:
+ scripts = ctx.options.scripts
+ scripts.remove(self.path)
+ ctx.options.update(scripts=scripts)
+ return
+
if mtime > self.last_mtime:
ctx.log.info("Loading script: %s" % self.path)
if self.ns:
ctx.master.addons.remove(self.ns)
+ del sys.modules[self.ns.__name__]
self.ns = load_script(ctx, self.fullpath)
if self.ns:
# We're already running, so we have to explicitly register and
|
{"golden_diff": "diff --git a/mitmproxy/addons/script.py b/mitmproxy/addons/script.py\n--- a/mitmproxy/addons/script.py\n+++ b/mitmproxy/addons/script.py\n@@ -52,11 +52,19 @@\n \n def tick(self):\n if time.time() - self.last_load > self.ReloadInterval:\n- mtime = os.stat(self.fullpath).st_mtime\n+ try:\n+ mtime = os.stat(self.fullpath).st_mtime\n+ except FileNotFoundError:\n+ scripts = ctx.options.scripts\n+ scripts.remove(self.path)\n+ ctx.options.update(scripts=scripts)\n+ return\n+\n if mtime > self.last_mtime:\n ctx.log.info(\"Loading script: %s\" % self.path)\n if self.ns:\n ctx.master.addons.remove(self.ns)\n+ del sys.modules[self.ns.__name__]\n self.ns = load_script(ctx, self.fullpath)\n if self.ns:\n # We're already running, so we have to explicitly register and\n", "issue": "Script Reloader does not unload old script\n##### Steps to reproduce the problem:\r\n\r\n1. mitmdump -s script.py\r\n2. Modify script on disk.\r\n\r\n##### System information\r\n\r\nMitmproxy version: 3.0.0 (2.0.0dev0407-0x315daa0)\r\nPython version: 3.5.2\r\nPlatform: Linux-4.4.0-43-Microsoft-x86_64-with-Ubuntu-16.04-xenial\r\nSSL version: OpenSSL 1.0.2g 1 Mar 2016\r\nLinux distro: Ubuntu 16.04 xenial\n", "before_files": [{"content": "import os\nimport importlib\nimport time\nimport sys\nimport typing\n\nfrom mitmproxy import addonmanager\nfrom mitmproxy import exceptions\nfrom mitmproxy import flow\nfrom mitmproxy import command\nfrom mitmproxy import eventsequence\nfrom mitmproxy import ctx\n\n\ndef load_script(actx, path):\n if not os.path.exists(path):\n ctx.log.info(\"No such file: %s\" % path)\n return\n loader = importlib.machinery.SourceFileLoader(os.path.basename(path), path)\n try:\n oldpath = sys.path\n sys.path.insert(0, os.path.dirname(path))\n with addonmanager.safecall():\n m = loader.load_module()\n if not getattr(m, \"name\", None):\n m.name = path\n return m\n finally:\n sys.path[:] = oldpath\n\n\nclass Script:\n \"\"\"\n An addon that manages a single script.\n \"\"\"\n ReloadInterval = 2\n\n def __init__(self, path):\n self.name = \"scriptmanager:\" + path\n self.path = path\n self.fullpath = os.path.expanduser(path)\n self.ns = None\n\n self.last_load = 0\n self.last_mtime = 0\n if not os.path.isfile(self.fullpath):\n raise exceptions.OptionsError(\"No such script: %s\" % path)\n\n @property\n def addons(self):\n return [self.ns] if self.ns else []\n\n def tick(self):\n if time.time() - self.last_load > self.ReloadInterval:\n mtime = os.stat(self.fullpath).st_mtime\n if mtime > self.last_mtime:\n ctx.log.info(\"Loading script: %s\" % self.path)\n if self.ns:\n ctx.master.addons.remove(self.ns)\n self.ns = load_script(ctx, self.fullpath)\n if self.ns:\n # We're already running, so we have to explicitly register and\n # configure the addon\n ctx.master.addons.register(self.ns)\n ctx.master.addons.invoke_addon(self.ns, \"running\")\n ctx.master.addons.invoke_addon(\n self.ns,\n \"configure\",\n ctx.options.keys()\n )\n self.last_load = time.time()\n self.last_mtime = mtime\n\n\nclass ScriptLoader:\n \"\"\"\n An addon that manages loading scripts from options.\n \"\"\"\n def __init__(self):\n self.is_running = False\n self.addons = []\n\n def running(self):\n self.is_running = True\n\n @command.command(\"script.run\")\n def script_run(self, flows: typing.Sequence[flow.Flow], path: str) -> None:\n \"\"\"\n Run a script on the specified flows. 
The script is loaded with\n default options, and all lifecycle events for each flow are\n simulated.\n \"\"\"\n try:\n s = Script(path)\n l = addonmanager.Loader(ctx.master)\n ctx.master.addons.invoke_addon(s, \"load\", l)\n ctx.master.addons.invoke_addon(s, \"configure\", ctx.options.keys())\n # Script is loaded on the first tick\n ctx.master.addons.invoke_addon(s, \"tick\")\n for f in flows:\n for evt, arg in eventsequence.iterate(f):\n ctx.master.addons.invoke_addon(s, evt, arg)\n except exceptions.OptionsError as e:\n raise exceptions.CommandError(\"Error running script: %s\" % e) from e\n\n def configure(self, updated):\n if \"scripts\" in updated:\n for s in ctx.options.scripts:\n if ctx.options.scripts.count(s) > 1:\n raise exceptions.OptionsError(\"Duplicate script: %s\" % s)\n\n for a in self.addons[:]:\n if a.path not in ctx.options.scripts:\n ctx.log.info(\"Un-loading script: %s\" % a.name)\n ctx.master.addons.remove(a)\n self.addons.remove(a)\n\n # The machinations below are to ensure that:\n # - Scripts remain in the same order\n # - Scripts are not initialized un-necessarily. If only a\n # script's order in the script list has changed, it is just\n # moved.\n\n current = {}\n for a in self.addons:\n current[a.path] = a\n\n ordered = []\n newscripts = []\n for s in ctx.options.scripts:\n if s in current:\n ordered.append(current[s])\n else:\n sc = Script(s)\n ordered.append(sc)\n newscripts.append(sc)\n\n self.addons = ordered\n\n for s in newscripts:\n ctx.master.addons.register(s)\n if self.is_running:\n # If we're already running, we configure and tell the addon\n # we're up and running.\n ctx.master.addons.invoke_addon(s, \"running\")\n", "path": "mitmproxy/addons/script.py"}]}
| 2,061 | 224 |
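
The decisive line in the fix is `del sys.modules[self.ns.__name__]`. `SourceFileLoader.load_module()` consults `sys.modules`, and when the name is still cached it re-executes the source into the old module object, with broadly the same caveat as `importlib.reload`: attributes removed from the new version of the script linger. Evicting the cached entry forces a clean namespace. A small sketch of a "load fresh" helper built on the same machinery:

```
import importlib.machinery
import sys


def load_fresh(path, name):
    # Evict any cached copy so we get a brand-new module namespace instead of
    # re-executing the new source on top of the old module's attributes.
    sys.modules.pop(name, None)
    loader = importlib.machinery.SourceFileLoader(name, path)
    return loader.load_module()  # deprecated API, but it is what the addon uses
```

The remainder of the patch handles the script file disappearing between reload checks by dropping it from `ctx.options.scripts` instead of letting `os.stat()` raise.
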
gh_patches_debug_13458 | rasdani/github-patches | git_diff | explosion__spaCy-3499 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect lemma from lemmatizer
**Right:**
`[w.lemma_ for w in nlp('funnier')]` -> `['funny']`
**Wrong:**
`[w.lemma_ for w in nlp('faster')]` ->`['faster']`
I think for word _faster_ lemma should be _fast_
</issue>
<code>
[start of spacy/lang/en/lemmatizer/_adverbs_irreg.py]
1 # coding: utf8
2 from __future__ import unicode_literals
3
4
5 ADVERBS_IRREG = {
6 "best": ("well",),
7 "better": ("well",),
8 "deeper": ("deeply",),
9 "farther": ("far",),
10 "further": ("far",),
11 "harder": ("hard",),
12 "hardest": ("hard",),
13 }
14
[end of spacy/lang/en/lemmatizer/_adverbs_irreg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/spacy/lang/en/lemmatizer/_adverbs_irreg.py b/spacy/lang/en/lemmatizer/_adverbs_irreg.py
--- a/spacy/lang/en/lemmatizer/_adverbs_irreg.py
+++ b/spacy/lang/en/lemmatizer/_adverbs_irreg.py
@@ -5,9 +5,27 @@
ADVERBS_IRREG = {
"best": ("well",),
"better": ("well",),
+ "closer": ("close",),
+ "closest": ("close",),
"deeper": ("deeply",),
+ "earlier": ("early",),
+ "earliest": ("early",),
"farther": ("far",),
"further": ("far",),
+ "faster": ("fast",),
+ "fastest": ("fast",),
"harder": ("hard",),
"hardest": ("hard",),
+ "longer": ("long",),
+ "longest": ("long",),
+ "nearer": ("near",),
+ "nearest": ("near",),
+ "nigher": ("nigh",),
+ "nighest": ("nigh",),
+ "quicker": ("quick",),
+ "quickest": ("quick",),
+ "slower": ("slow",),
+ "slowest": ("slowest",),
+ "sooner": ("soon",),
+ "soonest": ("soon",)
}
|
{"golden_diff": "diff --git a/spacy/lang/en/lemmatizer/_adverbs_irreg.py b/spacy/lang/en/lemmatizer/_adverbs_irreg.py\n--- a/spacy/lang/en/lemmatizer/_adverbs_irreg.py\n+++ b/spacy/lang/en/lemmatizer/_adverbs_irreg.py\n@@ -5,9 +5,27 @@\n ADVERBS_IRREG = {\n \"best\": (\"well\",),\n \"better\": (\"well\",),\n+ \"closer\": (\"close\",),\n+ \"closest\": (\"close\",),\n \"deeper\": (\"deeply\",),\n+ \"earlier\": (\"early\",),\n+ \"earliest\": (\"early\",),\n \"farther\": (\"far\",),\n \"further\": (\"far\",),\n+ \"faster\": (\"fast\",),\n+ \"fastest\": (\"fast\",),\n \"harder\": (\"hard\",),\n \"hardest\": (\"hard\",),\n+ \"longer\": (\"long\",),\n+ \"longest\": (\"long\",),\n+ \"nearer\": (\"near\",),\n+ \"nearest\": (\"near\",),\n+ \"nigher\": (\"nigh\",),\n+ \"nighest\": (\"nigh\",),\n+ \"quicker\": (\"quick\",),\n+ \"quickest\": (\"quick\",),\n+ \"slower\": (\"slow\",),\n+ \"slowest\": (\"slowest\",),\n+ \"sooner\": (\"soon\",),\n+ \"soonest\": (\"soon\",)\n }\n", "issue": "Incorrect lemma from lemmatizer\n**Right:**\r\n`[w.lemma_ for w in nlp('funnier')]` -> `['funny']` \r\n\r\n**Wrong:**\r\n`[w.lemma_ for w in nlp('faster')]` ->`['faster']`\r\n\r\nI think for word _faster_ lemma should be _fast_\n", "before_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\n\nADVERBS_IRREG = {\n \"best\": (\"well\",),\n \"better\": (\"well\",),\n \"deeper\": (\"deeply\",),\n \"farther\": (\"far\",),\n \"further\": (\"far\",),\n \"harder\": (\"hard\",),\n \"hardest\": (\"hard\",),\n}\n", "path": "spacy/lang/en/lemmatizer/_adverbs_irreg.py"}]}
| 736 | 331 |
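
The patch is purely a data change: comparative and superlative forms such as "faster" are added to the irregular-adverb exception table so the lookup succeeds instead of returning the surface form. A toy illustration of the lookup-with-fallback pattern follows; the real spaCy lemmatizer also applies suffix rules and checks the vocabulary, so the fallback here is an oversimplification.

```
ADVERBS_IRREG = {
    "better": ("well",),
    "faster": ("fast",),   # the kind of entry the patch adds
    "harder": ("hard",),
}


def lemmatize_adverb(word):
    # Exceptions win; anything not listed falls through unchanged here,
    # whereas the real lemmatizer would go on to try its rules.
    return ADVERBS_IRREG.get(word, (word,))[0]


print(lemmatize_adverb("faster"))   # -> fast
print(lemmatize_adverb("quietly"))  # -> quietly (regular, left to the rules)
```
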
gh_patches_debug_67335 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-284 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
base_url is used twice in redirects
For jupyterhub running at _http://address/prefix/_ almost every redirect has `base_url` twice. It was caused by PR #276. Reverting changes to jupyterhub/handlers/pages.py:27 [1] fixes the issue.
[1] https://github.com/Crunch-io/jupyterhub/commit/05e4ab41fe6691dd697eecc27d9abee2ac4e23b6
</issue>
<code>
[start of jupyterhub/handlers/pages.py]
1 """Basic html-rendering handlers."""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from tornado import web
7
8 from .. import orm
9 from ..utils import admin_only, url_path_join
10 from .base import BaseHandler
11
12
13 class RootHandler(BaseHandler):
14 """Render the Hub root page.
15
16 If logged in, redirects to:
17
18 - single-user server if running
19 - hub home, otherwise
20
21 Otherwise, renders login page.
22 """
23 def get(self):
24 user = self.get_current_user()
25 if user:
26 if user.running:
27 url = url_path_join(self.hub.server.base_url, user.server.base_url)
28 self.log.debug("User is running: %s", url)
29 else:
30 url = url_path_join(self.hub.server.base_url, 'home')
31 self.log.debug("User is not running: %s", url)
32 self.redirect(url, permanent=False)
33 return
34 # Redirect to the authenticator login page instead of rendering the
35 # login html page
36 url = self.authenticator.login_url(self.hub.server.base_url)
37 self.log.debug("No user logged in: %s", url)
38 self.redirect(url, permanent=False)
39
40 class HomeHandler(BaseHandler):
41 """Render the user's home page."""
42
43 @web.authenticated
44 def get(self):
45 html = self.render_template('home.html',
46 user=self.get_current_user(),
47 )
48 self.finish(html)
49
50
51 class AdminHandler(BaseHandler):
52 """Render the admin page."""
53
54 @admin_only
55 def get(self):
56 available = {'name', 'admin', 'running', 'last_activity'}
57 default_sort = ['admin', 'name']
58 mapping = {
59 'running': '_server_id'
60 }
61 default_order = {
62 'name': 'asc',
63 'last_activity': 'desc',
64 'admin': 'desc',
65 'running': 'desc',
66 }
67 sorts = self.get_arguments('sort') or default_sort
68 orders = self.get_arguments('order')
69
70 for bad in set(sorts).difference(available):
71 self.log.warn("ignoring invalid sort: %r", bad)
72 sorts.remove(bad)
73 for bad in set(orders).difference({'asc', 'desc'}):
74 self.log.warn("ignoring invalid order: %r", bad)
75 orders.remove(bad)
76
77 # add default sort as secondary
78 for s in default_sort:
79 if s not in sorts:
80 sorts.append(s)
81 if len(orders) < len(sorts):
82 for col in sorts[len(orders):]:
83 orders.append(default_order[col])
84 else:
85 orders = orders[:len(sorts)]
86
87 # this could be one incomprehensible nested list comprehension
88 # get User columns
89 cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]
90 # get User.col.desc() order objects
91 ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]
92
93 users = self.db.query(orm.User).order_by(*ordered)
94 running = users.filter(orm.User.server != None)
95
96 html = self.render_template('admin.html',
97 user=self.get_current_user(),
98 admin_access=self.settings.get('admin_access', False),
99 users=users,
100 running=running,
101 sort={s:o for s,o in zip(sorts, orders)},
102 )
103 self.finish(html)
104
105
106 default_handlers = [
107 (r'/', RootHandler),
108 (r'/home', HomeHandler),
109 (r'/admin', AdminHandler),
110 ]
111
[end of jupyterhub/handlers/pages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py
--- a/jupyterhub/handlers/pages.py
+++ b/jupyterhub/handlers/pages.py
@@ -24,7 +24,7 @@
user = self.get_current_user()
if user:
if user.running:
- url = url_path_join(self.hub.server.base_url, user.server.base_url)
+ url = user.server.base_url
self.log.debug("User is running: %s", url)
else:
url = url_path_join(self.hub.server.base_url, 'home')
|
{"golden_diff": "diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py\n--- a/jupyterhub/handlers/pages.py\n+++ b/jupyterhub/handlers/pages.py\n@@ -24,7 +24,7 @@\n user = self.get_current_user()\n if user:\n if user.running:\n- url = url_path_join(self.hub.server.base_url, user.server.base_url)\n+ url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n", "issue": "base_url is used twice in redirects\nFor jupyterhub running at _http://address/prefix/_ almost every redirect has `base_url` twice. It was caused by PR #276. Reverting changes to jupyterhub/handlers/pages.py:27 [1] fixes the issue.\n\n[1] https://github.com/Crunch-io/jupyterhub/commit/05e4ab41fe6691dd697eecc27d9abee2ac4e23b6\n\n", "before_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web\n\nfrom .. import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = url_path_join(self.hub.server.base_url, user.server.base_url)\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url, permanent=False)\n return\n # Redirect to the authenticator login page instead of rendering the\n # login html page\n url = self.authenticator.login_url(self.hub.server.base_url)\n self.log.debug(\"No user logged in: %s\", url)\n self.redirect(url, permanent=False)\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = self.db.query(orm.User).order_by(*ordered)\n running = users.filter(orm.User.server != None)\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n 
admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}]}
| 1,655 | 134 |
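
The doubled prefix is easy to reproduce with two paths: `user.server.base_url` is already an absolute path that contains the configured prefix, so joining it onto `hub.server.base_url` repeats it. The helper below is a simplified stand-in for `jupyterhub.utils.url_path_join` (the real one is more careful about leading and trailing slashes), but the doubling behaves the same way.

```
def url_path_join(*pieces):
    # Simplified stand-in: join path pieces with single slashes.
    parts = [p.strip("/") for p in pieces if p and p.strip("/")]
    return "/" + "/".join(parts)


hub_base = "/prefix/hub"
user_base = "/prefix/user/alice"              # already includes the prefix

print(url_path_join(hub_base, user_base))     # /prefix/hub/prefix/user/alice  (doubled)
print(user_base)                              # /prefix/user/alice             (the fix)
```
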
gh_patches_debug_17131 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1504 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
private rendered dict is still prompted
* Cookiecutter version: 2.0.0 from commit 52dd18513bbab7f0fbfcb2938c9644d9092247cf
* Python version: py3.8.7
* Operating System: Win
### Description:
For the `cookiecutter.json` below the private, rendered dict `__module_names` is still prompted in the output (last line).
For private variables c.f. https://github.com/cookiecutter/cookiecutter/issues/1115#issuecomment-442522474
### What I've run:
**cookiecutter.json:**
```
{
"project_name": "Project Name",
"project_slug": "{{ cookiecutter.project_name|lower|replace(' ', '-') }}",
"__package_name": "{{ cookiecutter.project_name|lower|replace(' ', '_') }}",
"__module_names": {
"within": "{{ cookiecutter.project_slug }}",
"outside": "{{ cookiecutter.__package_name }}"
}
}
```
**verbose output:**
```
$ cookiecutter .\cookiecutter-context-test\ -f -v
DEBUG cookiecutter.config: User config not found. Loading default config.
DEBUG cookiecutter.main: context_file is .\cookiecutter-context-test\cookiecutter.json
DEBUG cookiecutter.generate: Context generated is OrderedDict([('cookiecutter', OrderedDict([('project_name', 'Project Name'), ('project_slug', "{{ cookiecutter.project_name|lower|replace(' ', '-') }}"), ('__package_name', "{{ cookiecutter.project_name|lower|replace(' ', '_') }}"), ('__module_names', OrderedDict([('within', '{{ cookiecutter.project_slug }}'), ('outside', '{{ cookiecutter.__package_name }}')]))]))])
project_name [Project Name]:
project_slug [project-name]:
__module_names [default]:
```
</issue>
<code>
[start of cookiecutter/prompt.py]
1 """Functions for prompting the user for project info."""
2 import json
3 from collections import OrderedDict
4
5 import click
6 from jinja2.exceptions import UndefinedError
7
8 from cookiecutter.environment import StrictEnvironment
9 from cookiecutter.exceptions import UndefinedVariableInTemplate
10
11
12 def read_user_variable(var_name, default_value):
13 """Prompt user for variable and return the entered value or given default.
14
15 :param str var_name: Variable of the context to query the user
16 :param default_value: Value that will be returned if no input happens
17 """
18 # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt
19 return click.prompt(var_name, default=default_value)
20
21
22 def read_user_yes_no(question, default_value):
23 """Prompt the user to reply with 'yes' or 'no' (or equivalent values).
24
25 Note:
26 Possible choices are 'true', '1', 'yes', 'y' or 'false', '0', 'no', 'n'
27
28 :param str question: Question to the user
29 :param default_value: Value that will be returned if no input happens
30 """
31 # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt
32 return click.prompt(question, default=default_value, type=click.BOOL)
33
34
35 def read_repo_password(question):
36 """Prompt the user to enter a password.
37
38 :param str question: Question to the user
39 """
40 # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt
41 return click.prompt(question, hide_input=True)
42
43
44 def read_user_choice(var_name, options):
45 """Prompt the user to choose from several options for the given variable.
46
47 The first item will be returned if no input happens.
48
49 :param str var_name: Variable as specified in the context
50 :param list options: Sequence of options that are available to select from
51 :return: Exactly one item of ``options`` that has been chosen by the user
52 """
53 # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt
54 if not isinstance(options, list):
55 raise TypeError
56
57 if not options:
58 raise ValueError
59
60 choice_map = OrderedDict(
61 ('{}'.format(i), value) for i, value in enumerate(options, 1)
62 )
63 choices = choice_map.keys()
64 default = '1'
65
66 choice_lines = ['{} - {}'.format(*c) for c in choice_map.items()]
67 prompt = '\n'.join(
68 (
69 'Select {}:'.format(var_name),
70 '\n'.join(choice_lines),
71 'Choose from {}'.format(', '.join(choices)),
72 )
73 )
74
75 user_choice = click.prompt(
76 prompt, type=click.Choice(choices), default=default, show_choices=False
77 )
78 return choice_map[user_choice]
79
80
81 def process_json(user_value):
82 """Load user-supplied value as a JSON dict.
83
84 :param str user_value: User-supplied value to load as a JSON dict
85 """
86 try:
87 user_dict = json.loads(user_value, object_pairs_hook=OrderedDict)
88 except Exception:
89 # Leave it up to click to ask the user again
90 raise click.UsageError('Unable to decode to JSON.')
91
92 if not isinstance(user_dict, dict):
93 # Leave it up to click to ask the user again
94 raise click.UsageError('Requires JSON dict.')
95
96 return user_dict
97
98
99 def read_user_dict(var_name, default_value):
100 """Prompt the user to provide a dictionary of data.
101
102 :param str var_name: Variable as specified in the context
103 :param default_value: Value that will be returned if no input is provided
104 :return: A Python dictionary to use in the context.
105 """
106 # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt
107 if not isinstance(default_value, dict):
108 raise TypeError
109
110 default_display = 'default'
111
112 user_value = click.prompt(
113 var_name, default=default_display, type=click.STRING, value_proc=process_json
114 )
115
116 if user_value == default_display:
117 # Return the given default w/o any processing
118 return default_value
119 return user_value
120
121
122 def render_variable(env, raw, cookiecutter_dict):
123 """Render the next variable to be displayed in the user prompt.
124
125 Inside the prompting taken from the cookiecutter.json file, this renders
126 the next variable. For example, if a project_name is "Peanut Butter
127 Cookie", the repo_name could be be rendered with:
128
129 `{{ cookiecutter.project_name.replace(" ", "_") }}`.
130
131 This is then presented to the user as the default.
132
133 :param Environment env: A Jinja2 Environment object.
134 :param raw: The next value to be prompted for by the user.
135 :param dict cookiecutter_dict: The current context as it's gradually
136 being populated with variables.
137 :return: The rendered value for the default variable.
138 """
139 if raw is None:
140 return None
141 elif isinstance(raw, dict):
142 return {
143 render_variable(env, k, cookiecutter_dict): render_variable(
144 env, v, cookiecutter_dict
145 )
146 for k, v in raw.items()
147 }
148 elif isinstance(raw, list):
149 return [render_variable(env, v, cookiecutter_dict) for v in raw]
150 elif not isinstance(raw, str):
151 raw = str(raw)
152
153 template = env.from_string(raw)
154
155 rendered_template = template.render(cookiecutter=cookiecutter_dict)
156 return rendered_template
157
158
159 def prompt_choice_for_config(cookiecutter_dict, env, key, options, no_input):
160 """Prompt user with a set of options to choose from.
161
162 Each of the possible choices is rendered beforehand.
163 """
164 rendered_options = [render_variable(env, raw, cookiecutter_dict) for raw in options]
165
166 if no_input:
167 return rendered_options[0]
168 return read_user_choice(key, rendered_options)
169
170
171 def prompt_for_config(context, no_input=False):
172 """Prompt user to enter a new config.
173
174 :param dict context: Source for field names and sample values.
175 :param no_input: Prompt the user at command line for manual configuration?
176 """
177 cookiecutter_dict = OrderedDict([])
178 env = StrictEnvironment(context=context)
179
180 # First pass: Handle simple and raw variables, plus choices.
181 # These must be done first because the dictionaries keys and
182 # values might refer to them.
183 for key, raw in context['cookiecutter'].items():
184 if key.startswith('_') and not key.startswith('__'):
185 cookiecutter_dict[key] = raw
186 continue
187 elif key.startswith('__'):
188 cookiecutter_dict[key] = render_variable(env, raw, cookiecutter_dict)
189 continue
190
191 try:
192 if isinstance(raw, list):
193 # We are dealing with a choice variable
194 val = prompt_choice_for_config(
195 cookiecutter_dict, env, key, raw, no_input
196 )
197 cookiecutter_dict[key] = val
198 elif not isinstance(raw, dict):
199 # We are dealing with a regular variable
200 val = render_variable(env, raw, cookiecutter_dict)
201
202 if not no_input:
203 val = read_user_variable(key, val)
204
205 cookiecutter_dict[key] = val
206 except UndefinedError as err:
207 msg = "Unable to render variable '{}'".format(key)
208 raise UndefinedVariableInTemplate(msg, err, context)
209
210 # Second pass; handle the dictionaries.
211 for key, raw in context['cookiecutter'].items():
212 # Skip private type dicts
213 if key.startswith('_') and not key.startswith('__'):
214 continue
215
216 try:
217 if isinstance(raw, dict):
218 # We are dealing with a dict variable
219 val = render_variable(env, raw, cookiecutter_dict)
220
221 if not no_input:
222 val = read_user_dict(key, val)
223
224 cookiecutter_dict[key] = val
225 except UndefinedError as err:
226 msg = "Unable to render variable '{}'".format(key)
227 raise UndefinedVariableInTemplate(msg, err, context)
228
229 return cookiecutter_dict
230
[end of cookiecutter/prompt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py
--- a/cookiecutter/prompt.py
+++ b/cookiecutter/prompt.py
@@ -209,7 +209,7 @@
# Second pass; handle the dictionaries.
for key, raw in context['cookiecutter'].items():
- # Skip private type dicts
+ # Skip private type dicts not ot be rendered.
if key.startswith('_') and not key.startswith('__'):
continue
@@ -218,7 +218,7 @@
# We are dealing with a dict variable
val = render_variable(env, raw, cookiecutter_dict)
- if not no_input:
+ if not no_input and not key.startswith('__'):
val = read_user_dict(key, val)
cookiecutter_dict[key] = val
|
{"golden_diff": "diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py\n--- a/cookiecutter/prompt.py\n+++ b/cookiecutter/prompt.py\n@@ -209,7 +209,7 @@\n \n # Second pass; handle the dictionaries.\n for key, raw in context['cookiecutter'].items():\n- # Skip private type dicts\n+ # Skip private type dicts not ot be rendered.\n if key.startswith('_') and not key.startswith('__'):\n continue\n \n@@ -218,7 +218,7 @@\n # We are dealing with a dict variable\n val = render_variable(env, raw, cookiecutter_dict)\n \n- if not no_input:\n+ if not no_input and not key.startswith('__'):\n val = read_user_dict(key, val)\n \n cookiecutter_dict[key] = val\n", "issue": "private rendered dict is still prompted\n* Cookiecutter version: 2.0.0 from commit 52dd18513bbab7f0fbfcb2938c9644d9092247cf\r\n* Python version: py3.8.7\r\n* Operating System: Win\r\n\r\n### Description:\r\nFor the `cookiecutter.json` below the private, rendered dict `__module_names` is still prompted in the output (last line).\r\n\r\nFor private variables c.f. https://github.com/cookiecutter/cookiecutter/issues/1115#issuecomment-442522474\r\n\r\n### What I've run:\r\n\r\n**cookiecutter.json:**\r\n```\r\n{\r\n \"project_name\": \"Project Name\",\r\n \"project_slug\": \"{{ cookiecutter.project_name|lower|replace(' ', '-') }}\",\r\n \"__package_name\": \"{{ cookiecutter.project_name|lower|replace(' ', '_') }}\",\r\n\r\n \"__module_names\": {\r\n \"within\": \"{{ cookiecutter.project_slug }}\",\r\n \"outside\": \"{{ cookiecutter.__package_name }}\"\r\n }\r\n}\r\n```\r\n**verbose output:**\r\n```\r\n$ cookiecutter .\\cookiecutter-context-test\\ -f -v\r\n\r\nDEBUG cookiecutter.config: User config not found. Loading default config.\r\nDEBUG cookiecutter.main: context_file is .\\cookiecutter-context-test\\cookiecutter.json\r\nDEBUG cookiecutter.generate: Context generated is OrderedDict([('cookiecutter', OrderedDict([('project_name', 'Project Name'), ('project_slug', \"{{ cookiecutter.project_name|lower|replace(' ', '-') }}\"), ('__package_name', \"{{ cookiecutter.project_name|lower|replace(' ', '_') }}\"), ('__module_names', OrderedDict([('within', '{{ cookiecutter.project_slug }}'), ('outside', '{{ cookiecutter.__package_name }}')]))]))])\r\nproject_name [Project Name]: \r\nproject_slug [project-name]: \r\n__module_names [default]:\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\"Functions for prompting the user for project info.\"\"\"\nimport json\nfrom collections import OrderedDict\n\nimport click\nfrom jinja2.exceptions import UndefinedError\n\nfrom cookiecutter.environment import StrictEnvironment\nfrom cookiecutter.exceptions import UndefinedVariableInTemplate\n\n\ndef read_user_variable(var_name, default_value):\n \"\"\"Prompt user for variable and return the entered value or given default.\n\n :param str var_name: Variable of the context to query the user\n :param default_value: Value that will be returned if no input happens\n \"\"\"\n # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt\n return click.prompt(var_name, default=default_value)\n\n\ndef read_user_yes_no(question, default_value):\n \"\"\"Prompt the user to reply with 'yes' or 'no' (or equivalent values).\n\n Note:\n Possible choices are 'true', '1', 'yes', 'y' or 'false', '0', 'no', 'n'\n\n :param str question: Question to the user\n :param default_value: Value that will be returned if no input happens\n \"\"\"\n # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt\n return click.prompt(question, default=default_value, 
type=click.BOOL)\n\n\ndef read_repo_password(question):\n \"\"\"Prompt the user to enter a password.\n\n :param str question: Question to the user\n \"\"\"\n # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt\n return click.prompt(question, hide_input=True)\n\n\ndef read_user_choice(var_name, options):\n \"\"\"Prompt the user to choose from several options for the given variable.\n\n The first item will be returned if no input happens.\n\n :param str var_name: Variable as specified in the context\n :param list options: Sequence of options that are available to select from\n :return: Exactly one item of ``options`` that has been chosen by the user\n \"\"\"\n # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt\n if not isinstance(options, list):\n raise TypeError\n\n if not options:\n raise ValueError\n\n choice_map = OrderedDict(\n ('{}'.format(i), value) for i, value in enumerate(options, 1)\n )\n choices = choice_map.keys()\n default = '1'\n\n choice_lines = ['{} - {}'.format(*c) for c in choice_map.items()]\n prompt = '\\n'.join(\n (\n 'Select {}:'.format(var_name),\n '\\n'.join(choice_lines),\n 'Choose from {}'.format(', '.join(choices)),\n )\n )\n\n user_choice = click.prompt(\n prompt, type=click.Choice(choices), default=default, show_choices=False\n )\n return choice_map[user_choice]\n\n\ndef process_json(user_value):\n \"\"\"Load user-supplied value as a JSON dict.\n\n :param str user_value: User-supplied value to load as a JSON dict\n \"\"\"\n try:\n user_dict = json.loads(user_value, object_pairs_hook=OrderedDict)\n except Exception:\n # Leave it up to click to ask the user again\n raise click.UsageError('Unable to decode to JSON.')\n\n if not isinstance(user_dict, dict):\n # Leave it up to click to ask the user again\n raise click.UsageError('Requires JSON dict.')\n\n return user_dict\n\n\ndef read_user_dict(var_name, default_value):\n \"\"\"Prompt the user to provide a dictionary of data.\n\n :param str var_name: Variable as specified in the context\n :param default_value: Value that will be returned if no input is provided\n :return: A Python dictionary to use in the context.\n \"\"\"\n # Please see https://click.palletsprojects.com/en/7.x/api/#click.prompt\n if not isinstance(default_value, dict):\n raise TypeError\n\n default_display = 'default'\n\n user_value = click.prompt(\n var_name, default=default_display, type=click.STRING, value_proc=process_json\n )\n\n if user_value == default_display:\n # Return the given default w/o any processing\n return default_value\n return user_value\n\n\ndef render_variable(env, raw, cookiecutter_dict):\n \"\"\"Render the next variable to be displayed in the user prompt.\n\n Inside the prompting taken from the cookiecutter.json file, this renders\n the next variable. 
For example, if a project_name is \"Peanut Butter\n Cookie\", the repo_name could be be rendered with:\n\n `{{ cookiecutter.project_name.replace(\" \", \"_\") }}`.\n\n This is then presented to the user as the default.\n\n :param Environment env: A Jinja2 Environment object.\n :param raw: The next value to be prompted for by the user.\n :param dict cookiecutter_dict: The current context as it's gradually\n being populated with variables.\n :return: The rendered value for the default variable.\n \"\"\"\n if raw is None:\n return None\n elif isinstance(raw, dict):\n return {\n render_variable(env, k, cookiecutter_dict): render_variable(\n env, v, cookiecutter_dict\n )\n for k, v in raw.items()\n }\n elif isinstance(raw, list):\n return [render_variable(env, v, cookiecutter_dict) for v in raw]\n elif not isinstance(raw, str):\n raw = str(raw)\n\n template = env.from_string(raw)\n\n rendered_template = template.render(cookiecutter=cookiecutter_dict)\n return rendered_template\n\n\ndef prompt_choice_for_config(cookiecutter_dict, env, key, options, no_input):\n \"\"\"Prompt user with a set of options to choose from.\n\n Each of the possible choices is rendered beforehand.\n \"\"\"\n rendered_options = [render_variable(env, raw, cookiecutter_dict) for raw in options]\n\n if no_input:\n return rendered_options[0]\n return read_user_choice(key, rendered_options)\n\n\ndef prompt_for_config(context, no_input=False):\n \"\"\"Prompt user to enter a new config.\n\n :param dict context: Source for field names and sample values.\n :param no_input: Prompt the user at command line for manual configuration?\n \"\"\"\n cookiecutter_dict = OrderedDict([])\n env = StrictEnvironment(context=context)\n\n # First pass: Handle simple and raw variables, plus choices.\n # These must be done first because the dictionaries keys and\n # values might refer to them.\n for key, raw in context['cookiecutter'].items():\n if key.startswith('_') and not key.startswith('__'):\n cookiecutter_dict[key] = raw\n continue\n elif key.startswith('__'):\n cookiecutter_dict[key] = render_variable(env, raw, cookiecutter_dict)\n continue\n\n try:\n if isinstance(raw, list):\n # We are dealing with a choice variable\n val = prompt_choice_for_config(\n cookiecutter_dict, env, key, raw, no_input\n )\n cookiecutter_dict[key] = val\n elif not isinstance(raw, dict):\n # We are dealing with a regular variable\n val = render_variable(env, raw, cookiecutter_dict)\n\n if not no_input:\n val = read_user_variable(key, val)\n\n cookiecutter_dict[key] = val\n except UndefinedError as err:\n msg = \"Unable to render variable '{}'\".format(key)\n raise UndefinedVariableInTemplate(msg, err, context)\n\n # Second pass; handle the dictionaries.\n for key, raw in context['cookiecutter'].items():\n # Skip private type dicts\n if key.startswith('_') and not key.startswith('__'):\n continue\n\n try:\n if isinstance(raw, dict):\n # We are dealing with a dict variable\n val = render_variable(env, raw, cookiecutter_dict)\n\n if not no_input:\n val = read_user_dict(key, val)\n\n cookiecutter_dict[key] = val\n except UndefinedError as err:\n msg = \"Unable to render variable '{}'\".format(key)\n raise UndefinedVariableInTemplate(msg, err, context)\n\n return cookiecutter_dict\n", "path": "cookiecutter/prompt.py"}]}
| 3,322 | 192 |
gh_patches_debug_13594
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-807
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Number of followers of suggested users is wrong
**Describe the bug**
Recommendations for new users to follow in my feed appear with a number of followers we have in common, but that number looks off. The user recommendation I saw said they had 306 followers I follow, but neither of us has that many followers.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to your home feed after logging in
2. Scroll down until you see a recommendation for a user to follow
3. See the weird follower count
**Expected behavior**
The number of mutual followers should be accurate
**Screenshots**
If applicable, add screenshots to help explain your problem.
Screenshot

**Smartphone (please complete the following information):**
- Device: iPhone 11
- OS:
- Browser safari
- Version
**Additional context**
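A plausible explanation (an assumption, not confirmed above) is that the suggested-users query combines two `Count` annotations that traverse different joins; Django multiplies the joined rows in that case, so both counts come out inflated unless `distinct=True` is passed. A minimal sketch of the annotation with that flag, reusing names from `bookwyrm/views/helpers.py` — it assumes a configured bookwyrm Django app and a logged-in `user`, so it is not runnable on its own:

```python
from django.db.models import Count, Q
from bookwyrm import models  # sketch only: requires the bookwyrm Django app

suggested = models.User.objects.filter(discoverable=True, is_active=True).annotate(
    # distinct=True keeps the join duplication from inflating the follower count
    mutuals=Count(
        "following",
        filter=Q(following__in=user.following.all()),
        distinct=True,
    ),
    shared_books=Count("shelfbook", distinct=True),
)
```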
</issue>
<code>
[start of bookwyrm/views/helpers.py]
1 """ helper functions used in various views """
2 import re
3 from requests import HTTPError
4 from django.core.exceptions import FieldError
5 from django.db.models import Count, Max, Q
6
7 from bookwyrm import activitypub, models
8 from bookwyrm.connectors import ConnectorException, get_data
9 from bookwyrm.status import create_generated_note
10 from bookwyrm.utils import regex
11
12
13 def get_user_from_username(viewer, username):
14 """ helper function to resolve a localname or a username to a user """
15 # raises DoesNotExist if user is now found
16 try:
17 return models.User.viewer_aware_objects(viewer).get(localname=username)
18 except models.User.DoesNotExist:
19 return models.User.viewer_aware_objects(viewer).get(username=username)
20
21
22 def is_api_request(request):
23 """ check whether a request is asking for html or data """
24 return "json" in request.headers.get("Accept") or request.path[-5:] == ".json"
25
26
27 def is_bookwyrm_request(request):
28 """ check if the request is coming from another bookwyrm instance """
29 user_agent = request.headers.get("User-Agent")
30 if user_agent is None or re.search(regex.bookwyrm_user_agent, user_agent) is None:
31 return False
32 return True
33
34
35 def object_visible_to_user(viewer, obj):
36 """ is a user authorized to view an object? """
37 if not obj:
38 return False
39
40 # viewer can't see it if the object's owner blocked them
41 if viewer in obj.user.blocks.all():
42 return False
43
44 # you can see your own posts and any public or unlisted posts
45 if viewer == obj.user or obj.privacy in ["public", "unlisted"]:
46 return True
47
48 # you can see the followers only posts of people you follow
49 if obj.privacy == "followers" and obj.user.followers.filter(id=viewer.id).first():
50 return True
51
52 # you can see dms you are tagged in
53 if isinstance(obj, models.Status):
54 if obj.privacy == "direct" and obj.mention_users.filter(id=viewer.id).first():
55 return True
56 return False
57
58
59 def privacy_filter(viewer, queryset, privacy_levels=None, following_only=False):
60 """ filter objects that have "user" and "privacy" fields """
61 privacy_levels = privacy_levels or ["public", "unlisted", "followers", "direct"]
62 # if there'd a deleted field, exclude deleted items
63 try:
64 queryset = queryset.filter(deleted=False)
65 except FieldError:
66 pass
67
68 # exclude blocks from both directions
69 if not viewer.is_anonymous:
70 blocked = models.User.objects.filter(id__in=viewer.blocks.all()).all()
71 queryset = queryset.exclude(Q(user__in=blocked) | Q(user__blocks=viewer))
72
73 # you can't see followers only or direct messages if you're not logged in
74 if viewer.is_anonymous:
75 privacy_levels = [p for p in privacy_levels if not p in ["followers", "direct"]]
76
77 # filter to only privided privacy levels
78 queryset = queryset.filter(privacy__in=privacy_levels)
79
80 # only include statuses the user follows
81 if following_only:
82 queryset = queryset.exclude(
83 ~Q( # remove everythign except
84 Q(user__in=viewer.following.all())
85 | Q(user=viewer) # user following
86 | Q(mention_users=viewer) # is self # mentions user
87 ),
88 )
89 # exclude followers-only statuses the user doesn't follow
90 elif "followers" in privacy_levels:
91 queryset = queryset.exclude(
92 ~Q( # user isn't following and it isn't their own status
93 Q(user__in=viewer.following.all()) | Q(user=viewer)
94 ),
95 privacy="followers", # and the status is followers only
96 )
97
98 # exclude direct messages not intended for the user
99 if "direct" in privacy_levels:
100 try:
101 queryset = queryset.exclude(
102 ~Q(Q(user=viewer) | Q(mention_users=viewer)), privacy="direct"
103 )
104 except FieldError:
105 queryset = queryset.exclude(~Q(user=viewer), privacy="direct")
106
107 return queryset
108
109
110 def handle_remote_webfinger(query):
111 """ webfingerin' other servers """
112 user = None
113
114 # usernames could be @user@domain or user@domain
115 if not query:
116 return None
117
118 if query[0] == "@":
119 query = query[1:]
120
121 try:
122 domain = query.split("@")[1]
123 except IndexError:
124 return None
125
126 try:
127 user = models.User.objects.get(username=query)
128 except models.User.DoesNotExist:
129 url = "https://%s/.well-known/webfinger?resource=acct:%s" % (domain, query)
130 try:
131 data = get_data(url)
132 except (ConnectorException, HTTPError):
133 return None
134
135 for link in data.get("links"):
136 if link.get("rel") == "self":
137 try:
138 user = activitypub.resolve_remote_id(
139 link["href"], model=models.User
140 )
141 except KeyError:
142 return None
143 return user
144
145
146 def get_edition(book_id):
147 """ look up a book in the db and return an edition """
148 book = models.Book.objects.select_subclasses().get(id=book_id)
149 if isinstance(book, models.Work):
150 book = book.get_default_edition()
151 return book
152
153
154 def handle_reading_status(user, shelf, book, privacy):
155 """ post about a user reading a book """
156 # tell the world about this cool thing that happened
157 try:
158 message = {
159 "to-read": "wants to read",
160 "reading": "started reading",
161 "read": "finished reading",
162 }[shelf.identifier]
163 except KeyError:
164 # it's a non-standard shelf, don't worry about it
165 return
166
167 status = create_generated_note(user, message, mention_books=[book], privacy=privacy)
168 status.save()
169
170
171 def is_blocked(viewer, user):
172 """ is this viewer blocked by the user? """
173 if viewer.is_authenticated and viewer in user.blocks.all():
174 return True
175 return False
176
177
178 def get_discover_books():
179 """ list of books for the discover page """
180 return list(
181 set(
182 models.Edition.objects.filter(
183 review__published_date__isnull=False,
184 review__deleted=False,
185 review__user__local=True,
186 review__privacy__in=["public", "unlisted"],
187 )
188 .exclude(cover__exact="")
189 .annotate(Max("review__published_date"))
190 .order_by("-review__published_date__max")[:6]
191 )
192 )
193
194
195 def get_suggested_users(user, *args, **kwargs):
196 """ Users, annotated with things they have in common """
197 return (
198 models.User.objects.filter(discoverable=True, is_active=True, *args, **kwargs)
199 .exclude(Q(id__in=user.blocks.all()) | Q(blocks=user))
200 .annotate(
201 mutuals=Count(
202 "following",
203 filter=Q(
204 ~Q(id=user.id),
205 ~Q(id__in=user.following.all()),
206 following__in=user.following.all(),
207 ),
208 ),
209 shared_books=Count(
210 "shelfbook",
211 filter=Q(
212 ~Q(id=user.id),
213 shelfbook__book__parent_work__in=[
214 s.book.parent_work for s in user.shelfbook_set.all()
215 ],
216 ),
217 ),
218 )
219 )
220
[end of bookwyrm/views/helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bookwyrm/views/helpers.py b/bookwyrm/views/helpers.py
--- a/bookwyrm/views/helpers.py
+++ b/bookwyrm/views/helpers.py
@@ -205,6 +205,7 @@
~Q(id__in=user.following.all()),
following__in=user.following.all(),
),
+ distinct=True,
),
shared_books=Count(
"shelfbook",
@@ -214,6 +215,7 @@
s.book.parent_work for s in user.shelfbook_set.all()
],
),
+ distinct=True,
),
)
)
|
{"golden_diff": "diff --git a/bookwyrm/views/helpers.py b/bookwyrm/views/helpers.py\n--- a/bookwyrm/views/helpers.py\n+++ b/bookwyrm/views/helpers.py\n@@ -205,6 +205,7 @@\n ~Q(id__in=user.following.all()),\n following__in=user.following.all(),\n ),\n+ distinct=True,\n ),\n shared_books=Count(\n \"shelfbook\",\n@@ -214,6 +215,7 @@\n s.book.parent_work for s in user.shelfbook_set.all()\n ],\n ),\n+ distinct=True,\n ),\n )\n )\n", "issue": "Number of followers of suggested users is wrong\n**Describe the bug**\r\nRecommendations for new users to follow in my feed appear with a number of followers we have in common, but that number looks off. The user recommendation I saw said they had 306 followers I follow, but neither of us has that many followers.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to your home feed after logging in\r\n3. Scroll down until you see a recommendation for a user to follow\r\n4. See the weird follower count\r\n\r\n**Expected behavior**\r\nThe number of mutual followers should be accurate \r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\nScreenshot\r\n\r\n\r\n**Smartphone (please complete the following information):**\r\n - Device: iPhone 11\r\n - OS: \r\n - Browser safari\r\n - Version\r\n\r\n**Additional context**\r\n\r\n\n", "before_files": [{"content": "\"\"\" helper functions used in various views \"\"\"\nimport re\nfrom requests import HTTPError\nfrom django.core.exceptions import FieldError\nfrom django.db.models import Count, Max, Q\n\nfrom bookwyrm import activitypub, models\nfrom bookwyrm.connectors import ConnectorException, get_data\nfrom bookwyrm.status import create_generated_note\nfrom bookwyrm.utils import regex\n\n\ndef get_user_from_username(viewer, username):\n \"\"\" helper function to resolve a localname or a username to a user \"\"\"\n # raises DoesNotExist if user is now found\n try:\n return models.User.viewer_aware_objects(viewer).get(localname=username)\n except models.User.DoesNotExist:\n return models.User.viewer_aware_objects(viewer).get(username=username)\n\n\ndef is_api_request(request):\n \"\"\" check whether a request is asking for html or data \"\"\"\n return \"json\" in request.headers.get(\"Accept\") or request.path[-5:] == \".json\"\n\n\ndef is_bookwyrm_request(request):\n \"\"\" check if the request is coming from another bookwyrm instance \"\"\"\n user_agent = request.headers.get(\"User-Agent\")\n if user_agent is None or re.search(regex.bookwyrm_user_agent, user_agent) is None:\n return False\n return True\n\n\ndef object_visible_to_user(viewer, obj):\n \"\"\" is a user authorized to view an object? 
\"\"\"\n if not obj:\n return False\n\n # viewer can't see it if the object's owner blocked them\n if viewer in obj.user.blocks.all():\n return False\n\n # you can see your own posts and any public or unlisted posts\n if viewer == obj.user or obj.privacy in [\"public\", \"unlisted\"]:\n return True\n\n # you can see the followers only posts of people you follow\n if obj.privacy == \"followers\" and obj.user.followers.filter(id=viewer.id).first():\n return True\n\n # you can see dms you are tagged in\n if isinstance(obj, models.Status):\n if obj.privacy == \"direct\" and obj.mention_users.filter(id=viewer.id).first():\n return True\n return False\n\n\ndef privacy_filter(viewer, queryset, privacy_levels=None, following_only=False):\n \"\"\" filter objects that have \"user\" and \"privacy\" fields \"\"\"\n privacy_levels = privacy_levels or [\"public\", \"unlisted\", \"followers\", \"direct\"]\n # if there'd a deleted field, exclude deleted items\n try:\n queryset = queryset.filter(deleted=False)\n except FieldError:\n pass\n\n # exclude blocks from both directions\n if not viewer.is_anonymous:\n blocked = models.User.objects.filter(id__in=viewer.blocks.all()).all()\n queryset = queryset.exclude(Q(user__in=blocked) | Q(user__blocks=viewer))\n\n # you can't see followers only or direct messages if you're not logged in\n if viewer.is_anonymous:\n privacy_levels = [p for p in privacy_levels if not p in [\"followers\", \"direct\"]]\n\n # filter to only privided privacy levels\n queryset = queryset.filter(privacy__in=privacy_levels)\n\n # only include statuses the user follows\n if following_only:\n queryset = queryset.exclude(\n ~Q( # remove everythign except\n Q(user__in=viewer.following.all())\n | Q(user=viewer) # user following\n | Q(mention_users=viewer) # is self # mentions user\n ),\n )\n # exclude followers-only statuses the user doesn't follow\n elif \"followers\" in privacy_levels:\n queryset = queryset.exclude(\n ~Q( # user isn't following and it isn't their own status\n Q(user__in=viewer.following.all()) | Q(user=viewer)\n ),\n privacy=\"followers\", # and the status is followers only\n )\n\n # exclude direct messages not intended for the user\n if \"direct\" in privacy_levels:\n try:\n queryset = queryset.exclude(\n ~Q(Q(user=viewer) | Q(mention_users=viewer)), privacy=\"direct\"\n )\n except FieldError:\n queryset = queryset.exclude(~Q(user=viewer), privacy=\"direct\")\n\n return queryset\n\n\ndef handle_remote_webfinger(query):\n \"\"\" webfingerin' other servers \"\"\"\n user = None\n\n # usernames could be @user@domain or user@domain\n if not query:\n return None\n\n if query[0] == \"@\":\n query = query[1:]\n\n try:\n domain = query.split(\"@\")[1]\n except IndexError:\n return None\n\n try:\n user = models.User.objects.get(username=query)\n except models.User.DoesNotExist:\n url = \"https://%s/.well-known/webfinger?resource=acct:%s\" % (domain, query)\n try:\n data = get_data(url)\n except (ConnectorException, HTTPError):\n return None\n\n for link in data.get(\"links\"):\n if link.get(\"rel\") == \"self\":\n try:\n user = activitypub.resolve_remote_id(\n link[\"href\"], model=models.User\n )\n except KeyError:\n return None\n return user\n\n\ndef get_edition(book_id):\n \"\"\" look up a book in the db and return an edition \"\"\"\n book = models.Book.objects.select_subclasses().get(id=book_id)\n if isinstance(book, models.Work):\n book = book.get_default_edition()\n return book\n\n\ndef handle_reading_status(user, shelf, book, privacy):\n \"\"\" post about a user reading a book 
\"\"\"\n # tell the world about this cool thing that happened\n try:\n message = {\n \"to-read\": \"wants to read\",\n \"reading\": \"started reading\",\n \"read\": \"finished reading\",\n }[shelf.identifier]\n except KeyError:\n # it's a non-standard shelf, don't worry about it\n return\n\n status = create_generated_note(user, message, mention_books=[book], privacy=privacy)\n status.save()\n\n\ndef is_blocked(viewer, user):\n \"\"\" is this viewer blocked by the user? \"\"\"\n if viewer.is_authenticated and viewer in user.blocks.all():\n return True\n return False\n\n\ndef get_discover_books():\n \"\"\" list of books for the discover page \"\"\"\n return list(\n set(\n models.Edition.objects.filter(\n review__published_date__isnull=False,\n review__deleted=False,\n review__user__local=True,\n review__privacy__in=[\"public\", \"unlisted\"],\n )\n .exclude(cover__exact=\"\")\n .annotate(Max(\"review__published_date\"))\n .order_by(\"-review__published_date__max\")[:6]\n )\n )\n\n\ndef get_suggested_users(user, *args, **kwargs):\n \"\"\" Users, annotated with things they have in common \"\"\"\n return (\n models.User.objects.filter(discoverable=True, is_active=True, *args, **kwargs)\n .exclude(Q(id__in=user.blocks.all()) | Q(blocks=user))\n .annotate(\n mutuals=Count(\n \"following\",\n filter=Q(\n ~Q(id=user.id),\n ~Q(id__in=user.following.all()),\n following__in=user.following.all(),\n ),\n ),\n shared_books=Count(\n \"shelfbook\",\n filter=Q(\n ~Q(id=user.id),\n shelfbook__book__parent_work__in=[\n s.book.parent_work for s in user.shelfbook_set.all()\n ],\n ),\n ),\n )\n )\n", "path": "bookwyrm/views/helpers.py"}]}
| 2,984 | 134 |
gh_patches_debug_1591
|
rasdani/github-patches
|
git_diff
|
evennia__evennia-600
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'<' character is an "escape" character when messaging characters?
Hi! I'm not sure if anyone else has stumbled across this issue. I first discovered it using a custom character typeclass that didn't have its msg method overloaded, and then tested again with the default character typeclass. I haven't messed with or overridden any functions found in /src. It seems as though, as of the latest Evennia patch, the character '<' acts like an escape character when messaging characters.
Examples of testing in-game with @py (using the default character typeclass):

It's pretty weird, and it likely doesn't affect anyone severely, but I was using the '<' and '>' characters for my prompt, so I was surprised and baffled when half of my prompt disappeared! It worked as of last night, before I pulled in the latest Evennia changes. I browsed through Evennia's latest commits, but I haven't found anything that I thought would affect this.
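The behaviour looks consistent with the MXP layer substituting raw text into `<SEND>` tags without escaping markup characters first, so an MXP-aware client swallows anything that looks like a tag. A small, self-contained sketch (not Evennia's actual code) of the kind of escaping that keeps a literal `<` visible:

```python
# Sketch only: escape characters that an MXP-aware client would treat as markup
# before the text is wrapped in <SEND> tags.
def escape_for_mxp(text):
    return (
        text.replace("&", "&amp;")
            .replace("<", "&lt;")
            .replace(">", "&gt;")
    )

print(escape_for_mxp("HP<100> SP<50>"))  # -> HP&lt;100&gt; SP&lt;50&gt;
```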
</issue>
<code>
[start of src/server/portal/mxp.py]
1 """
2 MXP - Mud eXtension Protocol.
3
4 Partial implementation of the MXP protocol.
5 The MXP protocol allows more advanced formatting options for telnet clients
6 that supports it (mudlet, zmud, mushclient are a few)
7
8 This only implements the SEND tag.
9
10 More information can be found on the following links:
11 http://www.zuggsoft.com/zmud/mxp.htm
12 http://www.mushclient.com/mushclient/mxp.htm
13 http://www.gammon.com.au/mushclient/addingservermxp.htm
14 """
15 import re
16
17 LINKS_SUB = re.compile(r'\{lc(.*?)\{lt(.*?)\{le', re.DOTALL)
18
19 MXP = "\x5B"
20 MXP_TEMPSECURE = "\x1B[4z"
21 MXP_SEND = MXP_TEMPSECURE + \
22 "<SEND HREF='\\1'>" + \
23 "\\2" + \
24 MXP_TEMPSECURE + \
25 "</SEND>"
26
27 def mxp_parse(text):
28 """
29 Replaces links to the correct format for MXP.
30 """
31 text = LINKS_SUB.sub(MXP_SEND, text)
32 return text
33
34 class Mxp(object):
35 """
36 Implements the MXP protocol.
37 """
38
39 def __init__(self, protocol):
40 """Initializes the protocol by checking if the client supports it."""
41 self.protocol = protocol
42 self.protocol.protocol_flags["MXP"] = False
43 self.protocol.will(MXP).addCallbacks(self.do_mxp, self.no_mxp)
44
45 def no_mxp(self, option):
46 """
47 Client does not support MXP.
48 """
49 self.protocol.protocol_flags["MXP"] = False
50
51 def do_mxp(self, option):
52 """
53 Client does support MXP.
54 """
55 self.protocol.protocol_flags["MXP"] = True
56 self.protocol.handshake_done()
57 self.protocol.requestNegotiation(MXP, '')
58
[end of src/server/portal/mxp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/server/portal/mxp.py b/src/server/portal/mxp.py
--- a/src/server/portal/mxp.py
+++ b/src/server/portal/mxp.py
@@ -28,6 +28,10 @@
"""
Replaces links to the correct format for MXP.
"""
+ text = text.replace("&", "&") \
+ .replace("<", "<") \
+ .replace(">", ">")
+
text = LINKS_SUB.sub(MXP_SEND, text)
return text
|
{"golden_diff": "diff --git a/src/server/portal/mxp.py b/src/server/portal/mxp.py\n--- a/src/server/portal/mxp.py\n+++ b/src/server/portal/mxp.py\n@@ -28,6 +28,10 @@\n \"\"\"\n Replaces links to the correct format for MXP.\n \"\"\"\n+ text = text.replace(\"&\", \"&\") \\\n+ .replace(\"<\", \"<\") \\\n+ .replace(\">\", \">\")\n+\n text = LINKS_SUB.sub(MXP_SEND, text)\n return text\n", "issue": "'<' character is an \"escape\" character when messaging characters?\nHi! I'm not sure if anyone else has stumbled across this issue. I first discovered this using a custom character typeclass that didn't have its msg method overloaded, and then tested it again with the default character typeclass. I haven't messed/overrode any functions found in /src. It seems as though the character '<' works like an escape character, as of the latest Evennia patch when messaging characters.\n\nExamples of testing in-game with @py (using the default character typeclass):\n\n\nIt's pretty weird, and it likely doesn't affect anyone sorely, but I was using the '<' and '>' character for my prompt, so I was surprised and baffled when half of my prompt disappeared! It used to work as of last night, before I pulled in the latest Evennia changes. I was browsing through Evennia's latest commits, but I haven't found anything that I thought would affect this.\n\n", "before_files": [{"content": "\"\"\"\nMXP - Mud eXtension Protocol.\n\nPartial implementation of the MXP protocol.\nThe MXP protocol allows more advanced formatting options for telnet clients\nthat supports it (mudlet, zmud, mushclient are a few)\n\nThis only implements the SEND tag.\n\nMore information can be found on the following links:\nhttp://www.zuggsoft.com/zmud/mxp.htm\nhttp://www.mushclient.com/mushclient/mxp.htm\nhttp://www.gammon.com.au/mushclient/addingservermxp.htm\n\"\"\"\nimport re\n\nLINKS_SUB = re.compile(r'\\{lc(.*?)\\{lt(.*?)\\{le', re.DOTALL)\n\nMXP = \"\\x5B\"\nMXP_TEMPSECURE = \"\\x1B[4z\"\nMXP_SEND = MXP_TEMPSECURE + \\\n \"<SEND HREF='\\\\1'>\" + \\\n \"\\\\2\" + \\\n MXP_TEMPSECURE + \\\n \"</SEND>\"\n\ndef mxp_parse(text):\n \"\"\"\n Replaces links to the correct format for MXP.\n \"\"\"\n text = LINKS_SUB.sub(MXP_SEND, text)\n return text\n\nclass Mxp(object):\n \"\"\"\n Implements the MXP protocol.\n \"\"\"\n\n def __init__(self, protocol):\n \"\"\"Initializes the protocol by checking if the client supports it.\"\"\"\n self.protocol = protocol\n self.protocol.protocol_flags[\"MXP\"] = False\n self.protocol.will(MXP).addCallbacks(self.do_mxp, self.no_mxp)\n\n def no_mxp(self, option):\n \"\"\"\n Client does not support MXP.\n \"\"\"\n self.protocol.protocol_flags[\"MXP\"] = False\n\n def do_mxp(self, option):\n \"\"\"\n Client does support MXP.\n \"\"\"\n self.protocol.protocol_flags[\"MXP\"] = True\n self.protocol.handshake_done()\n self.protocol.requestNegotiation(MXP, '')\n", "path": "src/server/portal/mxp.py"}]}
| 1,324 | 120 |
gh_patches_debug_9962
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__pytorch-lightning-499
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Escaping % in add_default_args
**Describe the bug**
In utilities/arg_parse.py, a percentage symbol is not escaped and would cause an error when printing help information.
```python
parser.add_argument('--overfit', default=-1, type=float,
help='% of dataset to use with this option. float, or -1 for none')
```
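argparse formats help strings with %-style interpolation (that is how placeholders like `%(default)s` work), which is why a bare `%` in the help text blows up when `--help` is rendered. A tiny standalone illustration, independent of Lightning, showing the doubled `%%` workaround:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--overfit", default=-1, type=float,
    # doubling the percent sign keeps argparse's %-formatting happy
    help="%% of dataset to use with this option. float, or -1 for none",
)
print(parser.format_help())  # renders "% of dataset ..." without a TypeError
```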
**To Reproduce**
Steps to reproduce the behavior:
```
import os
import random
import sys
from pytorch_lightning.utilities.arg_parse import add_default_args
from test_tube import HyperOptArgumentParser, Experiment
if __name__ == "__main__":
root_dir = os.path.split(os.path.dirname(sys.modules['__main__'].__file__))[0]
parent_parser = HyperOptArgumentParser(strategy='random_search', add_help=True)
add_default_args(parent_parser, root_dir)
hyperparams = parent_parser.parse_args()
```
Execute the file with `--help`
```
python temp.py --help
```
Throws an error:
```
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
Traceback (most recent call last):
File "/Users/chenghaomou/Code/ai2/temp.py", line 11, in <module>
hyperparams = parent_parser.parse_args()
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/site-packages/test_tube/argparse_hopt.py", line 238, in parse_args
results = self.__parse_args(args, namespace)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/site-packages/test_tube/argparse_hopt.py", line 157, in __parse_args
args, argv = self.parse_known_args(args, namespace)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1782, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1988, in _parse_known_args
start_index = consume_optional(start_index)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1928, in consume_optional
take_action(action, args, option_string)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1856, in take_action
action(self, namespace, argument_values, option_string)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1038, in __call__
parser.print_help()
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 2475, in print_help
self._print_message(self.format_help(), file)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 2459, in format_help
return formatter.format_help()
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 284, in format_help
help = self._root_section.format_help()
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 215, in format_help
item_help = join([func(*args) for func, args in self.items])
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 215, in <listcomp>
item_help = join([func(*args) for func, args in self.items])
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 215, in format_help
item_help = join([func(*args) for func, args in self.items])
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 215, in <listcomp>
item_help = join([func(*args) for func, args in self.items])
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 525, in _format_action
help_text = self._expand_help(action)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 615, in _expand_help
return self._get_help_string(action) % params
TypeError: %o format: an integer is required, not dict
```
**Expected behavior**
Escape the percentage sign and help can be printed.
**Desktop (please complete the following information):**
- OS: macOS 10.15
- Browser Chrome
- Version 78.0.3904.87
**Additional context**
Add any other context about the problem here.
</issue>
<code>
[start of pytorch_lightning/utilities/arg_parse.py]
1 """
2 List of default args which mught be useful for all the available flags
3 Might need to update with the new flags
4 """
5
6 import os
7
8
9 def add_default_args(parser, root_dir, rand_seed=None, possible_model_names=None):
10 # training, test, val check intervals
11 parser.add_argument('--eval_test_set', dest='eval_test_set', action='store_true',
12 help='true = run test set also')
13 parser.add_argument('--check_val_every_n_epoch', default=1, type=int,
14 help='check val every n epochs')
15 parser.opt_list('--accumulate_grad_batches', default=1, type=int, tunable=False,
16 help='accumulates gradients k times before applying update.'
17 ' Simulates huge batch size')
18 parser.add_argument('--max_nb_epochs', default=200, type=int, help='cap epochs')
19 parser.add_argument('--min_nb_epochs', default=2, type=int, help='min epochs')
20 parser.add_argument('--train_percent_check', default=1.0, type=float,
21 help='how much of training set to check')
22 parser.add_argument('--val_percent_check', default=1.0, type=float,
23 help='how much of val set to check')
24 parser.add_argument('--test_percent_check', default=1.0, type=float,
25 help='how much of test set to check')
26
27 parser.add_argument('--val_check_interval', default=0.95, type=float,
28 help='how much within 1 epoch to check val')
29 parser.add_argument('--log_save_interval', default=100, type=int,
30 help='how many batches between log saves')
31 parser.add_argument('--row_log_interval', default=100, type=int,
32 help='add log every k batches')
33
34 # early stopping
35 parser.add_argument('--disable_early_stop', dest='enable_early_stop', action='store_false')
36 parser.add_argument('--early_stop_metric', default='val_acc', type=str)
37 parser.add_argument('--early_stop_mode', default='min', type=str)
38 parser.add_argument('--early_stop_patience', default=3, type=int,
39 help='number of epochs until stop')
40
41 # gradient handling
42 parser.add_argument('--gradient_clip_val', default=-1, type=int)
43 parser.add_argument('--track_grad_norm', default=-1, type=int,
44 help='if > 0, will track this grad norm')
45
46 # model saving
47 parser.add_argument('--model_save_path', default=root_dir + '/model_weights')
48 parser.add_argument('--model_save_monitor_value', default='val_acc')
49 parser.add_argument('--model_save_monitor_mode', default='max')
50
51 # model paths
52 parser.add_argument('--model_load_weights_path', default=None, type=str)
53
54 if possible_model_names is not None:
55 parser.add_argument('--model_name', default='', help=','.join(possible_model_names))
56
57 # test_tube settings
58 parser.add_argument('-en', '--tt_name', default='pt_test')
59 parser.add_argument('-td', '--tt_description', default='pytorch lightning test')
60 parser.add_argument('--tt_save_path', default=os.path.join(root_dir, 'test_tube_logs'),
61 help='logging dir')
62 parser.add_argument('--enable_single_run', dest='single_run', action='store_true')
63 parser.add_argument('--nb_hopt_trials', default=1, type=int)
64 parser.add_argument('--log_stdout', dest='log_stdout', action='store_true')
65
66 # GPU
67 parser.add_argument('--gpus', default=None, type=str)
68 parser.add_argument('--single_run_gpu', dest='single_run_gpu', action='store_true')
69 parser.add_argument('--default_tensor_type', default='torch.cuda.FloatTensor', type=str)
70 parser.add_argument('--use_amp', dest='use_amp', action='store_true')
71 parser.add_argument('--check_grad_nans', dest='check_grad_nans', action='store_true')
72 parser.add_argument('--amp_level', default='O2', type=str)
73
74 # run on hpc
75 parser.add_argument('--on_cluster', dest='on_cluster', action='store_true')
76
77 # FAST training
78 # use these settings to make sure network has no bugs without running a full dataset
79 parser.add_argument('--fast_dev_run', dest='fast_dev_run', default=False, action='store_true',
80 help='runs validation after 1 training step')
81 parser.add_argument('--enable_tqdm', dest='enable_tqdm', default=False, action='store_true',
82 help='false removes the progress bar')
83 parser.add_argument('--overfit', default=-1, type=float,
84 help='% of dataset to use with this option. float, or -1 for none')
85
86 # debug args
87 if rand_seed is not None:
88 parser.add_argument('--random_seed', default=rand_seed, type=int)
89
90 parser.add_argument('--interactive', dest='interactive', action='store_true',
91 help='runs on gpu without cluster')
92 parser.add_argument('--debug', dest='debug', action='store_true',
93 help='enables/disables test tube')
94 parser.add_argument('--local', dest='local', action='store_true',
95 help='enables local training')
96
97 # optimizer
98 parser.add_argument('--lr_scheduler_milestones', default=None, type=str)
99
[end of pytorch_lightning/utilities/arg_parse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pytorch_lightning/utilities/arg_parse.py b/pytorch_lightning/utilities/arg_parse.py
--- a/pytorch_lightning/utilities/arg_parse.py
+++ b/pytorch_lightning/utilities/arg_parse.py
@@ -81,7 +81,7 @@
parser.add_argument('--enable_tqdm', dest='enable_tqdm', default=False, action='store_true',
help='false removes the progress bar')
parser.add_argument('--overfit', default=-1, type=float,
- help='% of dataset to use with this option. float, or -1 for none')
+ help='%% of dataset to use with this option. float, or -1 for none')
# debug args
if rand_seed is not None:
|
{"golden_diff": "diff --git a/pytorch_lightning/utilities/arg_parse.py b/pytorch_lightning/utilities/arg_parse.py\n--- a/pytorch_lightning/utilities/arg_parse.py\n+++ b/pytorch_lightning/utilities/arg_parse.py\n@@ -81,7 +81,7 @@\n parser.add_argument('--enable_tqdm', dest='enable_tqdm', default=False, action='store_true',\n help='false removes the progress bar')\n parser.add_argument('--overfit', default=-1, type=float,\n- help='% of dataset to use with this option. float, or -1 for none')\n+ help='%% of dataset to use with this option. float, or -1 for none')\n \n # debug args\n if rand_seed is not None:\n", "issue": "Escaping % in add_default_args\n**Describe the bug**\r\n\r\nIn utilities/arg_parse.py, a percentage symbol is not escaped and would cause an error when printing help information.\r\n\r\n```python\r\nparser.add_argument('--overfit', default=-1, type=float,\r\n help='% of dataset to use with this option. float, or -1 for none')\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n```\r\nimport os\r\nimport random\r\nimport sys\r\nfrom pytorch_lightning.utilities.arg_parse import add_default_args\r\nfrom test_tube import HyperOptArgumentParser, Experiment\r\n\r\nif __name__ == \"__main__\":\r\n root_dir = os.path.split(os.path.dirname(sys.modules['__main__'].__file__))[0]\r\n parent_parser = HyperOptArgumentParser(strategy='random_search', add_help=True)\r\n add_default_args(parent_parser, root_dir)\r\n hyperparams = parent_parser.parse_args()\r\n```\r\nExecute the file with `--help`\r\n\r\n```\r\npython temp.py --help\r\n```\r\n\r\nThrows an error:\r\n```\r\nWARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.\r\nTraceback (most recent call last):\r\n File \"/Users/chenghaomou/Code/ai2/temp.py\", line 11, in <module>\r\n hyperparams = parent_parser.parse_args()\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/site-packages/test_tube/argparse_hopt.py\", line 238, in parse_args\r\n results = self.__parse_args(args, namespace)\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/site-packages/test_tube/argparse_hopt.py\", line 157, in __parse_args\r\n args, argv = self.parse_known_args(args, namespace)\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 1782, in parse_known_args\r\n namespace, args = self._parse_known_args(args, namespace)\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 1988, in _parse_known_args\r\n start_index = consume_optional(start_index)\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 1928, in consume_optional\r\n take_action(action, args, option_string)\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 1856, in take_action\r\n action(self, namespace, argument_values, option_string)\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 1038, in __call__\r\n parser.print_help()\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 2475, in print_help\r\n self._print_message(self.format_help(), file)\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 2459, in format_help\r\n return formatter.format_help()\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 284, in format_help\r\n help = self._root_section.format_help()\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 215, in 
format_help\r\n item_help = join([func(*args) for func, args in self.items])\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 215, in <listcomp>\r\n item_help = join([func(*args) for func, args in self.items])\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 215, in format_help\r\n item_help = join([func(*args) for func, args in self.items])\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 215, in <listcomp>\r\n item_help = join([func(*args) for func, args in self.items])\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 525, in _format_action\r\n help_text = self._expand_help(action)\r\n File \"/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py\", line 615, in _expand_help\r\n return self._get_help_string(action) % params\r\nTypeError: %o format: an integer is required, not dict\r\n```\r\n\r\n\r\n**Expected behavior**\r\nEscape the percentage sign and help can be printed. \r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: macOS 10.15\r\n - Browser Chrome\r\n - Version 78.0.3904.87\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "\"\"\"\nList of default args which mught be useful for all the available flags\nMight need to update with the new flags\n\"\"\"\n\nimport os\n\n\ndef add_default_args(parser, root_dir, rand_seed=None, possible_model_names=None):\n # training, test, val check intervals\n parser.add_argument('--eval_test_set', dest='eval_test_set', action='store_true',\n help='true = run test set also')\n parser.add_argument('--check_val_every_n_epoch', default=1, type=int,\n help='check val every n epochs')\n parser.opt_list('--accumulate_grad_batches', default=1, type=int, tunable=False,\n help='accumulates gradients k times before applying update.'\n ' Simulates huge batch size')\n parser.add_argument('--max_nb_epochs', default=200, type=int, help='cap epochs')\n parser.add_argument('--min_nb_epochs', default=2, type=int, help='min epochs')\n parser.add_argument('--train_percent_check', default=1.0, type=float,\n help='how much of training set to check')\n parser.add_argument('--val_percent_check', default=1.0, type=float,\n help='how much of val set to check')\n parser.add_argument('--test_percent_check', default=1.0, type=float,\n help='how much of test set to check')\n\n parser.add_argument('--val_check_interval', default=0.95, type=float,\n help='how much within 1 epoch to check val')\n parser.add_argument('--log_save_interval', default=100, type=int,\n help='how many batches between log saves')\n parser.add_argument('--row_log_interval', default=100, type=int,\n help='add log every k batches')\n\n # early stopping\n parser.add_argument('--disable_early_stop', dest='enable_early_stop', action='store_false')\n parser.add_argument('--early_stop_metric', default='val_acc', type=str)\n parser.add_argument('--early_stop_mode', default='min', type=str)\n parser.add_argument('--early_stop_patience', default=3, type=int,\n help='number of epochs until stop')\n\n # gradient handling\n parser.add_argument('--gradient_clip_val', default=-1, type=int)\n parser.add_argument('--track_grad_norm', default=-1, type=int,\n help='if > 0, will track this grad norm')\n\n # model saving\n parser.add_argument('--model_save_path', default=root_dir + '/model_weights')\n parser.add_argument('--model_save_monitor_value', default='val_acc')\n 
parser.add_argument('--model_save_monitor_mode', default='max')\n\n # model paths\n parser.add_argument('--model_load_weights_path', default=None, type=str)\n\n if possible_model_names is not None:\n parser.add_argument('--model_name', default='', help=','.join(possible_model_names))\n\n # test_tube settings\n parser.add_argument('-en', '--tt_name', default='pt_test')\n parser.add_argument('-td', '--tt_description', default='pytorch lightning test')\n parser.add_argument('--tt_save_path', default=os.path.join(root_dir, 'test_tube_logs'),\n help='logging dir')\n parser.add_argument('--enable_single_run', dest='single_run', action='store_true')\n parser.add_argument('--nb_hopt_trials', default=1, type=int)\n parser.add_argument('--log_stdout', dest='log_stdout', action='store_true')\n\n # GPU\n parser.add_argument('--gpus', default=None, type=str)\n parser.add_argument('--single_run_gpu', dest='single_run_gpu', action='store_true')\n parser.add_argument('--default_tensor_type', default='torch.cuda.FloatTensor', type=str)\n parser.add_argument('--use_amp', dest='use_amp', action='store_true')\n parser.add_argument('--check_grad_nans', dest='check_grad_nans', action='store_true')\n parser.add_argument('--amp_level', default='O2', type=str)\n\n # run on hpc\n parser.add_argument('--on_cluster', dest='on_cluster', action='store_true')\n\n # FAST training\n # use these settings to make sure network has no bugs without running a full dataset\n parser.add_argument('--fast_dev_run', dest='fast_dev_run', default=False, action='store_true',\n help='runs validation after 1 training step')\n parser.add_argument('--enable_tqdm', dest='enable_tqdm', default=False, action='store_true',\n help='false removes the progress bar')\n parser.add_argument('--overfit', default=-1, type=float,\n help='% of dataset to use with this option. float, or -1 for none')\n\n # debug args\n if rand_seed is not None:\n parser.add_argument('--random_seed', default=rand_seed, type=int)\n\n parser.add_argument('--interactive', dest='interactive', action='store_true',\n help='runs on gpu without cluster')\n parser.add_argument('--debug', dest='debug', action='store_true',\n help='enables/disables test tube')\n parser.add_argument('--local', dest='local', action='store_true',\n help='enables local training')\n\n # optimizer\n parser.add_argument('--lr_scheduler_milestones', default=None, type=str)\n", "path": "pytorch_lightning/utilities/arg_parse.py"}]}
| 3,053 | 168 |
gh_patches_debug_1718
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-6683
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
alias `list` as `ls`?
I have been reaching for `dvc ls` out of habit instead of `dvc list`. Should we introduce an alias for `dvc list`?
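If it helps, argparse already supports this directly: `add_parser` accepts an `aliases` argument, so the change could be as small as registering `ls` next to `list`. A standalone sketch (not DVC's code) of the mechanism:

```python
import argparse

parser = argparse.ArgumentParser(prog="dvc")
subparsers = parser.add_subparsers(dest="command")
subparsers.add_parser("list", aliases=["ls"], help="List repository contents")

print(parser.parse_args(["ls"]))  # the alias resolves to the same subcommand
```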
</issue>
<code>
[start of dvc/command/ls/__init__.py]
1 import argparse
2 import logging
3
4 from dvc.command import completion
5 from dvc.command.base import CmdBaseNoRepo, append_doc_link
6 from dvc.command.ls.ls_colors import LsColors
7 from dvc.exceptions import DvcException
8 from dvc.ui import ui
9
10 logger = logging.getLogger(__name__)
11
12
13 def _prettify(entries, with_color=False):
14 if with_color:
15 ls_colors = LsColors()
16 fmt = ls_colors.format
17 else:
18
19 def fmt(entry):
20 return entry["path"]
21
22 return [fmt(entry) for entry in entries]
23
24
25 class CmdList(CmdBaseNoRepo):
26 def run(self):
27 from dvc.repo import Repo
28
29 try:
30 entries = Repo.ls(
31 self.args.url,
32 self.args.path,
33 rev=self.args.rev,
34 recursive=self.args.recursive,
35 dvc_only=self.args.dvc_only,
36 )
37 if self.args.show_json:
38 import json
39
40 ui.write(json.dumps(entries))
41 elif entries:
42 entries = _prettify(entries, with_color=True)
43 ui.write("\n".join(entries))
44 return 0
45 except DvcException:
46 logger.exception(f"failed to list '{self.args.url}'")
47 return 1
48
49
50 def add_parser(subparsers, parent_parser):
51 LIST_HELP = (
52 "List repository contents, including files"
53 " and directories tracked by DVC and by Git."
54 )
55 list_parser = subparsers.add_parser(
56 "list",
57 parents=[parent_parser],
58 description=append_doc_link(LIST_HELP, "list"),
59 help=LIST_HELP,
60 formatter_class=argparse.RawTextHelpFormatter,
61 )
62 list_parser.add_argument("url", help="Location of DVC repository to list")
63 list_parser.add_argument(
64 "-R",
65 "--recursive",
66 action="store_true",
67 help="Recursively list files.",
68 )
69 list_parser.add_argument(
70 "--dvc-only", action="store_true", help="Show only DVC outputs."
71 )
72 list_parser.add_argument(
73 "--show-json", action="store_true", help="Show output in JSON format."
74 )
75 list_parser.add_argument(
76 "--rev",
77 nargs="?",
78 help="Git revision (e.g. SHA, branch, tag)",
79 metavar="<commit>",
80 )
81 list_parser.add_argument(
82 "path",
83 nargs="?",
84 help="Path to directory within the repository to list outputs for",
85 ).complete = completion.DIR
86 list_parser.set_defaults(func=CmdList)
87
[end of dvc/command/ls/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dvc/command/ls/__init__.py b/dvc/command/ls/__init__.py
--- a/dvc/command/ls/__init__.py
+++ b/dvc/command/ls/__init__.py
@@ -54,6 +54,7 @@
)
list_parser = subparsers.add_parser(
"list",
+ aliases=["ls"],
parents=[parent_parser],
description=append_doc_link(LIST_HELP, "list"),
help=LIST_HELP,
|
{"golden_diff": "diff --git a/dvc/command/ls/__init__.py b/dvc/command/ls/__init__.py\n--- a/dvc/command/ls/__init__.py\n+++ b/dvc/command/ls/__init__.py\n@@ -54,6 +54,7 @@\n )\n list_parser = subparsers.add_parser(\n \"list\",\n+ aliases=[\"ls\"],\n parents=[parent_parser],\n description=append_doc_link(LIST_HELP, \"list\"),\n help=LIST_HELP,\n", "issue": "alias `list` as `ls`?\nI have been reaching for `dvc ls` out of habit instead of `dvc list`. Should we introduce an alias for `dvc list`?\n", "before_files": [{"content": "import argparse\nimport logging\n\nfrom dvc.command import completion\nfrom dvc.command.base import CmdBaseNoRepo, append_doc_link\nfrom dvc.command.ls.ls_colors import LsColors\nfrom dvc.exceptions import DvcException\nfrom dvc.ui import ui\n\nlogger = logging.getLogger(__name__)\n\n\ndef _prettify(entries, with_color=False):\n if with_color:\n ls_colors = LsColors()\n fmt = ls_colors.format\n else:\n\n def fmt(entry):\n return entry[\"path\"]\n\n return [fmt(entry) for entry in entries]\n\n\nclass CmdList(CmdBaseNoRepo):\n def run(self):\n from dvc.repo import Repo\n\n try:\n entries = Repo.ls(\n self.args.url,\n self.args.path,\n rev=self.args.rev,\n recursive=self.args.recursive,\n dvc_only=self.args.dvc_only,\n )\n if self.args.show_json:\n import json\n\n ui.write(json.dumps(entries))\n elif entries:\n entries = _prettify(entries, with_color=True)\n ui.write(\"\\n\".join(entries))\n return 0\n except DvcException:\n logger.exception(f\"failed to list '{self.args.url}'\")\n return 1\n\n\ndef add_parser(subparsers, parent_parser):\n LIST_HELP = (\n \"List repository contents, including files\"\n \" and directories tracked by DVC and by Git.\"\n )\n list_parser = subparsers.add_parser(\n \"list\",\n parents=[parent_parser],\n description=append_doc_link(LIST_HELP, \"list\"),\n help=LIST_HELP,\n formatter_class=argparse.RawTextHelpFormatter,\n )\n list_parser.add_argument(\"url\", help=\"Location of DVC repository to list\")\n list_parser.add_argument(\n \"-R\",\n \"--recursive\",\n action=\"store_true\",\n help=\"Recursively list files.\",\n )\n list_parser.add_argument(\n \"--dvc-only\", action=\"store_true\", help=\"Show only DVC outputs.\"\n )\n list_parser.add_argument(\n \"--show-json\", action=\"store_true\", help=\"Show output in JSON format.\"\n )\n list_parser.add_argument(\n \"--rev\",\n nargs=\"?\",\n help=\"Git revision (e.g. SHA, branch, tag)\",\n metavar=\"<commit>\",\n )\n list_parser.add_argument(\n \"path\",\n nargs=\"?\",\n help=\"Path to directory within the repository to list outputs for\",\n ).complete = completion.DIR\n list_parser.set_defaults(func=CmdList)\n", "path": "dvc/command/ls/__init__.py"}]}
| 1,291 | 106 |
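The golden diff for this dvc record hinges on the `aliases` keyword that `argparse` sub-parsers accept: registering `"ls"` as an alias makes it reach the same subcommand as `"list"`. A minimal, self-contained sketch of that mechanism follows — the program name, help text, and example URL are illustrative placeholders, not taken from dvc itself.

```python
import argparse

# Build a parser whose "list" subcommand can also be invoked as "ls",
# mirroring the aliases=["ls"] addition in the golden diff above.
parser = argparse.ArgumentParser(prog="demo")
subparsers = parser.add_subparsers(dest="command", required=True)

list_parser = subparsers.add_parser(
    "list",
    aliases=["ls"],  # both "demo list" and "demo ls" reach this subcommand
    help="List repository contents.",
)
list_parser.add_argument("url", help="Location of the repository to list")

for argv in (["list", "https://example.com/repo"], ["ls", "https://example.com/repo"]):
    args = parser.parse_args(argv)
    # args.command holds the subcommand name exactly as typed ("list" or "ls")
    print(args.command, args.url)
```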
gh_patches_debug_1281
|
rasdani/github-patches
|
git_diff
|
cleanlab__cleanlab-990
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add underperforming_group issue type among the Datalab defaults
Test issue manager with different datasets (Image, tabular etc.) to make sure that the underperforming group in the dataset is extracted successfully. List any failure cases that might need to be addressed before adding this issue type to the defaults.
</issue>
<code>
[start of cleanlab/datalab/internal/issue_manager_factory.py]
1 # Copyright (C) 2017-2023 Cleanlab Inc.
2 # This file is part of cleanlab.
3 #
4 # cleanlab is free software: you can redistribute it and/or modify
5 # it under the terms of the GNU Affero General Public License as published
6 # by the Free Software Foundation, either version 3 of the License, or
7 # (at your option) any later version.
8 #
9 # cleanlab is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU Affero General Public License for more details.
13 #
14 # You should have received a copy of the GNU Affero General Public License
15 # along with cleanlab. If not, see <https://www.gnu.org/licenses/>.
16 """The factory module provides a factory class for constructing concrete issue managers
17 and a decorator for registering new issue managers.
18
19 This module provides the :py:meth:`register` decorator for users to register new subclasses of
20 :py:class:`IssueManager <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager>`
21 in the registry. Each IssueManager detects some particular type of issue in a dataset.
22
23
24 Note
25 ----
26
27 The :class:`REGISTRY` variable is used by the factory class to keep track
28 of registered issue managers.
29 The factory class is used as an implementation detail by
30 :py:class:`Datalab <cleanlab.datalab.datalab.Datalab>`,
31 which provides a simplified API for constructing concrete issue managers.
32 :py:class:`Datalab <cleanlab.datalab.datalab.Datalab>` is intended to be used by users
33 and provides detailed documentation on how to use the API.
34
35 Warning
36 -------
37 Neither the :class:`REGISTRY` variable nor the factory class should be used directly by users.
38 """
39 from __future__ import annotations
40
41 from typing import Dict, List, Type
42
43 from cleanlab.datalab.internal.issue_manager import (
44 ClassImbalanceIssueManager,
45 DataValuationIssueManager,
46 IssueManager,
47 LabelIssueManager,
48 NearDuplicateIssueManager,
49 NonIIDIssueManager,
50 ClassImbalanceIssueManager,
51 UnderperformingGroupIssueManager,
52 DataValuationIssueManager,
53 OutlierIssueManager,
54 NullIssueManager,
55 )
56 from cleanlab.datalab.internal.issue_manager.regression import RegressionLabelIssueManager
57 from cleanlab.datalab.internal.issue_manager.multilabel.label import MultilabelIssueManager
58 from cleanlab.datalab.internal.task import Task
59
60
61 REGISTRY: Dict[Task, Dict[str, Type[IssueManager]]] = {
62 Task.CLASSIFICATION: {
63 "outlier": OutlierIssueManager,
64 "label": LabelIssueManager,
65 "near_duplicate": NearDuplicateIssueManager,
66 "non_iid": NonIIDIssueManager,
67 "class_imbalance": ClassImbalanceIssueManager,
68 "underperforming_group": UnderperformingGroupIssueManager,
69 "data_valuation": DataValuationIssueManager,
70 "null": NullIssueManager,
71 },
72 Task.REGRESSION: {
73 "label": RegressionLabelIssueManager,
74 "outlier": OutlierIssueManager,
75 "near_duplicate": NearDuplicateIssueManager,
76 "non_iid": NonIIDIssueManager,
77 "null": NullIssueManager,
78 },
79 Task.MULTILABEL: {
80 "label": MultilabelIssueManager,
81 "outlier": OutlierIssueManager,
82 "near_duplicate": NearDuplicateIssueManager,
83 "non_iid": NonIIDIssueManager,
84 "null": NullIssueManager,
85 },
86 }
87 """Registry of issue managers that can be constructed from a task and issue type
88 and used in the Datalab class.
89
90 :meta hide-value:
91
92 Currently, the following issue managers are registered by default for a given task:
93
94 - Classification:
95
96 - ``"outlier"``: :py:class:`OutlierIssueManager <cleanlab.datalab.internal.issue_manager.outlier.OutlierIssueManager>`
97 - ``"label"``: :py:class:`LabelIssueManager <cleanlab.datalab.internal.issue_manager.label.LabelIssueManager>`
98 - ``"near_duplicate"``: :py:class:`NearDuplicateIssueManager <cleanlab.datalab.internal.issue_manager.duplicate.NearDuplicateIssueManager>`
99 - ``"non_iid"``: :py:class:`NonIIDIssueManager <cleanlab.datalab.internal.issue_manager.noniid.NonIIDIssueManager>`
100 - ``"class_imbalance"``: :py:class:`ClassImbalanceIssueManager <cleanlab.datalab.internal.issue_manager.class_imbalance.ClassImbalanceIssueManager>`
101 - ``"underperforming_group"``: :py:class:`UnderperformingGroupIssueManager <cleanlab.datalab.internal.issue_manager.underperforming_group.UnderperformingGroupIssueManager>`
102 - ``"data_valuation"``: :py:class:`DataValuationIssueManager <cleanlab.datalab.internal.issue_manager.data_valuation.DataValuationIssueManager>`
103 - ``"null"``: :py:class:`NullIssueManager <cleanlab.datalab.internal.issue_manager.null.NullIssueManager>`
104
105 - Regression:
106
107 - ``"label"``: :py:class:`RegressionLabelIssueManager <cleanlab.datalab.internal.issue_manager.regression.label.RegressionLabelIssueManager>`
108
109 - Multilabel:
110
111 - ``"label"``: :py:class:`MultilabelIssueManager <cleanlab.datalab.internal.issue_manager.multilabel.label.MultilabelIssueManager>`
112
113 Warning
114 -------
115 This variable should not be used directly by users.
116 """
117
118
119 # Construct concrete issue manager with a from_str method
120 class _IssueManagerFactory:
121 """Factory class for constructing concrete issue managers."""
122
123 @classmethod
124 def from_str(cls, issue_type: str, task: Task) -> Type[IssueManager]:
125 """Constructs a concrete issue manager class from a string."""
126 if isinstance(issue_type, list):
127 raise ValueError(
128 "issue_type must be a string, not a list. Try using from_list instead."
129 )
130
131 if task not in REGISTRY:
132 raise ValueError(f"Invalid task type: {task}, must be in {list(REGISTRY.keys())}")
133 if issue_type not in REGISTRY[task]:
134 raise ValueError(f"Invalid issue type: {issue_type} for task {task}")
135
136 return REGISTRY[task][issue_type]
137
138 @classmethod
139 def from_list(cls, issue_types: List[str], task: Task) -> List[Type[IssueManager]]:
140 """Constructs a list of concrete issue manager classes from a list of strings."""
141 return [cls.from_str(issue_type, task) for issue_type in issue_types]
142
143
144 def register(cls: Type[IssueManager], task: str = str(Task.CLASSIFICATION)) -> Type[IssueManager]:
145 """Registers the issue manager factory.
146
147 Parameters
148 ----------
149 cls :
150 A subclass of
151 :py:class:`IssueManager <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager>`.
152
153 task :
154 Specific machine learning task like classification or regression.
155 See :py:meth:`Task.from_str <cleanlab.datalab.internal.task.Task.from_str>`` for more details,
156 to see which task type corresponds to which string.
157
158 Returns
159 -------
160 cls :
161 The same class that was passed in.
162
163 Example
164 -------
165
166 When defining a new subclass of
167 :py:class:`IssueManager <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager>`,
168 you can register it like so:
169
170 .. code-block:: python
171
172 from cleanlab import IssueManager
173 from cleanlab.datalab.internal.issue_manager_factory import register
174
175 @register
176 class MyIssueManager(IssueManager):
177 issue_name: str = "my_issue"
178 def find_issues(self, **kwargs):
179 # Some logic to find issues
180 pass
181
182 or in a function call:
183
184 .. code-block:: python
185
186 from cleanlab import IssueManager
187 from cleanlab.datalab.internal.issue_manager_factory import register
188
189 class MyIssueManager(IssueManager):
190 issue_name: str = "my_issue"
191 def find_issues(self, **kwargs):
192 # Some logic to find issues
193 pass
194
195 register(MyIssueManager, task="classification")
196 """
197
198 if not issubclass(cls, IssueManager):
199 raise ValueError(f"Class {cls} must be a subclass of IssueManager")
200
201 name: str = str(cls.issue_name)
202
203 try:
204 _task = Task.from_str(task)
205 if _task not in REGISTRY:
206 raise ValueError(f"Invalid task type: {_task}, must be in {list(REGISTRY.keys())}")
207 except KeyError:
208 raise ValueError(f"Invalid task type: {task}, must be in {list(REGISTRY.keys())}")
209
210 if name in REGISTRY[_task]:
211 print(
212 f"Warning: Overwriting existing issue manager {name} with {cls} for task {_task}."
213 "This may cause unexpected behavior."
214 )
215
216 REGISTRY[_task][name] = cls
217 return cls
218
219
220 def list_possible_issue_types(task: Task) -> List[str]:
221 """Returns a list of all registered issue types.
222
223 Any issue type that is not in this list cannot be used in the :py:meth:`find_issues` method.
224
225 See Also
226 --------
227 :py:class:`REGISTRY <cleanlab.datalab.internal.issue_manager_factory.REGISTRY>` : All available issue types and their corresponding issue managers can be found here.
228 """
229 return list(REGISTRY.get(task, []))
230
231
232 def list_default_issue_types(task: Task) -> List[str]:
233 """Returns a list of the issue types that are run by default
234 when :py:meth:`find_issues` is called without specifying `issue_types`.
235
236 task :
237 Specific machine learning task supported by Datalab.
238
239 See Also
240 --------
241 :py:class:`REGISTRY <cleanlab.datalab.internal.issue_manager_factory.REGISTRY>` : All available issue types and their corresponding issue managers can be found here.
242 """
243 default_issue_types_dict = {
244 Task.CLASSIFICATION: [
245 "null",
246 "label",
247 "outlier",
248 "near_duplicate",
249 "non_iid",
250 "class_imbalance",
251 ],
252 Task.REGRESSION: [
253 "null",
254 "label",
255 "outlier",
256 "near_duplicate",
257 "non_iid",
258 ],
259 Task.MULTILABEL: [
260 "null",
261 "label",
262 "outlier",
263 "near_duplicate",
264 "non_iid",
265 ],
266 }
267 if task not in default_issue_types_dict:
268 task = Task.CLASSIFICATION
269 default_issue_types = default_issue_types_dict[task]
270 return default_issue_types
271
[end of cleanlab/datalab/internal/issue_manager_factory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cleanlab/datalab/internal/issue_manager_factory.py b/cleanlab/datalab/internal/issue_manager_factory.py
--- a/cleanlab/datalab/internal/issue_manager_factory.py
+++ b/cleanlab/datalab/internal/issue_manager_factory.py
@@ -223,6 +223,7 @@
"near_duplicate",
"non_iid",
"class_imbalance",
+ "underperforming_group",
],
"regression": [
"null",
|
{"golden_diff": "diff --git a/cleanlab/datalab/internal/issue_manager_factory.py b/cleanlab/datalab/internal/issue_manager_factory.py\n--- a/cleanlab/datalab/internal/issue_manager_factory.py\n+++ b/cleanlab/datalab/internal/issue_manager_factory.py\n@@ -223,6 +223,7 @@\n \"near_duplicate\",\n \"non_iid\",\n \"class_imbalance\",\n+ \"underperforming_group\",\n ],\n \"regression\": [\n \"null\",\n", "issue": "Add underperforming_group issue type among the Datalab defaults \nTest issue manager with different datasets (Image, tabular etc.) to make sure that the underperforming group in the dataset is extracted successfully. List any failure cases that might need to be addressed before adding this issue type to the defaults.\n", "before_files": [{"content": "# Copyright (C) 2017-2023 Cleanlab Inc.\n# This file is part of cleanlab.\n#\n# cleanlab is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# cleanlab is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with cleanlab. If not, see <https://www.gnu.org/licenses/>.\n\"\"\"The factory module provides a factory class for constructing concrete issue managers\nand a decorator for registering new issue managers.\n\nThis module provides the :py:meth:`register` decorator for users to register new subclasses of\n:py:class:`IssueManager <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager>`\nin the registry. 
Each IssueManager detects some particular type of issue in a dataset.\n\n\nNote\n----\n\nThe :class:`REGISTRY` variable is used by the factory class to keep track\nof registered issue managers.\nThe factory class is used as an implementation detail by\n:py:class:`Datalab <cleanlab.datalab.datalab.Datalab>`,\nwhich provides a simplified API for constructing concrete issue managers.\n:py:class:`Datalab <cleanlab.datalab.datalab.Datalab>` is intended to be used by users\nand provides detailed documentation on how to use the API.\n\nWarning\n-------\nNeither the :class:`REGISTRY` variable nor the factory class should be used directly by users.\n\"\"\"\nfrom __future__ import annotations\n\nfrom typing import Dict, List, Type\n\nfrom cleanlab.datalab.internal.issue_manager import (\n ClassImbalanceIssueManager,\n DataValuationIssueManager,\n IssueManager,\n LabelIssueManager,\n NearDuplicateIssueManager,\n NonIIDIssueManager,\n ClassImbalanceIssueManager,\n UnderperformingGroupIssueManager,\n DataValuationIssueManager,\n OutlierIssueManager,\n NullIssueManager,\n)\nfrom cleanlab.datalab.internal.issue_manager.regression import RegressionLabelIssueManager\nfrom cleanlab.datalab.internal.issue_manager.multilabel.label import MultilabelIssueManager\nfrom cleanlab.datalab.internal.task import Task\n\n\nREGISTRY: Dict[Task, Dict[str, Type[IssueManager]]] = {\n Task.CLASSIFICATION: {\n \"outlier\": OutlierIssueManager,\n \"label\": LabelIssueManager,\n \"near_duplicate\": NearDuplicateIssueManager,\n \"non_iid\": NonIIDIssueManager,\n \"class_imbalance\": ClassImbalanceIssueManager,\n \"underperforming_group\": UnderperformingGroupIssueManager,\n \"data_valuation\": DataValuationIssueManager,\n \"null\": NullIssueManager,\n },\n Task.REGRESSION: {\n \"label\": RegressionLabelIssueManager,\n \"outlier\": OutlierIssueManager,\n \"near_duplicate\": NearDuplicateIssueManager,\n \"non_iid\": NonIIDIssueManager,\n \"null\": NullIssueManager,\n },\n Task.MULTILABEL: {\n \"label\": MultilabelIssueManager,\n \"outlier\": OutlierIssueManager,\n \"near_duplicate\": NearDuplicateIssueManager,\n \"non_iid\": NonIIDIssueManager,\n \"null\": NullIssueManager,\n },\n}\n\"\"\"Registry of issue managers that can be constructed from a task and issue type\nand used in the Datalab class.\n\n:meta hide-value:\n\nCurrently, the following issue managers are registered by default for a given task:\n\n- Classification:\n\n - ``\"outlier\"``: :py:class:`OutlierIssueManager <cleanlab.datalab.internal.issue_manager.outlier.OutlierIssueManager>`\n - ``\"label\"``: :py:class:`LabelIssueManager <cleanlab.datalab.internal.issue_manager.label.LabelIssueManager>`\n - ``\"near_duplicate\"``: :py:class:`NearDuplicateIssueManager <cleanlab.datalab.internal.issue_manager.duplicate.NearDuplicateIssueManager>`\n - ``\"non_iid\"``: :py:class:`NonIIDIssueManager <cleanlab.datalab.internal.issue_manager.noniid.NonIIDIssueManager>`\n - ``\"class_imbalance\"``: :py:class:`ClassImbalanceIssueManager <cleanlab.datalab.internal.issue_manager.class_imbalance.ClassImbalanceIssueManager>`\n - ``\"underperforming_group\"``: :py:class:`UnderperformingGroupIssueManager <cleanlab.datalab.internal.issue_manager.underperforming_group.UnderperformingGroupIssueManager>`\n - ``\"data_valuation\"``: :py:class:`DataValuationIssueManager <cleanlab.datalab.internal.issue_manager.data_valuation.DataValuationIssueManager>`\n - ``\"null\"``: :py:class:`NullIssueManager <cleanlab.datalab.internal.issue_manager.null.NullIssueManager>`\n \n- Regression:\n\n - 
``\"label\"``: :py:class:`RegressionLabelIssueManager <cleanlab.datalab.internal.issue_manager.regression.label.RegressionLabelIssueManager>`\n\n- Multilabel:\n\n - ``\"label\"``: :py:class:`MultilabelIssueManager <cleanlab.datalab.internal.issue_manager.multilabel.label.MultilabelIssueManager>`\n\nWarning\n-------\nThis variable should not be used directly by users.\n\"\"\"\n\n\n# Construct concrete issue manager with a from_str method\nclass _IssueManagerFactory:\n \"\"\"Factory class for constructing concrete issue managers.\"\"\"\n\n @classmethod\n def from_str(cls, issue_type: str, task: Task) -> Type[IssueManager]:\n \"\"\"Constructs a concrete issue manager class from a string.\"\"\"\n if isinstance(issue_type, list):\n raise ValueError(\n \"issue_type must be a string, not a list. Try using from_list instead.\"\n )\n\n if task not in REGISTRY:\n raise ValueError(f\"Invalid task type: {task}, must be in {list(REGISTRY.keys())}\")\n if issue_type not in REGISTRY[task]:\n raise ValueError(f\"Invalid issue type: {issue_type} for task {task}\")\n\n return REGISTRY[task][issue_type]\n\n @classmethod\n def from_list(cls, issue_types: List[str], task: Task) -> List[Type[IssueManager]]:\n \"\"\"Constructs a list of concrete issue manager classes from a list of strings.\"\"\"\n return [cls.from_str(issue_type, task) for issue_type in issue_types]\n\n\ndef register(cls: Type[IssueManager], task: str = str(Task.CLASSIFICATION)) -> Type[IssueManager]:\n \"\"\"Registers the issue manager factory.\n\n Parameters\n ----------\n cls :\n A subclass of\n :py:class:`IssueManager <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager>`.\n\n task :\n Specific machine learning task like classification or regression.\n See :py:meth:`Task.from_str <cleanlab.datalab.internal.task.Task.from_str>`` for more details,\n to see which task type corresponds to which string.\n\n Returns\n -------\n cls :\n The same class that was passed in.\n\n Example\n -------\n\n When defining a new subclass of\n :py:class:`IssueManager <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager>`,\n you can register it like so:\n\n .. code-block:: python\n\n from cleanlab import IssueManager\n from cleanlab.datalab.internal.issue_manager_factory import register\n\n @register\n class MyIssueManager(IssueManager):\n issue_name: str = \"my_issue\"\n def find_issues(self, **kwargs):\n # Some logic to find issues\n pass\n\n or in a function call:\n\n .. 
code-block:: python\n\n from cleanlab import IssueManager\n from cleanlab.datalab.internal.issue_manager_factory import register\n\n class MyIssueManager(IssueManager):\n issue_name: str = \"my_issue\"\n def find_issues(self, **kwargs):\n # Some logic to find issues\n pass\n\n register(MyIssueManager, task=\"classification\")\n \"\"\"\n\n if not issubclass(cls, IssueManager):\n raise ValueError(f\"Class {cls} must be a subclass of IssueManager\")\n\n name: str = str(cls.issue_name)\n\n try:\n _task = Task.from_str(task)\n if _task not in REGISTRY:\n raise ValueError(f\"Invalid task type: {_task}, must be in {list(REGISTRY.keys())}\")\n except KeyError:\n raise ValueError(f\"Invalid task type: {task}, must be in {list(REGISTRY.keys())}\")\n\n if name in REGISTRY[_task]:\n print(\n f\"Warning: Overwriting existing issue manager {name} with {cls} for task {_task}.\"\n \"This may cause unexpected behavior.\"\n )\n\n REGISTRY[_task][name] = cls\n return cls\n\n\ndef list_possible_issue_types(task: Task) -> List[str]:\n \"\"\"Returns a list of all registered issue types.\n\n Any issue type that is not in this list cannot be used in the :py:meth:`find_issues` method.\n\n See Also\n --------\n :py:class:`REGISTRY <cleanlab.datalab.internal.issue_manager_factory.REGISTRY>` : All available issue types and their corresponding issue managers can be found here.\n \"\"\"\n return list(REGISTRY.get(task, []))\n\n\ndef list_default_issue_types(task: Task) -> List[str]:\n \"\"\"Returns a list of the issue types that are run by default\n when :py:meth:`find_issues` is called without specifying `issue_types`.\n\n task :\n Specific machine learning task supported by Datalab.\n\n See Also\n --------\n :py:class:`REGISTRY <cleanlab.datalab.internal.issue_manager_factory.REGISTRY>` : All available issue types and their corresponding issue managers can be found here.\n \"\"\"\n default_issue_types_dict = {\n Task.CLASSIFICATION: [\n \"null\",\n \"label\",\n \"outlier\",\n \"near_duplicate\",\n \"non_iid\",\n \"class_imbalance\",\n ],\n Task.REGRESSION: [\n \"null\",\n \"label\",\n \"outlier\",\n \"near_duplicate\",\n \"non_iid\",\n ],\n Task.MULTILABEL: [\n \"null\",\n \"label\",\n \"outlier\",\n \"near_duplicate\",\n \"non_iid\",\n ],\n }\n if task not in default_issue_types_dict:\n task = Task.CLASSIFICATION\n default_issue_types = default_issue_types_dict[task]\n return default_issue_types\n", "path": "cleanlab/datalab/internal/issue_manager_factory.py"}]}
| 3,628 | 111 |
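The change in this cleanlab record is a one-line addition to a per-task defaults table. A reduced sketch of such a table and its lookup is shown below, assuming plain string task keys (the real module uses a `Task` enum and a much larger registry); it only illustrates what the extra `"underperforming_group"` entry buys for classification.

```python
# Simplified stand-in for a per-task defaults table, with the
# "underperforming_group" entry the golden diff adds for classification.
DEFAULT_ISSUE_TYPES = {
    "classification": [
        "null", "label", "outlier", "near_duplicate",
        "non_iid", "class_imbalance", "underperforming_group",
    ],
    "regression": ["null", "label", "outlier", "near_duplicate", "non_iid"],
    "multilabel": ["null", "label", "outlier", "near_duplicate", "non_iid"],
}


def list_default_issue_types(task):
    # Unknown tasks fall back to the classification defaults, matching the
    # fallback behaviour of the original function.
    return DEFAULT_ISSUE_TYPES.get(task, DEFAULT_ISSUE_TYPES["classification"])


print(list_default_issue_types("classification"))
print(list_default_issue_types("time_series"))  # falls back to classification
```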
gh_patches_debug_27717
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-1785
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Leaving group with fullscreen window and then returning to it is broken
# Description
When I create a fullscreen window in a group, and then change to another group
and back to the original group, the fullscreen window is no longer rendered. Functionality
for the window is still there, however. E.g. when running a zoom ;) conversation in
fullscreen mode, the audio part of the conversation continues whereas window containing
the video does not render. Clearly the window (and it's functionality) is still someplace,
just not rendered.
Other applications also exhibit this behavior when used in fullscreen mode. Another example
is chromium (enter fullscreen mode with F11).
# Commit hash
This behavior manifests itself with current git, master branch. I bisected the issue,
and the last good commit that does not exhibit this behavior is: 52d787944ebb669d5e90d7d372648e539034c271.
# Stack traces
No stack traces were thrown.
</issue>
<code>
[start of libqtile/layout/floating.py]
1 # Copyright (c) 2010 matt
2 # Copyright (c) 2010-2011 Paul Colomiets
3 # Copyright (c) 2011 Mounier Florian
4 # Copyright (c) 2012 Craig Barnes
5 # Copyright (c) 2012, 2014-2015 Tycho Andersen
6 # Copyright (c) 2013 Tao Sauvage
7 # Copyright (c) 2013 Julien Iguchi-Cartigny
8 # Copyright (c) 2014 ramnes
9 # Copyright (c) 2014 Sean Vig
10 # Copyright (c) 2014 dequis
11 # Copyright (c) 2018 Nazar Mokrynskyi
12 #
13 # Permission is hereby granted, free of charge, to any person obtaining a copy
14 # of this software and associated documentation files (the "Software"), to deal
15 # in the Software without restriction, including without limitation the rights
16 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
17 # copies of the Software, and to permit persons to whom the Software is
18 # furnished to do so, subject to the following conditions:
19 #
20 # The above copyright notice and this permission notice shall be included in
21 # all copies or substantial portions of the Software.
22 #
23 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
24 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
25 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
26 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
27 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
28 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
29 # SOFTWARE.
30
31 from libqtile.layout.base import Layout
32
33 DEFAULT_FLOAT_WM_TYPES = set([
34 'utility',
35 'notification',
36 'toolbar',
37 'splash',
38 'dialog',
39 ])
40
41 DEFAULT_FLOAT_RULES = [
42 {"role": "About"},
43 {"wmclass": "file_progress"},
44 ]
45
46
47 class Floating(Layout):
48 """
49 Floating layout, which does nothing with windows but handles focus order
50 """
51 defaults = [
52 ("border_focus", "#0000ff", "Border colour for the focused window."),
53 ("border_normal", "#000000", "Border colour for un-focused windows."),
54 ("border_width", 1, "Border width."),
55 ("max_border_width", 0, "Border width for maximize."),
56 ("fullscreen_border_width", 0, "Border width for fullscreen."),
57 ("name", "floating", "Name of this layout."),
58 (
59 "auto_float_types",
60 DEFAULT_FLOAT_WM_TYPES,
61 "default wm types to automatically float"
62 ),
63 ]
64
65 def __init__(self, float_rules=None, no_reposition_match=None, **config):
66 """
67 If you have certain apps that you always want to float you can provide
68 ``float_rules`` to do so. ``float_rules`` is a list of
69 dictionaries containing some or all of the keys::
70
71 {'wname': WM_NAME, 'wmclass': WM_CLASS, 'role': WM_WINDOW_ROLE}
72
73 The keys must be specified as above. You only need one, but
74 you need to provide the value for it. When a new window is
75 opened it's ``match`` method is called with each of these
76 rules. If one matches, the window will float. The following
77 will float gimp and skype::
78
79 float_rules=[dict(wmclass="skype"), dict(wmclass="gimp")]
80
81 Specify these in the ``floating_layout`` in your config.
82
83 Floating layout will try to center most of floating windows by default,
84 but if you don't want this to happen for certain windows that are
85 centered by mistake, you can use ``no_reposition_match`` option to
86 specify them and layout will rely on windows to position themselves in
87 correct location on the screen.
88 """
89 Layout.__init__(self, **config)
90 self.clients = []
91 self.focused = None
92 self.group = None
93 self.float_rules = float_rules or DEFAULT_FLOAT_RULES
94 self.no_reposition_match = no_reposition_match
95 self.add_defaults(Floating.defaults)
96
97 def match(self, win):
98 """Used to default float some windows"""
99 if win.window.get_wm_type() in self.auto_float_types:
100 return True
101 for rule_dict in self.float_rules:
102 if win.match(**rule_dict):
103 return True
104 return False
105
106 def find_clients(self, group):
107 """Find all clients belonging to a given group"""
108 return [c for c in self.clients if c.group is group]
109
110 def to_screen(self, group, new_screen):
111 """Adjust offsets of clients within current screen"""
112 for win in self.find_clients(group):
113 if win.maximized:
114 win.maximized = True
115 elif win.fullscreen:
116 win.fullscreen = True
117 else:
118 # catch if the client hasn't been configured
119 try:
120 # By default, place window at same offset from top corner
121 new_x = new_screen.x + win.float_x
122 new_y = new_screen.y + win.float_y
123 except AttributeError:
124 # this will be handled in .configure()
125 pass
126 else:
127 # make sure window isn't off screen left/right...
128 new_x = min(new_x, new_screen.x + new_screen.width - win.width)
129 new_x = max(new_x, new_screen.x)
130 # and up/down
131 new_y = min(new_y, new_screen.y + new_screen.height - win.height)
132 new_y = max(new_y, new_screen.y)
133
134 win.x = new_x
135 win.y = new_y
136 win.group = new_screen.group
137
138 def focus_first(self, group=None):
139 if group is None:
140 clients = self.clients
141 else:
142 clients = self.find_clients(group)
143
144 if clients:
145 return clients[0]
146
147 def focus_next(self, win):
148 if win not in self.clients or win.group is None:
149 return
150
151 clients = self.find_clients(win.group)
152 idx = clients.index(win)
153 if len(clients) > idx + 1:
154 return clients[idx + 1]
155
156 def focus_last(self, group=None):
157 if group is None:
158 clients = self.clients
159 else:
160 clients = self.find_clients(group)
161
162 if clients:
163 return clients[-1]
164
165 def focus_previous(self, win):
166 if win not in self.clients or win.group is None:
167 return
168
169 clients = self.find_clients(win.group)
170 idx = clients.index(win)
171 if idx > 0:
172 return clients[idx - 1]
173
174 def focus(self, client):
175 self.focused = client
176
177 def blur(self):
178 self.focused = None
179
180 def compute_client_position(self, client, screen):
181 """ recompute client.x and client.y, returning whether or not to place
182 this client above other windows or not """
183 above = False
184 transient_for = client.window.get_wm_transient_for()
185 win = client.group.qtile.windows_map.get(transient_for)
186 if win is not None:
187 # if transient for a window, place in the center of the window
188 center_x = win.x + win.width / 2
189 center_y = win.y + win.height / 2
190 else:
191 center_x = screen.x + screen.width / 2
192 center_y = screen.y + screen.height / 2
193 above = True
194
195 x = center_x - client.width / 2
196 y = center_y - client.height / 2
197
198 # don't go off the right...
199 x = min(x, screen.x + screen.width)
200 # or left...
201 x = max(x, screen.x)
202 # or bottom...
203 y = min(y, screen.y + screen.height)
204 # or top
205 y = max(y, screen.y)
206
207 client.x = int(round(x))
208 client.y = int(round(y))
209 return above
210
211 def configure(self, client, screen):
212 # After this, the client will be mapped. Either this will do it, or the
213 # client has already done it.
214 client.hidden = False
215
216 # 'sun-awt-X11-XWindowPeer' is a dropdown used in Java application,
217 # don't reposition it anywhere, let Java app to control it
218 cls = client.window.get_wm_class() or ''
219 is_java_dropdown = 'sun-awt-X11-XWindowPeer' in cls
220 if is_java_dropdown:
221 return
222
223 # similar to above but the X11 version, the client may have already
224 # placed itself. let's respect that
225 if client.has_user_set_position():
226 return
227
228 # ok, it's not java and the window itself didn't position it, but users
229 # may still have asked us not to mess with it
230 if self.no_reposition_match is not None and self.no_reposition_match.compare(client):
231 return
232
233 if client.has_focus:
234 bc = client.group.qtile.color_pixel(self.border_focus)
235 else:
236 bc = client.group.qtile.color_pixel(self.border_normal)
237 if client.maximized:
238 bw = self.max_border_width
239 elif client.fullscreen:
240 bw = self.fullscreen_border_width
241 else:
242 bw = self.border_width
243 above = False
244
245 # We definitely have a screen here, so let's be sure we'll float on screen
246 try:
247 client.float_x
248 client.float_y
249 except AttributeError:
250 # this window hasn't been placed before, let's put it in a sensible spot
251 above = self.compute_client_position(client, screen)
252
253 client.place(
254 client.x,
255 client.y,
256 client.width,
257 client.height,
258 bw,
259 bc,
260 above,
261 )
262 client.unhide()
263
264 def add(self, client):
265 self.clients.append(client)
266 self.focused = client
267
268 def remove(self, client):
269 if client not in self.clients:
270 return
271
272 next_focus = self.focus_next(client)
273 if client is self.focused:
274 self.blur()
275 self.clients.remove(client)
276 return next_focus
277
278 def info(self):
279 d = Layout.info(self)
280 d["clients"] = [c.name for c in self.clients]
281 return d
282
283 def cmd_next(self):
284 # This can't ever be called, but implement the abstract method
285 pass
286
287 def cmd_previous(self):
288 # This can't ever be called, but implement the abstract method
289 pass
290
[end of libqtile/layout/floating.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libqtile/layout/floating.py b/libqtile/layout/floating.py
--- a/libqtile/layout/floating.py
+++ b/libqtile/layout/floating.py
@@ -209,25 +209,24 @@
return above
def configure(self, client, screen):
- # After this, the client will be mapped. Either this will do it, or the
- # client has already done it.
- client.hidden = False
-
# 'sun-awt-X11-XWindowPeer' is a dropdown used in Java application,
# don't reposition it anywhere, let Java app to control it
cls = client.window.get_wm_class() or ''
is_java_dropdown = 'sun-awt-X11-XWindowPeer' in cls
if is_java_dropdown:
+ client.unhide()
return
# similar to above but the X11 version, the client may have already
# placed itself. let's respect that
if client.has_user_set_position():
+ client.unhide()
return
# ok, it's not java and the window itself didn't position it, but users
# may still have asked us not to mess with it
if self.no_reposition_match is not None and self.no_reposition_match.compare(client):
+ client.unhide()
return
if client.has_focus:
|
{"golden_diff": "diff --git a/libqtile/layout/floating.py b/libqtile/layout/floating.py\n--- a/libqtile/layout/floating.py\n+++ b/libqtile/layout/floating.py\n@@ -209,25 +209,24 @@\n return above\n \n def configure(self, client, screen):\n- # After this, the client will be mapped. Either this will do it, or the\n- # client has already done it.\n- client.hidden = False\n-\n # 'sun-awt-X11-XWindowPeer' is a dropdown used in Java application,\n # don't reposition it anywhere, let Java app to control it\n cls = client.window.get_wm_class() or ''\n is_java_dropdown = 'sun-awt-X11-XWindowPeer' in cls\n if is_java_dropdown:\n+ client.unhide()\n return\n \n # similar to above but the X11 version, the client may have already\n # placed itself. let's respect that\n if client.has_user_set_position():\n+ client.unhide()\n return\n \n # ok, it's not java and the window itself didn't position it, but users\n # may still have asked us not to mess with it\n if self.no_reposition_match is not None and self.no_reposition_match.compare(client):\n+ client.unhide()\n return\n \n if client.has_focus:\n", "issue": "Leaving group with fullscreen window and then returning to it is broken\n# Description\r\n\r\nWhen I create a fullscreen window in a group, and then change to another group\r\nand back to the original group, the fullscreen window is no longer rendered. Functionality\r\nfor the window is still there, however. E.g. when running a zoom ;) conversation in\r\nfullscreen mode, the audio part of the conversation continues whereas window containing\r\nthe video does not render. Clearly the window (and it's functionality) is still someplace,\r\njust not rendered.\r\n\r\nOther applications also exhibit this behavior when used in fullscreen mode. Another example\r\nis chromium (enter fullscreen mode with F11).\r\n\r\n# Commit hash\r\n\r\nThis behavior manifests itself with current git, master branch. I bisected the issue,\r\nand the last good commit that does not exhibit this behavior is: 52d787944ebb669d5e90d7d372648e539034c271.\r\n\r\n# Stack traces\r\nNo stack traces were thrown.\n", "before_files": [{"content": "# Copyright (c) 2010 matt\n# Copyright (c) 2010-2011 Paul Colomiets\n# Copyright (c) 2011 Mounier Florian\n# Copyright (c) 2012 Craig Barnes\n# Copyright (c) 2012, 2014-2015 Tycho Andersen\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2013 Julien Iguchi-Cartigny\n# Copyright (c) 2014 ramnes\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 dequis\n# Copyright (c) 2018 Nazar Mokrynskyi\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom libqtile.layout.base import Layout\n\nDEFAULT_FLOAT_WM_TYPES = set([\n 'utility',\n 'notification',\n 'toolbar',\n 'splash',\n 'dialog',\n])\n\nDEFAULT_FLOAT_RULES = [\n {\"role\": \"About\"},\n {\"wmclass\": \"file_progress\"},\n]\n\n\nclass Floating(Layout):\n \"\"\"\n Floating layout, which does nothing with windows but handles focus order\n \"\"\"\n defaults = [\n (\"border_focus\", \"#0000ff\", \"Border colour for the focused window.\"),\n (\"border_normal\", \"#000000\", \"Border colour for un-focused windows.\"),\n (\"border_width\", 1, \"Border width.\"),\n (\"max_border_width\", 0, \"Border width for maximize.\"),\n (\"fullscreen_border_width\", 0, \"Border width for fullscreen.\"),\n (\"name\", \"floating\", \"Name of this layout.\"),\n (\n \"auto_float_types\",\n DEFAULT_FLOAT_WM_TYPES,\n \"default wm types to automatically float\"\n ),\n ]\n\n def __init__(self, float_rules=None, no_reposition_match=None, **config):\n \"\"\"\n If you have certain apps that you always want to float you can provide\n ``float_rules`` to do so. ``float_rules`` is a list of\n dictionaries containing some or all of the keys::\n\n {'wname': WM_NAME, 'wmclass': WM_CLASS, 'role': WM_WINDOW_ROLE}\n\n The keys must be specified as above. You only need one, but\n you need to provide the value for it. When a new window is\n opened it's ``match`` method is called with each of these\n rules. If one matches, the window will float. The following\n will float gimp and skype::\n\n float_rules=[dict(wmclass=\"skype\"), dict(wmclass=\"gimp\")]\n\n Specify these in the ``floating_layout`` in your config.\n\n Floating layout will try to center most of floating windows by default,\n but if you don't want this to happen for certain windows that are\n centered by mistake, you can use ``no_reposition_match`` option to\n specify them and layout will rely on windows to position themselves in\n correct location on the screen.\n \"\"\"\n Layout.__init__(self, **config)\n self.clients = []\n self.focused = None\n self.group = None\n self.float_rules = float_rules or DEFAULT_FLOAT_RULES\n self.no_reposition_match = no_reposition_match\n self.add_defaults(Floating.defaults)\n\n def match(self, win):\n \"\"\"Used to default float some windows\"\"\"\n if win.window.get_wm_type() in self.auto_float_types:\n return True\n for rule_dict in self.float_rules:\n if win.match(**rule_dict):\n return True\n return False\n\n def find_clients(self, group):\n \"\"\"Find all clients belonging to a given group\"\"\"\n return [c for c in self.clients if c.group is group]\n\n def to_screen(self, group, new_screen):\n \"\"\"Adjust offsets of clients within current screen\"\"\"\n for win in self.find_clients(group):\n if win.maximized:\n win.maximized = True\n elif win.fullscreen:\n win.fullscreen = True\n else:\n # catch if the client hasn't been configured\n try:\n # By default, place window at same offset from top corner\n new_x = new_screen.x + win.float_x\n new_y = new_screen.y + win.float_y\n except AttributeError:\n # this will be handled in .configure()\n pass\n else:\n # make sure window isn't off screen left/right...\n new_x = min(new_x, new_screen.x + new_screen.width - win.width)\n new_x = max(new_x, new_screen.x)\n # and up/down\n new_y = min(new_y, 
new_screen.y + new_screen.height - win.height)\n new_y = max(new_y, new_screen.y)\n\n win.x = new_x\n win.y = new_y\n win.group = new_screen.group\n\n def focus_first(self, group=None):\n if group is None:\n clients = self.clients\n else:\n clients = self.find_clients(group)\n\n if clients:\n return clients[0]\n\n def focus_next(self, win):\n if win not in self.clients or win.group is None:\n return\n\n clients = self.find_clients(win.group)\n idx = clients.index(win)\n if len(clients) > idx + 1:\n return clients[idx + 1]\n\n def focus_last(self, group=None):\n if group is None:\n clients = self.clients\n else:\n clients = self.find_clients(group)\n\n if clients:\n return clients[-1]\n\n def focus_previous(self, win):\n if win not in self.clients or win.group is None:\n return\n\n clients = self.find_clients(win.group)\n idx = clients.index(win)\n if idx > 0:\n return clients[idx - 1]\n\n def focus(self, client):\n self.focused = client\n\n def blur(self):\n self.focused = None\n\n def compute_client_position(self, client, screen):\n \"\"\" recompute client.x and client.y, returning whether or not to place\n this client above other windows or not \"\"\"\n above = False\n transient_for = client.window.get_wm_transient_for()\n win = client.group.qtile.windows_map.get(transient_for)\n if win is not None:\n # if transient for a window, place in the center of the window\n center_x = win.x + win.width / 2\n center_y = win.y + win.height / 2\n else:\n center_x = screen.x + screen.width / 2\n center_y = screen.y + screen.height / 2\n above = True\n\n x = center_x - client.width / 2\n y = center_y - client.height / 2\n\n # don't go off the right...\n x = min(x, screen.x + screen.width)\n # or left...\n x = max(x, screen.x)\n # or bottom...\n y = min(y, screen.y + screen.height)\n # or top\n y = max(y, screen.y)\n\n client.x = int(round(x))\n client.y = int(round(y))\n return above\n\n def configure(self, client, screen):\n # After this, the client will be mapped. Either this will do it, or the\n # client has already done it.\n client.hidden = False\n\n # 'sun-awt-X11-XWindowPeer' is a dropdown used in Java application,\n # don't reposition it anywhere, let Java app to control it\n cls = client.window.get_wm_class() or ''\n is_java_dropdown = 'sun-awt-X11-XWindowPeer' in cls\n if is_java_dropdown:\n return\n\n # similar to above but the X11 version, the client may have already\n # placed itself. 
let's respect that\n if client.has_user_set_position():\n return\n\n # ok, it's not java and the window itself didn't position it, but users\n # may still have asked us not to mess with it\n if self.no_reposition_match is not None and self.no_reposition_match.compare(client):\n return\n\n if client.has_focus:\n bc = client.group.qtile.color_pixel(self.border_focus)\n else:\n bc = client.group.qtile.color_pixel(self.border_normal)\n if client.maximized:\n bw = self.max_border_width\n elif client.fullscreen:\n bw = self.fullscreen_border_width\n else:\n bw = self.border_width\n above = False\n\n # We definitely have a screen here, so let's be sure we'll float on screen\n try:\n client.float_x\n client.float_y\n except AttributeError:\n # this window hasn't been placed before, let's put it in a sensible spot\n above = self.compute_client_position(client, screen)\n\n client.place(\n client.x,\n client.y,\n client.width,\n client.height,\n bw,\n bc,\n above,\n )\n client.unhide()\n\n def add(self, client):\n self.clients.append(client)\n self.focused = client\n\n def remove(self, client):\n if client not in self.clients:\n return\n\n next_focus = self.focus_next(client)\n if client is self.focused:\n self.blur()\n self.clients.remove(client)\n return next_focus\n\n def info(self):\n d = Layout.info(self)\n d[\"clients\"] = [c.name for c in self.clients]\n return d\n\n def cmd_next(self):\n # This can't ever be called, but implement the abstract method\n pass\n\n def cmd_previous(self):\n # This can't ever be called, but implement the abstract method\n pass\n", "path": "libqtile/layout/floating.py"}]}
| 3,855 | 304 |
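The qtile patch in this record is essentially a control-flow fix: every early-return path out of `configure()` must still map the window, instead of relying on a `hidden` flag cleared at the top of the method. A stripped-down sketch of that pattern follows; the `Client` class here is a stand-in for illustration only, not qtile's real window object.

```python
class Client:
    """Minimal stand-in for a window object; only tracks mapped state."""

    def __init__(self, name, user_positioned=False):
        self.name = name
        self.user_positioned = user_positioned
        self.mapped = False

    def unhide(self):
        self.mapped = True

    def place(self):
        # In the real layout this sets geometry and borders; here it is a no-op.
        pass


def configure(client):
    # Each early-return path must unhide the client itself; otherwise a
    # window the layout declines to reposition never gets mapped again
    # after a group switch -- the symptom described in the issue above.
    if client.user_positioned:
        client.unhide()
        return
    client.place()
    client.unhide()


win = Client("fullscreen-app", user_positioned=True)
configure(win)
print(win.mapped)  # True: the early return still maps the window
```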
gh_patches_debug_35244
|
rasdani/github-patches
|
git_diff
|
Zeroto521__my-data-toolkit-567
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
EHN: New accessor `fillna_regresssion`
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [ ] closes #xxxx
- [x] whatsnew entry
Fill na value with regression method
</issue>
<code>
[start of dtoolkit/accessor/dataframe/fillna_regression.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 import pandas as pd
6
7 from dtoolkit._typing import IntOrStr
8 from dtoolkit.accessor.register import register_dataframe_method
9
10 if TYPE_CHECKING:
11 from sklearn.base import RegressorMixin
12
13
14 @register_dataframe_method
15 def fillna_regression(
16 df: pd.DataFrame,
17 method: RegressorMixin,
18 X: IntOrStr | list[IntOrStr] | pd.Index,
19 y: IntOrStr,
20 how: str = "na",
21 **kwargs,
22 ) -> pd.DataFrame:
23 """
24 Fill na value with regression algorithm.
25
26 Parameters
27 ----------
28 method : RegressorMixin
29 Regression transformer.
30
31 X : int or str, list of int or str, Index
32 Feature columns.
33
34 y : int or str
35 Target column.
36
37 how : {'na', 'all'}, default 'na'
38 Only fill na value or apply regression to entire target column.
39
40 **kwargs
41 See the documentation for ``method`` for complete details on
42 the keyword arguments.
43
44 See Also
45 --------
46 sklearn.kernel_ridge
47 sklearn.linear_model
48 sklearn.dummy.DummyRegressor
49 sklearn.ensemble.AdaBoostRegressor
50 sklearn.ensemble.BaggingRegressor
51 sklearn.ensemble.ExtraTreesRegressor
52 sklearn.ensemble.GradientBoostingRegressor
53 sklearn.ensemble.RandomForestRegressor
54 sklearn.ensemble.StackingRegressor
55 sklearn.ensemble.VotingRegressor
56 sklearn.ensemble.HistGradientBoostingRegressor
57 sklearn.gaussian_process.GaussianProcessRegressor
58 sklearn.isotonic.IsotonicRegression
59 sklearn.kernel_ridge.KernelRidge
60 sklearn.neighbors.KNeighborsRegressor
61 sklearn.neighbors.RadiusNeighborsRegressor
62 sklearn.neural_network.MLPRegressor
63 sklearn.svm.LinearSVR
64 sklearn.svm.NuSVR
65 sklearn.svm.SVR
66 sklearn.tree.DecisionTreeRegressor
67 sklearn.tree.ExtraTreeRegressor
68
69 Examples
70 --------
71 >>> import dtoolkit.accessor
72 >>> import pandas as pd
73 >>> from sklearn.linear_model import LinearRegression
74
75 .. math:: y = 1 \\times x_0 + 2 \\times x_1 + 3
76
77 >>> df = pd.DataFrame(
78 ... [
79 ... [1, 1, 6],
80 ... [1, 2, 8],
81 ... [2, 2, 9],
82 ... [2, 3, 11],
83 ... [3, 5, None],
84 ... ],
85 ... columns=['x1', 'x2', 'y'],
86 ... )
87 >>> df
88 x1 x2 y
89 0 1 1 6.0
90 1 1 2 8.0
91 2 2 2 9.0
92 3 2 3 11.0
93 4 3 5 NaN
94
95 Use 'x1' and 'x2' columns to fit 'y' column and fill the value.
96
97 >>> df.fillna_regression(LinearRegression, ['x1', 'x2'], 'y')
98 x1 x2 y
99 0 1 1 6.0
100 1 1 2 8.0
101 2 2 2 9.0
102 3 2 3 11.0
103 4 3 5 16.0
104 """
105
106 if how not in {"na", "all"}:
107 raise ValueError(f"invalid inf option: {how!r}")
108
109 if isinstance(X, (str, int)):
110 X = [X]
111
112 index_notnull = df[df[y].notnull()].index
113 model = method(**kwargs).fit(
114 df.loc[index_notnull, X],
115 df.loc[index_notnull, y],
116 )
117
118 if how == "all":
119 df[y] = model.predict(df[X])
120 elif how == "na":
121 index_null = df[df[y].isnull()].index
122 df.loc[index_null, y] = model.predict(df.loc[index_null, X])
123
124 return df
125
[end of dtoolkit/accessor/dataframe/fillna_regression.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dtoolkit/accessor/dataframe/fillna_regression.py b/dtoolkit/accessor/dataframe/fillna_regression.py
--- a/dtoolkit/accessor/dataframe/fillna_regression.py
+++ b/dtoolkit/accessor/dataframe/fillna_regression.py
@@ -15,8 +15,7 @@
def fillna_regression(
df: pd.DataFrame,
method: RegressorMixin,
- X: IntOrStr | list[IntOrStr] | pd.Index,
- y: IntOrStr,
+ columns: dict[IntOrStr, IntOrStr | list[IntOrStr] | pd.Index],
how: str = "na",
**kwargs,
) -> pd.DataFrame:
@@ -28,11 +27,9 @@
method : RegressorMixin
Regression transformer.
- X : int or str, list of int or str, Index
- Feature columns.
-
- y : int or str
- Target column.
+ columns : dict, ``{y: X}``
+ A series of column names pairs. The key is the y (or target) column name, and
+ values are X (or feature) column names.
how : {'na', 'all'}, default 'na'
Only fill na value or apply regression to entire target column.
@@ -41,6 +38,10 @@
See the documentation for ``method`` for complete details on
the keyword arguments.
+ Returns
+ -------
+ DataFrame
+
See Also
--------
sklearn.kernel_ridge
@@ -94,7 +95,7 @@
Use 'x1' and 'x2' columns to fit 'y' column and fill the value.
- >>> df.fillna_regression(LinearRegression, ['x1', 'x2'], 'y')
+ >>> df.fillna_regression(LinearRegression, {'y': ['x1', 'x2']})
x1 x2 y
0 1 1 6.0
1 1 2 8.0
@@ -106,6 +107,22 @@
if how not in {"na", "all"}:
raise ValueError(f"invalid inf option: {how!r}")
+ for y, X in columns.items():
+ df = _fillna_regression(df, method, y, X, how=how, **kwargs)
+
+ return df
+
+
+def _fillna_regression(
+ df: pd.DataFrame,
+ method: RegressorMixin,
+ y: IntOrStr,
+ X: IntOrStr | list[IntOrStr] | pd.Index,
+ how: str = "na",
+ **kwargs,
+):
+ """Fill single na column at once."""
+
if isinstance(X, (str, int)):
X = [X]
|
{"golden_diff": "diff --git a/dtoolkit/accessor/dataframe/fillna_regression.py b/dtoolkit/accessor/dataframe/fillna_regression.py\n--- a/dtoolkit/accessor/dataframe/fillna_regression.py\n+++ b/dtoolkit/accessor/dataframe/fillna_regression.py\n@@ -15,8 +15,7 @@\n def fillna_regression(\n df: pd.DataFrame,\n method: RegressorMixin,\n- X: IntOrStr | list[IntOrStr] | pd.Index,\n- y: IntOrStr,\n+ columns: dict[IntOrStr, IntOrStr | list[IntOrStr] | pd.Index],\n how: str = \"na\",\n **kwargs,\n ) -> pd.DataFrame:\n@@ -28,11 +27,9 @@\n method : RegressorMixin\n Regression transformer.\n \n- X : int or str, list of int or str, Index\n- Feature columns.\n-\n- y : int or str\n- Target column.\n+ columns : dict, ``{y: X}``\n+ A series of column names pairs. The key is the y (or target) column name, and\n+ values are X (or feature) column names.\n \n how : {'na', 'all'}, default 'na'\n Only fill na value or apply regression to entire target column.\n@@ -41,6 +38,10 @@\n See the documentation for ``method`` for complete details on\n the keyword arguments.\n \n+ Returns\n+ -------\n+ DataFrame\n+\n See Also\n --------\n sklearn.kernel_ridge\n@@ -94,7 +95,7 @@\n \n Use 'x1' and 'x2' columns to fit 'y' column and fill the value.\n \n- >>> df.fillna_regression(LinearRegression, ['x1', 'x2'], 'y')\n+ >>> df.fillna_regression(LinearRegression, {'y': ['x1', 'x2']})\n x1 x2 y\n 0 1 1 6.0\n 1 1 2 8.0\n@@ -106,6 +107,22 @@\n if how not in {\"na\", \"all\"}:\n raise ValueError(f\"invalid inf option: {how!r}\")\n \n+ for y, X in columns.items():\n+ df = _fillna_regression(df, method, y, X, how=how, **kwargs)\n+\n+ return df\n+\n+\n+def _fillna_regression(\n+ df: pd.DataFrame,\n+ method: RegressorMixin,\n+ y: IntOrStr,\n+ X: IntOrStr | list[IntOrStr] | pd.Index,\n+ how: str = \"na\",\n+ **kwargs,\n+):\n+ \"\"\"Fill single na column at once.\"\"\"\n+\n if isinstance(X, (str, int)):\n X = [X]\n", "issue": "EHN: New accessor `fillna_regresssion`\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [ ] closes #xxxx\r\n- [x] whatsnew entry\r\n\r\nFill na value with regression method\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nimport pandas as pd\n\nfrom dtoolkit._typing import IntOrStr\nfrom dtoolkit.accessor.register import register_dataframe_method\n\nif TYPE_CHECKING:\n from sklearn.base import RegressorMixin\n\n\n@register_dataframe_method\ndef fillna_regression(\n df: pd.DataFrame,\n method: RegressorMixin,\n X: IntOrStr | list[IntOrStr] | pd.Index,\n y: IntOrStr,\n how: str = \"na\",\n **kwargs,\n) -> pd.DataFrame:\n \"\"\"\n Fill na value with regression algorithm.\n\n Parameters\n ----------\n method : RegressorMixin\n Regression transformer.\n\n X : int or str, list of int or str, Index\n Feature columns.\n\n y : int or str\n Target column.\n\n how : {'na', 'all'}, default 'na'\n Only fill na value or apply 
regression to entire target column.\n\n **kwargs\n See the documentation for ``method`` for complete details on\n the keyword arguments.\n\n See Also\n --------\n sklearn.kernel_ridge\n sklearn.linear_model\n sklearn.dummy.DummyRegressor\n sklearn.ensemble.AdaBoostRegressor\n sklearn.ensemble.BaggingRegressor\n sklearn.ensemble.ExtraTreesRegressor\n sklearn.ensemble.GradientBoostingRegressor\n sklearn.ensemble.RandomForestRegressor\n sklearn.ensemble.StackingRegressor\n sklearn.ensemble.VotingRegressor\n sklearn.ensemble.HistGradientBoostingRegressor\n sklearn.gaussian_process.GaussianProcessRegressor\n sklearn.isotonic.IsotonicRegression\n sklearn.kernel_ridge.KernelRidge\n sklearn.neighbors.KNeighborsRegressor\n sklearn.neighbors.RadiusNeighborsRegressor\n sklearn.neural_network.MLPRegressor\n sklearn.svm.LinearSVR\n sklearn.svm.NuSVR\n sklearn.svm.SVR\n sklearn.tree.DecisionTreeRegressor\n sklearn.tree.ExtraTreeRegressor\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> from sklearn.linear_model import LinearRegression\n\n .. math:: y = 1 \\\\times x_0 + 2 \\\\times x_1 + 3\n\n >>> df = pd.DataFrame(\n ... [\n ... [1, 1, 6],\n ... [1, 2, 8],\n ... [2, 2, 9],\n ... [2, 3, 11],\n ... [3, 5, None],\n ... ],\n ... columns=['x1', 'x2', 'y'],\n ... )\n >>> df\n x1 x2 y\n 0 1 1 6.0\n 1 1 2 8.0\n 2 2 2 9.0\n 3 2 3 11.0\n 4 3 5 NaN\n\n Use 'x1' and 'x2' columns to fit 'y' column and fill the value.\n\n >>> df.fillna_regression(LinearRegression, ['x1', 'x2'], 'y')\n x1 x2 y\n 0 1 1 6.0\n 1 1 2 8.0\n 2 2 2 9.0\n 3 2 3 11.0\n 4 3 5 16.0\n \"\"\"\n\n if how not in {\"na\", \"all\"}:\n raise ValueError(f\"invalid inf option: {how!r}\")\n\n if isinstance(X, (str, int)):\n X = [X]\n\n index_notnull = df[df[y].notnull()].index\n model = method(**kwargs).fit(\n df.loc[index_notnull, X],\n df.loc[index_notnull, y],\n )\n\n if how == \"all\":\n df[y] = model.predict(df[X])\n elif how == \"na\":\n index_null = df[df[y].isnull()].index\n df.loc[index_null, y] = model.predict(df.loc[index_null, X])\n\n return df\n", "path": "dtoolkit/accessor/dataframe/fillna_regression.py"}]}
| 1,964 | 640 |
gh_patches_debug_33389
|
rasdani/github-patches
|
git_diff
|
ansible__molecule-3605
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No doc for #3467
The new feature added in #3467 is not documented in the docs.
The new feature:
> provide[s] alternative location for
> tests or side effects playbook in test sequence like this:
>
> ```
> test_sequence:
> - create
> - converge
> - verify
> - side_effect side_effect1.yml
> - verify tests_for_effect1/
> - side_effect side_effect2.yml
> - verify tests_for_effect2/
> ```
</issue>
<code>
[start of src/molecule/scenario.py]
1 # Copyright (c) 2015-2018 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20 """Molecule Scenario Module."""
21
22 import errno
23 import fcntl
24 import fnmatch
25 import logging
26 import os
27 import shutil
28 from pathlib import Path
29 from time import sleep
30 from typing import Optional
31
32 from molecule import scenarios, util
33 from molecule.constants import RC_TIMEOUT
34
35 LOG = logging.getLogger(__name__)
36
37
38 class Scenario(object):
39 """
40 A scenario allows Molecule test a role in a particular way, this is a \
41 fundamental change from Molecule v1.
42
43 A scenario is a self-contained directory containing everything necessary
44 for testing the role in a particular way. The default scenario is named
45 ``default``, and every role should contain a default scenario.
46
47 Unless mentioned explicitly, the scenario name will be the directory name
48 hosting the files.
49
50 Any option set in this section will override the defaults.
51
52 .. code-block:: yaml
53
54 scenario:
55 create_sequence:
56 - dependency
57 - create
58 - prepare
59 check_sequence:
60 - dependency
61 - cleanup
62 - destroy
63 - create
64 - prepare
65 - converge
66 - check
67 - destroy
68 converge_sequence:
69 - dependency
70 - create
71 - prepare
72 - converge
73 destroy_sequence:
74 - dependency
75 - cleanup
76 - destroy
77 test_sequence:
78 - dependency
79 - lint
80 - cleanup
81 - destroy
82 - syntax
83 - create
84 - prepare
85 - converge
86 - idempotence
87 - side_effect
88 - verify
89 - cleanup
90 - destroy
91 """ # noqa
92
93 def __init__(self, config):
94 """
95 Initialize a new scenario class and returns None.
96
97 :param config: An instance of a Molecule config.
98 :return: None
99 """
100 self._lock = None
101 self.config = config
102 self._setup()
103
104 def _remove_scenario_state_directory(self):
105 """Remove scenario cached disk stored state.
106
107 :return: None
108 """
109 directory = str(Path(self.ephemeral_directory).parent)
110 LOG.info("Removing %s", directory)
111 shutil.rmtree(directory)
112
113 def prune(self):
114 """
115 Prune the scenario ephemeral directory files and returns None.
116
117 "safe files" will not be pruned, including the ansible configuration
118 and inventory used by this scenario, the scenario state file, and
119 files declared as "safe_files" in the ``driver`` configuration
120 declared in ``molecule.yml``.
121
122 :return: None
123 """
124 LOG.info("Pruning extra files from scenario ephemeral directory")
125 safe_files = [
126 self.config.provisioner.config_file,
127 self.config.provisioner.inventory_file,
128 self.config.state.state_file,
129 ] + self.config.driver.safe_files
130 files = util.os_walk(self.ephemeral_directory, "*")
131 for f in files:
132 if not any(sf for sf in safe_files if fnmatch.fnmatch(f, sf)):
133 try:
134 os.remove(f)
135 except OSError as e:
136 if e.errno != errno.ENOENT:
137 raise
138
139 # Remove empty directories.
140 for dirpath, dirs, files in os.walk(self.ephemeral_directory, topdown=False):
141 if not dirs and not files:
142 os.removedirs(dirpath)
143
144 @property
145 def name(self):
146 return self.config.config["scenario"]["name"]
147
148 @property
149 def directory(self):
150 if self.config.molecule_file:
151 return os.path.dirname(self.config.molecule_file)
152 else:
153 return os.getcwd()
154
155 @property
156 def ephemeral_directory(self):
157 path = os.getenv("MOLECULE_EPHEMERAL_DIRECTORY", None)
158 if not path:
159
160 project_directory = os.path.basename(self.config.project_directory)
161
162 if self.config.is_parallel:
163 project_directory = f"{project_directory}-{self.config._run_uuid}"
164
165 project_scenario_directory = os.path.join(
166 self.config.cache_directory, project_directory, self.name
167 )
168
169 path = ephemeral_directory(project_scenario_directory)
170
171 if os.environ.get("MOLECULE_PARALLEL", False) and not self._lock:
172 with open(os.path.join(path, ".lock"), "w") as self._lock:
173 for i in range(1, 5):
174 try:
175 fcntl.lockf(self._lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
176 break
177 except OSError:
178 delay = 30 * i
179 LOG.warning(
180 "Retrying to acquire lock on %s, waiting for %s seconds",
181 path,
182 delay,
183 )
184 sleep(delay)
185 else:
186 LOG.warning("Timedout trying to acquire lock on %s", path)
187 raise SystemExit(RC_TIMEOUT)
188
189 return path
190
191 @property
192 def inventory_directory(self):
193 return os.path.join(self.ephemeral_directory, "inventory")
194
195 @property
196 def check_sequence(self):
197 return self.config.config["scenario"]["check_sequence"]
198
199 @property
200 def cleanup_sequence(self):
201 return self.config.config["scenario"]["cleanup_sequence"]
202
203 @property
204 def converge_sequence(self):
205 return self.config.config["scenario"]["converge_sequence"]
206
207 @property
208 def create_sequence(self):
209 return self.config.config["scenario"]["create_sequence"]
210
211 @property
212 def dependency_sequence(self):
213 return ["dependency"]
214
215 @property
216 def destroy_sequence(self):
217 return self.config.config["scenario"]["destroy_sequence"]
218
219 @property
220 def idempotence_sequence(self):
221 return ["idempotence"]
222
223 @property
224 def lint_sequence(self):
225 # see https://github.com/ansible-community/molecule/issues/2216
226 return ["dependency", "lint"]
227
228 @property
229 def prepare_sequence(self):
230 return ["prepare"]
231
232 @property
233 def side_effect_sequence(self):
234 return ["side_effect"]
235
236 @property
237 def syntax_sequence(self):
238 return ["syntax"]
239
240 @property
241 def test_sequence(self):
242 return self.config.config["scenario"]["test_sequence"]
243
244 @property
245 def verify_sequence(self):
246 return ["verify"]
247
248 @property
249 def sequence(self):
250 """
251 Select the sequence based on scenario and subcommand of the provided \
252 scenario object and returns a list.
253
254 :param scenario: A scenario object.
255 :param skipped: An optional bool to include skipped scenarios.
256 :return: list
257 """
258 s = scenarios.Scenarios([self.config])
259 matrix = s._get_matrix()
260
261 try:
262 return matrix[self.name][self.config.subcommand]
263 except KeyError:
264 # TODO(retr0h): May change this handling in the future.
265 return []
266
267 def _setup(self):
268 """
269 Prepare the scenario for Molecule and returns None.
270
271 :return: None
272 """
273 if not os.path.isdir(self.inventory_directory):
274 os.makedirs(self.inventory_directory, exist_ok=True)
275
276
277 def ephemeral_directory(path: Optional[str] = None) -> str:
278 """
279 Return temporary directory to be used by molecule.
280
281 Molecule users should not make any assumptions about its location,
282 permissions or its content as this may change in future release.
283 """
284 d = os.getenv("MOLECULE_EPHEMERAL_DIRECTORY")
285 if not d:
286 d = os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
287 if not d:
288 raise RuntimeError("Unable to determine ephemeral directory to use.")
289 d = os.path.abspath(os.path.join(d, path if path else "molecule"))
290
291 if not os.path.isdir(d):
292 os.umask(0o077)
293 Path(d).mkdir(mode=0o700, parents=True, exist_ok=True)
294
295 return d
296
[end of src/molecule/scenario.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/molecule/scenario.py b/src/molecule/scenario.py
--- a/src/molecule/scenario.py
+++ b/src/molecule/scenario.py
@@ -88,6 +88,70 @@
- verify
- cleanup
- destroy
+
+ :Advanced testing
+
+ If needed, Molecule can run multiple side effects and tests within a scenario.
+ This allows to perform advanced testing for stateful software under role/playbook
+ management. Actions `side_effect` and `verify` can take optional arguments to change
+ the playbook/test they execute.
+
+ Example of test sequence with multiple side effects and tests:
+
+ .. code-block:: yaml
+
+ test_sequence:
+ - converge
+ - side_effect reboot.yaml
+ - verify after_reboot/
+ - side_effect alter_configs.yaml
+ - converge
+ - verify other_test1.py other_test2.py
+ - side_effect
+ - verify
+
+ ``side_effect`` without an argument is executing the usual `side_effect` configured in
+ `provisioner.playbooks` section of molecule.yml.
+
+ ``side_effect`` can have one or more arguments (separated by spaces) which is
+ a playbook (plabyooks) to execute. If the argument for ``side_effect`` is present,
+ it's executed instead. The path to the playbook is relative to the molecule.yml location.
+ Normal side effect settings (from `provisioner.playbooks`) are ignored for action with
+ argument.
+
+ ``verify`` without an argument is executing usual tests configured in the verifier section
+ of molecule.yml.
+
+ If one or more arguments (separated by spaces) are present, each argument is treated
+ as a test name (file or directory) to pass to the verifier (either Ansible or Testinfra).
+ The kind of verifier is set in the `verifier` section of molecule.yml and is applied to all
+ `verify` actions in the scenario.
+
+ The path to tests is relative to the molecule.yml file location. The `additional_files_or_dirs`
+ setting for verifier is ignored if the `verify` action has an argument.
+
+ Multiple `side_effect` and `verify` actions can be used to a create a combination
+ of playbooks and tests, for example, for end-to-end playbook testing.
+
+ Additional `converge` and `idempotence` actions can be used multiple times:
+
+ .. code-block:: yaml
+
+ test_sequence:
+ - converge
+ - idempotence
+ - side_effect
+ - verify
+ - converge
+ - idempotence
+ - side_effect effect2.yml
+ - converge
+ - idempotence
+ - verify test2/
+ - side_effect effect3.yml
+ - verify test3/
+ - idempotence
+
""" # noqa
def __init__(self, config):
|
{"golden_diff": "diff --git a/src/molecule/scenario.py b/src/molecule/scenario.py\n--- a/src/molecule/scenario.py\n+++ b/src/molecule/scenario.py\n@@ -88,6 +88,70 @@\n - verify\n - cleanup\n - destroy\n+\n+ :Advanced testing\n+\n+ If needed, Molecule can run multiple side effects and tests within a scenario.\n+ This allows to perform advanced testing for stateful software under role/playbook\n+ management. Actions `side_effect` and `verify` can take optional arguments to change\n+ the playbook/test they execute.\n+\n+ Example of test sequence with multiple side effects and tests:\n+\n+ .. code-block:: yaml\n+\n+ test_sequence:\n+ - converge\n+ - side_effect reboot.yaml\n+ - verify after_reboot/\n+ - side_effect alter_configs.yaml\n+ - converge\n+ - verify other_test1.py other_test2.py\n+ - side_effect\n+ - verify\n+\n+ ``side_effect`` without an argument is executing the usual `side_effect` configured in\n+ `provisioner.playbooks` section of molecule.yml.\n+\n+ ``side_effect`` can have one or more arguments (separated by spaces) which is\n+ a playbook (plabyooks) to execute. If the argument for ``side_effect`` is present,\n+ it's executed instead. The path to the playbook is relative to the molecule.yml location.\n+ Normal side effect settings (from `provisioner.playbooks`) are ignored for action with\n+ argument.\n+\n+ ``verify`` without an argument is executing usual tests configured in the verifier section\n+ of molecule.yml.\n+\n+ If one or more arguments (separated by spaces) are present, each argument is treated\n+ as a test name (file or directory) to pass to the verifier (either Ansible or Testinfra).\n+ The kind of verifier is set in the `verifier` section of molecule.yml and is applied to all\n+ `verify` actions in the scenario.\n+\n+ The path to tests is relative to the molecule.yml file location. The `additional_files_or_dirs`\n+ setting for verifier is ignored if the `verify` action has an argument.\n+\n+ Multiple `side_effect` and `verify` actions can be used to a create a combination\n+ of playbooks and tests, for example, for end-to-end playbook testing.\n+\n+ Additional `converge` and `idempotence` actions can be used multiple times:\n+\n+ .. code-block:: yaml\n+\n+ test_sequence:\n+ - converge\n+ - idempotence\n+ - side_effect\n+ - verify\n+ - converge\n+ - idempotence\n+ - side_effect effect2.yml\n+ - converge\n+ - idempotence\n+ - verify test2/\n+ - side_effect effect3.yml\n+ - verify test3/\n+ - idempotence\n+\n \"\"\" # noqa\n \n def __init__(self, config):\n", "issue": "No doc for #3467\nThe new feature added in #3467 is not documented in the docs. 
\r\n\r\nThe new feature:\r\n\r\n> provide[s] alternative location for\r\n> tests or side effects playbook in test sequence like this:\r\n> \r\n> ```\r\n> test_sequence:\r\n> - create\r\n> - converge\r\n> - verify\r\n> - side_effect side_effect1.yml\r\n> - verify tests_for_effect1/\r\n> - side_effect side_effect2.yml\r\n> - verify tests_for_effect2/\r\n> ```\r\n\n", "before_files": [{"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\"\"\"Molecule Scenario Module.\"\"\"\n\nimport errno\nimport fcntl\nimport fnmatch\nimport logging\nimport os\nimport shutil\nfrom pathlib import Path\nfrom time import sleep\nfrom typing import Optional\n\nfrom molecule import scenarios, util\nfrom molecule.constants import RC_TIMEOUT\n\nLOG = logging.getLogger(__name__)\n\n\nclass Scenario(object):\n \"\"\"\n A scenario allows Molecule test a role in a particular way, this is a \\\n fundamental change from Molecule v1.\n\n A scenario is a self-contained directory containing everything necessary\n for testing the role in a particular way. The default scenario is named\n ``default``, and every role should contain a default scenario.\n\n Unless mentioned explicitly, the scenario name will be the directory name\n hosting the files.\n\n Any option set in this section will override the defaults.\n\n .. 
code-block:: yaml\n\n scenario:\n create_sequence:\n - dependency\n - create\n - prepare\n check_sequence:\n - dependency\n - cleanup\n - destroy\n - create\n - prepare\n - converge\n - check\n - destroy\n converge_sequence:\n - dependency\n - create\n - prepare\n - converge\n destroy_sequence:\n - dependency\n - cleanup\n - destroy\n test_sequence:\n - dependency\n - lint\n - cleanup\n - destroy\n - syntax\n - create\n - prepare\n - converge\n - idempotence\n - side_effect\n - verify\n - cleanup\n - destroy\n \"\"\" # noqa\n\n def __init__(self, config):\n \"\"\"\n Initialize a new scenario class and returns None.\n\n :param config: An instance of a Molecule config.\n :return: None\n \"\"\"\n self._lock = None\n self.config = config\n self._setup()\n\n def _remove_scenario_state_directory(self):\n \"\"\"Remove scenario cached disk stored state.\n\n :return: None\n \"\"\"\n directory = str(Path(self.ephemeral_directory).parent)\n LOG.info(\"Removing %s\", directory)\n shutil.rmtree(directory)\n\n def prune(self):\n \"\"\"\n Prune the scenario ephemeral directory files and returns None.\n\n \"safe files\" will not be pruned, including the ansible configuration\n and inventory used by this scenario, the scenario state file, and\n files declared as \"safe_files\" in the ``driver`` configuration\n declared in ``molecule.yml``.\n\n :return: None\n \"\"\"\n LOG.info(\"Pruning extra files from scenario ephemeral directory\")\n safe_files = [\n self.config.provisioner.config_file,\n self.config.provisioner.inventory_file,\n self.config.state.state_file,\n ] + self.config.driver.safe_files\n files = util.os_walk(self.ephemeral_directory, \"*\")\n for f in files:\n if not any(sf for sf in safe_files if fnmatch.fnmatch(f, sf)):\n try:\n os.remove(f)\n except OSError as e:\n if e.errno != errno.ENOENT:\n raise\n\n # Remove empty directories.\n for dirpath, dirs, files in os.walk(self.ephemeral_directory, topdown=False):\n if not dirs and not files:\n os.removedirs(dirpath)\n\n @property\n def name(self):\n return self.config.config[\"scenario\"][\"name\"]\n\n @property\n def directory(self):\n if self.config.molecule_file:\n return os.path.dirname(self.config.molecule_file)\n else:\n return os.getcwd()\n\n @property\n def ephemeral_directory(self):\n path = os.getenv(\"MOLECULE_EPHEMERAL_DIRECTORY\", None)\n if not path:\n\n project_directory = os.path.basename(self.config.project_directory)\n\n if self.config.is_parallel:\n project_directory = f\"{project_directory}-{self.config._run_uuid}\"\n\n project_scenario_directory = os.path.join(\n self.config.cache_directory, project_directory, self.name\n )\n\n path = ephemeral_directory(project_scenario_directory)\n\n if os.environ.get(\"MOLECULE_PARALLEL\", False) and not self._lock:\n with open(os.path.join(path, \".lock\"), \"w\") as self._lock:\n for i in range(1, 5):\n try:\n fcntl.lockf(self._lock, fcntl.LOCK_EX | fcntl.LOCK_NB)\n break\n except OSError:\n delay = 30 * i\n LOG.warning(\n \"Retrying to acquire lock on %s, waiting for %s seconds\",\n path,\n delay,\n )\n sleep(delay)\n else:\n LOG.warning(\"Timedout trying to acquire lock on %s\", path)\n raise SystemExit(RC_TIMEOUT)\n\n return path\n\n @property\n def inventory_directory(self):\n return os.path.join(self.ephemeral_directory, \"inventory\")\n\n @property\n def check_sequence(self):\n return self.config.config[\"scenario\"][\"check_sequence\"]\n\n @property\n def cleanup_sequence(self):\n return self.config.config[\"scenario\"][\"cleanup_sequence\"]\n\n @property\n def 
converge_sequence(self):\n return self.config.config[\"scenario\"][\"converge_sequence\"]\n\n @property\n def create_sequence(self):\n return self.config.config[\"scenario\"][\"create_sequence\"]\n\n @property\n def dependency_sequence(self):\n return [\"dependency\"]\n\n @property\n def destroy_sequence(self):\n return self.config.config[\"scenario\"][\"destroy_sequence\"]\n\n @property\n def idempotence_sequence(self):\n return [\"idempotence\"]\n\n @property\n def lint_sequence(self):\n # see https://github.com/ansible-community/molecule/issues/2216\n return [\"dependency\", \"lint\"]\n\n @property\n def prepare_sequence(self):\n return [\"prepare\"]\n\n @property\n def side_effect_sequence(self):\n return [\"side_effect\"]\n\n @property\n def syntax_sequence(self):\n return [\"syntax\"]\n\n @property\n def test_sequence(self):\n return self.config.config[\"scenario\"][\"test_sequence\"]\n\n @property\n def verify_sequence(self):\n return [\"verify\"]\n\n @property\n def sequence(self):\n \"\"\"\n Select the sequence based on scenario and subcommand of the provided \\\n scenario object and returns a list.\n\n :param scenario: A scenario object.\n :param skipped: An optional bool to include skipped scenarios.\n :return: list\n \"\"\"\n s = scenarios.Scenarios([self.config])\n matrix = s._get_matrix()\n\n try:\n return matrix[self.name][self.config.subcommand]\n except KeyError:\n # TODO(retr0h): May change this handling in the future.\n return []\n\n def _setup(self):\n \"\"\"\n Prepare the scenario for Molecule and returns None.\n\n :return: None\n \"\"\"\n if not os.path.isdir(self.inventory_directory):\n os.makedirs(self.inventory_directory, exist_ok=True)\n\n\ndef ephemeral_directory(path: Optional[str] = None) -> str:\n \"\"\"\n Return temporary directory to be used by molecule.\n\n Molecule users should not make any assumptions about its location,\n permissions or its content as this may change in future release.\n \"\"\"\n d = os.getenv(\"MOLECULE_EPHEMERAL_DIRECTORY\")\n if not d:\n d = os.getenv(\"XDG_CACHE_HOME\", os.path.expanduser(\"~/.cache\"))\n if not d:\n raise RuntimeError(\"Unable to determine ephemeral directory to use.\")\n d = os.path.abspath(os.path.join(d, path if path else \"molecule\"))\n\n if not os.path.isdir(d):\n os.umask(0o077)\n Path(d).mkdir(mode=0o700, parents=True, exist_ok=True)\n\n return d\n", "path": "src/molecule/scenario.py"}]}
| 3,375 | 669 |
gh_patches_debug_23247
|
rasdani/github-patches
|
git_diff
|
dmlc__dgl-1239
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HAN example error on version 0.4.2
I get the error below when running the HAN example on version 0.4.2; version 0.4.1 works, though.
AttributeError: '<class 'dgl.heterograph_index.HeteroGraphIndex'>' object has no attribute 'get_immutable_gidx'
</issue>
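A possible stop-gap for readers hitting the same trace, based only on the report above (the pin below is an assumption, not an upstream fix): stay on the release the reporter says still works until the regression is patched.

```python
# Hypothetical workaround: pin the last known-good release, e.g.
#   pip install "dgl==0.4.1"
import dgl

# Fail fast with a clear message instead of the AttributeError above.
assert dgl.__version__.startswith("0.4.1"), (
    "HAN example hits a regression on dgl 0.4.2; run it on 0.4.1 until the fix lands"
)
```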
<code>
[start of python/dgl/nn/pytorch/softmax.py]
1 """Torch modules for graph related softmax."""
2 # pylint: disable= no-member, arguments-differ
3 import torch as th
4
5 from ...function import TargetCode
6 from ...base import ALL, is_all
7 from ... import backend as F
8 from ... import utils
9
10 __all__ = ['edge_softmax']
11
12
13 class EdgeSoftmax(th.autograd.Function):
14 r"""Apply softmax over signals of incoming edges.
15
16 For a node :math:`i`, edgesoftmax is an operation of computing
17
18 .. math::
19 a_{ij} = \frac{\exp(z_{ij})}{\sum_{j\in\mathcal{N}(i)}\exp(z_{ij})}
20
21 where :math:`z_{ij}` is a signal of edge :math:`j\rightarrow i`, also
22 called logits in the context of softmax. :math:`\mathcal{N}(i)` is
23 the set of nodes that have an edge to :math:`i`.
24
25 An example of using edgesoftmax is in
26 `Graph Attention Network <https://arxiv.org/pdf/1710.10903.pdf>`__ where
27 the attention weights are computed with such an edgesoftmax operation.
28 """
29
30 @staticmethod
31 def forward(ctx, g, score, eids):
32 """Forward function.
33
34 Pseudo-code:
35
36 .. code:: python
37
38 score = dgl.EData(g, score)
39 score_max = score.dst_max() # of type dgl.NData
40 score = score - score_max # edge_sub_dst, ret dgl.EData
41 score_sum = score.dst_sum() # of type dgl.NData
42 out = score / score_sum # edge_div_dst, ret dgl.EData
43 return out.data
44 """
45 # remember to save the graph to backward cache before making it
46 # a local variable
47 if not is_all(eids):
48 g = g.edge_subgraph(eids.long())
49
50 n_nodes = g.number_of_nodes()
51 n_edges = g.number_of_edges()
52 gidx = g._graph.get_immutable_gidx(utils.to_dgl_context(score.device))
53 ctx.backward_cache = n_nodes, n_edges, gidx
54
55 #g.update_all(fn.copy_e('s', 'm'), fn.max('m', 'smax'))
56 smax = F.copy_reduce('max', gidx, TargetCode.EDGE, score, n_nodes)
57 #g.apply_edges(fn.e_sub_v('s', 'smax', 'out'))
58 out = F.binary_reduce(
59 'none', 'sub', gidx, TargetCode.EDGE, TargetCode.DST, score, smax, n_edges)
60 #g.edata['out'] = th.exp(g.edata['out'])
61 out = th.exp(out)
62 #g.update_all(fn.copy_e('out', 'm'), fn.sum('m', 'out_sum'))
63 out_sum = F.copy_reduce('sum', gidx, TargetCode.EDGE, out, n_nodes)
64 #g.apply_edges(fn.e_div_v('out', 'out_sum', 'out'))
65 out = F.binary_reduce(
66 'none', 'div', gidx, TargetCode.EDGE, TargetCode.DST, out, out_sum, n_edges)
67
68 ctx.save_for_backward(out)
69 return out
70
71 @staticmethod
72 def backward(ctx, grad_out):
73 """Backward function.
74
75 Pseudo-code:
76
77 .. code:: python
78
79 g, out = ctx.backward_cache
80 grad_out = dgl.EData(g, grad_out)
81 out = dgl.EData(g, out)
82 sds = out * grad_out # type dgl.EData
83 sds_sum = sds.dst_sum() # type dgl.NData
84 grad_score = sds - sds * sds_sum # multiple expressions
85 return grad_score.data
86 """
87 n_nodes, n_edges, gidx = ctx.backward_cache
88 out, = ctx.saved_tensors
89
90 #g.edata['grad_s'] = out * grad_out
91 grad_s = out * grad_out
92 #g.update_all(fn.copy_e('grad_s', 'm'), fn.sum('m', 'accum'))
93 accum = F.copy_reduce('sum', gidx, TargetCode.EDGE, grad_s, n_nodes)
94 #g.apply_edges(fn.e_mul_v('out', 'accum', 'out'))
95 out = F.binary_reduce(
96 'none', 'mul', gidx, TargetCode.EDGE, TargetCode.DST, out, accum, n_edges)
97 #grad_score = g.edata['grad_s'] - g.edata['out']
98 grad_score = grad_s - out
99
100 return None, grad_score, None
101
102
103 def edge_softmax(graph, logits, eids=ALL):
104 r"""Compute edge softmax.
105
106 For a node :math:`i`, edge softmax is an operation of computing
107
108 .. math::
109 a_{ij} = \frac{\exp(z_{ij})}{\sum_{j\in\mathcal{N}(i)}\exp(z_{ij})}
110
111 where :math:`z_{ij}` is a signal of edge :math:`j\rightarrow i`, also
112 called logits in the context of softmax. :math:`\mathcal{N}(i)` is
113 the set of nodes that have an edge to :math:`i`.
114
115 An example of using edge softmax is in
116 `Graph Attention Network <https://arxiv.org/pdf/1710.10903.pdf>`__ where
117 the attention weights are computed with such an edge softmax operation.
118
119 Parameters
120 ----------
121 graph : DGLGraph
122 The graph to perform edge softmax
123 logits : torch.Tensor
124 The input edge feature
125 eids : torch.Tensor or ALL, optional
126 Edges on which to apply edge softmax. If ALL, apply edge
127 softmax on all edges in the graph. Default: ALL.
128
129 Returns
130 -------
131 Tensor
132 Softmax value
133
134 Notes
135 -----
136 * Input shape: :math:`(E, *, 1)` where * means any number of
137 additional dimensions, :math:`E` equals the length of eids.
138 If eids is ALL, :math:`E` equals number of edges in the graph.
139 * Return shape: :math:`(E, *, 1)`
140
141 Examples
142 --------
143 >>> from dgl.nn.pytorch.softmax import edge_softmax
144 >>> import dgl
145 >>> import torch as th
146
147 Create a :code:`DGLGraph` object and initialize its edge features.
148
149 >>> g = dgl.DGLGraph()
150 >>> g.add_nodes(3)
151 >>> g.add_edges([0, 0, 0, 1, 1, 2], [0, 1, 2, 1, 2, 2])
152 >>> edata = th.ones(6, 1).float()
153 >>> edata
154 tensor([[1.],
155 [1.],
156 [1.],
157 [1.],
158 [1.],
159 [1.]])
160
161 Apply edge softmax on g:
162
163 >>> edge_softmax(g, edata)
164 tensor([[1.0000],
165 [0.5000],
166 [0.3333],
167 [0.5000],
168 [0.3333],
169 [0.3333]])
170
171 Apply edge softmax on first 4 edges of g:
172
173 >>> edge_softmax(g, edata[:4], th.Tensor([0,1,2,3]))
174 tensor([[1.0000],
175 [0.5000],
176 [1.0000],
177 [0.5000]])
178 """
179 return EdgeSoftmax.apply(graph, logits, eids)
180
[end of python/dgl/nn/pytorch/softmax.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/dgl/nn/pytorch/softmax.py b/python/dgl/nn/pytorch/softmax.py
--- a/python/dgl/nn/pytorch/softmax.py
+++ b/python/dgl/nn/pytorch/softmax.py
@@ -6,6 +6,8 @@
from ...base import ALL, is_all
from ... import backend as F
from ... import utils
+from ...graph import DGLGraph
+from ...heterograph import DGLHeteroGraph
__all__ = ['edge_softmax']
@@ -49,7 +51,19 @@
n_nodes = g.number_of_nodes()
n_edges = g.number_of_edges()
- gidx = g._graph.get_immutable_gidx(utils.to_dgl_context(score.device))
+
+ # TODO(BarclayII): this is a temporary fix of memory leakage in PyTorch
+ # in PR #1139. We should investigate further on what was actually happening
+ # when implementing EdgeSoftmax with message passing API instead of
+ # operators.
+ score_context = utils.to_dgl_context(score.device)
+ if isinstance(g, DGLGraph):
+ gidx = g._graph.get_immutable_gidx(score_context)
+ elif isinstance(g, DGLHeteroGraph):
+ assert g._graph.number_of_etypes() == 1, \
+ "EdgeSoftmax only support one edge type"
+ gidx = g._graph.get_unitgraph(0, score_context)
+
ctx.backward_cache = n_nodes, n_edges, gidx
#g.update_all(fn.copy_e('s', 'm'), fn.max('m', 'smax'))
|
{"golden_diff": "diff --git a/python/dgl/nn/pytorch/softmax.py b/python/dgl/nn/pytorch/softmax.py\n--- a/python/dgl/nn/pytorch/softmax.py\n+++ b/python/dgl/nn/pytorch/softmax.py\n@@ -6,6 +6,8 @@\n from ...base import ALL, is_all\n from ... import backend as F\n from ... import utils\n+from ...graph import DGLGraph\n+from ...heterograph import DGLHeteroGraph\n \n __all__ = ['edge_softmax']\n \n@@ -49,7 +51,19 @@\n \n n_nodes = g.number_of_nodes()\n n_edges = g.number_of_edges()\n- gidx = g._graph.get_immutable_gidx(utils.to_dgl_context(score.device))\n+\n+ # TODO(BarclayII): this is a temporary fix of memory leakage in PyTorch\n+ # in PR #1139. We should investigate further on what was actually happening\n+ # when implementing EdgeSoftmax with message passing API instead of\n+ # operators.\n+ score_context = utils.to_dgl_context(score.device)\n+ if isinstance(g, DGLGraph):\n+ gidx = g._graph.get_immutable_gidx(score_context)\n+ elif isinstance(g, DGLHeteroGraph):\n+ assert g._graph.number_of_etypes() == 1, \\\n+ \"EdgeSoftmax only support one edge type\"\n+ gidx = g._graph.get_unitgraph(0, score_context)\n+\n ctx.backward_cache = n_nodes, n_edges, gidx\n \n #g.update_all(fn.copy_e('s', 'm'), fn.max('m', 'smax'))\n", "issue": "HAN example error on version 0.4.2\nget the below error running HAN example on version 0.4.2, version 0.4.1 works though\r\n\r\nAttributeError: '<class 'dgl.heterograph_index.HeteroGraphIndex'>' object has no attribute 'get_immutable_gidx'\r\n\n", "before_files": [{"content": "\"\"\"Torch modules for graph related softmax.\"\"\"\n# pylint: disable= no-member, arguments-differ\nimport torch as th\n\nfrom ...function import TargetCode\nfrom ...base import ALL, is_all\nfrom ... import backend as F\nfrom ... import utils\n\n__all__ = ['edge_softmax']\n\n\nclass EdgeSoftmax(th.autograd.Function):\n r\"\"\"Apply softmax over signals of incoming edges.\n\n For a node :math:`i`, edgesoftmax is an operation of computing\n\n .. math::\n a_{ij} = \\frac{\\exp(z_{ij})}{\\sum_{j\\in\\mathcal{N}(i)}\\exp(z_{ij})}\n\n where :math:`z_{ij}` is a signal of edge :math:`j\\rightarrow i`, also\n called logits in the context of softmax. :math:`\\mathcal{N}(i)` is\n the set of nodes that have an edge to :math:`i`.\n\n An example of using edgesoftmax is in\n `Graph Attention Network <https://arxiv.org/pdf/1710.10903.pdf>`__ where\n the attention weights are computed with such an edgesoftmax operation.\n \"\"\"\n\n @staticmethod\n def forward(ctx, g, score, eids):\n \"\"\"Forward function.\n\n Pseudo-code:\n\n .. 
code:: python\n\n score = dgl.EData(g, score)\n score_max = score.dst_max() # of type dgl.NData\n score = score - score_max # edge_sub_dst, ret dgl.EData\n score_sum = score.dst_sum() # of type dgl.NData\n out = score / score_sum # edge_div_dst, ret dgl.EData\n return out.data\n \"\"\"\n # remember to save the graph to backward cache before making it\n # a local variable\n if not is_all(eids):\n g = g.edge_subgraph(eids.long())\n\n n_nodes = g.number_of_nodes()\n n_edges = g.number_of_edges()\n gidx = g._graph.get_immutable_gidx(utils.to_dgl_context(score.device))\n ctx.backward_cache = n_nodes, n_edges, gidx\n\n #g.update_all(fn.copy_e('s', 'm'), fn.max('m', 'smax'))\n smax = F.copy_reduce('max', gidx, TargetCode.EDGE, score, n_nodes)\n #g.apply_edges(fn.e_sub_v('s', 'smax', 'out'))\n out = F.binary_reduce(\n 'none', 'sub', gidx, TargetCode.EDGE, TargetCode.DST, score, smax, n_edges)\n #g.edata['out'] = th.exp(g.edata['out'])\n out = th.exp(out)\n #g.update_all(fn.copy_e('out', 'm'), fn.sum('m', 'out_sum'))\n out_sum = F.copy_reduce('sum', gidx, TargetCode.EDGE, out, n_nodes)\n #g.apply_edges(fn.e_div_v('out', 'out_sum', 'out'))\n out = F.binary_reduce(\n 'none', 'div', gidx, TargetCode.EDGE, TargetCode.DST, out, out_sum, n_edges)\n\n ctx.save_for_backward(out)\n return out\n\n @staticmethod\n def backward(ctx, grad_out):\n \"\"\"Backward function.\n\n Pseudo-code:\n\n .. code:: python\n\n g, out = ctx.backward_cache\n grad_out = dgl.EData(g, grad_out)\n out = dgl.EData(g, out)\n sds = out * grad_out # type dgl.EData\n sds_sum = sds.dst_sum() # type dgl.NData\n grad_score = sds - sds * sds_sum # multiple expressions\n return grad_score.data\n \"\"\"\n n_nodes, n_edges, gidx = ctx.backward_cache\n out, = ctx.saved_tensors\n\n #g.edata['grad_s'] = out * grad_out\n grad_s = out * grad_out\n #g.update_all(fn.copy_e('grad_s', 'm'), fn.sum('m', 'accum'))\n accum = F.copy_reduce('sum', gidx, TargetCode.EDGE, grad_s, n_nodes)\n #g.apply_edges(fn.e_mul_v('out', 'accum', 'out'))\n out = F.binary_reduce(\n 'none', 'mul', gidx, TargetCode.EDGE, TargetCode.DST, out, accum, n_edges)\n #grad_score = g.edata['grad_s'] - g.edata['out']\n grad_score = grad_s - out\n\n return None, grad_score, None\n\n\ndef edge_softmax(graph, logits, eids=ALL):\n r\"\"\"Compute edge softmax.\n\n For a node :math:`i`, edge softmax is an operation of computing\n\n .. math::\n a_{ij} = \\frac{\\exp(z_{ij})}{\\sum_{j\\in\\mathcal{N}(i)}\\exp(z_{ij})}\n\n where :math:`z_{ij}` is a signal of edge :math:`j\\rightarrow i`, also\n called logits in the context of softmax. :math:`\\mathcal{N}(i)` is\n the set of nodes that have an edge to :math:`i`.\n\n An example of using edge softmax is in\n `Graph Attention Network <https://arxiv.org/pdf/1710.10903.pdf>`__ where\n the attention weights are computed with such an edge softmax operation.\n\n Parameters\n ----------\n graph : DGLGraph\n The graph to perform edge softmax\n logits : torch.Tensor\n The input edge feature\n eids : torch.Tensor or ALL, optional\n Edges on which to apply edge softmax. If ALL, apply edge\n softmax on all edges in the graph. 
Default: ALL.\n\n Returns\n -------\n Tensor\n Softmax value\n\n Notes\n -----\n * Input shape: :math:`(E, *, 1)` where * means any number of\n additional dimensions, :math:`E` equals the length of eids.\n If eids is ALL, :math:`E` equals number of edges in the graph.\n * Return shape: :math:`(E, *, 1)`\n\n Examples\n --------\n >>> from dgl.nn.pytorch.softmax import edge_softmax\n >>> import dgl\n >>> import torch as th\n\n Create a :code:`DGLGraph` object and initialize its edge features.\n\n >>> g = dgl.DGLGraph()\n >>> g.add_nodes(3)\n >>> g.add_edges([0, 0, 0, 1, 1, 2], [0, 1, 2, 1, 2, 2])\n >>> edata = th.ones(6, 1).float()\n >>> edata\n tensor([[1.],\n [1.],\n [1.],\n [1.],\n [1.],\n [1.]])\n\n Apply edge softmax on g:\n\n >>> edge_softmax(g, edata)\n tensor([[1.0000],\n [0.5000],\n [0.3333],\n [0.5000],\n [0.3333],\n [0.3333]])\n\n Apply edge softmax on first 4 edges of g:\n\n >>> edge_softmax(g, edata[:4], th.Tensor([0,1,2,3]))\n tensor([[1.0000],\n [0.5000],\n [1.0000],\n [0.5000]])\n \"\"\"\n return EdgeSoftmax.apply(graph, logits, eids)\n", "path": "python/dgl/nn/pytorch/softmax.py"}]}
| 2,758 | 369 |
gh_patches_debug_18756
|
rasdani/github-patches
|
git_diff
|
Textualize__textual-3678
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mouse movements example does not work
When I run the example at https://textual.textualize.io/guide/input/#mouse-movements in gnome-terminal, all I can see is a black screen, and nothing I do (apart from CTRL+C) seems to have any effect.
```
textual diagnose
```
# Textual Diagnostics
## Versions
| Name | Value |
|---------|--------|
| Textual | 0.41.0 |
| Rich | 13.6.0 |
## Python
| Name | Value |
|----------------|---------------------------------------------------------------------------|
| Version | 3.11.2 |
| Implementation | CPython |
| Compiler | GCC 12.2.0 |
| Executable | /srv/home/porridge/.local/share/virtualenvs/reconcile-Vnvz65ja/bin/python |
## Operating System
| Name | Value |
|---------|-----------------------------------------------------|
| System | Linux |
| Release | 6.1.0-13-amd64 |
| Version | #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) |
## Terminal
| Name | Value |
|----------------------|----------------|
| Terminal Application | *Unknown* |
| TERM | xterm-256color |
| COLORTERM | truecolor |
| FORCE_COLOR | *Not set* |
| NO_COLOR | *Not set* |
## Rich Console options
| Name | Value |
|----------------|---------------------|
| size | width=87, height=27 |
| legacy_windows | False |
| min_width | 1 |
| max_width | 87 |
| is_terminal | True |
| encoding | utf-8 |
| max_height | 27 |
| justify | None |
| overflow | None |
| no_wrap | False |
| highlight | None |
| markup | None |
| height | None |
</issue>
<code>
[start of docs/examples/guide/input/mouse01.py]
1 from textual import events
2 from textual.app import App, ComposeResult
3 from textual.containers import Container
4 from textual.widgets import RichLog, Static
5
6
7 class PlayArea(Container):
8 def on_mount(self) -> None:
9 self.capture_mouse()
10
11 def on_mouse_move(self, event: events.MouseMove) -> None:
12 self.screen.query_one(RichLog).write(event)
13 self.query_one(Ball).offset = event.offset - (8, 2)
14
15
16 class Ball(Static):
17 pass
18
19
20 class MouseApp(App):
21 CSS_PATH = "mouse01.tcss"
22
23 def compose(self) -> ComposeResult:
24 yield RichLog()
25 yield PlayArea(Ball("Textual"))
26
27
28 if __name__ == "__main__":
29 app = MouseApp()
30 app.run()
31
[end of docs/examples/guide/input/mouse01.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/examples/guide/input/mouse01.py b/docs/examples/guide/input/mouse01.py
--- a/docs/examples/guide/input/mouse01.py
+++ b/docs/examples/guide/input/mouse01.py
@@ -1,18 +1,8 @@
from textual import events
from textual.app import App, ComposeResult
-from textual.containers import Container
from textual.widgets import RichLog, Static
-class PlayArea(Container):
- def on_mount(self) -> None:
- self.capture_mouse()
-
- def on_mouse_move(self, event: events.MouseMove) -> None:
- self.screen.query_one(RichLog).write(event)
- self.query_one(Ball).offset = event.offset - (8, 2)
-
-
class Ball(Static):
pass
@@ -22,7 +12,11 @@
def compose(self) -> ComposeResult:
yield RichLog()
- yield PlayArea(Ball("Textual"))
+ yield Ball("Textual")
+
+ def on_mouse_move(self, event: events.MouseMove) -> None:
+ self.screen.query_one(RichLog).write(event)
+ self.query_one(Ball).offset = event.screen_offset - (8, 2)
if __name__ == "__main__":
|
{"golden_diff": "diff --git a/docs/examples/guide/input/mouse01.py b/docs/examples/guide/input/mouse01.py\n--- a/docs/examples/guide/input/mouse01.py\n+++ b/docs/examples/guide/input/mouse01.py\n@@ -1,18 +1,8 @@\n from textual import events\n from textual.app import App, ComposeResult\n-from textual.containers import Container\n from textual.widgets import RichLog, Static\n \n \n-class PlayArea(Container):\n- def on_mount(self) -> None:\n- self.capture_mouse()\n-\n- def on_mouse_move(self, event: events.MouseMove) -> None:\n- self.screen.query_one(RichLog).write(event)\n- self.query_one(Ball).offset = event.offset - (8, 2)\n-\n-\n class Ball(Static):\n pass\n \n@@ -22,7 +12,11 @@\n \n def compose(self) -> ComposeResult:\n yield RichLog()\n- yield PlayArea(Ball(\"Textual\"))\n+ yield Ball(\"Textual\")\n+\n+ def on_mouse_move(self, event: events.MouseMove) -> None:\n+ self.screen.query_one(RichLog).write(event)\n+ self.query_one(Ball).offset = event.screen_offset - (8, 2)\n \n \n if __name__ == \"__main__\":\n", "issue": "Mouse movements example does not work\nWhen I run the example at https://textual.textualize.io/guide/input/#mouse-movements in gnome-terminal all I can see is black screen, and nothing I do (apart from CTRL+C) seems to have any effect.\r\n\r\n```\r\ntextual diagnose\r\n```\r\n\r\n# Textual Diagnostics\r\n\r\n## Versions\r\n\r\n| Name | Value |\r\n|---------|--------|\r\n| Textual | 0.41.0 |\r\n| Rich | 13.6.0 |\r\n\r\n## Python\r\n\r\n| Name | Value |\r\n|----------------|---------------------------------------------------------------------------|\r\n| Version | 3.11.2 |\r\n| Implementation | CPython |\r\n| Compiler | GCC 12.2.0 |\r\n| Executable | /srv/home/porridge/.local/share/virtualenvs/reconcile-Vnvz65ja/bin/python |\r\n\r\n## Operating System\r\n\r\n| Name | Value |\r\n|---------|-----------------------------------------------------|\r\n| System | Linux |\r\n| Release | 6.1.0-13-amd64 |\r\n| Version | #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) |\r\n\r\n## Terminal\r\n\r\n| Name | Value |\r\n|----------------------|----------------|\r\n| Terminal Application | *Unknown* |\r\n| TERM | xterm-256color |\r\n| COLORTERM | truecolor |\r\n| FORCE_COLOR | *Not set* |\r\n| NO_COLOR | *Not set* |\r\n\r\n## Rich Console options\r\n\r\n| Name | Value |\r\n|----------------|---------------------|\r\n| size | width=87, height=27 |\r\n| legacy_windows | False |\r\n| min_width | 1 |\r\n| max_width | 87 |\r\n| is_terminal | True |\r\n| encoding | utf-8 |\r\n| max_height | 27 |\r\n| justify | None |\r\n| overflow | None |\r\n| no_wrap | False |\r\n| highlight | None |\r\n| markup | None |\r\n| height | None |\r\n\r\n\r\n\n", "before_files": [{"content": "from textual import events\nfrom textual.app import App, ComposeResult\nfrom textual.containers import Container\nfrom textual.widgets import RichLog, Static\n\n\nclass PlayArea(Container):\n def on_mount(self) -> None:\n self.capture_mouse()\n\n def on_mouse_move(self, event: events.MouseMove) -> None:\n self.screen.query_one(RichLog).write(event)\n self.query_one(Ball).offset = event.offset - (8, 2)\n\n\nclass Ball(Static):\n pass\n\n\nclass MouseApp(App):\n CSS_PATH = \"mouse01.tcss\"\n\n def compose(self) -> ComposeResult:\n yield RichLog()\n yield PlayArea(Ball(\"Textual\"))\n\n\nif __name__ == \"__main__\":\n app = MouseApp()\n app.run()\n", "path": "docs/examples/guide/input/mouse01.py"}]}
| 1,240 | 284 |
gh_patches_debug_16171
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-1992
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken date handling in careeropportunity display date
## What kind of an issue is this?
- [x] Bug report
- [ ] Feature request
## What is the expected behaviour?
A careeropportunity should be displayed from the start date.
## What is the current behaviour?
If the start date is in the past, but sufficiently close to the current date, the careeropportunity is not displayed. Setting the start date to an earlier date fixes the problem.
<!-- if this is a bug report -->
## How do you reproduce this problem?
Set start date to the middle of the night the day before the current day.
</issue>
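A short sketch of the likely root cause (editorial illustration; it only reuses names already shown in the code below and in the accepted patch): a class-level queryset evaluates `timezone.now()` once, when Django imports the module, so the visibility window is frozen at process start. Moving the filter into `get_queryset()` recomputes the window on every request.

```python
from django.utils import timezone
from rest_framework import mixins, viewsets

from apps.careeropportunity.models import CareerOpportunity
from apps.careeropportunity.serializers import CareerSerializer


class CareerViewSet(viewsets.GenericViewSet, mixins.RetrieveModelMixin, mixins.ListModelMixin):
    serializer_class = CareerSerializer

    # Buggy variant: timezone.now() runs once at import time, so an opportunity
    # whose start falls after the last server (re)start never matches.
    # queryset = CareerOpportunity.objects.filter(
    #     start__lte=timezone.now(), end__gte=timezone.now())

    def get_queryset(self):
        # Re-evaluated per request, so newly started opportunities show up immediately.
        now = timezone.now()
        return CareerOpportunity.objects.filter(
            start__lte=now, end__gte=now,
        ).order_by('-featured', '-start')
```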
<code>
[start of apps/careeropportunity/views.py]
1 # -*- coding: utf-8 -*-
2
3 from django.shortcuts import render
4 from django.utils import timezone
5 # API v1
6 from rest_framework import mixins, viewsets
7 from rest_framework.pagination import PageNumberPagination
8 from rest_framework.permissions import AllowAny
9
10 from apps.careeropportunity.models import CareerOpportunity
11 from apps.careeropportunity.serializers import CareerSerializer
12
13
14 def index(request, id=None):
15 return render(request, 'careeropportunity/index.html')
16
17
18 class HundredItemsPaginator(PageNumberPagination):
19 page_size = 100
20
21
22 class CareerViewSet(viewsets.GenericViewSet, mixins.RetrieveModelMixin, mixins.ListModelMixin):
23 """
24 Viewset for Career serializer
25 """
26
27 queryset = CareerOpportunity.objects.filter(
28 start__lte=timezone.now(),
29 end__gte=timezone.now()
30 ).order_by('-featured', '-start')
31 serializer_class = CareerSerializer
32 permission_classes = (AllowAny,)
33 pagination_class = HundredItemsPaginator
34
[end of apps/careeropportunity/views.py]
[start of apps/careeropportunity/urls.py]
1 # -*- coding: utf-8 -*-
2
3 from django.conf.urls import url
4
5 from apps.api.utils import SharedAPIRootRouter
6 from apps.careeropportunity import views
7
8 urlpatterns = [
9 url(r'^$', views.index, name='careeropportunity_index'),
10 url(r'^(\d+)/$', views.index, name='careeropportunity_details'),
11 ]
12
13 # API v1
14 router = SharedAPIRootRouter()
15 router.register(r'career', views.CareerViewSet)
16
[end of apps/careeropportunity/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/careeropportunity/urls.py b/apps/careeropportunity/urls.py
--- a/apps/careeropportunity/urls.py
+++ b/apps/careeropportunity/urls.py
@@ -12,4 +12,4 @@
# API v1
router = SharedAPIRootRouter()
-router.register(r'career', views.CareerViewSet)
+router.register(r'career', views.CareerViewSet, base_name='careeropportunity')
diff --git a/apps/careeropportunity/views.py b/apps/careeropportunity/views.py
--- a/apps/careeropportunity/views.py
+++ b/apps/careeropportunity/views.py
@@ -24,10 +24,12 @@
Viewset for Career serializer
"""
- queryset = CareerOpportunity.objects.filter(
- start__lte=timezone.now(),
- end__gte=timezone.now()
- ).order_by('-featured', '-start')
serializer_class = CareerSerializer
permission_classes = (AllowAny,)
pagination_class = HundredItemsPaginator
+
+ def get_queryset(self, *args, **kwargs):
+ return CareerOpportunity.objects.filter(
+ start__lte=timezone.now(),
+ end__gte=timezone.now()
+ ).order_by('-featured', '-start')
|
{"golden_diff": "diff --git a/apps/careeropportunity/urls.py b/apps/careeropportunity/urls.py\n--- a/apps/careeropportunity/urls.py\n+++ b/apps/careeropportunity/urls.py\n@@ -12,4 +12,4 @@\n \n # API v1\n router = SharedAPIRootRouter()\n-router.register(r'career', views.CareerViewSet)\n+router.register(r'career', views.CareerViewSet, base_name='careeropportunity')\ndiff --git a/apps/careeropportunity/views.py b/apps/careeropportunity/views.py\n--- a/apps/careeropportunity/views.py\n+++ b/apps/careeropportunity/views.py\n@@ -24,10 +24,12 @@\n Viewset for Career serializer\n \"\"\"\n \n- queryset = CareerOpportunity.objects.filter(\n- start__lte=timezone.now(),\n- end__gte=timezone.now()\n- ).order_by('-featured', '-start')\n serializer_class = CareerSerializer\n permission_classes = (AllowAny,)\n pagination_class = HundredItemsPaginator\n+\n+ def get_queryset(self, *args, **kwargs):\n+ return CareerOpportunity.objects.filter(\n+ start__lte=timezone.now(),\n+ end__gte=timezone.now()\n+ ).order_by('-featured', '-start')\n", "issue": "Broken date handling in careeropportunity display date\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n- [ ] Feature request\r\n\r\n\r\n## What is the expected behaviour?\r\nA careeropportunity should be displayed from the start date.\r\n\r\n## What is the current behaviour?\r\nIf the start date is in the past, but sufficiently close to the current date, the careeropportunity is not displayed. Setting the start date to an earlier date fixes the problem.\r\n\r\n<!-- if this is a bug report -->\r\n\r\n\r\n## How do you reproduce this problem? \r\n\r\nSet start date to the middle of the night the day before the current day.\nBroken date handling in careeropportunity display date\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n- [ ] Feature request\r\n\r\n\r\n## What is the expected behaviour?\r\nA careeropportunity should be displayed from the start date.\r\n\r\n## What is the current behaviour?\r\nIf the start date is in the past, but sufficiently close to the current date, the careeropportunity is not displayed. Setting the start date to an earlier date fixes the problem.\r\n\r\n<!-- if this is a bug report -->\r\n\r\n\r\n## How do you reproduce this problem? 
\r\n\r\nSet start date to the middle of the night the day before the current day.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django.shortcuts import render\nfrom django.utils import timezone\n# API v1\nfrom rest_framework import mixins, viewsets\nfrom rest_framework.pagination import PageNumberPagination\nfrom rest_framework.permissions import AllowAny\n\nfrom apps.careeropportunity.models import CareerOpportunity\nfrom apps.careeropportunity.serializers import CareerSerializer\n\n\ndef index(request, id=None):\n return render(request, 'careeropportunity/index.html')\n\n\nclass HundredItemsPaginator(PageNumberPagination):\n page_size = 100\n\n\nclass CareerViewSet(viewsets.GenericViewSet, mixins.RetrieveModelMixin, mixins.ListModelMixin):\n \"\"\"\n Viewset for Career serializer\n \"\"\"\n\n queryset = CareerOpportunity.objects.filter(\n start__lte=timezone.now(),\n end__gte=timezone.now()\n ).order_by('-featured', '-start')\n serializer_class = CareerSerializer\n permission_classes = (AllowAny,)\n pagination_class = HundredItemsPaginator\n", "path": "apps/careeropportunity/views.py"}, {"content": "# -*- coding: utf-8 -*-\n\nfrom django.conf.urls import url\n\nfrom apps.api.utils import SharedAPIRootRouter\nfrom apps.careeropportunity import views\n\nurlpatterns = [\n url(r'^$', views.index, name='careeropportunity_index'),\n url(r'^(\\d+)/$', views.index, name='careeropportunity_details'),\n]\n\n# API v1\nrouter = SharedAPIRootRouter()\nrouter.register(r'career', views.CareerViewSet)\n", "path": "apps/careeropportunity/urls.py"}]}
| 1,217 | 287 |
gh_patches_debug_26083
|
rasdani/github-patches
|
git_diff
|
spotify__luigi-559
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Some Docs Have Disappeared
Not sure if this is the right place to report this, but it looks like package documentation has disappeared from readthedocs. For example:
http://luigi.readthedocs.org/en/latest/api/luigi.html#luigi-file-module
I swear there used to be a lot of useful information here, now there is nothing.
</issue>
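An inference from the accepted patch shown further down (the issue itself does not state the cause): the Read the Docs build imports the package to generate the API pages, and it can only do that if setup.py declares the runtime dependencies. The patch drops the tox-based test command and declares them; a trimmed sketch:

```python
import sys

# Dependency list copied from the patch below (abridged, illustrative only).
install_requires = [
    'boto',
    'pyparsing',
    'requests',
    'sqlalchemy',
    'tornado',
    'whoops',
    'snakebite>=2.4.10',
]

if sys.version_info[:2] < (2, 7):
    install_requires.extend(['argparse', 'ordereddict'])

# Presumably wired into setup(..., install_requires=install_requires); that part of
# the hunk is cut off at the end of this excerpt.
```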
<code>
[start of setup.py]
1 # Copyright (c) 2012 Spotify AB
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may not
4 # use this file except in compliance with the License. You may obtain a copy of
5 # the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations under
13 # the License.
14
15 import os
16 import sys
17
18 try:
19 from setuptools import setup
20 from setuptools.command.test import test as TestCommand
21 except:
22 from distutils.core import setup
23 from distutils.cmd import Command as TestCommand
24
25
26 class Tox(TestCommand):
27 user_options = [('tox-args=', None, "Arguments to pass to tox")]
28 def initialize_options(self):
29 TestCommand.initialize_options(self)
30 self.tox_args = ''
31 def finalize_options(self):
32 TestCommand.finalize_options(self)
33 self.test_args = []
34 self.test_suite = True
35 def run_tests(self):
36 #import here, cause outside the eggs aren't loaded
37 import tox
38 errno = tox.cmdline(args=self.tox_args.split())
39 sys.exit(errno)
40
41
42 def get_static_files(path):
43 return [os.path.join(dirpath.replace("luigi/", ""), ext)
44 for (dirpath, dirnames, filenames) in os.walk(path)
45 for ext in ["*.html", "*.js", "*.css", "*.png"]]
46
47
48 luigi_package_data = sum(map(get_static_files, ["luigi/static", "luigi/templates"]), [])
49
50 readme_note = """\
51 .. note::
52
53 For the latest source, discussion, etc, please visit the
54 `GitHub repository <https://github.com/spotify/luigi>`_\n\n
55 """
56
57 with open('README.rst') as fobj:
58 long_description = readme_note + fobj.read()
59
60
61 setup(
62 name='luigi',
63 version='1.0.19',
64 description='Workflow mgmgt + task scheduling + dependency resolution',
65 long_description=long_description,
66 author='Erik Bernhardsson',
67 author_email='[email protected]',
68 url='https://github.com/spotify/luigi',
69 license='Apache License 2.0',
70 packages=[
71 'luigi',
72 'luigi.contrib',
73 'luigi.tools'
74 ],
75 package_data={
76 'luigi': luigi_package_data
77 },
78 scripts=[
79 'bin/luigid',
80 'bin/luigi'
81 ],
82 tests_require=['tox', 'virtualenv'],
83 cmdclass={'test': Tox},
84 )
85
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -17,26 +17,8 @@
try:
from setuptools import setup
- from setuptools.command.test import test as TestCommand
except:
from distutils.core import setup
- from distutils.cmd import Command as TestCommand
-
-
-class Tox(TestCommand):
- user_options = [('tox-args=', None, "Arguments to pass to tox")]
- def initialize_options(self):
- TestCommand.initialize_options(self)
- self.tox_args = ''
- def finalize_options(self):
- TestCommand.finalize_options(self)
- self.test_args = []
- self.test_suite = True
- def run_tests(self):
- #import here, cause outside the eggs aren't loaded
- import tox
- errno = tox.cmdline(args=self.tox_args.split())
- sys.exit(errno)
def get_static_files(path):
@@ -57,6 +39,18 @@
with open('README.rst') as fobj:
long_description = readme_note + fobj.read()
+install_requires = [
+ 'boto',
+ 'pyparsing',
+ 'requests',
+ 'sqlalchemy',
+ 'tornado',
+ 'whoops',
+ 'snakebite>=2.4.10',
+]
+
+if sys.version_info[:2] < (2, 7):
+ install_requires.extend(['argparse', 'ordereddict'])
setup(
name='luigi',
@@ -79,6 +73,5 @@
'bin/luigid',
'bin/luigi'
],
- tests_require=['tox', 'virtualenv'],
- cmdclass={'test': Tox},
+ install_requires=install_requires,
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -17,26 +17,8 @@\n \n try:\n from setuptools import setup\n- from setuptools.command.test import test as TestCommand\n except:\n from distutils.core import setup\n- from distutils.cmd import Command as TestCommand\n-\n-\n-class Tox(TestCommand):\n- user_options = [('tox-args=', None, \"Arguments to pass to tox\")]\n- def initialize_options(self):\n- TestCommand.initialize_options(self)\n- self.tox_args = ''\n- def finalize_options(self):\n- TestCommand.finalize_options(self)\n- self.test_args = []\n- self.test_suite = True\n- def run_tests(self):\n- #import here, cause outside the eggs aren't loaded\n- import tox\n- errno = tox.cmdline(args=self.tox_args.split())\n- sys.exit(errno)\n \n \n def get_static_files(path):\n@@ -57,6 +39,18 @@\n with open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n \n+install_requires = [\n+ 'boto',\n+ 'pyparsing',\n+ 'requests',\n+ 'sqlalchemy',\n+ 'tornado',\n+ 'whoops',\n+ 'snakebite>=2.4.10',\n+]\n+\n+if sys.version_info[:2] < (2, 7):\n+ install_requires.extend(['argparse', 'ordereddict'])\n \n setup(\n name='luigi',\n@@ -79,6 +73,5 @@\n 'bin/luigid',\n 'bin/luigi'\n ],\n- tests_require=['tox', 'virtualenv'],\n- cmdclass={'test': Tox},\n+ install_requires=install_requires,\n )\n", "issue": "Some Docs Have Disappeared\nNot sure if this is the right place to report this, but it looks like package documentation has disappeared from readthedocs. For example:\n\nhttp://luigi.readthedocs.org/en/latest/api/luigi.html#luigi-file-module\n\nI swear there used to be a lot of useful information here, now there is nothing.\n\n", "before_files": [{"content": "# Copyright (c) 2012 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n# use this file except in compliance with the License. You may obtain a copy of\n# the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under\n# the License.\n\nimport os\nimport sys\n\ntry:\n from setuptools import setup\n from setuptools.command.test import test as TestCommand\nexcept:\n from distutils.core import setup\n from distutils.cmd import Command as TestCommand\n\n\nclass Tox(TestCommand):\n user_options = [('tox-args=', None, \"Arguments to pass to tox\")]\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.tox_args = ''\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n def run_tests(self):\n #import here, cause outside the eggs aren't loaded\n import tox\n errno = tox.cmdline(args=self.tox_args.split())\n sys.exit(errno)\n\n\ndef get_static_files(path):\n return [os.path.join(dirpath.replace(\"luigi/\", \"\"), ext) \n for (dirpath, dirnames, filenames) in os.walk(path)\n for ext in [\"*.html\", \"*.js\", \"*.css\", \"*.png\"]]\n\n\nluigi_package_data = sum(map(get_static_files, [\"luigi/static\", \"luigi/templates\"]), [])\n\nreadme_note = \"\"\"\\\n.. 
note::\n\n For the latest source, discussion, etc, please visit the\n `GitHub repository <https://github.com/spotify/luigi>`_\\n\\n\n\"\"\"\n\nwith open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n\n\nsetup(\n name='luigi',\n version='1.0.19',\n description='Workflow mgmgt + task scheduling + dependency resolution',\n long_description=long_description,\n author='Erik Bernhardsson',\n author_email='[email protected]',\n url='https://github.com/spotify/luigi',\n license='Apache License 2.0',\n packages=[\n 'luigi',\n 'luigi.contrib',\n 'luigi.tools'\n ],\n package_data={\n 'luigi': luigi_package_data\n },\n scripts=[\n 'bin/luigid',\n 'bin/luigi'\n ],\n tests_require=['tox', 'virtualenv'],\n cmdclass={'test': Tox},\n)\n", "path": "setup.py"}]}
| 1,374 | 400 |
gh_patches_debug_26630
|
rasdani/github-patches
|
git_diff
|
spack__spack-14473
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
spack extensions behavior when no extendable package is specified
If you run `spack extensions` with no extendable package, it prints an error message telling you that a package spec is required. It would be nice if Spack also printed a list of installed extendable packages to choose from.
</issue>
<code>
[start of lib/spack/spack/cmd/extensions.py]
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 import argparse
7
8 import llnl.util.tty as tty
9 from llnl.util.tty.colify import colify
10
11 import spack.environment as ev
12 import spack.cmd as cmd
13 import spack.cmd.common.arguments as arguments
14 import spack.repo
15 import spack.store
16 from spack.filesystem_view import YamlFilesystemView
17
18 description = "list extensions for package"
19 section = "extensions"
20 level = "long"
21
22
23 def setup_parser(subparser):
24 arguments.add_common_arguments(subparser, ['long', 'very_long'])
25 subparser.add_argument('-d', '--deps', action='store_true',
26 help='output dependencies along with found specs')
27
28 subparser.add_argument('-p', '--paths', action='store_true',
29 help='show paths to package install directories')
30 subparser.add_argument(
31 '-s', '--show', action='store', default='all',
32 choices=("packages", "installed", "activated", "all"),
33 help="show only part of output")
34 subparser.add_argument(
35 '-v', '--view', metavar='VIEW', type=str,
36 help="the view to operate on")
37
38 subparser.add_argument(
39 'spec', nargs=argparse.REMAINDER,
40 help='spec of package to list extensions for', metavar='extendable')
41
42
43 def extensions(parser, args):
44 if not args.spec:
45 tty.die("extensions requires a package spec.")
46
47 # Checks
48 spec = cmd.parse_specs(args.spec)
49 if len(spec) > 1:
50 tty.die("Can only list extensions for one package.")
51
52 if not spec[0].package.extendable:
53 tty.die("%s is not an extendable package." % spec[0].name)
54
55 env = ev.get_env(args, 'extensions')
56 spec = cmd.disambiguate_spec(spec[0], env)
57
58 if not spec.package.extendable:
59 tty.die("%s does not have extensions." % spec.short_spec)
60
61 if args.show in ("packages", "all"):
62 # List package names of extensions
63 extensions = spack.repo.path.extensions_for(spec)
64 if not extensions:
65 tty.msg("%s has no extensions." % spec.cshort_spec)
66 else:
67 tty.msg(spec.cshort_spec)
68 tty.msg("%d extensions:" % len(extensions))
69 colify(ext.name for ext in extensions)
70
71 if args.view:
72 target = args.view
73 else:
74 target = spec.prefix
75
76 view = YamlFilesystemView(target, spack.store.layout)
77
78 if args.show in ("installed", "all"):
79 # List specs of installed extensions.
80 installed = [
81 s.spec for s in spack.store.db.installed_extensions_for(spec)]
82
83 if args.show == "all":
84 print
85 if not installed:
86 tty.msg("None installed.")
87 else:
88 tty.msg("%d installed:" % len(installed))
89 cmd.display_specs(installed, args)
90
91 if args.show in ("activated", "all"):
92 # List specs of activated extensions.
93 activated = view.extensions_layout.extension_map(spec)
94 if args.show == "all":
95 print
96 if not activated:
97 tty.msg("None activated.")
98 else:
99 tty.msg("%d activated:" % len(activated))
100 cmd.display_specs(activated.values(), args)
101
[end of lib/spack/spack/cmd/extensions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/spack/spack/cmd/extensions.py b/lib/spack/spack/cmd/extensions.py
--- a/lib/spack/spack/cmd/extensions.py
+++ b/lib/spack/spack/cmd/extensions.py
@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
+import sys
import llnl.util.tty as tty
from llnl.util.tty.colify import colify
@@ -21,6 +22,8 @@
def setup_parser(subparser):
+ subparser.epilog = 'If called without argument returns ' \
+ 'the list of all valid extendable packages'
arguments.add_common_arguments(subparser, ['long', 'very_long'])
subparser.add_argument('-d', '--deps', action='store_true',
help='output dependencies along with found specs')
@@ -42,7 +45,19 @@
def extensions(parser, args):
if not args.spec:
- tty.die("extensions requires a package spec.")
+ # If called without arguments, list all the extendable packages
+ isatty = sys.stdout.isatty()
+ if isatty:
+ tty.info('Extendable packages:')
+
+ extendable_pkgs = []
+ for name in spack.repo.all_package_names():
+ pkg = spack.repo.get(name)
+ if pkg.extendable:
+ extendable_pkgs.append(name)
+
+ colify(extendable_pkgs, indent=4)
+ return
# Checks
spec = cmd.parse_specs(args.spec)
|
{"golden_diff": "diff --git a/lib/spack/spack/cmd/extensions.py b/lib/spack/spack/cmd/extensions.py\n--- a/lib/spack/spack/cmd/extensions.py\n+++ b/lib/spack/spack/cmd/extensions.py\n@@ -4,6 +4,7 @@\n # SPDX-License-Identifier: (Apache-2.0 OR MIT)\n \n import argparse\n+import sys\n \n import llnl.util.tty as tty\n from llnl.util.tty.colify import colify\n@@ -21,6 +22,8 @@\n \n \n def setup_parser(subparser):\n+ subparser.epilog = 'If called without argument returns ' \\\n+ 'the list of all valid extendable packages'\n arguments.add_common_arguments(subparser, ['long', 'very_long'])\n subparser.add_argument('-d', '--deps', action='store_true',\n help='output dependencies along with found specs')\n@@ -42,7 +45,19 @@\n \n def extensions(parser, args):\n if not args.spec:\n- tty.die(\"extensions requires a package spec.\")\n+ # If called without arguments, list all the extendable packages\n+ isatty = sys.stdout.isatty()\n+ if isatty:\n+ tty.info('Extendable packages:')\n+\n+ extendable_pkgs = []\n+ for name in spack.repo.all_package_names():\n+ pkg = spack.repo.get(name)\n+ if pkg.extendable:\n+ extendable_pkgs.append(name)\n+\n+ colify(extendable_pkgs, indent=4)\n+ return\n \n # Checks\n spec = cmd.parse_specs(args.spec)\n", "issue": "spack extensions behavior when no extendable package is specified\nIf you run `spack extensions` with no extendable package, it prints an error message telling you that a package spec is required. It would be nice if Spack also printed a list of installed extendable packages to choose from.\n\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport argparse\n\nimport llnl.util.tty as tty\nfrom llnl.util.tty.colify import colify\n\nimport spack.environment as ev\nimport spack.cmd as cmd\nimport spack.cmd.common.arguments as arguments\nimport spack.repo\nimport spack.store\nfrom spack.filesystem_view import YamlFilesystemView\n\ndescription = \"list extensions for package\"\nsection = \"extensions\"\nlevel = \"long\"\n\n\ndef setup_parser(subparser):\n arguments.add_common_arguments(subparser, ['long', 'very_long'])\n subparser.add_argument('-d', '--deps', action='store_true',\n help='output dependencies along with found specs')\n\n subparser.add_argument('-p', '--paths', action='store_true',\n help='show paths to package install directories')\n subparser.add_argument(\n '-s', '--show', action='store', default='all',\n choices=(\"packages\", \"installed\", \"activated\", \"all\"),\n help=\"show only part of output\")\n subparser.add_argument(\n '-v', '--view', metavar='VIEW', type=str,\n help=\"the view to operate on\")\n\n subparser.add_argument(\n 'spec', nargs=argparse.REMAINDER,\n help='spec of package to list extensions for', metavar='extendable')\n\n\ndef extensions(parser, args):\n if not args.spec:\n tty.die(\"extensions requires a package spec.\")\n\n # Checks\n spec = cmd.parse_specs(args.spec)\n if len(spec) > 1:\n tty.die(\"Can only list extensions for one package.\")\n\n if not spec[0].package.extendable:\n tty.die(\"%s is not an extendable package.\" % spec[0].name)\n\n env = ev.get_env(args, 'extensions')\n spec = cmd.disambiguate_spec(spec[0], env)\n\n if not spec.package.extendable:\n tty.die(\"%s does not have extensions.\" % spec.short_spec)\n\n if args.show in (\"packages\", \"all\"):\n # List package names of extensions\n extensions = 
spack.repo.path.extensions_for(spec)\n if not extensions:\n tty.msg(\"%s has no extensions.\" % spec.cshort_spec)\n else:\n tty.msg(spec.cshort_spec)\n tty.msg(\"%d extensions:\" % len(extensions))\n colify(ext.name for ext in extensions)\n\n if args.view:\n target = args.view\n else:\n target = spec.prefix\n\n view = YamlFilesystemView(target, spack.store.layout)\n\n if args.show in (\"installed\", \"all\"):\n # List specs of installed extensions.\n installed = [\n s.spec for s in spack.store.db.installed_extensions_for(spec)]\n\n if args.show == \"all\":\n print\n if not installed:\n tty.msg(\"None installed.\")\n else:\n tty.msg(\"%d installed:\" % len(installed))\n cmd.display_specs(installed, args)\n\n if args.show in (\"activated\", \"all\"):\n # List specs of activated extensions.\n activated = view.extensions_layout.extension_map(spec)\n if args.show == \"all\":\n print\n if not activated:\n tty.msg(\"None activated.\")\n else:\n tty.msg(\"%d activated:\" % len(activated))\n cmd.display_specs(activated.values(), args)\n", "path": "lib/spack/spack/cmd/extensions.py"}]}
| 1,558 | 343 |
gh_patches_debug_10265
|
rasdani/github-patches
|
git_diff
|
yt-project__yt-3278
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reminder: remove dead module mods
As discussed in #3083, the `mods.py` module should be removed **after the 4.0 release**.
</issue>
<code>
[start of yt/mods.py]
1 #
2 # ALL IMPORTS GO HERE
3 #
4
5 import os
6
7 import numpy as np
8
9 # This next item will handle most of the actual startup procedures, but it will
10 # also attempt to parse the command line and set up the global state of various
11 # operations. The variable unparsed_args is not used internally but is
12 # provided as a convenience for users who wish to parse arguments in scripts.
13 # https://mail.python.org/archives/list/[email protected]/thread/L6AQPJ3OIMJC5SNKVM7CJG32YVQZRJWA/
14 import yt.startup_tasks as __startup_tasks
15 from yt import *
16 from yt.config import ytcfg, ytcfg_defaults
17 from yt.utilities.logger import _level
18
19 unparsed_args = __startup_tasks.unparsed_args
20
21
22 if _level >= int(ytcfg_defaults["yt"]["log_level"]):
23 # This won't get displayed.
24 mylog.debug("Turning off NumPy error reporting")
25 np.seterr(all="ignore")
26
27 # We load plugins. Keep in mind, this can be fairly dangerous -
28 # the primary purpose is to allow people to have a set of functions
29 # that get used every time that they don't have to *define* every time.
30 # This way, other command-line tools can be used very simply.
31 # Unfortunately, for now, I think the easiest and simplest way of doing
32 # this is also the most dangerous way.
33 if ytcfg.get("yt", "load_field_plugins"):
34 enable_plugins()
35
[end of yt/mods.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yt/mods.py b/yt/mods.py
--- a/yt/mods.py
+++ b/yt/mods.py
@@ -13,9 +13,14 @@
# https://mail.python.org/archives/list/[email protected]/thread/L6AQPJ3OIMJC5SNKVM7CJG32YVQZRJWA/
import yt.startup_tasks as __startup_tasks
from yt import *
+from yt._maintenance.deprecation import issue_deprecation_warning
from yt.config import ytcfg, ytcfg_defaults
from yt.utilities.logger import _level
+issue_deprecation_warning(
+ "The yt.mods module is deprecated.", since="4.1.0", removal="4.2.0"
+)
+
unparsed_args = __startup_tasks.unparsed_args
|
{"golden_diff": "diff --git a/yt/mods.py b/yt/mods.py\n--- a/yt/mods.py\n+++ b/yt/mods.py\n@@ -13,9 +13,14 @@\n # https://mail.python.org/archives/list/[email protected]/thread/L6AQPJ3OIMJC5SNKVM7CJG32YVQZRJWA/\n import yt.startup_tasks as __startup_tasks\n from yt import *\n+from yt._maintenance.deprecation import issue_deprecation_warning\n from yt.config import ytcfg, ytcfg_defaults\n from yt.utilities.logger import _level\n \n+issue_deprecation_warning(\n+ \"The yt.mods module is deprecated.\", since=\"4.1.0\", removal=\"4.2.0\"\n+)\n+\n unparsed_args = __startup_tasks.unparsed_args\n", "issue": "Reminder: remove dead module mods\nAs discussed in #3083, the `mods.py` module should be removed **after the 4.0 release**.\r\n\n", "before_files": [{"content": "#\n# ALL IMPORTS GO HERE\n#\n\nimport os\n\nimport numpy as np\n\n# This next item will handle most of the actual startup procedures, but it will\n# also attempt to parse the command line and set up the global state of various\n# operations. The variable unparsed_args is not used internally but is\n# provided as a convenience for users who wish to parse arguments in scripts.\n# https://mail.python.org/archives/list/[email protected]/thread/L6AQPJ3OIMJC5SNKVM7CJG32YVQZRJWA/\nimport yt.startup_tasks as __startup_tasks\nfrom yt import *\nfrom yt.config import ytcfg, ytcfg_defaults\nfrom yt.utilities.logger import _level\n\nunparsed_args = __startup_tasks.unparsed_args\n\n\nif _level >= int(ytcfg_defaults[\"yt\"][\"log_level\"]):\n # This won't get displayed.\n mylog.debug(\"Turning off NumPy error reporting\")\n np.seterr(all=\"ignore\")\n\n# We load plugins. Keep in mind, this can be fairly dangerous -\n# the primary purpose is to allow people to have a set of functions\n# that get used every time that they don't have to *define* every time.\n# This way, other command-line tools can be used very simply.\n# Unfortunately, for now, I think the easiest and simplest way of doing\n# this is also the most dangerous way.\nif ytcfg.get(\"yt\", \"load_field_plugins\"):\n enable_plugins()\n", "path": "yt/mods.py"}]}
| 957 | 181 |
gh_patches_debug_67260
|
rasdani/github-patches
|
git_diff
|
freqtrade__freqtrade-5487
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Hyperoptable parameter type: CategoricalParameter is not returning correctly.
## Describe your environment
* Operating system: MacOS 11.2.3 (20D91)
* Python Version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)
* CCXT version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)
* Freqtrade Version: freqtrade 2021.4
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
Hi! It appears the Hyperoptable parameter type: `CategoricalParameter` is not returning correctly.
If I run the example as per the Freqtrade Docs [here](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters), namely setting a `CategoricalParameter` like so:
```
buy_rsi_enabled = CategoricalParameter([True, False]),
```
...then when running the Hyperopt tool there is an error in the `populate_buy_trend` as below:
```
if self.buy_adx_enabled.value:
AttributeError: 'tuple' object has no attribute 'value'
```
It would appear that the `CategoricalParameter` is not actually returning one of the categories (even a default) but instead returning a Python Tuple.
### Steps to reproduce:
1. Follow the example in the [Docs](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters)
### Observed Results:
* What happened? There was an AttributeError: 'tuple' object has no attribute 'value'.
* What did you expect to happen? The 'value' property to exist and be set to either True or False
### Relevant code exceptions or logs
Note: Please copy/paste the text of the messages; no screenshots of logs, please.
```
2021-05-02 09:48:02,421 - freqtrade - ERROR - Fatal exception!
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py", line 431, in _process_worker
r = call_item()
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py", line 285, in __call__
return self.fn(*self.args, **self.kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 595, in __call__
return self.func(*args, **kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py", line 262, in __call__
return [func(*args, **kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py", line 262, in <listcomp>
return [func(*args, **kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/cloudpickle_wrapper.py", line 38, in __call__
return self._obj(*args, **kwargs)
File "/freqtrade/freqtrade/optimize/hyperopt.py", line 288, in generate_optimizer
backtesting_results = self.backtesting.backtest(
File "/freqtrade/freqtrade/optimize/backtesting.py", line 352, in backtest
data: Dict = self._get_ohlcv_as_lists(processed)
File "/freqtrade/freqtrade/optimize/backtesting.py", line 196, in _get_ohlcv_as_lists
self.strategy.advise_buy(pair_data, {'pair': pair}), {'pair': pair})[headers].copy()
File "/freqtrade/freqtrade/optimize/hyperopt_auto.py", line 31, in populate_buy_trend
return self.strategy.populate_buy_trend(dataframe, metadata)
File "/freqtrade/user_data/strategies/Strategy004.py", line 149, in populate_buy_trend
if self.buy_adx_enabled.value:
AttributeError: 'tuple' object has no attribute 'value'
```
</issue>
<code>
[start of freqtrade/__init__.py]
1 """ Freqtrade bot """
2 __version__ = 'develop'
3
4 if __version__ == 'develop':
5
6 try:
7 import subprocess
8
9 __version__ = 'develop-' + subprocess.check_output(
10 ['git', 'log', '--format="%h"', '-n 1'],
11 stderr=subprocess.DEVNULL).decode("utf-8").rstrip().strip('"')
12
13 # from datetime import datetime
14 # last_release = subprocess.check_output(
15 # ['git', 'tag']
16 # ).decode('utf-8').split()[-1].split(".")
17 # # Releases are in the format "2020.1" - we increment the latest version for dev.
18 # prefix = f"{last_release[0]}.{int(last_release[1]) + 1}"
19 # dev_version = int(datetime.now().timestamp() // 1000)
20 # __version__ = f"{prefix}.dev{dev_version}"
21
22 # subprocess.check_output(
23 # ['git', 'log', '--format="%h"', '-n 1'],
24 # stderr=subprocess.DEVNULL).decode("utf-8").rstrip().strip('"')
25 except Exception:
26 # git not available, ignore
27 try:
28 # Try Fallback to freqtrade_commit file (created by CI while building docker image)
29 from pathlib import Path
30 versionfile = Path('./freqtrade_commit')
31 if versionfile.is_file():
32 __version__ = f"docker-{versionfile.read_text()[:8]}"
33 except Exception:
34 pass
35
[end of freqtrade/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/freqtrade/__init__.py b/freqtrade/__init__.py
--- a/freqtrade/__init__.py
+++ b/freqtrade/__init__.py
@@ -1,5 +1,5 @@
""" Freqtrade bot """
-__version__ = 'develop'
+__version__ = '2021.8'
if __version__ == 'develop':
|
{"golden_diff": "diff --git a/freqtrade/__init__.py b/freqtrade/__init__.py\n--- a/freqtrade/__init__.py\n+++ b/freqtrade/__init__.py\n@@ -1,5 +1,5 @@\n \"\"\" Freqtrade bot \"\"\"\n-__version__ = 'develop'\n+__version__ = '2021.8'\n \n if __version__ == 'develop':\n", "issue": "Hyperoptable parameter type: CategoricalParameter is not returning correctly.\n## Describe your environment\r\n\r\n * Operating system: MacOS 11.2.3 (20D91)\r\n * Python Version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)\r\n * CCXT version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)\r\n * Freqtrade Version: freqtrade 2021.4\r\n \r\nNote: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.\r\n\r\n## Describe the problem:\r\n\r\nHi! It appears the Hyperoptable parameter type: `CategoricalParameter` is not returning correctly.\r\n\r\nIf I run the example as per the Freqtrade Docs [here](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters), namely setting a `CategoricalParameter` like so:\r\n\r\n```\r\nbuy_rsi_enabled = CategoricalParameter([True, False]),\r\n```\r\n\r\n...then when running the Hyperopt tool there is an error in the `populate_buy_trend` as below:\r\n\r\n```\r\nif self.buy_adx_enabled.value:\r\nAttributeError: 'tuple' object has no attribute 'value'\r\n```\r\n\r\nIt would appear that the `CategoricalParameter` is not actually returning one of the categories (even a default) but instead returning a Python Tuple.\r\n\r\n### Steps to reproduce:\r\n\r\n 1. Follow the example in the [Docs](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters)\r\n \r\n### Observed Results:\r\n\r\n * What happened? There was an AttributeError: 'tuple' object has no attribute 'value'. \r\n * What did you expect to happen? 
The 'value' property to exist and be set to either True or False\r\n\r\n### Relevant code exceptions or logs\r\n\r\nNote: Please copy/paste text of the messages, no screenshots of logs please.\r\n\r\n ```\r\n2021-05-02 09:48:02,421 - freqtrade - ERROR - Fatal exception!\r\njoblib.externals.loky.process_executor._RemoteTraceback:\r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py\", line 431, in _process_worker\r\n r = call_item()\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py\", line 285, in __call__\r\n return self.fn(*self.args, **self.kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/_parallel_backends.py\", line 595, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py\", line 262, in __call__\r\n return [func(*args, **kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py\", line 262, in <listcomp>\r\n return [func(*args, **kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/cloudpickle_wrapper.py\", line 38, in __call__\r\n return self._obj(*args, **kwargs)\r\n File \"/freqtrade/freqtrade/optimize/hyperopt.py\", line 288, in generate_optimizer\r\n backtesting_results = self.backtesting.backtest(\r\n File \"/freqtrade/freqtrade/optimize/backtesting.py\", line 352, in backtest\r\n data: Dict = self._get_ohlcv_as_lists(processed)\r\n File \"/freqtrade/freqtrade/optimize/backtesting.py\", line 196, in _get_ohlcv_as_lists\r\n self.strategy.advise_buy(pair_data, {'pair': pair}), {'pair': pair})[headers].copy()\r\n File \"/freqtrade/freqtrade/optimize/hyperopt_auto.py\", line 31, in populate_buy_trend\r\n return self.strategy.populate_buy_trend(dataframe, metadata)\r\n File \"/freqtrade/user_data/strategies/Strategy004.py\", line 149, in populate_buy_trend\r\n if self.buy_adx_enabled.value:\r\nAttributeError: 'tuple' object has no attribute 'value'\r\n ```\r\n\n", "before_files": [{"content": "\"\"\" Freqtrade bot \"\"\"\n__version__ = 'develop'\n\nif __version__ == 'develop':\n\n try:\n import subprocess\n\n __version__ = 'develop-' + subprocess.check_output(\n ['git', 'log', '--format=\"%h\"', '-n 1'],\n stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n\n # from datetime import datetime\n # last_release = subprocess.check_output(\n # ['git', 'tag']\n # ).decode('utf-8').split()[-1].split(\".\")\n # # Releases are in the format \"2020.1\" - we increment the latest version for dev.\n # prefix = f\"{last_release[0]}.{int(last_release[1]) + 1}\"\n # dev_version = int(datetime.now().timestamp() // 1000)\n # __version__ = f\"{prefix}.dev{dev_version}\"\n\n # subprocess.check_output(\n # ['git', 'log', '--format=\"%h\"', '-n 1'],\n # stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n except Exception:\n # git not available, ignore\n try:\n # Try Fallback to freqtrade_commit file (created by CI while building docker image)\n from pathlib import Path\n versionfile = Path('./freqtrade_commit')\n if versionfile.is_file():\n __version__ = f\"docker-{versionfile.read_text()[:8]}\"\n except Exception:\n pass\n", "path": "freqtrade/__init__.py"}]}
| 1,915 | 86 |
gh_patches_debug_34463
|
rasdani/github-patches
|
git_diff
|
facebookresearch__mmf-203
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUTD training broken after PR #135
## 🐛 Bug
After PR #135, training of the BUTD method for image captioning is now broken.
The issue is that #135 removes the teacher-forcing behavior during training. After #135, BUTD now uses beam search decoding (instead of teacher forcing) during training, causing a lot of shape mismatch, gradient, and other errors.
## Command
## To Reproduce
Steps to reproduce the behavior:
```
python tools/run.py \
--tasks captioning --datasets coco --model butd \
--config configs/captioning/coco/butd.yml
```
The training crashes due to various errors (such as batch size mismatch).
## Expected behavior
The training should work as expected.
## Environment
You can run the script with:
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.9.4
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Quadro GP100
GPU 1: Quadro GP100
Nvidia driver version: 410.79
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] block.bootstrap.pytorch==0.1.5
[pip] bootstrap.pytorch==0.0.13
[pip] msgpack-numpy==0.4.1
[pip] murel.bootstrap.pytorch==0.0.0
[pip] numpy==1.17.2
[pip] numpydoc==0.7.0
[pip] torch==1.2.0
[pip] torchfile==0.1.0
[pip] torchtext==0.3.1
[pip] torchvision==0.4.0
[conda] block.bootstrap.pytorch 0.1.5 <pip>
[conda] bootstrap.pytorch 0.0.13 <pip>
[conda] cuda90 1.0 h6433d27_0 pytorch
[conda] faiss-gpu 1.4.0 py36_cuda9.0.176_1 [cuda90] pytorch
[conda] magma-cuda90 2.3.0 1 pytorch
[conda] mkl 2018.0.3 1
[conda] mkl-fft 1.0.0 <pip>
[conda] mkl-include 2018.0.3 1
[conda] mkl-service 1.1.2 py36h17a0993_4
[conda] mkl_fft 1.0.2 np114py36_intel_0 [intel] intel
[conda] mkl_random 1.0.1 py36h629b387_0 anaconda
[conda] mkldnn 0.14.0 0 mingfeima
[conda] murel.bootstrap.pytorch 0.0.0 <pip>
[conda] nccl2 1.0 0 pytorch
[conda] torch 1.2.0 <pip>
[conda] torchfile 0.1.0 <pip>
[conda] torchtext 0.3.1 <pip>
[conda] torchvision 0.4.0 <pip>
</issue>
<code>
[start of pythia/models/butd.py]
1 # Copyright (c) Facebook, Inc. and its affiliates.
2
3 import torch
4 from pythia.common.registry import registry
5 from pythia.modules.layers import ClassifierLayer
6 from pythia.models.pythia import Pythia
7 from pythia.utils.text_utils import BeamSearch, NucleusSampling
8
9
10 @registry.register_model("butd")
11 class BUTD(Pythia):
12 def __init__(self, config):
13 super().__init__(config)
14
15 def build(self):
16 self._build_word_embedding()
17 self._init_feature_encoders("image")
18 self._init_feature_embeddings("image")
19 self._init_classifier()
20 self._init_extras()
21
22 def _build_word_embedding(self):
23 self.text_processor = registry.get(self._datasets[0] + "_text_processor")
24 self.vocab = self.text_processor.vocab
25 self.vocab_size = self.vocab.get_size()
26 self.word_embedding = self.vocab.get_embedding(
27 torch.nn.Embedding, embedding_dim=self.config["embedding_dim"]
28 )
29 setattr(self, "text_embeddings_out_dim", self.config["embedding_dim"])
30
31 def _init_classifier(self):
32 self.classifier = ClassifierLayer(
33 self.config["classifier"]["type"],
34 in_dim=self.config["classifier"]["params"]["feature_dim"],
35 out_dim=self.vocab_size,
36 **self.config["classifier"]["params"]
37 )
38
39 def get_optimizer_parameters(self, config):
40 params = [
41 {"params": self.word_embedding.parameters()},
42 {"params": self.image_feature_embeddings_list.parameters()},
43 {"params": self.classifier.parameters()},
44 {
45 "params": self.image_feature_encoders.parameters(),
46 "lr": (config["optimizer_attributes"]["params"]["lr"] * 0.1),
47 },
48 ]
49 return params
50
51 def prepare_data(self, sample_list, batch_size):
52 setattr(self, "teacher_forcing", hasattr(sample_list, "text"))
53 data = {}
54 if self.teacher_forcing:
55 caption_lengths, sort_ind = sample_list.caption_len.sort(
56 dim=0, descending=True
57 )
58 data["decode_lengths"] = (caption_lengths - 1).tolist()
59 sample_list.text = sample_list.text[sort_ind]
60 sample_list.answers = sample_list.answers[sort_ind]
61 sample_list.image_feature_0 = sample_list.image_feature_0[sort_ind]
62 data["texts"] = sample_list.text
63 timesteps = max(data["decode_lengths"])
64 sample_list.add_field("targets", sample_list.text[:, 1:])
65 else:
66 data["texts"] = sample_list.answers.new_full(
67 (batch_size, 1), self.vocab.SOS_INDEX, dtype=torch.long
68 )
69 timesteps = self.text_processor.max_length
70 sample_list.add_field("targets", sample_list.answers[:, 0, 1:])
71 return data, sample_list, timesteps
72
73 def init_hidden_state(self, features):
74 h = features.new_zeros(
75 (features.size(0), self.config["classifier"]["params"]["hidden_dim"]),
76 dtype=torch.float,
77 )
78 c = features.new_zeros(
79 (features.size(0), self.config["classifier"]["params"]["hidden_dim"]),
80 dtype=torch.float,
81 )
82 return h, c
83
84 def get_data_t(self, t, data, batch_size_t, prev_output):
85 if self.teacher_forcing:
86 # Modify batch_size for timestep t
87 batch_size_t = sum([l > t for l in data["decode_lengths"]])
88 elif prev_output is not None and self.config["inference"]["type"] == "greedy":
89 # Adding t-1 output words to data["text"] for greedy decoding
90 output_softmax = torch.log_softmax(prev_output, dim=1)
91 _, indices = torch.max(output_softmax, dim=1, keepdim=True)
92 data["texts"] = torch.cat(
93 (data["texts"], indices.view(batch_size_t, 1)), dim=1
94 )
95
96 # Slice data based on batch_size at timestep t
97 data["texts"] = data["texts"][:batch_size_t]
98 if "state" in data:
99 h1 = data["state"]["td_hidden"][0][:batch_size_t]
100 c1 = data["state"]["td_hidden"][1][:batch_size_t]
101 h2 = data["state"]["lm_hidden"][0][:batch_size_t]
102 c2 = data["state"]["lm_hidden"][1][:batch_size_t]
103 else:
104 h1, c1 = self.init_hidden_state(data["texts"])
105 h2, c2 = self.init_hidden_state(data["texts"])
106 data["state"] = {"td_hidden": (h1, c1), "lm_hidden": (h2, c2)}
107 registry.register("{}_lstm_state".format(h1.device), data["state"])
108
109 return data, batch_size_t
110
111 def forward(self, sample_list):
112 # Stores the output probabilites.
113 scores = sample_list.answers.new_ones(
114 (
115 sample_list.answers.size(0),
116 self.text_processor.max_length,
117 self.vocab_size,
118 ),
119 dtype=torch.float,
120 )
121
122 decoder = registry.get_decoder_class(self.config["inference"]["type"])(self.vocab, self.config)
123
124 sample_list = decoder.init_batch(sample_list)
125 # batch_size = sample_list.get_batch_size()
126 batch_size = sample_list.image_feature_0.size(0)
127 data, sample_list, timesteps = self.prepare_data(sample_list, batch_size)
128 output = None
129 batch_size_t = batch_size
130 for t in range(timesteps):
131 data, batch_size_t = self.get_data_t(t, data, batch_size_t, output)
132 pi_t = data["texts"]
133 embedding = self.word_embedding(pi_t)
134 attention_feature, _ = self.process_feature_embedding(
135 "image", sample_list, embedding[:, 0, :], batch_size_t=batch_size_t
136 )
137 output = self.classifier(attention_feature)
138 # Compute decoding
139 finish, data, batch_size_t = decoder.decode(t, data, output)
140 if finish:
141 break
142
143 model_output = {"scores": scores}
144 model_output["captions"] = decoder.get_result()
145
146 return model_output
147
[end of pythia/models/butd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pythia/models/butd.py b/pythia/models/butd.py
--- a/pythia/models/butd.py
+++ b/pythia/models/butd.py
@@ -119,28 +119,37 @@
dtype=torch.float,
)
- decoder = registry.get_decoder_class(self.config["inference"]["type"])(self.vocab, self.config)
+ if self.config["inference"]["type"] in ["beam_search", "nucleus_sampling"]:
+ decoder = registry.get_decoder_class(self.config["inference"]["type"])(
+ self.vocab, self.config
+ )
+ sample_list = decoder.init_batch(sample_list)
- sample_list = decoder.init_batch(sample_list)
- # batch_size = sample_list.get_batch_size()
batch_size = sample_list.image_feature_0.size(0)
data, sample_list, timesteps = self.prepare_data(sample_list, batch_size)
output = None
batch_size_t = batch_size
for t in range(timesteps):
data, batch_size_t = self.get_data_t(t, data, batch_size_t, output)
- pi_t = data["texts"]
+ if self.config["inference"]["type"] in ["beam_search", "nucleus_sampling"]:
+ pi_t = data["texts"]
+ else:
+ pi_t = data["texts"][:, t].unsqueeze(-1)
embedding = self.word_embedding(pi_t)
attention_feature, _ = self.process_feature_embedding(
"image", sample_list, embedding[:, 0, :], batch_size_t=batch_size_t
)
output = self.classifier(attention_feature)
# Compute decoding
- finish, data, batch_size_t = decoder.decode(t, data, output)
- if finish:
- break
+ if self.config["inference"]["type"] in ["beam_search", "nucleus_sampling"]:
+ finish, data, batch_size_t = decoder.decode(t, data, output)
+ if finish:
+ break
+ else:
+ scores[:batch_size_t, t] = output
model_output = {"scores": scores}
- model_output["captions"] = decoder.get_result()
+ if self.config["inference"]["type"] in ["beam_search", "nucleus_sampling"]:
+ model_output["captions"] = decoder.get_result()
return model_output
|
{"golden_diff": "diff --git a/pythia/models/butd.py b/pythia/models/butd.py\n--- a/pythia/models/butd.py\n+++ b/pythia/models/butd.py\n@@ -119,28 +119,37 @@\n dtype=torch.float,\n )\n \n- decoder = registry.get_decoder_class(self.config[\"inference\"][\"type\"])(self.vocab, self.config)\n+ if self.config[\"inference\"][\"type\"] in [\"beam_search\", \"nucleus_sampling\"]:\n+ decoder = registry.get_decoder_class(self.config[\"inference\"][\"type\"])(\n+ self.vocab, self.config\n+ )\n+ sample_list = decoder.init_batch(sample_list)\n \n- sample_list = decoder.init_batch(sample_list)\n- # batch_size = sample_list.get_batch_size()\n batch_size = sample_list.image_feature_0.size(0)\n data, sample_list, timesteps = self.prepare_data(sample_list, batch_size)\n output = None\n batch_size_t = batch_size\n for t in range(timesteps):\n data, batch_size_t = self.get_data_t(t, data, batch_size_t, output)\n- pi_t = data[\"texts\"]\n+ if self.config[\"inference\"][\"type\"] in [\"beam_search\", \"nucleus_sampling\"]:\n+ pi_t = data[\"texts\"]\n+ else:\n+ pi_t = data[\"texts\"][:, t].unsqueeze(-1)\n embedding = self.word_embedding(pi_t)\n attention_feature, _ = self.process_feature_embedding(\n \"image\", sample_list, embedding[:, 0, :], batch_size_t=batch_size_t\n )\n output = self.classifier(attention_feature)\n # Compute decoding\n- finish, data, batch_size_t = decoder.decode(t, data, output)\n- if finish:\n- break\n+ if self.config[\"inference\"][\"type\"] in [\"beam_search\", \"nucleus_sampling\"]:\n+ finish, data, batch_size_t = decoder.decode(t, data, output)\n+ if finish:\n+ break\n+ else:\n+ scores[:batch_size_t, t] = output\n \n model_output = {\"scores\": scores}\n- model_output[\"captions\"] = decoder.get_result()\n+ if self.config[\"inference\"][\"type\"] in [\"beam_search\", \"nucleus_sampling\"]:\n+ model_output[\"captions\"] = decoder.get_result()\n \n return model_output\n", "issue": "BUTD training broken after PR #135\n## \ud83d\udc1b Bug\r\n\r\nAfter PR #135, training of the BUTD method for image captioning is now broken.\r\n\r\nThis issue is that #135 removes the teacher forcing behavior during training. 
After #135, now BUTD to use beam search decoding (instead of teacher-forcing) during training, causing a lot of shape mismatch, gradient and other errors.\r\n\r\n## Command\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```\r\npython tools/run.py \\\r\n --tasks captioning --datasets coco --model butd \\\r\n --config configs/captioning/coco/butd.yml\r\n```\r\n\r\nThe training crashes due to various errors (such as batch size mismatch).\r\n\r\n## Expected behavior\r\n\r\nThe training should work as expected.\r\n\r\n## Environment\r\n\r\nYou can run the script with:\r\nPyTorch version: 1.2.0\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.0.130\r\n\r\nOS: Ubuntu 18.04.1 LTS\r\nGCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0\r\nCMake version: version 3.9.4\r\n\r\nPython version: 3.6\r\nIs CUDA available: Yes\r\nCUDA runtime version: 10.0.130\r\nGPU models and configuration: \r\nGPU 0: Quadro GP100\r\nGPU 1: Quadro GP100\r\n\r\nNvidia driver version: 410.79\r\ncuDNN version: Could not collect\r\n\r\nVersions of relevant libraries:\r\n[pip] block.bootstrap.pytorch==0.1.5\r\n[pip] bootstrap.pytorch==0.0.13\r\n[pip] msgpack-numpy==0.4.1\r\n[pip] murel.bootstrap.pytorch==0.0.0\r\n[pip] numpy==1.17.2\r\n[pip] numpydoc==0.7.0\r\n[pip] torch==1.2.0\r\n[pip] torchfile==0.1.0\r\n[pip] torchtext==0.3.1\r\n[pip] torchvision==0.4.0\r\n[conda] block.bootstrap.pytorch 0.1.5 <pip>\r\n[conda] bootstrap.pytorch 0.0.13 <pip>\r\n[conda] cuda90 1.0 h6433d27_0 pytorch\r\n[conda] faiss-gpu 1.4.0 py36_cuda9.0.176_1 [cuda90] pytorch\r\n[conda] magma-cuda90 2.3.0 1 pytorch\r\n[conda] mkl 2018.0.3 1 \r\n[conda] mkl-fft 1.0.0 <pip>\r\n[conda] mkl-include 2018.0.3 1 \r\n[conda] mkl-service 1.1.2 py36h17a0993_4 \r\n[conda] mkl_fft 1.0.2 np114py36_intel_0 [intel] intel\r\n[conda] mkl_random 1.0.1 py36h629b387_0 anaconda\r\n[conda] mkldnn 0.14.0 0 mingfeima\r\n[conda] murel.bootstrap.pytorch 0.0.0 <pip>\r\n[conda] nccl2 1.0 0 pytorch\r\n[conda] torch 1.2.0 <pip>\r\n[conda] torchfile 0.1.0 <pip>\r\n[conda] torchtext 0.3.1 <pip>\r\n[conda] torchvision 0.4.0 <pip>\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. 
and its affiliates.\n\nimport torch\nfrom pythia.common.registry import registry\nfrom pythia.modules.layers import ClassifierLayer\nfrom pythia.models.pythia import Pythia\nfrom pythia.utils.text_utils import BeamSearch, NucleusSampling\n\n\[email protected]_model(\"butd\")\nclass BUTD(Pythia):\n def __init__(self, config):\n super().__init__(config)\n\n def build(self):\n self._build_word_embedding()\n self._init_feature_encoders(\"image\")\n self._init_feature_embeddings(\"image\")\n self._init_classifier()\n self._init_extras()\n\n def _build_word_embedding(self):\n self.text_processor = registry.get(self._datasets[0] + \"_text_processor\")\n self.vocab = self.text_processor.vocab\n self.vocab_size = self.vocab.get_size()\n self.word_embedding = self.vocab.get_embedding(\n torch.nn.Embedding, embedding_dim=self.config[\"embedding_dim\"]\n )\n setattr(self, \"text_embeddings_out_dim\", self.config[\"embedding_dim\"])\n\n def _init_classifier(self):\n self.classifier = ClassifierLayer(\n self.config[\"classifier\"][\"type\"],\n in_dim=self.config[\"classifier\"][\"params\"][\"feature_dim\"],\n out_dim=self.vocab_size,\n **self.config[\"classifier\"][\"params\"]\n )\n\n def get_optimizer_parameters(self, config):\n params = [\n {\"params\": self.word_embedding.parameters()},\n {\"params\": self.image_feature_embeddings_list.parameters()},\n {\"params\": self.classifier.parameters()},\n {\n \"params\": self.image_feature_encoders.parameters(),\n \"lr\": (config[\"optimizer_attributes\"][\"params\"][\"lr\"] * 0.1),\n },\n ]\n return params\n\n def prepare_data(self, sample_list, batch_size):\n setattr(self, \"teacher_forcing\", hasattr(sample_list, \"text\"))\n data = {}\n if self.teacher_forcing:\n caption_lengths, sort_ind = sample_list.caption_len.sort(\n dim=0, descending=True\n )\n data[\"decode_lengths\"] = (caption_lengths - 1).tolist()\n sample_list.text = sample_list.text[sort_ind]\n sample_list.answers = sample_list.answers[sort_ind]\n sample_list.image_feature_0 = sample_list.image_feature_0[sort_ind]\n data[\"texts\"] = sample_list.text\n timesteps = max(data[\"decode_lengths\"])\n sample_list.add_field(\"targets\", sample_list.text[:, 1:])\n else:\n data[\"texts\"] = sample_list.answers.new_full(\n (batch_size, 1), self.vocab.SOS_INDEX, dtype=torch.long\n )\n timesteps = self.text_processor.max_length\n sample_list.add_field(\"targets\", sample_list.answers[:, 0, 1:])\n return data, sample_list, timesteps\n\n def init_hidden_state(self, features):\n h = features.new_zeros(\n (features.size(0), self.config[\"classifier\"][\"params\"][\"hidden_dim\"]),\n dtype=torch.float,\n )\n c = features.new_zeros(\n (features.size(0), self.config[\"classifier\"][\"params\"][\"hidden_dim\"]),\n dtype=torch.float,\n )\n return h, c\n\n def get_data_t(self, t, data, batch_size_t, prev_output):\n if self.teacher_forcing:\n # Modify batch_size for timestep t\n batch_size_t = sum([l > t for l in data[\"decode_lengths\"]])\n elif prev_output is not None and self.config[\"inference\"][\"type\"] == \"greedy\":\n # Adding t-1 output words to data[\"text\"] for greedy decoding\n output_softmax = torch.log_softmax(prev_output, dim=1)\n _, indices = torch.max(output_softmax, dim=1, keepdim=True)\n data[\"texts\"] = torch.cat(\n (data[\"texts\"], indices.view(batch_size_t, 1)), dim=1\n )\n\n # Slice data based on batch_size at timestep t\n data[\"texts\"] = data[\"texts\"][:batch_size_t]\n if \"state\" in data:\n h1 = data[\"state\"][\"td_hidden\"][0][:batch_size_t]\n c1 = 
data[\"state\"][\"td_hidden\"][1][:batch_size_t]\n h2 = data[\"state\"][\"lm_hidden\"][0][:batch_size_t]\n c2 = data[\"state\"][\"lm_hidden\"][1][:batch_size_t]\n else:\n h1, c1 = self.init_hidden_state(data[\"texts\"])\n h2, c2 = self.init_hidden_state(data[\"texts\"])\n data[\"state\"] = {\"td_hidden\": (h1, c1), \"lm_hidden\": (h2, c2)}\n registry.register(\"{}_lstm_state\".format(h1.device), data[\"state\"])\n\n return data, batch_size_t\n\n def forward(self, sample_list):\n # Stores the output probabilites.\n scores = sample_list.answers.new_ones(\n (\n sample_list.answers.size(0),\n self.text_processor.max_length,\n self.vocab_size,\n ),\n dtype=torch.float,\n )\n\n decoder = registry.get_decoder_class(self.config[\"inference\"][\"type\"])(self.vocab, self.config)\n\n sample_list = decoder.init_batch(sample_list)\n # batch_size = sample_list.get_batch_size()\n batch_size = sample_list.image_feature_0.size(0)\n data, sample_list, timesteps = self.prepare_data(sample_list, batch_size)\n output = None\n batch_size_t = batch_size\n for t in range(timesteps):\n data, batch_size_t = self.get_data_t(t, data, batch_size_t, output)\n pi_t = data[\"texts\"]\n embedding = self.word_embedding(pi_t)\n attention_feature, _ = self.process_feature_embedding(\n \"image\", sample_list, embedding[:, 0, :], batch_size_t=batch_size_t\n )\n output = self.classifier(attention_feature)\n # Compute decoding\n finish, data, batch_size_t = decoder.decode(t, data, output)\n if finish:\n break\n\n model_output = {\"scores\": scores}\n model_output[\"captions\"] = decoder.get_result()\n\n return model_output\n", "path": "pythia/models/butd.py"}]}
| 3,065 | 523 |
gh_patches_debug_3756
|
rasdani/github-patches
|
git_diff
|
vacanza__python-holidays-1694
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix Carnival date of the "Portugal" calendar
Currently, the package incorrectly indicates that the Carnival "holiday" is on the Monday before the actual date, which in 2024 is 2024/02/13 (YYYY/MM/DD).
It's clear from the method name `_add_carnival_monday()`.
This seems to be the problematic area.
<img width="504" alt="Captura de ecrã 2024-02-16, às 17 56 57" src="https://github.com/vacanza/python-holidays/assets/16542680/bf1ac118-c246-43f3-ae53-535af6742cf6">
I'm more than happy to open a PR.
Thank you for the excellent work 🚀!
</issue>
<code>
[start of holidays/countries/portugal.py]
1 # python-holidays
2 # ---------------
3 # A fast, efficient Python library for generating country, province and state
4 # specific sets of holidays on the fly. It aims to make determining whether a
5 # specific date is a holiday as fast and flexible as possible.
6 #
7 # Authors: dr-prodigy <[email protected]> (c) 2017-2023
8 # ryanss <[email protected]> (c) 2014-2017
9 # Website: https://github.com/dr-prodigy/python-holidays
10 # License: MIT (see LICENSE file)
11
12 from datetime import timedelta as td
13 from gettext import gettext as tr
14
15 from holidays.constants import OPTIONAL, PUBLIC
16 from holidays.groups import ChristianHolidays, InternationalHolidays
17 from holidays.holiday_base import HolidayBase
18
19
20 class Portugal(HolidayBase, ChristianHolidays, InternationalHolidays):
21 """
22 A subclass of :py:class:`HolidayBase` representing public holidays
23 in Portugal.
24
25 References:
26 - https://en.wikipedia.org/wiki/Public_holidays_in_Portugal
27 - `Labour Day <https://www.e-konomista.pt/dia-do-trabalhador/>`_
28 - Portugal Day - Decreto 17.171
29 - Restoration of Independence Day - Gazeta de Lisboa, 8 de Dezembro
30 de 1823 (n.º 290), pp. 1789 e 1790
31 - Azores
32 - https://files.dre.pt/1s/1980/08/19200/23052305.pdf
33 - Madeira
34 - https://files.dre.pt/1s/1979/11/25900/28782878.pdf
35 - https://files.dre.pt/1s/1989/02/02800/04360436.pdf
36 - https://files.dre.pt/1s/2002/11/258a00/71837183.pdf
37 """
38
39 country = "PT"
40 default_language = "pt_PT"
41 supported_categories = (OPTIONAL, PUBLIC)
42
43 # https://en.wikipedia.org/wiki/ISO_3166-2:PT
44 subdivisions = (
45 "01",
46 "02",
47 "03",
48 "04",
49 "05",
50 "06",
51 "07",
52 "08",
53 "09",
54 "10",
55 "11",
56 "12",
57 "13",
58 "14",
59 "15",
60 "16",
61 "17",
62 "18",
63 "20",
64 "30",
65 )
66 _deprecated_subdivisions = ("Ext",)
67 supported_languages = ("en_US", "pt_PT", "uk")
68
69 def __init__(self, *args, **kwargs) -> None:
70 ChristianHolidays.__init__(self)
71 InternationalHolidays.__init__(self)
72 super().__init__(*args, **kwargs)
73
74 def _populate_public_holidays(self):
75 # New Year's Day.
76 self._add_new_years_day(tr("Ano Novo"))
77
78 # Carnival is no longer a holiday, but some companies let workers off.
79 # TODO: recollect the years in which it was a public holiday.
80
81 # Good Friday.
82 self._add_good_friday(tr("Sexta-feira Santa"))
83
84 # Easter Sunday.
85 self._add_easter_sunday(tr("Páscoa"))
86
87 # Revoked holidays in 2013–2015.
88 if self._year <= 2012 or self._year >= 2016:
89 # Corpus Christi.
90 self._add_corpus_christi_day(tr("Corpo de Deus"))
91
92 if self._year >= 1910:
93 # Republic Day.
94 self._add_holiday_oct_5(tr("Implantação da República"))
95
96 # All Saints' Day.
97 self._add_all_saints_day(tr("Dia de Todos os Santos"))
98
99 if self._year >= 1823:
100 # Restoration of Independence Day.
101 self._add_holiday_dec_1(tr("Restauração da Independência"))
102
103 if self._year >= 1974:
104 # Freedom Day.
105 self._add_holiday_apr_25(tr("Dia da Liberdade"))
106
107 # Labor Day.
108 self._add_labor_day(tr("Dia do Trabalhador"))
109
110 if self._year >= 1911:
111 if 1933 <= self._year <= 1973:
112 # Day of Camões, Portugal, and the Portuguese Race.
113 self._add_holiday_jun_10(tr("Dia de Camões, de Portugal e da Raça"))
114 elif self._year >= 1978:
115 self._add_holiday_jun_10(
116 # Day of Portugal, Camões, and the Portuguese Communities.
117 tr("Dia de Portugal, de Camões e das Comunidades Portuguesas")
118 )
119 else:
120 # Portugal Day.
121 self._add_holiday_jun_10(tr("Dia de Portugal"))
122
123 # Assumption Day.
124 self._add_assumption_of_mary_day(tr("Assunção de Nossa Senhora"))
125
126 # Immaculate Conception.
127 self._add_immaculate_conception_day(tr("Imaculada Conceição"))
128
129 # Christmas Day.
130 self._add_christmas_day(tr("Dia de Natal"))
131
132 def _populate_optional_holidays(self):
133 """
134 Adds extended days that most people have as a bonus from their
135 companies:
136
137 - Carnival
138 - the day before and after xmas
139 - the day before the new year
140 - Lisbon's city holiday
141 """
142
143 # TODO: add bridging days:
144 # - get Holidays that occur on Tuesday and add Monday (-1 day)
145 # - get Holidays that occur on Thursday and add Friday (+1 day)
146
147 # Carnival.
148 self._add_carnival_monday(tr("Carnaval"))
149
150 # St. Anthony's Day.
151 self._add_holiday_jun_13(tr("Dia de Santo António"))
152
153 # Christmas Eve.
154 self._add_christmas_eve(tr("Véspera de Natal"))
155
156 # Second Day of Christmas.
157 self._add_christmas_day_two(tr("26 de Dezembro"))
158
159 # New Year's Eve.
160 self._add_new_years_eve(tr("Véspera de Ano Novo"))
161
162 def _populate_subdiv_holidays(self):
163 if self._year >= 1911:
164 super()._populate_subdiv_holidays()
165
166 if self.subdiv == "Ext":
167 self._populate_optional_holidays()
168
169 def _populate_subdiv_01_public_holidays(self):
170 # St. Joanna's Day.
171 self._add_holiday_may_12(tr("Dia de Santa Joana"))
172
173 def _populate_subdiv_02_public_holidays(self):
174 # Ascension Day.
175 self._add_ascension_thursday(tr("Quinta-feira da Ascensão"))
176
177 def _populate_subdiv_03_public_holidays(self):
178 # St. John's Day.
179 self._add_saint_johns_day(tr("Dia de São João"))
180
181 def _populate_subdiv_04_public_holidays(self):
182 # Feast of Our Lady of Graces.
183 self._add_holiday_aug_22(tr("Dia de Nossa Senhora das Graças"))
184
185 def _populate_subdiv_05_public_holidays(self):
186 self._add_holiday(
187 # Feast of Our Lady of Mércoles.
188 tr("Dia de Nossa Senhora de Mércoles"),
189 self._easter_sunday + td(days=+16),
190 )
191
192 def _populate_subdiv_06_public_holidays(self):
193 # St. Elizabeth's Day.
194 self._add_holiday_jul_4(tr("Dia de Santa Isabel"))
195
196 def _populate_subdiv_07_public_holidays(self):
197 # St. Peter's Day.
198 self._add_saints_peter_and_paul_day(tr("Dia de São Pedro"))
199
200 def _populate_subdiv_08_public_holidays(self):
201 # Municipal Holiday of Faro.
202 self._add_holiday_sep_7(tr("Dia do Município de Faro"))
203
204 def _populate_subdiv_09_public_holidays(self):
205 # Municipal Holiday of Guarda.
206 self._add_holiday_nov_27(tr("Dia do Município da Guarda"))
207
208 def _populate_subdiv_10_public_holidays(self):
209 # Municipal Holiday of Leiria.
210 self._add_holiday_may_22(tr("Dia do Município de Leiria"))
211
212 def _populate_subdiv_11_public_holidays(self):
213 self._add_holiday_jun_13(tr("Dia de Santo António"))
214
215 def _populate_subdiv_12_public_holidays(self):
216 # Municipal Holiday of Portalegre.
217 self._add_holiday_may_23(tr("Dia do Município de Portalegre"))
218
219 def _populate_subdiv_13_public_holidays(self):
220 self._add_saint_johns_day(tr("Dia de São João"))
221
222 def _populate_subdiv_14_public_holidays(self):
223 # St. Joseph's Day.
224 self._add_saint_josephs_day(tr("Dia de São José"))
225
226 def _populate_subdiv_15_public_holidays(self):
227 # Bocage Day.
228 self._add_holiday_sep_15(tr("Dia de Bocage"))
229
230 def _populate_subdiv_16_public_holidays(self):
231 # Feast of Our Lady of Sorrows.
232 self._add_holiday_aug_20(tr("Dia de Nossa Senhora da Agonia"))
233
234 def _populate_subdiv_17_public_holidays(self):
235 self._add_holiday_jun_13(tr("Dia de Santo António"))
236
237 def _populate_subdiv_18_public_holidays(self):
238 # St. Matthew's Day.
239 self._add_holiday_sep_21(tr("Dia de São Mateus"))
240
241 def _populate_subdiv_20_public_holidays(self):
242 if self._year >= 1981:
243 # Day of the Autonomous Region of the Azores.
244 self._add_whit_monday(tr("Dia da Região Autónoma dos Açores"))
245
246 def _populate_subdiv_30_public_holidays(self):
247 if self._year >= 1979:
248 self._add_holiday_jul_1(
249 # Day of the Autonomous Region of Madeira and the Madeiran Communities.
250 tr("Dia da Região Autónoma da Madeira e das Comunidades Madeirenses")
251 if self._year >= 1989
252 # Day of the Autonomous Region of Madeira.
253 else tr("Dia da Região Autónoma da Madeira")
254 )
255
256 if self._year >= 2002:
257 # 1st Octave.
258 self._add_christmas_day_two(tr("Primeira Oitava"))
259
260
261 class PT(Portugal):
262 pass
263
264
265 class PRT(Portugal):
266 pass
267
[end of holidays/countries/portugal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/holidays/countries/portugal.py b/holidays/countries/portugal.py
--- a/holidays/countries/portugal.py
+++ b/holidays/countries/portugal.py
@@ -145,7 +145,7 @@
# - get Holidays that occur on Thursday and add Friday (+1 day)
# Carnival.
- self._add_carnival_monday(tr("Carnaval"))
+ self._add_carnival_tuesday(tr("Carnaval"))
# St. Anthony's Day.
self._add_holiday_jun_13(tr("Dia de Santo António"))
|
{"golden_diff": "diff --git a/holidays/countries/portugal.py b/holidays/countries/portugal.py\n--- a/holidays/countries/portugal.py\n+++ b/holidays/countries/portugal.py\n@@ -145,7 +145,7 @@\n # - get Holidays that occur on Thursday and add Friday (+1 day)\n \n # Carnival.\n- self._add_carnival_monday(tr(\"Carnaval\"))\n+ self._add_carnival_tuesday(tr(\"Carnaval\"))\n \n # St. Anthony's Day.\n self._add_holiday_jun_13(tr(\"Dia de Santo Ant\u00f3nio\"))\n", "issue": "Fix Carnival date of the \"Portugal\" calendar\nCurrently, the package incorrectly indicates that the Carnival \"holiday\" is on the Monday before the actual date, which in 2024 is 2024/02/13 (YYYY/MM/DD).\r\nIt's clear from the method name `_add_carnival_monday()`.\r\n\r\nThis seems to be the problematic area.\r\n<img width=\"504\" alt=\"Captura de ecra\u0303 2024-02-16, a\u0300s 17 56 57\" src=\"https://github.com/vacanza/python-holidays/assets/16542680/bf1ac118-c246-43f3-ae53-535af6742cf6\">\r\n\r\nI'm more than happy to open a PR.\r\n\r\nThank you for the excellent work \ud83d\ude80!\n", "before_files": [{"content": "# python-holidays\n# ---------------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Authors: dr-prodigy <[email protected]> (c) 2017-2023\n# ryanss <[email protected]> (c) 2014-2017\n# Website: https://github.com/dr-prodigy/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom datetime import timedelta as td\nfrom gettext import gettext as tr\n\nfrom holidays.constants import OPTIONAL, PUBLIC\nfrom holidays.groups import ChristianHolidays, InternationalHolidays\nfrom holidays.holiday_base import HolidayBase\n\n\nclass Portugal(HolidayBase, ChristianHolidays, InternationalHolidays):\n \"\"\"\n A subclass of :py:class:`HolidayBase` representing public holidays\n in Portugal.\n\n References:\n - https://en.wikipedia.org/wiki/Public_holidays_in_Portugal\n - `Labour Day <https://www.e-konomista.pt/dia-do-trabalhador/>`_\n - Portugal Day - Decreto 17.171\n - Restoration of Independence Day - Gazeta de Lisboa, 8 de Dezembro\n de 1823 (n.\u00ba 290), pp. 
1789 e 1790\n - Azores\n - https://files.dre.pt/1s/1980/08/19200/23052305.pdf\n - Madeira\n - https://files.dre.pt/1s/1979/11/25900/28782878.pdf\n - https://files.dre.pt/1s/1989/02/02800/04360436.pdf\n - https://files.dre.pt/1s/2002/11/258a00/71837183.pdf\n \"\"\"\n\n country = \"PT\"\n default_language = \"pt_PT\"\n supported_categories = (OPTIONAL, PUBLIC)\n\n # https://en.wikipedia.org/wiki/ISO_3166-2:PT\n subdivisions = (\n \"01\",\n \"02\",\n \"03\",\n \"04\",\n \"05\",\n \"06\",\n \"07\",\n \"08\",\n \"09\",\n \"10\",\n \"11\",\n \"12\",\n \"13\",\n \"14\",\n \"15\",\n \"16\",\n \"17\",\n \"18\",\n \"20\",\n \"30\",\n )\n _deprecated_subdivisions = (\"Ext\",)\n supported_languages = (\"en_US\", \"pt_PT\", \"uk\")\n\n def __init__(self, *args, **kwargs) -> None:\n ChristianHolidays.__init__(self)\n InternationalHolidays.__init__(self)\n super().__init__(*args, **kwargs)\n\n def _populate_public_holidays(self):\n # New Year's Day.\n self._add_new_years_day(tr(\"Ano Novo\"))\n\n # Carnival is no longer a holiday, but some companies let workers off.\n # TODO: recollect the years in which it was a public holiday.\n\n # Good Friday.\n self._add_good_friday(tr(\"Sexta-feira Santa\"))\n\n # Easter Sunday.\n self._add_easter_sunday(tr(\"P\u00e1scoa\"))\n\n # Revoked holidays in 2013\u20132015.\n if self._year <= 2012 or self._year >= 2016:\n # Corpus Christi.\n self._add_corpus_christi_day(tr(\"Corpo de Deus\"))\n\n if self._year >= 1910:\n # Republic Day.\n self._add_holiday_oct_5(tr(\"Implanta\u00e7\u00e3o da Rep\u00fablica\"))\n\n # All Saints' Day.\n self._add_all_saints_day(tr(\"Dia de Todos os Santos\"))\n\n if self._year >= 1823:\n # Restoration of Independence Day.\n self._add_holiday_dec_1(tr(\"Restaura\u00e7\u00e3o da Independ\u00eancia\"))\n\n if self._year >= 1974:\n # Freedom Day.\n self._add_holiday_apr_25(tr(\"Dia da Liberdade\"))\n\n # Labor Day.\n self._add_labor_day(tr(\"Dia do Trabalhador\"))\n\n if self._year >= 1911:\n if 1933 <= self._year <= 1973:\n # Day of Cam\u00f5es, Portugal, and the Portuguese Race.\n self._add_holiday_jun_10(tr(\"Dia de Cam\u00f5es, de Portugal e da Ra\u00e7a\"))\n elif self._year >= 1978:\n self._add_holiday_jun_10(\n # Day of Portugal, Cam\u00f5es, and the Portuguese Communities.\n tr(\"Dia de Portugal, de Cam\u00f5es e das Comunidades Portuguesas\")\n )\n else:\n # Portugal Day.\n self._add_holiday_jun_10(tr(\"Dia de Portugal\"))\n\n # Assumption Day.\n self._add_assumption_of_mary_day(tr(\"Assun\u00e7\u00e3o de Nossa Senhora\"))\n\n # Immaculate Conception.\n self._add_immaculate_conception_day(tr(\"Imaculada Concei\u00e7\u00e3o\"))\n\n # Christmas Day.\n self._add_christmas_day(tr(\"Dia de Natal\"))\n\n def _populate_optional_holidays(self):\n \"\"\"\n Adds extended days that most people have as a bonus from their\n companies:\n\n - Carnival\n - the day before and after xmas\n - the day before the new year\n - Lisbon's city holiday\n \"\"\"\n\n # TODO: add bridging days:\n # - get Holidays that occur on Tuesday and add Monday (-1 day)\n # - get Holidays that occur on Thursday and add Friday (+1 day)\n\n # Carnival.\n self._add_carnival_monday(tr(\"Carnaval\"))\n\n # St. 
Anthony's Day.\n self._add_holiday_jun_13(tr(\"Dia de Santo Ant\u00f3nio\"))\n\n # Christmas Eve.\n self._add_christmas_eve(tr(\"V\u00e9spera de Natal\"))\n\n # Second Day of Christmas.\n self._add_christmas_day_two(tr(\"26 de Dezembro\"))\n\n # New Year's Eve.\n self._add_new_years_eve(tr(\"V\u00e9spera de Ano Novo\"))\n\n def _populate_subdiv_holidays(self):\n if self._year >= 1911:\n super()._populate_subdiv_holidays()\n\n if self.subdiv == \"Ext\":\n self._populate_optional_holidays()\n\n def _populate_subdiv_01_public_holidays(self):\n # St. Joanna's Day.\n self._add_holiday_may_12(tr(\"Dia de Santa Joana\"))\n\n def _populate_subdiv_02_public_holidays(self):\n # Ascension Day.\n self._add_ascension_thursday(tr(\"Quinta-feira da Ascens\u00e3o\"))\n\n def _populate_subdiv_03_public_holidays(self):\n # St. John's Day.\n self._add_saint_johns_day(tr(\"Dia de S\u00e3o Jo\u00e3o\"))\n\n def _populate_subdiv_04_public_holidays(self):\n # Feast of Our Lady of Graces.\n self._add_holiday_aug_22(tr(\"Dia de Nossa Senhora das Gra\u00e7as\"))\n\n def _populate_subdiv_05_public_holidays(self):\n self._add_holiday(\n # Feast of Our Lady of M\u00e9rcoles.\n tr(\"Dia de Nossa Senhora de M\u00e9rcoles\"),\n self._easter_sunday + td(days=+16),\n )\n\n def _populate_subdiv_06_public_holidays(self):\n # St. Elizabeth's Day.\n self._add_holiday_jul_4(tr(\"Dia de Santa Isabel\"))\n\n def _populate_subdiv_07_public_holidays(self):\n # St. Peter's Day.\n self._add_saints_peter_and_paul_day(tr(\"Dia de S\u00e3o Pedro\"))\n\n def _populate_subdiv_08_public_holidays(self):\n # Municipal Holiday of Faro.\n self._add_holiday_sep_7(tr(\"Dia do Munic\u00edpio de Faro\"))\n\n def _populate_subdiv_09_public_holidays(self):\n # Municipal Holiday of Guarda.\n self._add_holiday_nov_27(tr(\"Dia do Munic\u00edpio da Guarda\"))\n\n def _populate_subdiv_10_public_holidays(self):\n # Municipal Holiday of Leiria.\n self._add_holiday_may_22(tr(\"Dia do Munic\u00edpio de Leiria\"))\n\n def _populate_subdiv_11_public_holidays(self):\n self._add_holiday_jun_13(tr(\"Dia de Santo Ant\u00f3nio\"))\n\n def _populate_subdiv_12_public_holidays(self):\n # Municipal Holiday of Portalegre.\n self._add_holiday_may_23(tr(\"Dia do Munic\u00edpio de Portalegre\"))\n\n def _populate_subdiv_13_public_holidays(self):\n self._add_saint_johns_day(tr(\"Dia de S\u00e3o Jo\u00e3o\"))\n\n def _populate_subdiv_14_public_holidays(self):\n # St. Joseph's Day.\n self._add_saint_josephs_day(tr(\"Dia de S\u00e3o Jos\u00e9\"))\n\n def _populate_subdiv_15_public_holidays(self):\n # Bocage Day.\n self._add_holiday_sep_15(tr(\"Dia de Bocage\"))\n\n def _populate_subdiv_16_public_holidays(self):\n # Feast of Our Lady of Sorrows.\n self._add_holiday_aug_20(tr(\"Dia de Nossa Senhora da Agonia\"))\n\n def _populate_subdiv_17_public_holidays(self):\n self._add_holiday_jun_13(tr(\"Dia de Santo Ant\u00f3nio\"))\n\n def _populate_subdiv_18_public_holidays(self):\n # St. 
Matthew's Day.\n self._add_holiday_sep_21(tr(\"Dia de S\u00e3o Mateus\"))\n\n def _populate_subdiv_20_public_holidays(self):\n if self._year >= 1981:\n # Day of the Autonomous Region of the Azores.\n self._add_whit_monday(tr(\"Dia da Regi\u00e3o Aut\u00f3noma dos A\u00e7ores\"))\n\n def _populate_subdiv_30_public_holidays(self):\n if self._year >= 1979:\n self._add_holiday_jul_1(\n # Day of the Autonomous Region of Madeira and the Madeiran Communities.\n tr(\"Dia da Regi\u00e3o Aut\u00f3noma da Madeira e das Comunidades Madeirenses\")\n if self._year >= 1989\n # Day of the Autonomous Region of Madeira.\n else tr(\"Dia da Regi\u00e3o Aut\u00f3noma da Madeira\")\n )\n\n if self._year >= 2002:\n # 1st Octave.\n self._add_christmas_day_two(tr(\"Primeira Oitava\"))\n\n\nclass PT(Portugal):\n pass\n\n\nclass PRT(Portugal):\n pass\n", "path": "holidays/countries/portugal.py"}]}
| 3,978 | 136 |
gh_patches_debug_6735
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-1250
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fails to start when reverse proxy is configured with boolean
<!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**:
- CTFd Version/Commit: 22c1323
- Operating System: Ubuntu 18.04
- Web Browser and Version: n/a
**What happened?**
Running with docker-compose, CTFd fails to start due to bad type checking on the reverse proxy config var
**What did you expect to happen?**
Should have properly started
**How to reproduce your issue**
Set `REVERSE_PROXY` in CTFd/config.py to `True`
**Any associated stack traces or error logs**
```
ctfd_1 | Traceback (most recent call last):
ctfd_1 | File "manage.py", line 9, in <module>
ctfd_1 | app = create_app()
ctfd_1 | File "/opt/CTFd/CTFd/__init__.py", line 180, in create_app
ctfd_1 | if "," in reverse_proxy:
ctfd_1 | TypeError: argument of type 'bool' is not iterable
ctfd_1 | Traceback (most recent call last):
ctfd_1 | File "manage.py", line 9, in <module>
ctfd_1 | app = create_app()
ctfd_1 | File "/opt/CTFd/CTFd/__init__.py", line 180, in create_app
ctfd_1 | if "," in reverse_proxy:
ctfd_1 | TypeError: argument of type 'bool' is not iterable
ctfd_ctfd_1 exited with code 1
```
I have a patch for this ready to go and will make a pull request.
</issue>
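The traceback comes from the membership test `"," in reverse_proxy`, which assumes a string value. A minimal sketch of the kind of guard that avoids the crash (illustrative only — the actual patch further down in this record checks `type(reverse_proxy) is str`; `app` and `ProxyFix` are as in the code listing below):

```python
reverse_proxy = app.config.get("REVERSE_PROXY")
if reverse_proxy:
    if isinstance(reverse_proxy, str) and "," in reverse_proxy:
        # e.g. REVERSE_PROXY = "1,1,1,1,1"
        proxyfix_args = [int(i) for i in reverse_proxy.split(",")]
        app.wsgi_app = ProxyFix(app.wsgi_app, None, *proxyfix_args)
    else:
        # REVERSE_PROXY = True (or any other truthy non-string) keeps the defaults
        app.wsgi_app = ProxyFix(
            app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_port=1, x_prefix=1
        )
```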
<code>
[start of CTFd/__init__.py]
1 import datetime
2 import os
3 import sys
4 from distutils.version import StrictVersion
5
6 from flask import Flask, Request
7 from flask_migrate import upgrade
8 from jinja2 import FileSystemLoader
9 from jinja2.sandbox import SandboxedEnvironment
10 from six.moves import input
11 from werkzeug.middleware.proxy_fix import ProxyFix
12 from werkzeug.utils import cached_property
13
14 from CTFd import utils
15 from CTFd.plugins import init_plugins
16 from CTFd.utils.crypto import sha256
17 from CTFd.utils.initialization import (
18 init_events,
19 init_logs,
20 init_request_processors,
21 init_template_filters,
22 init_template_globals,
23 )
24 from CTFd.utils.migrations import create_database, migrations, stamp_latest_revision
25 from CTFd.utils.sessions import CachingSessionInterface
26 from CTFd.utils.updates import update_check
27
28 # Hack to support Unicode in Python 2 properly
29 if sys.version_info[0] < 3:
30 reload(sys) # noqa: F821
31 sys.setdefaultencoding("utf-8")
32
33 __version__ = "2.3.1"
34
35
36 class CTFdRequest(Request):
37 @cached_property
38 def path(self):
39 """
40 Hijack the original Flask request path because it does not account for subdirectory deployments in an intuitive
41 manner. We append script_root so that the path always points to the full path as seen in the browser.
42 e.g. /subdirectory/path/route vs /path/route
43
44 :return: string
45 """
46 return self.script_root + super(CTFdRequest, self).path
47
48
49 class CTFdFlask(Flask):
50 def __init__(self, *args, **kwargs):
51 """Overriden Jinja constructor setting a custom jinja_environment"""
52 self.jinja_environment = SandboxedBaseEnvironment
53 self.session_interface = CachingSessionInterface(key_prefix="session")
54 self.request_class = CTFdRequest
55
56 # Store server start time
57 self.start_time = datetime.datetime.utcnow()
58
59 # Create generally unique run identifier
60 self.run_id = sha256(str(self.start_time))[0:8]
61 Flask.__init__(self, *args, **kwargs)
62
63 def create_jinja_environment(self):
64 """Overridden jinja environment constructor"""
65 return super(CTFdFlask, self).create_jinja_environment()
66
67
68 class SandboxedBaseEnvironment(SandboxedEnvironment):
69 """SandboxEnvironment that mimics the Flask BaseEnvironment"""
70
71 def __init__(self, app, **options):
72 if "loader" not in options:
73 options["loader"] = app.create_global_jinja_loader()
74 # Disable cache entirely so that themes can be switched (#662)
75 # If the cache is enabled, switching themes will cause odd rendering errors
76 SandboxedEnvironment.__init__(self, cache_size=0, **options)
77 self.app = app
78
79
80 class ThemeLoader(FileSystemLoader):
81 """Custom FileSystemLoader that switches themes based on the configuration value"""
82
83 def __init__(self, searchpath, encoding="utf-8", followlinks=False):
84 super(ThemeLoader, self).__init__(searchpath, encoding, followlinks)
85 self.overriden_templates = {}
86
87 def get_source(self, environment, template):
88 # Check if the template has been overriden
89 if template in self.overriden_templates:
90 return self.overriden_templates[template], template, True
91
92 # Check if the template requested is for the admin panel
93 if template.startswith("admin/"):
94 template = template[6:] # Strip out admin/
95 template = "/".join(["admin", "templates", template])
96 return super(ThemeLoader, self).get_source(environment, template)
97
98 # Load regular theme data
99 theme = utils.get_config("ctf_theme")
100 template = "/".join([theme, "templates", template])
101 return super(ThemeLoader, self).get_source(environment, template)
102
103
104 def confirm_upgrade():
105 if sys.stdin.isatty():
106 print("/*\\ CTFd has updated and must update the database! /*\\")
107 print("/*\\ Please backup your database before proceeding! /*\\")
108 print("/*\\ CTFd maintainers are not responsible for any data loss! /*\\")
109 if input("Run database migrations (Y/N)").lower().strip() == "y":
110 return True
111 else:
112 print("/*\\ Ignored database migrations... /*\\")
113 return False
114 else:
115 return True
116
117
118 def run_upgrade():
119 upgrade()
120 utils.set_config("ctf_version", __version__)
121
122
123 def create_app(config="CTFd.config.Config"):
124 app = CTFdFlask(__name__)
125 with app.app_context():
126 app.config.from_object(config)
127
128 theme_loader = ThemeLoader(
129 os.path.join(app.root_path, "themes"), followlinks=True
130 )
131 app.jinja_loader = theme_loader
132
133 from CTFd.models import ( # noqa: F401
134 db,
135 Teams,
136 Solves,
137 Challenges,
138 Fails,
139 Flags,
140 Tags,
141 Files,
142 Tracking,
143 )
144
145 url = create_database()
146
147 # This allows any changes to the SQLALCHEMY_DATABASE_URI to get pushed back in
148 # This is mostly so we can force MySQL's charset
149 app.config["SQLALCHEMY_DATABASE_URI"] = str(url)
150
151 # Register database
152 db.init_app(app)
153
154 # Register Flask-Migrate
155 migrations.init_app(app, db)
156
157 # Alembic sqlite support is lacking so we should just create_all anyway
158 if url.drivername.startswith("sqlite"):
159 db.create_all()
160 stamp_latest_revision()
161 else:
162 # This creates tables instead of db.create_all()
163 # Allows migrations to happen properly
164 upgrade()
165
166 from CTFd.models import ma
167
168 ma.init_app(app)
169
170 app.db = db
171 app.VERSION = __version__
172
173 from CTFd.cache import cache
174
175 cache.init_app(app)
176 app.cache = cache
177
178 reverse_proxy = app.config.get("REVERSE_PROXY")
179 if reverse_proxy:
180 if "," in reverse_proxy:
181 proxyfix_args = [int(i) for i in reverse_proxy.split(",")]
182 app.wsgi_app = ProxyFix(app.wsgi_app, None, *proxyfix_args)
183 else:
184 app.wsgi_app = ProxyFix(
185 app.wsgi_app,
186 num_proxies=None,
187 x_for=1,
188 x_proto=1,
189 x_host=1,
190 x_port=1,
191 x_prefix=1,
192 )
193
194 version = utils.get_config("ctf_version")
195
196 # Upgrading from an older version of CTFd
197 if version and (StrictVersion(version) < StrictVersion(__version__)):
198 if confirm_upgrade():
199 run_upgrade()
200 else:
201 exit()
202
203 if not version:
204 utils.set_config("ctf_version", __version__)
205
206 if not utils.get_config("ctf_theme"):
207 utils.set_config("ctf_theme", "core")
208
209 update_check(force=True)
210
211 init_request_processors(app)
212 init_template_filters(app)
213 init_template_globals(app)
214
215 # Importing here allows tests to use sensible names (e.g. api instead of api_bp)
216 from CTFd.views import views
217 from CTFd.teams import teams
218 from CTFd.users import users
219 from CTFd.challenges import challenges
220 from CTFd.scoreboard import scoreboard
221 from CTFd.auth import auth
222 from CTFd.admin import admin
223 from CTFd.api import api
224 from CTFd.events import events
225 from CTFd.errors import page_not_found, forbidden, general_error, gateway_error
226
227 app.register_blueprint(views)
228 app.register_blueprint(teams)
229 app.register_blueprint(users)
230 app.register_blueprint(challenges)
231 app.register_blueprint(scoreboard)
232 app.register_blueprint(auth)
233 app.register_blueprint(api)
234 app.register_blueprint(events)
235
236 app.register_blueprint(admin)
237
238 app.register_error_handler(404, page_not_found)
239 app.register_error_handler(403, forbidden)
240 app.register_error_handler(500, general_error)
241 app.register_error_handler(502, gateway_error)
242
243 init_logs(app)
244 init_events(app)
245 init_plugins(app)
246
247 return app
248
[end of CTFd/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/CTFd/__init__.py b/CTFd/__init__.py
--- a/CTFd/__init__.py
+++ b/CTFd/__init__.py
@@ -177,7 +177,7 @@
reverse_proxy = app.config.get("REVERSE_PROXY")
if reverse_proxy:
- if "," in reverse_proxy:
+ if type(reverse_proxy) is str and "," in reverse_proxy:
proxyfix_args = [int(i) for i in reverse_proxy.split(",")]
app.wsgi_app = ProxyFix(app.wsgi_app, None, *proxyfix_args)
else:
|
{"golden_diff": "diff --git a/CTFd/__init__.py b/CTFd/__init__.py\n--- a/CTFd/__init__.py\n+++ b/CTFd/__init__.py\n@@ -177,7 +177,7 @@\n \n reverse_proxy = app.config.get(\"REVERSE_PROXY\")\n if reverse_proxy:\n- if \",\" in reverse_proxy:\n+ if type(reverse_proxy) is str and \",\" in reverse_proxy:\n proxyfix_args = [int(i) for i in reverse_proxy.split(\",\")]\n app.wsgi_app = ProxyFix(app.wsgi_app, None, *proxyfix_args)\n else:\n", "issue": "Fails to start when reverse proxy is configured with boolean\n<!--\r\nIf this is a bug report please fill out the template below.\r\n\r\nIf this is a feature request please describe the behavior that you'd like to see.\r\n-->\r\n\r\n**Environment**:\r\n\r\n - CTFd Version/Commit: 22c1323\r\n - Operating System: Ubuntu 18.04\r\n - Web Browser and Version: n/a\r\n\r\n**What happened?**\r\n\r\nRunning with docker-compose, fails to start due to bad type checking on the reverse proxy config var\r\n\r\n**What did you expect to happen?**\r\n\r\nShould have properly started\r\n\r\n**How to reproduce your issue**\r\n\r\nSet `REVERSE_PROXY` in CTFd/config.py to `True`\r\n\r\n**Any associated stack traces or error logs**\r\n\r\n```\r\nctfd_1 | Traceback (most recent call last):\r\nctfd_1 | File \"manage.py\", line 9, in <module>\r\nctfd_1 | app = create_app()\r\nctfd_1 | File \"/opt/CTFd/CTFd/__init__.py\", line 180, in create_app\r\nctfd_1 | if \",\" in reverse_proxy:\r\nctfd_1 | TypeError: argument of type 'bool' is not iterable\r\nctfd_1 | Traceback (most recent call last):\r\nctfd_1 | File \"manage.py\", line 9, in <module>\r\nctfd_1 | app = create_app()\r\nctfd_1 | File \"/opt/CTFd/CTFd/__init__.py\", line 180, in create_app\r\nctfd_1 | if \",\" in reverse_proxy:\r\nctfd_1 | TypeError: argument of type 'bool' is not iterable\r\nctfd_ctfd_1 exited with code 1\r\n```\r\n\r\nI have a patch for this ready to go and will make a pull request.\r\n\nFails to start when reverse proxy is configured with boolean\n<!--\r\nIf this is a bug report please fill out the template below.\r\n\r\nIf this is a feature request please describe the behavior that you'd like to see.\r\n-->\r\n\r\n**Environment**:\r\n\r\n - CTFd Version/Commit: 22c1323\r\n - Operating System: Ubuntu 18.04\r\n - Web Browser and Version: n/a\r\n\r\n**What happened?**\r\n\r\nRunning with docker-compose, fails to start due to bad type checking on the reverse proxy config var\r\n\r\n**What did you expect to happen?**\r\n\r\nShould have properly started\r\n\r\n**How to reproduce your issue**\r\n\r\nSet `REVERSE_PROXY` in CTFd/config.py to `True`\r\n\r\n**Any associated stack traces or error logs**\r\n\r\n```\r\nctfd_1 | Traceback (most recent call last):\r\nctfd_1 | File \"manage.py\", line 9, in <module>\r\nctfd_1 | app = create_app()\r\nctfd_1 | File \"/opt/CTFd/CTFd/__init__.py\", line 180, in create_app\r\nctfd_1 | if \",\" in reverse_proxy:\r\nctfd_1 | TypeError: argument of type 'bool' is not iterable\r\nctfd_1 | Traceback (most recent call last):\r\nctfd_1 | File \"manage.py\", line 9, in <module>\r\nctfd_1 | app = create_app()\r\nctfd_1 | File \"/opt/CTFd/CTFd/__init__.py\", line 180, in create_app\r\nctfd_1 | if \",\" in reverse_proxy:\r\nctfd_1 | TypeError: argument of type 'bool' is not iterable\r\nctfd_ctfd_1 exited with code 1\r\n```\r\n\r\nI have a patch for this ready to go and will make a pull request.\r\n\n", "before_files": [{"content": "import datetime\nimport os\nimport sys\nfrom distutils.version import StrictVersion\n\nfrom flask import Flask, Request\nfrom flask_migrate import 
upgrade\nfrom jinja2 import FileSystemLoader\nfrom jinja2.sandbox import SandboxedEnvironment\nfrom six.moves import input\nfrom werkzeug.middleware.proxy_fix import ProxyFix\nfrom werkzeug.utils import cached_property\n\nfrom CTFd import utils\nfrom CTFd.plugins import init_plugins\nfrom CTFd.utils.crypto import sha256\nfrom CTFd.utils.initialization import (\n init_events,\n init_logs,\n init_request_processors,\n init_template_filters,\n init_template_globals,\n)\nfrom CTFd.utils.migrations import create_database, migrations, stamp_latest_revision\nfrom CTFd.utils.sessions import CachingSessionInterface\nfrom CTFd.utils.updates import update_check\n\n# Hack to support Unicode in Python 2 properly\nif sys.version_info[0] < 3:\n reload(sys) # noqa: F821\n sys.setdefaultencoding(\"utf-8\")\n\n__version__ = \"2.3.1\"\n\n\nclass CTFdRequest(Request):\n @cached_property\n def path(self):\n \"\"\"\n Hijack the original Flask request path because it does not account for subdirectory deployments in an intuitive\n manner. We append script_root so that the path always points to the full path as seen in the browser.\n e.g. /subdirectory/path/route vs /path/route\n\n :return: string\n \"\"\"\n return self.script_root + super(CTFdRequest, self).path\n\n\nclass CTFdFlask(Flask):\n def __init__(self, *args, **kwargs):\n \"\"\"Overriden Jinja constructor setting a custom jinja_environment\"\"\"\n self.jinja_environment = SandboxedBaseEnvironment\n self.session_interface = CachingSessionInterface(key_prefix=\"session\")\n self.request_class = CTFdRequest\n\n # Store server start time\n self.start_time = datetime.datetime.utcnow()\n\n # Create generally unique run identifier\n self.run_id = sha256(str(self.start_time))[0:8]\n Flask.__init__(self, *args, **kwargs)\n\n def create_jinja_environment(self):\n \"\"\"Overridden jinja environment constructor\"\"\"\n return super(CTFdFlask, self).create_jinja_environment()\n\n\nclass SandboxedBaseEnvironment(SandboxedEnvironment):\n \"\"\"SandboxEnvironment that mimics the Flask BaseEnvironment\"\"\"\n\n def __init__(self, app, **options):\n if \"loader\" not in options:\n options[\"loader\"] = app.create_global_jinja_loader()\n # Disable cache entirely so that themes can be switched (#662)\n # If the cache is enabled, switching themes will cause odd rendering errors\n SandboxedEnvironment.__init__(self, cache_size=0, **options)\n self.app = app\n\n\nclass ThemeLoader(FileSystemLoader):\n \"\"\"Custom FileSystemLoader that switches themes based on the configuration value\"\"\"\n\n def __init__(self, searchpath, encoding=\"utf-8\", followlinks=False):\n super(ThemeLoader, self).__init__(searchpath, encoding, followlinks)\n self.overriden_templates = {}\n\n def get_source(self, environment, template):\n # Check if the template has been overriden\n if template in self.overriden_templates:\n return self.overriden_templates[template], template, True\n\n # Check if the template requested is for the admin panel\n if template.startswith(\"admin/\"):\n template = template[6:] # Strip out admin/\n template = \"/\".join([\"admin\", \"templates\", template])\n return super(ThemeLoader, self).get_source(environment, template)\n\n # Load regular theme data\n theme = utils.get_config(\"ctf_theme\")\n template = \"/\".join([theme, \"templates\", template])\n return super(ThemeLoader, self).get_source(environment, template)\n\n\ndef confirm_upgrade():\n if sys.stdin.isatty():\n print(\"/*\\\\ CTFd has updated and must update the database! 
/*\\\\\")\n print(\"/*\\\\ Please backup your database before proceeding! /*\\\\\")\n print(\"/*\\\\ CTFd maintainers are not responsible for any data loss! /*\\\\\")\n if input(\"Run database migrations (Y/N)\").lower().strip() == \"y\":\n return True\n else:\n print(\"/*\\\\ Ignored database migrations... /*\\\\\")\n return False\n else:\n return True\n\n\ndef run_upgrade():\n upgrade()\n utils.set_config(\"ctf_version\", __version__)\n\n\ndef create_app(config=\"CTFd.config.Config\"):\n app = CTFdFlask(__name__)\n with app.app_context():\n app.config.from_object(config)\n\n theme_loader = ThemeLoader(\n os.path.join(app.root_path, \"themes\"), followlinks=True\n )\n app.jinja_loader = theme_loader\n\n from CTFd.models import ( # noqa: F401\n db,\n Teams,\n Solves,\n Challenges,\n Fails,\n Flags,\n Tags,\n Files,\n Tracking,\n )\n\n url = create_database()\n\n # This allows any changes to the SQLALCHEMY_DATABASE_URI to get pushed back in\n # This is mostly so we can force MySQL's charset\n app.config[\"SQLALCHEMY_DATABASE_URI\"] = str(url)\n\n # Register database\n db.init_app(app)\n\n # Register Flask-Migrate\n migrations.init_app(app, db)\n\n # Alembic sqlite support is lacking so we should just create_all anyway\n if url.drivername.startswith(\"sqlite\"):\n db.create_all()\n stamp_latest_revision()\n else:\n # This creates tables instead of db.create_all()\n # Allows migrations to happen properly\n upgrade()\n\n from CTFd.models import ma\n\n ma.init_app(app)\n\n app.db = db\n app.VERSION = __version__\n\n from CTFd.cache import cache\n\n cache.init_app(app)\n app.cache = cache\n\n reverse_proxy = app.config.get(\"REVERSE_PROXY\")\n if reverse_proxy:\n if \",\" in reverse_proxy:\n proxyfix_args = [int(i) for i in reverse_proxy.split(\",\")]\n app.wsgi_app = ProxyFix(app.wsgi_app, None, *proxyfix_args)\n else:\n app.wsgi_app = ProxyFix(\n app.wsgi_app,\n num_proxies=None,\n x_for=1,\n x_proto=1,\n x_host=1,\n x_port=1,\n x_prefix=1,\n )\n\n version = utils.get_config(\"ctf_version\")\n\n # Upgrading from an older version of CTFd\n if version and (StrictVersion(version) < StrictVersion(__version__)):\n if confirm_upgrade():\n run_upgrade()\n else:\n exit()\n\n if not version:\n utils.set_config(\"ctf_version\", __version__)\n\n if not utils.get_config(\"ctf_theme\"):\n utils.set_config(\"ctf_theme\", \"core\")\n\n update_check(force=True)\n\n init_request_processors(app)\n init_template_filters(app)\n init_template_globals(app)\n\n # Importing here allows tests to use sensible names (e.g. api instead of api_bp)\n from CTFd.views import views\n from CTFd.teams import teams\n from CTFd.users import users\n from CTFd.challenges import challenges\n from CTFd.scoreboard import scoreboard\n from CTFd.auth import auth\n from CTFd.admin import admin\n from CTFd.api import api\n from CTFd.events import events\n from CTFd.errors import page_not_found, forbidden, general_error, gateway_error\n\n app.register_blueprint(views)\n app.register_blueprint(teams)\n app.register_blueprint(users)\n app.register_blueprint(challenges)\n app.register_blueprint(scoreboard)\n app.register_blueprint(auth)\n app.register_blueprint(api)\n app.register_blueprint(events)\n\n app.register_blueprint(admin)\n\n app.register_error_handler(404, page_not_found)\n app.register_error_handler(403, forbidden)\n app.register_error_handler(500, general_error)\n app.register_error_handler(502, gateway_error)\n\n init_logs(app)\n init_events(app)\n init_plugins(app)\n\n return app\n", "path": "CTFd/__init__.py"}]}
| 3,813 | 135 |
gh_patches_debug_22437
|
rasdani/github-patches
|
git_diff
|
openmc-dev__openmc-2540
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MicroXS reactions arg in docstring not available to users
While reading through the MicroXS class I noted we have an argument in the docstring called ```reactions```, but no option for the user to set it, as it is not an accepted keyword argument.
https://github.com/openmc-dev/openmc/blob/6c7f857e1e98ce4fd5b39d8be034308596628d7e/openmc/deplete/microxs.py#L58
```reactions``` is, however, defined a few lines later by this function call: ```reactions, diluted_materials = cls._add_dilute_nuclides(chain_file,```
Perhaps we are missing a logic check to see whether the user sets reactions and, if not, find them?
If so, I think I can make a PR with an if statement to fix that.
</issue>
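One of the options the issue hints at is to accept an optional `reactions` keyword in `from_model` and fall back to the chain-derived list when it is omitted. A self-contained sketch of that fallback logic (the helper name `resolve_reactions` and its placement are hypothetical; the patch shown further down instead simply removes the stale docstring entry):

```python
def resolve_reactions(user_reactions, chain_reactions):
    """Fall back to the chain-derived reaction list when the caller passes nothing."""
    if user_reactions is None:
        return list(chain_reactions)
    unknown = set(user_reactions) - set(chain_reactions)
    if unknown:
        raise ValueError(f"Unsupported reactions: {sorted(unknown)}")
    return list(user_reactions)

# e.g. inside from_model(..., reactions=None, ...):
#     reactions = resolve_reactions(reactions, chain.reactions)
print(resolve_reactions(None, ["fission", "(n,gamma)"]))
print(resolve_reactions(["fission"], ["fission", "(n,gamma)"]))
```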
<code>
[start of openmc/deplete/microxs.py]
1 """MicroXS module
2
3 A pandas.DataFrame storing microscopic cross section data with
4 nuclide names as row indices and reaction names as column indices.
5 """
6
7 import tempfile
8 from copy import deepcopy
9
10 from pandas import DataFrame, read_csv
11 import numpy as np
12
13 from openmc.checkvalue import check_type, check_value, check_iterable_type
14 from openmc.exceptions import DataError
15 from openmc.mgxs import EnergyGroups, ArbitraryXS, FissionXS
16 from openmc import Tallies, StatePoint, Materials
17 import openmc
18 from .chain import Chain, REACTIONS
19 from .coupled_operator import _find_cross_sections, _get_nuclides_with_data
20
21 _valid_rxns = list(REACTIONS)
22 _valid_rxns.append('fission')
23
24
25 class MicroXS(DataFrame):
26 """Microscopic cross section data for use in transport-independent depletion.
27
28 .. versionadded:: 0.13.1
29
30 """
31
32 @classmethod
33 def from_model(cls,
34 model,
35 reaction_domain,
36 chain_file=None,
37 dilute_initial=1.0e3,
38 energy_bounds=(0, 20e6),
39 run_kwargs=None):
40 """Generate a one-group cross-section dataframe using
41 OpenMC. Note that the ``openmc`` executable must be compiled.
42
43 Parameters
44 ----------
45 model : openmc.Model
46 OpenMC model object. Must contain geometry, materials, and settings.
47 reaction_domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
48 Domain in which to tally reaction rates.
49 chain_file : str, optional
50 Path to the depletion chain XML file that will be used in depletion
51 simulation. Used to determine cross sections for materials not
52 present in the inital composition. Defaults to
53 ``openmc.config['chain_file']``.
54 dilute_initial : float, optional
55 Initial atom density [atoms/cm^3] to add for nuclides that
56 are zero in initial condition to ensure they exist in the cross
57 section data. Only done for nuclides with reaction rates.
58 reactions : list of str, optional
59 Reaction names to tally
60 energy_bound : 2-tuple of float, optional
61 Bounds for the energy group.
62 run_kwargs : dict, optional
63 Keyword arguments passed to :meth:`openmc.model.Model.run`
64
65 Returns
66 -------
67 MicroXS
68 Cross section data in [b]
69
70 """
71 groups = EnergyGroups(energy_bounds)
72
73 # Set up the reaction tallies
74 original_tallies = model.tallies
75 original_materials = deepcopy(model.materials)
76 tallies = Tallies()
77 xs = {}
78 reactions, diluted_materials = cls._add_dilute_nuclides(chain_file,
79 model,
80 dilute_initial)
81 model.materials = diluted_materials
82
83 for rx in reactions:
84 if rx == 'fission':
85 xs[rx] = FissionXS(domain=reaction_domain,
86 energy_groups=groups, by_nuclide=True)
87 else:
88 xs[rx] = ArbitraryXS(rx, domain=reaction_domain,
89 energy_groups=groups, by_nuclide=True)
90 tallies += xs[rx].tallies.values()
91
92 model.tallies = tallies
93
94 # create temporary run
95 with tempfile.TemporaryDirectory() as temp_dir:
96 if run_kwargs is None:
97 run_kwargs = {}
98 run_kwargs.setdefault('cwd', temp_dir)
99 statepoint_path = model.run(**run_kwargs)
100
101 with StatePoint(statepoint_path) as sp:
102 for rx in xs:
103 xs[rx].load_from_statepoint(sp)
104
105 # Build the DataFrame
106 series = {}
107 for rx in xs:
108 df = xs[rx].get_pandas_dataframe(xs_type='micro')
109 series[rx] = df.set_index('nuclide')['mean']
110
111 # Revert to the original tallies and materials
112 model.tallies = original_tallies
113 model.materials = original_materials
114
115 return cls(series)
116
117 @classmethod
118 def _add_dilute_nuclides(cls, chain_file, model, dilute_initial):
119 """
120 Add nuclides not present in burnable materials that have neutron data
121 and are present in the depletion chain to those materials. This allows
122 us to tally those specific nuclides for reactions to create one-group
123 cross sections.
124
125 Parameters
126 ----------
127 chain_file : str
128 Path to the depletion chain XML file that will be used in depletion
129 simulation. Used to determine cross sections for materials not
130 present in the inital composition.
131 model : openmc.Model
132 Model object
133 dilute_initial : float
134 Initial atom density [atoms/cm^3] to add for nuclides that
135 are zero in initial condition to ensure they exist in the cross
136 section data. Only done for nuclides with reaction rates.
137
138 Returns
139 -------
140 reactions : list of str
141 List of reaction names
142 diluted_materials : openmc.Materials
143 :class:`openmc.Materials` object with nuclides added to burnable
144 materials.
145 """
146 if chain_file is None:
147 chain_file = openmc.config.get('chain_file')
148 if chain_file is None:
149 raise DataError(
150 "No depletion chain specified and could not find depletion "
151 "chain in openmc.config['chain_file']"
152 )
153 chain = Chain.from_xml(chain_file)
154 reactions = chain.reactions
155 cross_sections = _find_cross_sections(model)
156 nuclides_with_data = _get_nuclides_with_data(cross_sections)
157 burnable_nucs = [nuc.name for nuc in chain.nuclides
158 if nuc.name in nuclides_with_data]
159 diluted_materials = Materials()
160 for material in model.materials:
161 if material.depletable:
162 nuc_densities = material.get_nuclide_atom_densities()
163 dilute_density = 1.0e-24 * dilute_initial
164 material.set_density('sum')
165 for nuc, density in nuc_densities.items():
166 material.remove_nuclide(nuc)
167 material.add_nuclide(nuc, density)
168 for burn_nuc in burnable_nucs:
169 if burn_nuc not in nuc_densities:
170 material.add_nuclide(burn_nuc,
171 dilute_density)
172 diluted_materials.append(material)
173
174 return reactions, diluted_materials
175
176 @classmethod
177 def from_array(cls, nuclides, reactions, data):
178 """
179 Creates a ``MicroXS`` object from arrays.
180
181 Parameters
182 ----------
183 nuclides : list of str
184 List of nuclide symbols for that have data for at least one
185 reaction.
186 reactions : list of str
187 List of reactions. All reactions must match those in
188 :data:`openmc.deplete.chain.REACTIONS`
189 data : ndarray of floats
190 Array containing one-group microscopic cross section values for
191 each nuclide and reaction. Cross section values are assumed to be
192 in [b].
193
194 Returns
195 -------
196 MicroXS
197 """
198
199 # Validate inputs
200 if data.shape != (len(nuclides), len(reactions)):
201 raise ValueError(
202 f'Nuclides list of length {len(nuclides)} and '
203 f'reactions array of length {len(reactions)} do not '
204 f'match dimensions of data array of shape {data.shape}')
205
206 cls._validate_micro_xs_inputs(
207 nuclides, reactions, data)
208 micro_xs = cls(index=nuclides, columns=reactions, data=data)
209
210 return micro_xs
211
212 @classmethod
213 def from_csv(cls, csv_file, **kwargs):
214 """
215 Load a ``MicroXS`` object from a ``.csv`` file.
216
217 Parameters
218 ----------
219 csv_file : str
220 Relative path to csv-file containing microscopic cross section
221 data. Cross section values are assumed to be in [b]
222 **kwargs : dict
223 Keyword arguments to pass to :func:`pandas.read_csv()`.
224
225 Returns
226 -------
227 MicroXS
228
229 """
230 if 'float_precision' not in kwargs:
231 kwargs['float_precision'] = 'round_trip'
232
233 micro_xs = cls(read_csv(csv_file, index_col=0, **kwargs))
234
235 cls._validate_micro_xs_inputs(list(micro_xs.index),
236 list(micro_xs.columns),
237 micro_xs.to_numpy())
238 return micro_xs
239
240 @staticmethod
241 def _validate_micro_xs_inputs(nuclides, reactions, data):
242 check_iterable_type('nuclides', nuclides, str)
243 check_iterable_type('reactions', reactions, str)
244 check_type('data', data, np.ndarray, expected_iter_type=float)
245 for reaction in reactions:
246 check_value('reactions', reaction, _valid_rxns)
247
[end of openmc/deplete/microxs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openmc/deplete/microxs.py b/openmc/deplete/microxs.py
--- a/openmc/deplete/microxs.py
+++ b/openmc/deplete/microxs.py
@@ -37,8 +37,9 @@
dilute_initial=1.0e3,
energy_bounds=(0, 20e6),
run_kwargs=None):
- """Generate a one-group cross-section dataframe using
- OpenMC. Note that the ``openmc`` executable must be compiled.
+ """Generate a one-group cross-section dataframe using OpenMC.
+
+ Note that the ``openmc`` executable must be compiled.
Parameters
----------
@@ -55,8 +56,6 @@
Initial atom density [atoms/cm^3] to add for nuclides that
are zero in initial condition to ensure they exist in the cross
section data. Only done for nuclides with reaction rates.
- reactions : list of str, optional
- Reaction names to tally
energy_bound : 2-tuple of float, optional
Bounds for the energy group.
run_kwargs : dict, optional
|
{"golden_diff": "diff --git a/openmc/deplete/microxs.py b/openmc/deplete/microxs.py\n--- a/openmc/deplete/microxs.py\n+++ b/openmc/deplete/microxs.py\n@@ -37,8 +37,9 @@\n dilute_initial=1.0e3,\n energy_bounds=(0, 20e6),\n run_kwargs=None):\n- \"\"\"Generate a one-group cross-section dataframe using\n- OpenMC. Note that the ``openmc`` executable must be compiled.\n+ \"\"\"Generate a one-group cross-section dataframe using OpenMC.\n+\n+ Note that the ``openmc`` executable must be compiled.\n \n Parameters\n ----------\n@@ -55,8 +56,6 @@\n Initial atom density [atoms/cm^3] to add for nuclides that\n are zero in initial condition to ensure they exist in the cross\n section data. Only done for nuclides with reaction rates.\n- reactions : list of str, optional\n- Reaction names to tally\n energy_bound : 2-tuple of float, optional\n Bounds for the energy group.\n run_kwargs : dict, optional\n", "issue": "MicroXS reactions arg in docsstring not available to users\nWhile reading through the MicroXS class I noted we have an argument in the doc string called ```reactions``` but no option for the user to set it as it is not an accepted arg keyword.\r\n\r\nhttps://github.com/openmc-dev/openmc/blob/6c7f857e1e98ce4fd5b39d8be034308596628d7e/openmc/deplete/microxs.py#L58\r\n\r\n```reactions``` is however defined few lines later by this function call ```reactions, diluted_materials = cls._add_dilute_nuclides(chain_file,```\r\n\r\nperhaps we are missing some logic checks to see if the user sets reactions and if not then if finds them?\r\n\r\nIf so I think I can make a PR with some if statement to fix that\r\n\r\n\n", "before_files": [{"content": "\"\"\"MicroXS module\n\nA pandas.DataFrame storing microscopic cross section data with\nnuclide names as row indices and reaction names as column indices.\n\"\"\"\n\nimport tempfile\nfrom copy import deepcopy\n\nfrom pandas import DataFrame, read_csv\nimport numpy as np\n\nfrom openmc.checkvalue import check_type, check_value, check_iterable_type\nfrom openmc.exceptions import DataError\nfrom openmc.mgxs import EnergyGroups, ArbitraryXS, FissionXS\nfrom openmc import Tallies, StatePoint, Materials\nimport openmc\nfrom .chain import Chain, REACTIONS\nfrom .coupled_operator import _find_cross_sections, _get_nuclides_with_data\n\n_valid_rxns = list(REACTIONS)\n_valid_rxns.append('fission')\n\n\nclass MicroXS(DataFrame):\n \"\"\"Microscopic cross section data for use in transport-independent depletion.\n\n .. versionadded:: 0.13.1\n\n \"\"\"\n\n @classmethod\n def from_model(cls,\n model,\n reaction_domain,\n chain_file=None,\n dilute_initial=1.0e3,\n energy_bounds=(0, 20e6),\n run_kwargs=None):\n \"\"\"Generate a one-group cross-section dataframe using\n OpenMC. Note that the ``openmc`` executable must be compiled.\n\n Parameters\n ----------\n model : openmc.Model\n OpenMC model object. Must contain geometry, materials, and settings.\n reaction_domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh\n Domain in which to tally reaction rates.\n chain_file : str, optional\n Path to the depletion chain XML file that will be used in depletion\n simulation. Used to determine cross sections for materials not\n present in the inital composition. Defaults to\n ``openmc.config['chain_file']``.\n dilute_initial : float, optional\n Initial atom density [atoms/cm^3] to add for nuclides that\n are zero in initial condition to ensure they exist in the cross\n section data. 
Only done for nuclides with reaction rates.\n reactions : list of str, optional\n Reaction names to tally\n energy_bound : 2-tuple of float, optional\n Bounds for the energy group.\n run_kwargs : dict, optional\n Keyword arguments passed to :meth:`openmc.model.Model.run`\n\n Returns\n -------\n MicroXS\n Cross section data in [b]\n\n \"\"\"\n groups = EnergyGroups(energy_bounds)\n\n # Set up the reaction tallies\n original_tallies = model.tallies\n original_materials = deepcopy(model.materials)\n tallies = Tallies()\n xs = {}\n reactions, diluted_materials = cls._add_dilute_nuclides(chain_file,\n model,\n dilute_initial)\n model.materials = diluted_materials\n\n for rx in reactions:\n if rx == 'fission':\n xs[rx] = FissionXS(domain=reaction_domain,\n energy_groups=groups, by_nuclide=True)\n else:\n xs[rx] = ArbitraryXS(rx, domain=reaction_domain,\n energy_groups=groups, by_nuclide=True)\n tallies += xs[rx].tallies.values()\n\n model.tallies = tallies\n\n # create temporary run\n with tempfile.TemporaryDirectory() as temp_dir:\n if run_kwargs is None:\n run_kwargs = {}\n run_kwargs.setdefault('cwd', temp_dir)\n statepoint_path = model.run(**run_kwargs)\n\n with StatePoint(statepoint_path) as sp:\n for rx in xs:\n xs[rx].load_from_statepoint(sp)\n\n # Build the DataFrame\n series = {}\n for rx in xs:\n df = xs[rx].get_pandas_dataframe(xs_type='micro')\n series[rx] = df.set_index('nuclide')['mean']\n\n # Revert to the original tallies and materials\n model.tallies = original_tallies\n model.materials = original_materials\n\n return cls(series)\n\n @classmethod\n def _add_dilute_nuclides(cls, chain_file, model, dilute_initial):\n \"\"\"\n Add nuclides not present in burnable materials that have neutron data\n and are present in the depletion chain to those materials. This allows\n us to tally those specific nuclides for reactions to create one-group\n cross sections.\n\n Parameters\n ----------\n chain_file : str\n Path to the depletion chain XML file that will be used in depletion\n simulation. Used to determine cross sections for materials not\n present in the inital composition.\n model : openmc.Model\n Model object\n dilute_initial : float\n Initial atom density [atoms/cm^3] to add for nuclides that\n are zero in initial condition to ensure they exist in the cross\n section data. 
Only done for nuclides with reaction rates.\n\n Returns\n -------\n reactions : list of str\n List of reaction names\n diluted_materials : openmc.Materials\n :class:`openmc.Materials` object with nuclides added to burnable\n materials.\n \"\"\"\n if chain_file is None:\n chain_file = openmc.config.get('chain_file')\n if chain_file is None:\n raise DataError(\n \"No depletion chain specified and could not find depletion \"\n \"chain in openmc.config['chain_file']\"\n )\n chain = Chain.from_xml(chain_file)\n reactions = chain.reactions\n cross_sections = _find_cross_sections(model)\n nuclides_with_data = _get_nuclides_with_data(cross_sections)\n burnable_nucs = [nuc.name for nuc in chain.nuclides\n if nuc.name in nuclides_with_data]\n diluted_materials = Materials()\n for material in model.materials:\n if material.depletable:\n nuc_densities = material.get_nuclide_atom_densities()\n dilute_density = 1.0e-24 * dilute_initial\n material.set_density('sum')\n for nuc, density in nuc_densities.items():\n material.remove_nuclide(nuc)\n material.add_nuclide(nuc, density)\n for burn_nuc in burnable_nucs:\n if burn_nuc not in nuc_densities:\n material.add_nuclide(burn_nuc,\n dilute_density)\n diluted_materials.append(material)\n\n return reactions, diluted_materials\n\n @classmethod\n def from_array(cls, nuclides, reactions, data):\n \"\"\"\n Creates a ``MicroXS`` object from arrays.\n\n Parameters\n ----------\n nuclides : list of str\n List of nuclide symbols for that have data for at least one\n reaction.\n reactions : list of str\n List of reactions. All reactions must match those in\n :data:`openmc.deplete.chain.REACTIONS`\n data : ndarray of floats\n Array containing one-group microscopic cross section values for\n each nuclide and reaction. Cross section values are assumed to be\n in [b].\n\n Returns\n -------\n MicroXS\n \"\"\"\n\n # Validate inputs\n if data.shape != (len(nuclides), len(reactions)):\n raise ValueError(\n f'Nuclides list of length {len(nuclides)} and '\n f'reactions array of length {len(reactions)} do not '\n f'match dimensions of data array of shape {data.shape}')\n\n cls._validate_micro_xs_inputs(\n nuclides, reactions, data)\n micro_xs = cls(index=nuclides, columns=reactions, data=data)\n\n return micro_xs\n\n @classmethod\n def from_csv(cls, csv_file, **kwargs):\n \"\"\"\n Load a ``MicroXS`` object from a ``.csv`` file.\n\n Parameters\n ----------\n csv_file : str\n Relative path to csv-file containing microscopic cross section\n data. Cross section values are assumed to be in [b]\n **kwargs : dict\n Keyword arguments to pass to :func:`pandas.read_csv()`.\n\n Returns\n -------\n MicroXS\n\n \"\"\"\n if 'float_precision' not in kwargs:\n kwargs['float_precision'] = 'round_trip'\n\n micro_xs = cls(read_csv(csv_file, index_col=0, **kwargs))\n\n cls._validate_micro_xs_inputs(list(micro_xs.index),\n list(micro_xs.columns),\n micro_xs.to_numpy())\n return micro_xs\n\n @staticmethod\n def _validate_micro_xs_inputs(nuclides, reactions, data):\n check_iterable_type('nuclides', nuclides, str)\n check_iterable_type('reactions', reactions, str)\n check_type('data', data, np.ndarray, expected_iter_type=float)\n for reaction in reactions:\n check_value('reactions', reaction, _valid_rxns)\n", "path": "openmc/deplete/microxs.py"}]}
| 3,301 | 251 |
gh_patches_debug_17333
|
rasdani/github-patches
|
git_diff
|
sanic-org__sanic-2651
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ASGI websocket must pass thru bytes as is
_Originally posted by @Tronic in https://github.com/sanic-org/sanic/pull/2640#discussion_r1058027028_
</issue>
<code>
[start of sanic/server/websockets/connection.py]
1 from typing import (
2 Any,
3 Awaitable,
4 Callable,
5 Dict,
6 List,
7 MutableMapping,
8 Optional,
9 Union,
10 )
11
12 from sanic.exceptions import InvalidUsage
13
14
15 ASGIMessage = MutableMapping[str, Any]
16
17
18 class WebSocketConnection:
19 """
20 This is for ASGI Connections.
21 It provides an interface similar to WebsocketProtocol, but
22 sends/receives over an ASGI connection.
23 """
24
25 # TODO
26 # - Implement ping/pong
27
28 def __init__(
29 self,
30 send: Callable[[ASGIMessage], Awaitable[None]],
31 receive: Callable[[], Awaitable[ASGIMessage]],
32 subprotocols: Optional[List[str]] = None,
33 ) -> None:
34 self._send = send
35 self._receive = receive
36 self._subprotocols = subprotocols or []
37
38 async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:
39 message: Dict[str, Union[str, bytes]] = {"type": "websocket.send"}
40
41 if isinstance(data, bytes):
42 message.update({"bytes": data})
43 else:
44 message.update({"text": str(data)})
45
46 await self._send(message)
47
48 async def recv(self, *args, **kwargs) -> Optional[str]:
49 message = await self._receive()
50
51 if message["type"] == "websocket.receive":
52 try:
53 return message["text"]
54 except KeyError:
55 try:
56 return message["bytes"].decode()
57 except KeyError:
58 raise InvalidUsage("Bad ASGI message received")
59 elif message["type"] == "websocket.disconnect":
60 pass
61
62 return None
63
64 receive = recv
65
66 async def accept(self, subprotocols: Optional[List[str]] = None) -> None:
67 subprotocol = None
68 if subprotocols:
69 for subp in subprotocols:
70 if subp in self.subprotocols:
71 subprotocol = subp
72 break
73
74 await self._send(
75 {
76 "type": "websocket.accept",
77 "subprotocol": subprotocol,
78 }
79 )
80
81 async def close(self, code: int = 1000, reason: str = "") -> None:
82 pass
83
84 @property
85 def subprotocols(self):
86 return self._subprotocols
87
88 @subprotocols.setter
89 def subprotocols(self, subprotocols: Optional[List[str]] = None):
90 self._subprotocols = subprotocols or []
91
[end of sanic/server/websockets/connection.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sanic/server/websockets/connection.py b/sanic/server/websockets/connection.py
--- a/sanic/server/websockets/connection.py
+++ b/sanic/server/websockets/connection.py
@@ -45,7 +45,7 @@
await self._send(message)
- async def recv(self, *args, **kwargs) -> Optional[str]:
+ async def recv(self, *args, **kwargs) -> Optional[Union[str, bytes]]:
message = await self._receive()
if message["type"] == "websocket.receive":
@@ -53,7 +53,7 @@
return message["text"]
except KeyError:
try:
- return message["bytes"].decode()
+ return message["bytes"]
except KeyError:
raise InvalidUsage("Bad ASGI message received")
elif message["type"] == "websocket.disconnect":
|
{"golden_diff": "diff --git a/sanic/server/websockets/connection.py b/sanic/server/websockets/connection.py\n--- a/sanic/server/websockets/connection.py\n+++ b/sanic/server/websockets/connection.py\n@@ -45,7 +45,7 @@\n \n await self._send(message)\n \n- async def recv(self, *args, **kwargs) -> Optional[str]:\n+ async def recv(self, *args, **kwargs) -> Optional[Union[str, bytes]]:\n message = await self._receive()\n \n if message[\"type\"] == \"websocket.receive\":\n@@ -53,7 +53,7 @@\n return message[\"text\"]\n except KeyError:\n try:\n- return message[\"bytes\"].decode()\n+ return message[\"bytes\"]\n except KeyError:\n raise InvalidUsage(\"Bad ASGI message received\")\n elif message[\"type\"] == \"websocket.disconnect\":\n", "issue": "ASGI websocket must pass thru bytes as is\n\r\n\r\n_Originally posted by @Tronic in https://github.com/sanic-org/sanic/pull/2640#discussion_r1058027028_\r\n \n", "before_files": [{"content": "from typing import (\n Any,\n Awaitable,\n Callable,\n Dict,\n List,\n MutableMapping,\n Optional,\n Union,\n)\n\nfrom sanic.exceptions import InvalidUsage\n\n\nASGIMessage = MutableMapping[str, Any]\n\n\nclass WebSocketConnection:\n \"\"\"\n This is for ASGI Connections.\n It provides an interface similar to WebsocketProtocol, but\n sends/receives over an ASGI connection.\n \"\"\"\n\n # TODO\n # - Implement ping/pong\n\n def __init__(\n self,\n send: Callable[[ASGIMessage], Awaitable[None]],\n receive: Callable[[], Awaitable[ASGIMessage]],\n subprotocols: Optional[List[str]] = None,\n ) -> None:\n self._send = send\n self._receive = receive\n self._subprotocols = subprotocols or []\n\n async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:\n message: Dict[str, Union[str, bytes]] = {\"type\": \"websocket.send\"}\n\n if isinstance(data, bytes):\n message.update({\"bytes\": data})\n else:\n message.update({\"text\": str(data)})\n\n await self._send(message)\n\n async def recv(self, *args, **kwargs) -> Optional[str]:\n message = await self._receive()\n\n if message[\"type\"] == \"websocket.receive\":\n try:\n return message[\"text\"]\n except KeyError:\n try:\n return message[\"bytes\"].decode()\n except KeyError:\n raise InvalidUsage(\"Bad ASGI message received\")\n elif message[\"type\"] == \"websocket.disconnect\":\n pass\n\n return None\n\n receive = recv\n\n async def accept(self, subprotocols: Optional[List[str]] = None) -> None:\n subprotocol = None\n if subprotocols:\n for subp in subprotocols:\n if subp in self.subprotocols:\n subprotocol = subp\n break\n\n await self._send(\n {\n \"type\": \"websocket.accept\",\n \"subprotocol\": subprotocol,\n }\n )\n\n async def close(self, code: int = 1000, reason: str = \"\") -> None:\n pass\n\n @property\n def subprotocols(self):\n return self._subprotocols\n\n @subprotocols.setter\n def subprotocols(self, subprotocols: Optional[List[str]] = None):\n self._subprotocols = subprotocols or []\n", "path": "sanic/server/websockets/connection.py"}]}
| 1,293 | 186 |
gh_patches_debug_11443
|
rasdani/github-patches
|
git_diff
|
pytorch__text-1067
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip install torchtext==0.7.0 installs incompatible PyTorch 1.7.0
## 🐛 Bug
**Describe the bug**
Recently, after I do `pip install torchtext==0.7.0`, import torchtext would cause segmentation fault. I found that degrading pytorch to 1.6.0 fixes this issue.
**To Reproduce**
Steps to reproduce the behavior:
1. `pip install torchtext==0.7.0` (assuming that pytorch is not installed yet, and this command will install the latest pytorch)
2. python -c "import torchtext"
**Expected behavior**
Segmentation Fault
**Environment**
- PyTorch Version (e.g., 1.0): 1.7.0
- OS (e.g., Linux): Linux/MacOS
- Python: 3.8.3
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 import io
3 import os
4 import shutil
5 import subprocess
6 from pathlib import Path
7 import distutils.command.clean
8 from setuptools import setup, find_packages
9
10 from build_tools import setup_helpers
11
12 ROOT_DIR = Path(__file__).parent.resolve()
13
14
15 def read(*names, **kwargs):
16 with io.open(ROOT_DIR.joinpath(*names), encoding=kwargs.get("encoding", "utf8")) as fp:
17 return fp.read()
18
19
20 def _get_version():
21 version = '0.9.0a0'
22 sha = None
23
24 try:
25 cmd = ['git', 'rev-parse', 'HEAD']
26 sha = subprocess.check_output(cmd, cwd=str(ROOT_DIR)).decode('ascii').strip()
27 except Exception:
28 pass
29
30 if os.getenv('BUILD_VERSION'):
31 version = os.getenv('BUILD_VERSION')
32 elif sha is not None:
33 version += '+' + sha[:7]
34
35 if sha is None:
36 sha = 'Unknown'
37 return version, sha
38
39
40 def _export_version(version, sha):
41 version_path = ROOT_DIR / 'torchtext' / 'version.py'
42 with open(version_path, 'w') as fileobj:
43 fileobj.write("__version__ = '{}'\n".format(version))
44 fileobj.write("git_version = {}\n".format(repr(sha)))
45
46
47 VERSION, SHA = _get_version()
48 _export_version(VERSION, SHA)
49
50 print('-- Building version ' + VERSION)
51
52
53 class clean(distutils.command.clean.clean):
54 def run(self):
55 # Run default behavior first
56 distutils.command.clean.clean.run(self)
57
58 # Remove torchtext extension
59 for path in (ROOT_DIR / 'torchtext').glob('**/*.so'):
60 print(f'removing \'{path}\'')
61 path.unlink()
62 # Remove build directory
63 build_dirs = [
64 ROOT_DIR / 'build',
65 ROOT_DIR / 'third_party' / 'build',
66 ]
67 for path in build_dirs:
68 if path.exists():
69 print(f'removing \'{path}\' (and everything under it)')
70 shutil.rmtree(str(path), ignore_errors=True)
71
72
73 setup_info = dict(
74 # Metadata
75 name='torchtext',
76 version=VERSION,
77 author='PyTorch core devs and James Bradbury',
78 author_email='[email protected]',
79 url='https://github.com/pytorch/text',
80 description='Text utilities and datasets for PyTorch',
81 long_description=read('README.rst'),
82 license='BSD',
83
84 install_requires=[
85 'tqdm', 'requests', 'torch', 'numpy'
86 ],
87 python_requires='>=3.5',
88 classifiers=[
89 'Programming Language :: Python :: 3',
90 'Programming Language :: Python :: 3.5',
91 'Programming Language :: Python :: 3.6',
92 'Programming Language :: Python :: 3.7',
93 'Programming Language :: Python :: 3.8',
94 'Programming Language :: Python :: 3 :: Only',
95 ],
96 # Package info
97 packages=find_packages(exclude=('test*', 'build_tools*')),
98 zip_safe=False,
99 # Extension info
100 # If you are trying to use torchtext.so and see no registered op.
101 # See here: https://github.com/pytorch/vision/issues/2134"
102 ext_modules=setup_helpers.get_ext_modules(),
103 cmdclass={
104 'build_ext': setup_helpers.BuildExtension.with_options(no_python_abi_suffix=True),
105 'clean': clean,
106 },
107 )
108
109 setup(**setup_info)
110
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -49,6 +49,12 @@
print('-- Building version ' + VERSION)
+pytorch_package_version = os.getenv('PYTORCH_VERSION')
+
+pytorch_package_dep = 'torch'
+if pytorch_package_version is not None:
+ pytorch_package_dep += "==" + pytorch_package_version
+
class clean(distutils.command.clean.clean):
def run(self):
@@ -82,7 +88,7 @@
license='BSD',
install_requires=[
- 'tqdm', 'requests', 'torch', 'numpy'
+ 'tqdm', 'requests', pytorch_package_dep, 'numpy'
],
python_requires='>=3.5',
classifiers=[
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -49,6 +49,12 @@\n \n print('-- Building version ' + VERSION)\n \n+pytorch_package_version = os.getenv('PYTORCH_VERSION')\n+\n+pytorch_package_dep = 'torch'\n+if pytorch_package_version is not None:\n+ pytorch_package_dep += \"==\" + pytorch_package_version\n+\n \n class clean(distutils.command.clean.clean):\n def run(self):\n@@ -82,7 +88,7 @@\n license='BSD',\n \n install_requires=[\n- 'tqdm', 'requests', 'torch', 'numpy'\n+ 'tqdm', 'requests', pytorch_package_dep, 'numpy'\n ],\n python_requires='>=3.5',\n classifiers=[\n", "issue": "pip install torchtext==0.7.0 installs incompatible PyTorch 1.7.0\n## \ud83d\udc1b Bug\r\n**Describe the bug**\r\nRecently, after I do `pip install torchtext==0.7.0`, import torchtext would cause segmentation fault. I found that degrading pytorch to 1.6.0 fixes this issue. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. `pip install torchtext==0.7.0` (assuming that pytorch is not installed yet, and this command will install the latest pytorch)\r\n2. python -c \"import torchtext\"\r\n\r\n**Expected behavior**\r\nSegmentation Fault\r\n\r\n**Environment**\r\n\r\n - PyTorch Version (e.g., 1.0): 1.7.0\r\n - OS (e.g., Linux): Linux/MacOS\r\n - Python: 3.8.3\n", "before_files": [{"content": "#!/usr/bin/env python\nimport io\nimport os\nimport shutil\nimport subprocess\nfrom pathlib import Path\nimport distutils.command.clean\nfrom setuptools import setup, find_packages\n\nfrom build_tools import setup_helpers\n\nROOT_DIR = Path(__file__).parent.resolve()\n\n\ndef read(*names, **kwargs):\n with io.open(ROOT_DIR.joinpath(*names), encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef _get_version():\n version = '0.9.0a0'\n sha = None\n\n try:\n cmd = ['git', 'rev-parse', 'HEAD']\n sha = subprocess.check_output(cmd, cwd=str(ROOT_DIR)).decode('ascii').strip()\n except Exception:\n pass\n\n if os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\n elif sha is not None:\n version += '+' + sha[:7]\n\n if sha is None:\n sha = 'Unknown'\n return version, sha\n\n\ndef _export_version(version, sha):\n version_path = ROOT_DIR / 'torchtext' / 'version.py'\n with open(version_path, 'w') as fileobj:\n fileobj.write(\"__version__ = '{}'\\n\".format(version))\n fileobj.write(\"git_version = {}\\n\".format(repr(sha)))\n\n\nVERSION, SHA = _get_version()\n_export_version(VERSION, SHA)\n\nprint('-- Building version ' + VERSION)\n\n\nclass clean(distutils.command.clean.clean):\n def run(self):\n # Run default behavior first\n distutils.command.clean.clean.run(self)\n\n # Remove torchtext extension\n for path in (ROOT_DIR / 'torchtext').glob('**/*.so'):\n print(f'removing \\'{path}\\'')\n path.unlink()\n # Remove build directory\n build_dirs = [\n ROOT_DIR / 'build',\n ROOT_DIR / 'third_party' / 'build',\n ]\n for path in build_dirs:\n if path.exists():\n print(f'removing \\'{path}\\' (and everything under it)')\n shutil.rmtree(str(path), ignore_errors=True)\n\n\nsetup_info = dict(\n # Metadata\n name='torchtext',\n version=VERSION,\n author='PyTorch core devs and James Bradbury',\n author_email='[email protected]',\n url='https://github.com/pytorch/text',\n description='Text utilities and datasets for PyTorch',\n long_description=read('README.rst'),\n license='BSD',\n\n install_requires=[\n 'tqdm', 'requests', 'torch', 'numpy'\n ],\n python_requires='>=3.5',\n classifiers=[\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python 
:: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3 :: Only',\n ],\n # Package info\n packages=find_packages(exclude=('test*', 'build_tools*')),\n zip_safe=False,\n # Extension info\n # If you are trying to use torchtext.so and see no registered op.\n # See here: https://github.com/pytorch/vision/issues/2134\"\n ext_modules=setup_helpers.get_ext_modules(),\n cmdclass={\n 'build_ext': setup_helpers.BuildExtension.with_options(no_python_abi_suffix=True),\n 'clean': clean,\n },\n)\n\nsetup(**setup_info)\n", "path": "setup.py"}]}
| 1,699 | 177 |
gh_patches_debug_6296
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-2158
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing requirement in documentation for Volume widget
# Issue description
The Volume widget by default needs access to amixer, which is part of the package alsa-utils, unless the `get_volume_command` is manually set to something else by the user.
This is not mentioned in the documentation [http://docs.qtile.org/en/latest/manual/ref/widgets.html#volume](http://docs.qtile.org/en/latest/manual/ref/widgets.html#volume)
# Qtile version
0.16.1
# Stack traces
N/A
# Configuration
Irrelevant
</issue>
<code>
[start of libqtile/widget/volume.py]
1 # Copyright (c) 2010, 2012, 2014 roger
2 # Copyright (c) 2011 Kirk Strauser
3 # Copyright (c) 2011 Florian Mounier
4 # Copyright (c) 2011 Mounier Florian
5 # Copyright (c) 2011 Roger Duran
6 # Copyright (c) 2012-2015 Tycho Andersen
7 # Copyright (c) 2013 Tao Sauvage
8 # Copyright (c) 2013 Craig Barnes
9 # Copyright (c) 2014-2015 Sean Vig
10 # Copyright (c) 2014 Adi Sieker
11 # Copyright (c) 2014 dmpayton
12 # Copyright (c) 2014 Jody Frankowski
13 # Copyright (c) 2016 Christoph Lassner
14 #
15 # Permission is hereby granted, free of charge, to any person obtaining a copy
16 # of this software and associated documentation files (the "Software"), to deal
17 # in the Software without restriction, including without limitation the rights
18 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
19 # copies of the Software, and to permit persons to whom the Software is
20 # furnished to do so, subject to the following conditions:
21 #
22 # The above copyright notice and this permission notice shall be included in
23 # all copies or substantial portions of the Software.
24 #
25 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
26 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
27 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
28 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
29 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
30 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
31 # SOFTWARE.
32
33 import re
34 import subprocess
35
36 from libqtile import bar
37 from libqtile.widget import base
38
39 __all__ = [
40 'Volume',
41 ]
42
43 re_vol = re.compile(r'\[(\d?\d?\d?)%\]')
44
45
46 class Volume(base._TextBox):
47 """Widget that display and change volume
48
49 If theme_path is set it draw widget as icons.
50 """
51 orientations = base.ORIENTATION_HORIZONTAL
52 defaults = [
53 ("cardid", None, "Card Id"),
54 ("device", "default", "Device Name"),
55 ("channel", "Master", "Channel"),
56 ("padding", 3, "Padding left and right. Calculated if None."),
57 ("update_interval", 0.2, "Update time in seconds."),
58 ("theme_path", None, "Path of the icons"),
59 ("emoji", False, "Use emoji to display volume states, only if ``theme_path`` is not set."
60 "The specified font needs to contain the correct unicode characters."),
61 ("mute_command", None, "Mute command"),
62 ("volume_app", None, "App to control volume"),
63 ("volume_up_command", None, "Volume up command"),
64 ("volume_down_command", None, "Volume down command"),
65 ("get_volume_command", None, "Command to get the current volume"),
66 ("step", 2, "Volume change for up an down commands in percentage."
67 "Only used if ``volume_up_command`` and ``volume_down_command`` are not set.")
68 ]
69
70 def __init__(self, **config):
71 base._TextBox.__init__(self, '0', width=bar.CALCULATED, **config)
72 self.add_defaults(Volume.defaults)
73 if self.theme_path:
74 self.length_type = bar.STATIC
75 self.length = 0
76 self.surfaces = {}
77 self.volume = None
78
79 self.add_callbacks({
80 'Button1': self.cmd_mute,
81 'Button3': self.cmd_run_app,
82 'Button4': self.cmd_increase_vol,
83 'Button5': self.cmd_decrease_vol,
84 })
85
86 def timer_setup(self):
87 self.timeout_add(self.update_interval, self.update)
88 if self.theme_path:
89 self.setup_images()
90
91 def create_amixer_command(self, *args):
92 cmd = ['amixer']
93
94 if (self.cardid is not None):
95 cmd.extend(['-c', str(self.cardid)])
96
97 if (self.device is not None):
98 cmd.extend(['-D', str(self.device)])
99
100 cmd.extend([x for x in args])
101 return cmd
102
103 def button_press(self, x, y, button):
104 base._TextBox.button_press(self, x, y, button)
105 self.draw()
106
107 def update(self):
108 vol = self.get_volume()
109 if vol != self.volume:
110 self.volume = vol
111 # Update the underlying canvas size before actually attempting
112 # to figure out how big it is and draw it.
113 self._update_drawer()
114 self.bar.draw()
115 self.timeout_add(self.update_interval, self.update)
116
117 def _update_drawer(self):
118 if self.theme_path:
119 self.drawer.clear(self.background or self.bar.background)
120 if self.volume <= 0:
121 img_name = 'audio-volume-muted'
122 elif self.volume <= 30:
123 img_name = 'audio-volume-low'
124 elif self.volume < 80:
125 img_name = 'audio-volume-medium'
126 else: # self.volume >= 80:
127 img_name = 'audio-volume-high'
128
129 self.drawer.ctx.set_source(self.surfaces[img_name])
130 self.drawer.ctx.paint()
131 elif self.emoji:
132 if self.volume <= 0:
133 self.text = u'\U0001f507'
134 elif self.volume <= 30:
135 self.text = u'\U0001f508'
136 elif self.volume < 80:
137 self.text = u'\U0001f509'
138 elif self.volume >= 80:
139 self.text = u'\U0001f50a'
140 else:
141 if self.volume == -1:
142 self.text = 'M'
143 else:
144 self.text = '{}%'.format(self.volume)
145
146 def setup_images(self):
147 from libqtile import images
148 names = (
149 'audio-volume-high',
150 'audio-volume-low',
151 'audio-volume-medium',
152 'audio-volume-muted',
153 )
154 d_images = images.Loader(self.theme_path)(*names)
155 for name, img in d_images.items():
156 new_height = self.bar.height - 1
157 img.resize(height=new_height)
158 if img.width > self.length:
159 self.length = img.width + self.actual_padding * 2
160 self.surfaces[name] = img.pattern
161
162 def get_volume(self):
163 try:
164 get_volume_cmd = self.create_amixer_command('sget',
165 self.channel)
166
167 if self.get_volume_command:
168 get_volume_cmd = self.get_volume_command
169
170 mixer_out = self.call_process(get_volume_cmd)
171 except subprocess.CalledProcessError:
172 return -1
173
174 if '[off]' in mixer_out:
175 return -1
176
177 volgroups = re_vol.search(mixer_out)
178 if volgroups:
179 return int(volgroups.groups()[0])
180 else:
181 # this shouldn't happen
182 return -1
183
184 def draw(self):
185 if self.theme_path:
186 self.drawer.draw(offsetx=self.offset, width=self.length)
187 else:
188 base._TextBox.draw(self)
189
190 def cmd_increase_vol(self):
191 if self.volume_up_command is not None:
192 subprocess.call(self.volume_up_command, shell=True)
193 else:
194 subprocess.call(self.create_amixer_command('-q',
195 'sset',
196 self.channel,
197 '{}%+'.format(self.step)))
198
199 def cmd_decrease_vol(self):
200 if self.volume_down_command is not None:
201 subprocess.call(self.volume_down_command, shell=True)
202 else:
203 subprocess.call(self.create_amixer_command('-q',
204 'sset',
205 self.channel,
206 '{}%-'.format(self.step)))
207
208 def cmd_mute(self):
209 if self.mute_command is not None:
210 subprocess.call(self.mute_command, shell=True)
211 else:
212 subprocess.call(self.create_amixer_command('-q',
213 'sset',
214 self.channel,
215 'toggle'))
216
217 def cmd_run_app(self):
218 if self.volume_app is not None:
219 subprocess.Popen(self.volume_app, shell=True)
220
[end of libqtile/widget/volume.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libqtile/widget/volume.py b/libqtile/widget/volume.py
--- a/libqtile/widget/volume.py
+++ b/libqtile/widget/volume.py
@@ -46,6 +46,10 @@
class Volume(base._TextBox):
"""Widget that display and change volume
+ By default, this widget uses ``amixer`` to get and set the volume so users
+ will need to make sure this is installed. Alternatively, users may set the
+ relevant parameters for the widget to use a different application.
+
If theme_path is set it draw widget as icons.
"""
orientations = base.ORIENTATION_HORIZONTAL
|
{"golden_diff": "diff --git a/libqtile/widget/volume.py b/libqtile/widget/volume.py\n--- a/libqtile/widget/volume.py\n+++ b/libqtile/widget/volume.py\n@@ -46,6 +46,10 @@\n class Volume(base._TextBox):\n \"\"\"Widget that display and change volume\n \n+ By default, this widget uses ``amixer`` to get and set the volume so users\n+ will need to make sure this is installed. Alternatively, users may set the\n+ relevant parameters for the widget to use a different application.\n+\n If theme_path is set it draw widget as icons.\n \"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n", "issue": "Missing requirement in documentation for Volume widget\n# Issue description\r\nThe Volume widget by default needs access to amixer, which is part of the package alsa-utils, unless the `get_volume_command` is manually set to something else by the user.\r\n\r\nThis is not mentioned in the documentation [http://docs.qtile.org/en/latest/manual/ref/widgets.html#volume](http://docs.qtile.org/en/latest/manual/ref/widgets.html#volume)\r\n\r\n# Qtile version\r\n0.16.1\r\n\r\n# Stack traces\r\nN/A\r\n\r\n# Configuration\r\nIrrelevant\r\n\n", "before_files": [{"content": "# Copyright (c) 2010, 2012, 2014 roger\n# Copyright (c) 2011 Kirk Strauser\n# Copyright (c) 2011 Florian Mounier\n# Copyright (c) 2011 Mounier Florian\n# Copyright (c) 2011 Roger Duran\n# Copyright (c) 2012-2015 Tycho Andersen\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2013 Craig Barnes\n# Copyright (c) 2014-2015 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n# Copyright (c) 2014 dmpayton\n# Copyright (c) 2014 Jody Frankowski\n# Copyright (c) 2016 Christoph Lassner\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport re\nimport subprocess\n\nfrom libqtile import bar\nfrom libqtile.widget import base\n\n__all__ = [\n 'Volume',\n]\n\nre_vol = re.compile(r'\\[(\\d?\\d?\\d?)%\\]')\n\n\nclass Volume(base._TextBox):\n \"\"\"Widget that display and change volume\n\n If theme_path is set it draw widget as icons.\n \"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n (\"cardid\", None, \"Card Id\"),\n (\"device\", \"default\", \"Device Name\"),\n (\"channel\", \"Master\", \"Channel\"),\n (\"padding\", 3, \"Padding left and right. 
Calculated if None.\"),\n (\"update_interval\", 0.2, \"Update time in seconds.\"),\n (\"theme_path\", None, \"Path of the icons\"),\n (\"emoji\", False, \"Use emoji to display volume states, only if ``theme_path`` is not set.\"\n \"The specified font needs to contain the correct unicode characters.\"),\n (\"mute_command\", None, \"Mute command\"),\n (\"volume_app\", None, \"App to control volume\"),\n (\"volume_up_command\", None, \"Volume up command\"),\n (\"volume_down_command\", None, \"Volume down command\"),\n (\"get_volume_command\", None, \"Command to get the current volume\"),\n (\"step\", 2, \"Volume change for up an down commands in percentage.\"\n \"Only used if ``volume_up_command`` and ``volume_down_command`` are not set.\")\n ]\n\n def __init__(self, **config):\n base._TextBox.__init__(self, '0', width=bar.CALCULATED, **config)\n self.add_defaults(Volume.defaults)\n if self.theme_path:\n self.length_type = bar.STATIC\n self.length = 0\n self.surfaces = {}\n self.volume = None\n\n self.add_callbacks({\n 'Button1': self.cmd_mute,\n 'Button3': self.cmd_run_app,\n 'Button4': self.cmd_increase_vol,\n 'Button5': self.cmd_decrease_vol,\n })\n\n def timer_setup(self):\n self.timeout_add(self.update_interval, self.update)\n if self.theme_path:\n self.setup_images()\n\n def create_amixer_command(self, *args):\n cmd = ['amixer']\n\n if (self.cardid is not None):\n cmd.extend(['-c', str(self.cardid)])\n\n if (self.device is not None):\n cmd.extend(['-D', str(self.device)])\n\n cmd.extend([x for x in args])\n return cmd\n\n def button_press(self, x, y, button):\n base._TextBox.button_press(self, x, y, button)\n self.draw()\n\n def update(self):\n vol = self.get_volume()\n if vol != self.volume:\n self.volume = vol\n # Update the underlying canvas size before actually attempting\n # to figure out how big it is and draw it.\n self._update_drawer()\n self.bar.draw()\n self.timeout_add(self.update_interval, self.update)\n\n def _update_drawer(self):\n if self.theme_path:\n self.drawer.clear(self.background or self.bar.background)\n if self.volume <= 0:\n img_name = 'audio-volume-muted'\n elif self.volume <= 30:\n img_name = 'audio-volume-low'\n elif self.volume < 80:\n img_name = 'audio-volume-medium'\n else: # self.volume >= 80:\n img_name = 'audio-volume-high'\n\n self.drawer.ctx.set_source(self.surfaces[img_name])\n self.drawer.ctx.paint()\n elif self.emoji:\n if self.volume <= 0:\n self.text = u'\\U0001f507'\n elif self.volume <= 30:\n self.text = u'\\U0001f508'\n elif self.volume < 80:\n self.text = u'\\U0001f509'\n elif self.volume >= 80:\n self.text = u'\\U0001f50a'\n else:\n if self.volume == -1:\n self.text = 'M'\n else:\n self.text = '{}%'.format(self.volume)\n\n def setup_images(self):\n from libqtile import images\n names = (\n 'audio-volume-high',\n 'audio-volume-low',\n 'audio-volume-medium',\n 'audio-volume-muted',\n )\n d_images = images.Loader(self.theme_path)(*names)\n for name, img in d_images.items():\n new_height = self.bar.height - 1\n img.resize(height=new_height)\n if img.width > self.length:\n self.length = img.width + self.actual_padding * 2\n self.surfaces[name] = img.pattern\n\n def get_volume(self):\n try:\n get_volume_cmd = self.create_amixer_command('sget',\n self.channel)\n\n if self.get_volume_command:\n get_volume_cmd = self.get_volume_command\n\n mixer_out = self.call_process(get_volume_cmd)\n except subprocess.CalledProcessError:\n return -1\n\n if '[off]' in mixer_out:\n return -1\n\n volgroups = re_vol.search(mixer_out)\n if volgroups:\n return 
int(volgroups.groups()[0])\n else:\n # this shouldn't happen\n return -1\n\n def draw(self):\n if self.theme_path:\n self.drawer.draw(offsetx=self.offset, width=self.length)\n else:\n base._TextBox.draw(self)\n\n def cmd_increase_vol(self):\n if self.volume_up_command is not None:\n subprocess.call(self.volume_up_command, shell=True)\n else:\n subprocess.call(self.create_amixer_command('-q',\n 'sset',\n self.channel,\n '{}%+'.format(self.step)))\n\n def cmd_decrease_vol(self):\n if self.volume_down_command is not None:\n subprocess.call(self.volume_down_command, shell=True)\n else:\n subprocess.call(self.create_amixer_command('-q',\n 'sset',\n self.channel,\n '{}%-'.format(self.step)))\n\n def cmd_mute(self):\n if self.mute_command is not None:\n subprocess.call(self.mute_command, shell=True)\n else:\n subprocess.call(self.create_amixer_command('-q',\n 'sset',\n self.channel,\n 'toggle'))\n\n def cmd_run_app(self):\n if self.volume_app is not None:\n subprocess.Popen(self.volume_app, shell=True)\n", "path": "libqtile/widget/volume.py"}]}
| 3,030 | 143 |
gh_patches_debug_8330
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-12653
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PEP 621-style license field shows the entire license text
**Describe the bug**
When using PEP 621-style project metadata with a license file (see https://peps.python.org/pep-0621/#license), the entire license text is shown in the Meta -> License section of a package on pypi. For an example see https://pypi.org/project/pytest-logikal/ and the relevant pyproject.toml file https://github.com/logikal-io/pytest-logikal/blob/main/pyproject.toml#L10. Showing the entire license text makes the Meta section almost useless by pushing the rest of the metadata quite far to the bottom.
**Expected behavior**
The license text should be hidden by default and perhaps shown with a modal upon clicking "see full text". Note that the issue is similar to https://github.com/pypi/warehouse/issues/1354, however, it is a little different due to the PEP 621 standard causing this behavior. With more and more projects adopting PEP 621 I would expect this to be a bigger issue over time.
</issue>
<code>
[start of warehouse/packaging/views.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from natsort import natsorted
14 from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
15 from pyramid.view import view_config
16 from sqlalchemy.orm.exc import NoResultFound
17
18 from warehouse.accounts.models import User
19 from warehouse.cache.origin import origin_cache
20 from warehouse.packaging.models import File, Project, Release, Role
21 from warehouse.utils import readme
22
23
24 @view_config(
25 route_name="packaging.project",
26 context=Project,
27 renderer="packaging/detail.html",
28 decorator=[
29 origin_cache(
30 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale
31 )
32 ],
33 has_translations=True,
34 )
35 def project_detail(project, request):
36 if project.name != request.matchdict.get("name", project.name):
37 return HTTPMovedPermanently(request.current_route_path(name=project.name))
38
39 try:
40 release = (
41 request.db.query(Release)
42 .filter(Release.project == project)
43 .order_by(
44 Release.yanked,
45 Release.is_prerelease.nullslast(),
46 Release._pypi_ordering.desc(),
47 )
48 .limit(1)
49 .one()
50 )
51 except NoResultFound:
52 raise HTTPNotFound
53
54 return release_detail(release, request)
55
56
57 @view_config(
58 route_name="packaging.release",
59 context=Release,
60 renderer="packaging/detail.html",
61 decorator=[
62 origin_cache(
63 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale
64 )
65 ],
66 has_translations=True,
67 )
68 def release_detail(release, request):
69 project = release.project
70
71 # Check if the requested version is equivalent but not exactly the same as
72 # the release's version. Use `.get` because this view is used by
73 # `project_detail` and there may not be a version.
74 #
75 # This also handles the case where both the version and the project name
76 # need adjusted, and handles it in a single redirect.
77 if release.version != request.matchdict.get("version", release.version):
78 return HTTPMovedPermanently(
79 request.current_route_path(name=project.name, version=release.version)
80 )
81
82 # It's possible that the requested version was correct (or not provided),
83 # but we still need to adjust the project name.
84 if project.name != request.matchdict.get("name", project.name):
85 return HTTPMovedPermanently(request.current_route_path(name=project.name))
86
87 # Grab the rendered description if it exists, and if it doesn't, then we will render
88 # it inline.
89 # TODO: Remove the fallback to rendering inline and only support displaying the
90 # already rendered content.
91 if release.description.html:
92 description = release.description.html
93 else:
94 description = readme.render(
95 release.description.raw, release.description.content_type
96 )
97
98 # Get all of the maintainers for this project.
99 maintainers = [
100 r.user
101 for r in (
102 request.db.query(Role)
103 .join(User)
104 .filter(Role.project == project)
105 .distinct(User.username)
106 .order_by(User.username)
107 .all()
108 )
109 ]
110
111 # Get the license from both the `Classifier` and `License` metadata fields
112 license_classifiers = ", ".join(
113 c.split(" :: ")[-1] for c in release.classifiers if c.startswith("License")
114 )
115
116 # Make a best effort when the entire license text is given by using the
117 # first line only.
118 short_license = release.license.split("\n")[0] if release.license else None
119
120 if license_classifiers and short_license:
121 license = f"{license_classifiers} ({short_license})"
122 else:
123 license = license_classifiers or short_license or None
124
125 # We cannot easily sort naturally in SQL, sort here and pass to template
126 sdists = natsorted(
127 release.files.filter(File.packagetype == "sdist").all(),
128 reverse=True,
129 key=lambda f: f.filename,
130 )
131 bdists = natsorted(
132 release.files.filter(File.packagetype != "sdist").all(),
133 reverse=True,
134 key=lambda f: f.filename,
135 )
136
137 return {
138 "project": project,
139 "release": release,
140 "description": description,
141 "files": sdists + bdists,
142 "sdists": sdists,
143 "bdists": bdists,
144 "latest_version": project.latest_version,
145 "all_versions": project.all_versions,
146 "maintainers": maintainers,
147 "license": license,
148 }
149
150
151 @view_config(
152 route_name="includes.edit-project-button",
153 context=Project,
154 renderer="includes/manage-project-button.html",
155 uses_session=True,
156 permission="manage:project",
157 has_translations=True,
158 )
159 def edit_project_button(project, request):
160 return {"project": project}
161
[end of warehouse/packaging/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py
--- a/warehouse/packaging/views.py
+++ b/warehouse/packaging/views.py
@@ -117,6 +117,10 @@
# first line only.
short_license = release.license.split("\n")[0] if release.license else None
+ # Truncate the short license if we were unable to shorten it with newlines
+ if short_license and len(short_license) > 100 and short_license == release.license:
+ short_license = short_license[:100] + "..."
+
if license_classifiers and short_license:
license = f"{license_classifiers} ({short_license})"
else:
|
{"golden_diff": "diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py\n--- a/warehouse/packaging/views.py\n+++ b/warehouse/packaging/views.py\n@@ -117,6 +117,10 @@\n # first line only.\n short_license = release.license.split(\"\\n\")[0] if release.license else None\n \n+ # Truncate the short license if we were unable to shorten it with newlines\n+ if short_license and len(short_license) > 100 and short_license == release.license:\n+ short_license = short_license[:100] + \"...\"\n+\n if license_classifiers and short_license:\n license = f\"{license_classifiers} ({short_license})\"\n else:\n", "issue": "PEP 621-style license field shows the entire license text\n**Describe the bug**\r\nWhen using PEP 621-style project metadata with a license file (see https://peps.python.org/pep-0621/#license), the entire license text is shown in the Meta -> License section of a package on pypi. For an example see https://pypi.org/project/pytest-logikal/ and the relevant pyproject.toml file https://github.com/logikal-io/pytest-logikal/blob/main/pyproject.toml#L10. Showing the entire license text makes the Meta section almost useless by pushing the rest of the metadata quite far to the bottom.\r\n\r\n**Expected behavior**\r\nThe license text should be hidden by default and perhaps shown with a modal upon clicking \"see full text\". Note that the issue is similar to https://github.com/pypi/warehouse/issues/1354, however, it is a little different due to the PEP 621 standard causing this behavior. With more and more projects adopting PEP 621 I would expect this to be a bigger issue over time.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom natsort import natsorted\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.view import view_config\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import File, Project, Release, Role\nfrom warehouse.utils import readme\n\n\n@view_config(\n route_name=\"packaging.project\",\n context=Project,\n renderer=\"packaging/detail.html\",\n decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale\n )\n ],\n has_translations=True,\n)\ndef project_detail(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(request.current_route_path(name=project.name))\n\n try:\n release = (\n request.db.query(Release)\n .filter(Release.project == project)\n .order_by(\n Release.yanked,\n Release.is_prerelease.nullslast(),\n Release._pypi_ordering.desc(),\n )\n .limit(1)\n .one()\n )\n except NoResultFound:\n raise HTTPNotFound\n\n return release_detail(release, request)\n\n\n@view_config(\n route_name=\"packaging.release\",\n context=Release,\n renderer=\"packaging/detail.html\",\n decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale\n 
)\n ],\n has_translations=True,\n)\ndef release_detail(release, request):\n project = release.project\n\n # Check if the requested version is equivalent but not exactly the same as\n # the release's version. Use `.get` because this view is used by\n # `project_detail` and there may not be a version.\n #\n # This also handles the case where both the version and the project name\n # need adjusted, and handles it in a single redirect.\n if release.version != request.matchdict.get(\"version\", release.version):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.name, version=release.version)\n )\n\n # It's possible that the requested version was correct (or not provided),\n # but we still need to adjust the project name.\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(request.current_route_path(name=project.name))\n\n # Grab the rendered description if it exists, and if it doesn't, then we will render\n # it inline.\n # TODO: Remove the fallback to rendering inline and only support displaying the\n # already rendered content.\n if release.description.html:\n description = release.description.html\n else:\n description = readme.render(\n release.description.raw, release.description.content_type\n )\n\n # Get all of the maintainers for this project.\n maintainers = [\n r.user\n for r in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .order_by(User.username)\n .all()\n )\n ]\n\n # Get the license from both the `Classifier` and `License` metadata fields\n license_classifiers = \", \".join(\n c.split(\" :: \")[-1] for c in release.classifiers if c.startswith(\"License\")\n )\n\n # Make a best effort when the entire license text is given by using the\n # first line only.\n short_license = release.license.split(\"\\n\")[0] if release.license else None\n\n if license_classifiers and short_license:\n license = f\"{license_classifiers} ({short_license})\"\n else:\n license = license_classifiers or short_license or None\n\n # We cannot easily sort naturally in SQL, sort here and pass to template\n sdists = natsorted(\n release.files.filter(File.packagetype == \"sdist\").all(),\n reverse=True,\n key=lambda f: f.filename,\n )\n bdists = natsorted(\n release.files.filter(File.packagetype != \"sdist\").all(),\n reverse=True,\n key=lambda f: f.filename,\n )\n\n return {\n \"project\": project,\n \"release\": release,\n \"description\": description,\n \"files\": sdists + bdists,\n \"sdists\": sdists,\n \"bdists\": bdists,\n \"latest_version\": project.latest_version,\n \"all_versions\": project.all_versions,\n \"maintainers\": maintainers,\n \"license\": license,\n }\n\n\n@view_config(\n route_name=\"includes.edit-project-button\",\n context=Project,\n renderer=\"includes/manage-project-button.html\",\n uses_session=True,\n permission=\"manage:project\",\n has_translations=True,\n)\ndef edit_project_button(project, request):\n return {\"project\": project}\n", "path": "warehouse/packaging/views.py"}]}
| 2,359 | 164 |
gh_patches_debug_10135
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-1878
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation: Focal loss example misleading
In the example of the focal loss (`tfa.losses.SigmoidFocalCrossEntropy`) the label and the prediction data seems to be swapped.
This is misleading as `sigmoid_focal_crossentropy()` expects `y_true, y_pred`, whereas the example presents `y_pred, y_true`.
https://github.com/tensorflow/addons/blob/efa0cd9c86578f53a2eb9d80849e186bffa4d64a/tensorflow_addons/losses/focal_loss.py#L42-L47
https://github.com/tensorflow/addons/blob/efa0cd9c86578f53a2eb9d80849e186bffa4d64a/tensorflow_addons/losses/focal_loss.py#L90-L92
</issue>
<code>
[start of tensorflow_addons/losses/focal_loss.py]
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Implements Focal loss."""
16
17 import tensorflow as tf
18 import tensorflow.keras.backend as K
19
20 from tensorflow_addons.utils.keras_utils import LossFunctionWrapper
21 from tensorflow_addons.utils.types import FloatTensorLike, TensorLike
22 from typeguard import typechecked
23
24
25 @tf.keras.utils.register_keras_serializable(package="Addons")
26 class SigmoidFocalCrossEntropy(LossFunctionWrapper):
27 """Implements the focal loss function.
28
29 Focal loss was first introduced in the RetinaNet paper
30 (https://arxiv.org/pdf/1708.02002.pdf). Focal loss is extremely useful for
31 classification when you have highly imbalanced classes. It down-weights
32 well-classified examples and focuses on hard examples. The loss value is
33 much high for a sample which is misclassified by the classifier as compared
34 to the loss value corresponding to a well-classified example. One of the
35 best use-cases of focal loss is its usage in object detection where the
36 imbalance between the background class and other classes is extremely high.
37
38 Usage:
39
40 ```python
41 fl = tfa.losses.SigmoidFocalCrossEntropy()
42 loss = fl(
43 [[0.97], [0.91], [0.03]],
44 [[1.0], [1.0], [0.0]])
45 print('Loss: ', loss.numpy()) # Loss: [0.00010971,
46 0.0032975,
47 0.00030611]
48 ```
49 Usage with tf.keras API:
50
51 ```python
52 model = tf.keras.Model(inputs, outputs)
53 model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())
54 ```
55
56 Args
57 alpha: balancing factor, default value is 0.25
58 gamma: modulating factor, default value is 2.0
59
60 Returns:
61 Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same
62 shape as `y_true`; otherwise, it is scalar.
63
64 Raises:
65 ValueError: If the shape of `sample_weight` is invalid or value of
66 `gamma` is less than zero
67 """
68
69 @typechecked
70 def __init__(
71 self,
72 from_logits: bool = False,
73 alpha: FloatTensorLike = 0.25,
74 gamma: FloatTensorLike = 2.0,
75 reduction: str = tf.keras.losses.Reduction.NONE,
76 name: str = "sigmoid_focal_crossentropy",
77 ):
78 super().__init__(
79 sigmoid_focal_crossentropy,
80 name=name,
81 reduction=reduction,
82 from_logits=from_logits,
83 alpha=alpha,
84 gamma=gamma,
85 )
86
87
88 @tf.keras.utils.register_keras_serializable(package="Addons")
89 @tf.function
90 def sigmoid_focal_crossentropy(
91 y_true: TensorLike,
92 y_pred: TensorLike,
93 alpha: FloatTensorLike = 0.25,
94 gamma: FloatTensorLike = 2.0,
95 from_logits: bool = False,
96 ) -> tf.Tensor:
97 """
98 Args
99 y_true: true targets tensor.
100 y_pred: predictions tensor.
101 alpha: balancing factor.
102 gamma: modulating factor.
103
104 Returns:
105 Weighted loss float `Tensor`. If `reduction` is `NONE`,this has the
106 same shape as `y_true`; otherwise, it is scalar.
107 """
108 if gamma and gamma < 0:
109 raise ValueError("Value of gamma should be greater than or equal to zero")
110
111 y_pred = tf.convert_to_tensor(y_pred)
112 y_true = tf.convert_to_tensor(y_true, dtype=y_pred.dtype)
113
114 # Get the cross_entropy for each entry
115 ce = K.binary_crossentropy(y_true, y_pred, from_logits=from_logits)
116
117 # If logits are provided then convert the predictions into probabilities
118 if from_logits:
119 pred_prob = tf.sigmoid(y_pred)
120 else:
121 pred_prob = y_pred
122
123 p_t = (y_true * pred_prob) + ((1 - y_true) * (1 - pred_prob))
124 alpha_factor = 1.0
125 modulating_factor = 1.0
126
127 if alpha:
128 alpha = tf.convert_to_tensor(alpha, dtype=K.floatx())
129 alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)
130
131 if gamma:
132 gamma = tf.convert_to_tensor(gamma, dtype=K.floatx())
133 modulating_factor = tf.pow((1.0 - p_t), gamma)
134
135 # compute the final loss and return
136 return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)
137
[end of tensorflow_addons/losses/focal_loss.py]
</code>
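The down-weighting described in the class docstring above can be checked by hand: each sample's binary cross-entropy is scaled by `alpha_t * (1 - p_t) ** gamma`, so confidently correct predictions contribute almost nothing. The following plain-NumPy sketch is only an illustration of that formula, not the library implementation:

```python
import numpy as np

def focal_bce(y_true, p, alpha=0.25, gamma=2.0, eps=1e-7):
    # Sigmoid focal cross-entropy on already-sigmoided probabilities.
    p = np.clip(p, eps, 1 - eps)
    ce = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))   # ordinary BCE
    p_t = y_true * p + (1 - y_true) * (1 - p)                   # probability of the true class
    alpha_t = y_true * alpha + (1 - y_true) * (1 - alpha)       # class-balancing factor
    return alpha_t * (1 - p_t) ** gamma * ce                    # down-weight easy samples

print(focal_bce(np.array([1.0, 1.0, 0.0]), np.array([0.97, 0.91, 0.03])))
# roughly [6.9e-06, 1.9e-04, 2.1e-05], matching the corrected figures in the patch below
```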
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tensorflow_addons/losses/focal_loss.py b/tensorflow_addons/losses/focal_loss.py
--- a/tensorflow_addons/losses/focal_loss.py
+++ b/tensorflow_addons/losses/focal_loss.py
@@ -40,11 +40,11 @@
```python
fl = tfa.losses.SigmoidFocalCrossEntropy()
loss = fl(
- [[0.97], [0.91], [0.03]],
- [[1.0], [1.0], [0.0]])
- print('Loss: ', loss.numpy()) # Loss: [0.00010971,
- 0.0032975,
- 0.00030611]
+ y_true = [[1.0], [1.0], [0.0]],
+ y_pred = [[0.97], [0.91], [0.03]])
+ print('Loss: ', loss.numpy()) # Loss: [6.8532745e-06,
+ 1.9097870e-04,
+ 2.0559824e-05]
```
Usage with tf.keras API:
|
{"golden_diff": "diff --git a/tensorflow_addons/losses/focal_loss.py b/tensorflow_addons/losses/focal_loss.py\n--- a/tensorflow_addons/losses/focal_loss.py\n+++ b/tensorflow_addons/losses/focal_loss.py\n@@ -40,11 +40,11 @@\n ```python\n fl = tfa.losses.SigmoidFocalCrossEntropy()\n loss = fl(\n- [[0.97], [0.91], [0.03]],\n- [[1.0], [1.0], [0.0]])\n- print('Loss: ', loss.numpy()) # Loss: [0.00010971,\n- 0.0032975,\n- 0.00030611]\n+ y_true = [[1.0], [1.0], [0.0]],\n+ y_pred = [[0.97], [0.91], [0.03]])\n+ print('Loss: ', loss.numpy()) # Loss: [6.8532745e-06,\n+ 1.9097870e-04,\n+ 2.0559824e-05]\n ```\n Usage with tf.keras API:\n", "issue": "Documentation: Focal loss example misleading\nIn the example of the focal loss (`tfa.losses.SigmoidFocalCrossEntropy`) the label and the prediction data seems to be swapped. \r\nThis is misleading as `sigmoid_focal_crossentropy()` expects `y_true, y_pred`, whereas the example presents `y_pred, y_true`.\r\n\r\nhttps://github.com/tensorflow/addons/blob/efa0cd9c86578f53a2eb9d80849e186bffa4d64a/tensorflow_addons/losses/focal_loss.py#L42-L47\r\n\r\nhttps://github.com/tensorflow/addons/blob/efa0cd9c86578f53a2eb9d80849e186bffa4d64a/tensorflow_addons/losses/focal_loss.py#L90-L92\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements Focal loss.\"\"\"\n\nimport tensorflow as tf\nimport tensorflow.keras.backend as K\n\nfrom tensorflow_addons.utils.keras_utils import LossFunctionWrapper\nfrom tensorflow_addons.utils.types import FloatTensorLike, TensorLike\nfrom typeguard import typechecked\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass SigmoidFocalCrossEntropy(LossFunctionWrapper):\n \"\"\"Implements the focal loss function.\n\n Focal loss was first introduced in the RetinaNet paper\n (https://arxiv.org/pdf/1708.02002.pdf). Focal loss is extremely useful for\n classification when you have highly imbalanced classes. It down-weights\n well-classified examples and focuses on hard examples. The loss value is\n much high for a sample which is misclassified by the classifier as compared\n to the loss value corresponding to a well-classified example. One of the\n best use-cases of focal loss is its usage in object detection where the\n imbalance between the background class and other classes is extremely high.\n\n Usage:\n\n ```python\n fl = tfa.losses.SigmoidFocalCrossEntropy()\n loss = fl(\n [[0.97], [0.91], [0.03]],\n [[1.0], [1.0], [0.0]])\n print('Loss: ', loss.numpy()) # Loss: [0.00010971,\n 0.0032975,\n 0.00030611]\n ```\n Usage with tf.keras API:\n\n ```python\n model = tf.keras.Model(inputs, outputs)\n model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())\n ```\n\n Args\n alpha: balancing factor, default value is 0.25\n gamma: modulating factor, default value is 2.0\n\n Returns:\n Weighted loss float `Tensor`. 
If `reduction` is `NONE`, this has the same\n shape as `y_true`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `sample_weight` is invalid or value of\n `gamma` is less than zero\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n from_logits: bool = False,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n reduction: str = tf.keras.losses.Reduction.NONE,\n name: str = \"sigmoid_focal_crossentropy\",\n ):\n super().__init__(\n sigmoid_focal_crossentropy,\n name=name,\n reduction=reduction,\n from_logits=from_logits,\n alpha=alpha,\n gamma=gamma,\n )\n\n\[email protected]_keras_serializable(package=\"Addons\")\[email protected]\ndef sigmoid_focal_crossentropy(\n y_true: TensorLike,\n y_pred: TensorLike,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n from_logits: bool = False,\n) -> tf.Tensor:\n \"\"\"\n Args\n y_true: true targets tensor.\n y_pred: predictions tensor.\n alpha: balancing factor.\n gamma: modulating factor.\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`,this has the\n same shape as `y_true`; otherwise, it is scalar.\n \"\"\"\n if gamma and gamma < 0:\n raise ValueError(\"Value of gamma should be greater than or equal to zero\")\n\n y_pred = tf.convert_to_tensor(y_pred)\n y_true = tf.convert_to_tensor(y_true, dtype=y_pred.dtype)\n\n # Get the cross_entropy for each entry\n ce = K.binary_crossentropy(y_true, y_pred, from_logits=from_logits)\n\n # If logits are provided then convert the predictions into probabilities\n if from_logits:\n pred_prob = tf.sigmoid(y_pred)\n else:\n pred_prob = y_pred\n\n p_t = (y_true * pred_prob) + ((1 - y_true) * (1 - pred_prob))\n alpha_factor = 1.0\n modulating_factor = 1.0\n\n if alpha:\n alpha = tf.convert_to_tensor(alpha, dtype=K.floatx())\n alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)\n\n if gamma:\n gamma = tf.convert_to_tensor(gamma, dtype=K.floatx())\n modulating_factor = tf.pow((1.0 - p_t), gamma)\n\n # compute the final loss and return\n return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)\n", "path": "tensorflow_addons/losses/focal_loss.py"}]}
| 2,213 | 302 |
gh_patches_debug_26268 | rasdani/github-patches | git_diff | sanic-org__sanic-2937 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Response streaming produces [ERROR] Invalid response type None (need HTTPResponse)
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
The "response streaming" [feature of Sanic](https://sanic.dev/en/guide/advanced/streaming.html#response-streaming) produces error messages when running from Uvicorn.
When accessing a page using the `await request.respond()` API, it produces error messages after each request.
```
[2024-01-31 19:37:14 +0000] [694830] [INFO]
┌─────────────────────────────────────────────────────────────────────────────────┐
│ Sanic v23.6.0 │
│ │
├───────────────────────┬─────────────────────────────────────────────────────────┤
│ │ mode: production, ASGI │
│ ▄███ █████ ██ │ server: ASGI │
│ ██ │ python: 3.11.6 │
│ ▀███████ ███▄ │ platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31 │
│ ██ │ packages: sanic-routing==23.12.0, sanic-testing==23.6.0 │
│ ████ ████████▀ │ │
│ │ │
│ Build Fast. Run Fast. │ │
└───────────────────────┴─────────────────────────────────────────────────────────┘
INFO: Application startup complete.
INFO: 127.0.0.1:42186 - "GET / HTTP/1.1" 200 OK
[2024-01-31 19:38:19 +0000] [694830] [ERROR] Invalid response type None (need HTTPResponse)
Traceback (most recent call last):
File "handle_request", line 144, in handle_request
"_inspector",
^^^^^
sanic.exceptions.ServerError: Invalid response type None (need HTTPResponse)
[2024-01-31 19:38:19 +0000] [694830] [ERROR] The error response will not be sent to the client for the following exception:"Invalid response type None (need HTTPResponse)". A previous response has at least partially been sent.
```
### Code snippet
```python
from sanic import Sanic
app = Sanic("my-hello-world-app")
@app.route("/")
async def test(request):
response = await request.respond(content_type="text/plain")
await response.send("hello world")
await response.eof()
if __name__ == "__main__":
app.run()
```
### Expected Behavior
Sanic should not produce error messages when using the response streaming API.
### How do you run Sanic?
ASGI
### Operating System
Linux
### Sanic Version
23.6.0
### Additional context
Possibly related to #2572, but it seems like a different issue. I can reproduce this without using WebSockets or SSE.
</issue>
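For reference, the ASGI message sequence that any streamed response must produce is small: one `http.response.start` carrying status and headers, then one or more `http.response.body` messages, with `more_body=False` on the last one. The bare-bones ASGI app below is a sketch of that protocol, not Sanic's own code:

```python
async def app(scope, receive, send):
    assert scope["type"] == "http"

    # Status and headers go out exactly once.
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })

    # Body chunks; more_body=True keeps the response open for further chunks.
    await send({"type": "http.response.body", "body": b"hello ", "more_body": True})

    # The final chunk (it may be empty) closes the response.
    await send({"type": "http.response.body", "body": b"world", "more_body": False})
```

How Sanic maps `request.respond()`, `response.send()` and `response.eof()` onto these messages is governed by `ASGIApp.send()` in the file that follows.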
<code>
[start of sanic/asgi.py]
1 from __future__ import annotations
2
3 import warnings
4
5 from typing import TYPE_CHECKING, Optional
6
7 from sanic.compat import Header
8 from sanic.exceptions import BadRequest, ServerError
9 from sanic.helpers import Default
10 from sanic.http import Stage
11 from sanic.log import error_logger, logger
12 from sanic.models.asgi import ASGIReceive, ASGIScope, ASGISend, MockTransport
13 from sanic.request import Request
14 from sanic.response import BaseHTTPResponse
15 from sanic.server import ConnInfo
16 from sanic.server.websockets.connection import WebSocketConnection
17
18
19 if TYPE_CHECKING:
20 from sanic import Sanic
21
22
23 class Lifespan:
24 def __init__(
25 self, sanic_app, scope: ASGIScope, receive: ASGIReceive, send: ASGISend
26 ) -> None:
27 self.sanic_app = sanic_app
28 self.scope = scope
29 self.receive = receive
30 self.send = send
31
32 if "server.init.before" in self.sanic_app.signal_router.name_index:
33 logger.debug(
34 'You have set a listener for "before_server_start" '
35 "in ASGI mode. "
36 "It will be executed as early as possible, but not before "
37 "the ASGI server is started.",
38 extra={"verbosity": 1},
39 )
40 if "server.shutdown.after" in self.sanic_app.signal_router.name_index:
41 logger.debug(
42 'You have set a listener for "after_server_stop" '
43 "in ASGI mode. "
44 "It will be executed as late as possible, but not after "
45 "the ASGI server is stopped.",
46 extra={"verbosity": 1},
47 )
48
49 async def startup(self) -> None:
50 """
51 Gather the listeners to fire on server start.
52 Because we are using a third-party server and not Sanic server, we do
53 not have access to fire anything BEFORE the server starts.
54 Therefore, we fire before_server_start and after_server_start
55 in sequence since the ASGI lifespan protocol only supports a single
56 startup event.
57 """
58 await self.sanic_app._startup()
59 await self.sanic_app._server_event("init", "before")
60 await self.sanic_app._server_event("init", "after")
61
62 if not isinstance(self.sanic_app.config.USE_UVLOOP, Default):
63 warnings.warn(
64 "You have set the USE_UVLOOP configuration option, but Sanic "
65 "cannot control the event loop when running in ASGI mode."
66 "This option will be ignored."
67 )
68
69 async def shutdown(self) -> None:
70 """
71 Gather the listeners to fire on server stop.
72 Because we are using a third-party server and not Sanic server, we do
73 not have access to fire anything AFTER the server stops.
74 Therefore, we fire before_server_stop and after_server_stop
75 in sequence since the ASGI lifespan protocol only supports a single
76 shutdown event.
77 """
78 await self.sanic_app._server_event("shutdown", "before")
79 await self.sanic_app._server_event("shutdown", "after")
80
81 async def __call__(self) -> None:
82 while True:
83 message = await self.receive()
84 if message["type"] == "lifespan.startup":
85 try:
86 await self.startup()
87 except Exception as e:
88 error_logger.exception(e)
89 await self.send(
90 {"type": "lifespan.startup.failed", "message": str(e)}
91 )
92 else:
93 await self.send({"type": "lifespan.startup.complete"})
94 elif message["type"] == "lifespan.shutdown":
95 try:
96 await self.shutdown()
97 except Exception as e:
98 error_logger.exception(e)
99 await self.send(
100 {"type": "lifespan.shutdown.failed", "message": str(e)}
101 )
102 else:
103 await self.send({"type": "lifespan.shutdown.complete"})
104 return
105
106
107 class ASGIApp:
108 sanic_app: Sanic
109 request: Request
110 transport: MockTransport
111 lifespan: Lifespan
112 ws: Optional[WebSocketConnection]
113 stage: Stage
114 response: Optional[BaseHTTPResponse]
115
116 @classmethod
117 async def create(
118 cls,
119 sanic_app: Sanic,
120 scope: ASGIScope,
121 receive: ASGIReceive,
122 send: ASGISend,
123 ) -> ASGIApp:
124 instance = cls()
125 instance.ws = None
126 instance.sanic_app = sanic_app
127 instance.transport = MockTransport(scope, receive, send)
128 instance.transport.loop = sanic_app.loop
129 instance.stage = Stage.IDLE
130 instance.response = None
131 instance.sanic_app.state.is_started = True
132 setattr(instance.transport, "add_task", sanic_app.loop.create_task)
133
134 try:
135 headers = Header(
136 [
137 (
138 key.decode("ASCII"),
139 value.decode(errors="surrogateescape"),
140 )
141 for key, value in scope.get("headers", [])
142 ]
143 )
144 except UnicodeDecodeError:
145 raise BadRequest(
146 "Header names can only contain US-ASCII characters"
147 )
148
149 if scope["type"] == "http":
150 version = scope["http_version"]
151 method = scope["method"]
152 elif scope["type"] == "websocket":
153 version = "1.1"
154 method = "GET"
155
156 instance.ws = instance.transport.create_websocket_connection(
157 send, receive
158 )
159 else:
160 raise ServerError("Received unknown ASGI scope")
161
162 url_bytes, query = scope["raw_path"], scope["query_string"]
163 if query:
164 # httpx ASGI client sends query string as part of raw_path
165 url_bytes = url_bytes.split(b"?", 1)[0]
166 # All servers send them separately
167 url_bytes = b"%b?%b" % (url_bytes, query)
168
169 request_class = sanic_app.request_class or Request # type: ignore
170 instance.request = request_class(
171 url_bytes,
172 headers,
173 version,
174 method,
175 instance.transport,
176 sanic_app,
177 )
178 request_class._current.set(instance.request)
179 instance.request.stream = instance # type: ignore
180 instance.request_body = True
181 instance.request.conn_info = ConnInfo(instance.transport)
182
183 await instance.sanic_app.dispatch(
184 "http.lifecycle.request",
185 inline=True,
186 context={"request": instance.request},
187 fail_not_found=False,
188 )
189
190 return instance
191
192 async def read(self) -> Optional[bytes]:
193 """
194 Read and stream the body in chunks from an incoming ASGI message.
195 """
196 if self.stage is Stage.IDLE:
197 self.stage = Stage.REQUEST
198 message = await self.transport.receive()
199 body = message.get("body", b"")
200 if not message.get("more_body", False):
201 self.request_body = False
202 if not body:
203 return None
204 return body
205
206 async def __aiter__(self):
207 while self.request_body:
208 data = await self.read()
209 if data:
210 yield data
211
212 def respond(self, response: BaseHTTPResponse):
213 if self.stage is not Stage.HANDLER:
214 self.stage = Stage.FAILED
215 raise RuntimeError("Response already started")
216 if self.response is not None:
217 self.response.stream = None
218 response.stream, self.response = self, response
219 return response
220
221 async def send(self, data, end_stream):
222 self.stage = Stage.IDLE if end_stream else Stage.RESPONSE
223 if self.response:
224 response, self.response = self.response, None
225 await self.transport.send(
226 {
227 "type": "http.response.start",
228 "status": response.status,
229 "headers": response.processed_headers,
230 }
231 )
232 response_body = getattr(response, "body", None)
233 if response_body:
234 data = response_body + data if data else response_body
235 await self.transport.send(
236 {
237 "type": "http.response.body",
238 "body": data.encode() if hasattr(data, "encode") else data,
239 "more_body": not end_stream,
240 }
241 )
242
243 _asgi_single_callable = True # We conform to ASGI 3.0 single-callable
244
245 async def __call__(self) -> None:
246 """
247 Handle the incoming request.
248 """
249 try:
250 self.stage = Stage.HANDLER
251 await self.sanic_app.handle_request(self.request)
252 except Exception as e:
253 try:
254 await self.sanic_app.handle_exception(self.request, e)
255 except Exception as exc:
256 await self.sanic_app.handle_exception(self.request, exc, False)
257
[end of sanic/asgi.py]
</code>
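The `Lifespan` docstrings above explain that the ASGI lifespan protocol only exposes a single startup and a single shutdown event, which is why Sanic fires its before/after listener pairs back to back. Stripped down to the raw protocol, and with hypothetical hook calls standing in for Sanic's listeners, the exchange looks roughly like this:

```python
async def lifespan(scope, receive, send):
    assert scope["type"] == "lifespan"
    while True:
        message = await receive()
        if message["type"] == "lifespan.startup":
            # run the "before server start" and "after server start" hooks here, in order
            await send({"type": "lifespan.startup.complete"})
        elif message["type"] == "lifespan.shutdown":
            # run the "before server stop" and "after server stop" hooks here, in order
            await send({"type": "lifespan.shutdown.complete"})
            return
```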
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sanic/asgi.py b/sanic/asgi.py
--- a/sanic/asgi.py
+++ b/sanic/asgi.py
@@ -219,19 +219,26 @@
return response
async def send(self, data, end_stream):
- self.stage = Stage.IDLE if end_stream else Stage.RESPONSE
- if self.response:
- response, self.response = self.response, None
+ if self.stage is Stage.IDLE:
+ if not end_stream or data:
+ raise RuntimeError(
+ "There is no request to respond to, either the "
+ "response has already been sent or the "
+ "request has not been received yet."
+ )
+ return
+ if self.response and self.stage is Stage.HANDLER:
await self.transport.send(
{
"type": "http.response.start",
- "status": response.status,
- "headers": response.processed_headers,
+ "status": self.response.status,
+ "headers": self.response.processed_headers,
}
)
- response_body = getattr(response, "body", None)
+ response_body = getattr(self.response, "body", None)
if response_body:
data = response_body + data if data else response_body
+ self.stage = Stage.IDLE if end_stream else Stage.RESPONSE
await self.transport.send(
{
"type": "http.response.body",
|
{"golden_diff": "diff --git a/sanic/asgi.py b/sanic/asgi.py\n--- a/sanic/asgi.py\n+++ b/sanic/asgi.py\n@@ -219,19 +219,26 @@\n return response\n \n async def send(self, data, end_stream):\n- self.stage = Stage.IDLE if end_stream else Stage.RESPONSE\n- if self.response:\n- response, self.response = self.response, None\n+ if self.stage is Stage.IDLE:\n+ if not end_stream or data:\n+ raise RuntimeError(\n+ \"There is no request to respond to, either the \"\n+ \"response has already been sent or the \"\n+ \"request has not been received yet.\"\n+ )\n+ return\n+ if self.response and self.stage is Stage.HANDLER:\n await self.transport.send(\n {\n \"type\": \"http.response.start\",\n- \"status\": response.status,\n- \"headers\": response.processed_headers,\n+ \"status\": self.response.status,\n+ \"headers\": self.response.processed_headers,\n }\n )\n- response_body = getattr(response, \"body\", None)\n+ response_body = getattr(self.response, \"body\", None)\n if response_body:\n data = response_body + data if data else response_body\n+ self.stage = Stage.IDLE if end_stream else Stage.RESPONSE\n await self.transport.send(\n {\n \"type\": \"http.response.body\",\n", "issue": "Response streaming produces [ERROR] Invalid response type None (need HTTPResponse)\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Describe the bug\r\n\r\nThe \"response streaming\" [feature of Sanic](https://sanic.dev/en/guide/advanced/streaming.html#response-streaming) produces error messages when running from Uvicorn.\r\n\r\nWhen accessing a page using the `await request.respond()` API, it produces error messages after each request.\r\n\r\n```\r\n[2024-01-31 19:37:14 +0000] [694830] [INFO] \r\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n \u2502 Sanic v23.6.0 \u2502\r\n \u2502 \u2502\r\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\r\n \u2502 \u2502 mode: production, ASGI \u2502\r\n \u2502 \u2584\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588 \u2588\u2588 \u2502 server: ASGI \u2502\r\n \u2502 \u2588\u2588 \u2502 python: 3.11.6 \u2502\r\n \u2502 \u2580\u2588\u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2584 \u2502 platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31 \u2502\r\n \u2502 \u2588\u2588 \u2502 packages: sanic-routing==23.12.0, sanic-testing==23.6.0 \u2502\r\n \u2502 \u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2580 \u2502 \u2502\r\n \u2502 \u2502 \u2502\r\n \u2502 Build Fast. Run Fast. 
\u2502 \u2502\r\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n\r\nINFO: Application startup complete.\r\nINFO: 127.0.0.1:42186 - \"GET / HTTP/1.1\" 200 OK\r\n[2024-01-31 19:38:19 +0000] [694830] [ERROR] Invalid response type None (need HTTPResponse)\r\nTraceback (most recent call last):\r\n File \"handle_request\", line 144, in handle_request\r\n \"_inspector\",\r\n ^^^^^\r\nsanic.exceptions.ServerError: Invalid response type None (need HTTPResponse)\r\n[2024-01-31 19:38:19 +0000] [694830] [ERROR] The error response will not be sent to the client for the following exception:\"Invalid response type None (need HTTPResponse)\". A previous response has at least partially been sent.\r\n```\r\n\r\n### Code snippet\r\n\r\n```python\r\n\r\nfrom sanic import Sanic\r\n\r\napp = Sanic(\"my-hello-world-app\")\r\n\r\n\r\[email protected](\"/\")\r\nasync def test(request):\r\n response = await request.respond(content_type=\"text/plain\")\r\n await response.send(\"hello world\")\r\n await response.eof()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n app.run()\r\n```\r\n\r\n### Expected Behavior\r\n\r\nSanic should not produce error messages when using the response streaming API.\r\n\r\n### How do you run Sanic?\r\n\r\nASGI\r\n\r\n### Operating System\r\n\r\nLinux\r\n\r\n### Sanic Version\r\n\r\n23.6.0\r\n\r\n### Additional context\r\n\r\nPossibly related to #2572, but it seems like a different issue. I can reproduce this without using WebSockets or SSE.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport warnings\n\nfrom typing import TYPE_CHECKING, Optional\n\nfrom sanic.compat import Header\nfrom sanic.exceptions import BadRequest, ServerError\nfrom sanic.helpers import Default\nfrom sanic.http import Stage\nfrom sanic.log import error_logger, logger\nfrom sanic.models.asgi import ASGIReceive, ASGIScope, ASGISend, MockTransport\nfrom sanic.request import Request\nfrom sanic.response import BaseHTTPResponse\nfrom sanic.server import ConnInfo\nfrom sanic.server.websockets.connection import WebSocketConnection\n\n\nif TYPE_CHECKING:\n from sanic import Sanic\n\n\nclass Lifespan:\n def __init__(\n self, sanic_app, scope: ASGIScope, receive: ASGIReceive, send: ASGISend\n ) -> None:\n self.sanic_app = sanic_app\n self.scope = scope\n self.receive = receive\n self.send = send\n\n if \"server.init.before\" in self.sanic_app.signal_router.name_index:\n logger.debug(\n 'You have set a listener for \"before_server_start\" '\n \"in ASGI mode. \"\n \"It will be executed as early as possible, but not before \"\n \"the ASGI server is started.\",\n extra={\"verbosity\": 1},\n )\n if \"server.shutdown.after\" in self.sanic_app.signal_router.name_index:\n logger.debug(\n 'You have set a listener for \"after_server_stop\" '\n \"in ASGI mode. 
\"\n \"It will be executed as late as possible, but not after \"\n \"the ASGI server is stopped.\",\n extra={\"verbosity\": 1},\n )\n\n async def startup(self) -> None:\n \"\"\"\n Gather the listeners to fire on server start.\n Because we are using a third-party server and not Sanic server, we do\n not have access to fire anything BEFORE the server starts.\n Therefore, we fire before_server_start and after_server_start\n in sequence since the ASGI lifespan protocol only supports a single\n startup event.\n \"\"\"\n await self.sanic_app._startup()\n await self.sanic_app._server_event(\"init\", \"before\")\n await self.sanic_app._server_event(\"init\", \"after\")\n\n if not isinstance(self.sanic_app.config.USE_UVLOOP, Default):\n warnings.warn(\n \"You have set the USE_UVLOOP configuration option, but Sanic \"\n \"cannot control the event loop when running in ASGI mode.\"\n \"This option will be ignored.\"\n )\n\n async def shutdown(self) -> None:\n \"\"\"\n Gather the listeners to fire on server stop.\n Because we are using a third-party server and not Sanic server, we do\n not have access to fire anything AFTER the server stops.\n Therefore, we fire before_server_stop and after_server_stop\n in sequence since the ASGI lifespan protocol only supports a single\n shutdown event.\n \"\"\"\n await self.sanic_app._server_event(\"shutdown\", \"before\")\n await self.sanic_app._server_event(\"shutdown\", \"after\")\n\n async def __call__(self) -> None:\n while True:\n message = await self.receive()\n if message[\"type\"] == \"lifespan.startup\":\n try:\n await self.startup()\n except Exception as e:\n error_logger.exception(e)\n await self.send(\n {\"type\": \"lifespan.startup.failed\", \"message\": str(e)}\n )\n else:\n await self.send({\"type\": \"lifespan.startup.complete\"})\n elif message[\"type\"] == \"lifespan.shutdown\":\n try:\n await self.shutdown()\n except Exception as e:\n error_logger.exception(e)\n await self.send(\n {\"type\": \"lifespan.shutdown.failed\", \"message\": str(e)}\n )\n else:\n await self.send({\"type\": \"lifespan.shutdown.complete\"})\n return\n\n\nclass ASGIApp:\n sanic_app: Sanic\n request: Request\n transport: MockTransport\n lifespan: Lifespan\n ws: Optional[WebSocketConnection]\n stage: Stage\n response: Optional[BaseHTTPResponse]\n\n @classmethod\n async def create(\n cls,\n sanic_app: Sanic,\n scope: ASGIScope,\n receive: ASGIReceive,\n send: ASGISend,\n ) -> ASGIApp:\n instance = cls()\n instance.ws = None\n instance.sanic_app = sanic_app\n instance.transport = MockTransport(scope, receive, send)\n instance.transport.loop = sanic_app.loop\n instance.stage = Stage.IDLE\n instance.response = None\n instance.sanic_app.state.is_started = True\n setattr(instance.transport, \"add_task\", sanic_app.loop.create_task)\n\n try:\n headers = Header(\n [\n (\n key.decode(\"ASCII\"),\n value.decode(errors=\"surrogateescape\"),\n )\n for key, value in scope.get(\"headers\", [])\n ]\n )\n except UnicodeDecodeError:\n raise BadRequest(\n \"Header names can only contain US-ASCII characters\"\n )\n\n if scope[\"type\"] == \"http\":\n version = scope[\"http_version\"]\n method = scope[\"method\"]\n elif scope[\"type\"] == \"websocket\":\n version = \"1.1\"\n method = \"GET\"\n\n instance.ws = instance.transport.create_websocket_connection(\n send, receive\n )\n else:\n raise ServerError(\"Received unknown ASGI scope\")\n\n url_bytes, query = scope[\"raw_path\"], scope[\"query_string\"]\n if query:\n # httpx ASGI client sends query string as part of raw_path\n url_bytes = 
url_bytes.split(b\"?\", 1)[0]\n # All servers send them separately\n url_bytes = b\"%b?%b\" % (url_bytes, query)\n\n request_class = sanic_app.request_class or Request # type: ignore\n instance.request = request_class(\n url_bytes,\n headers,\n version,\n method,\n instance.transport,\n sanic_app,\n )\n request_class._current.set(instance.request)\n instance.request.stream = instance # type: ignore\n instance.request_body = True\n instance.request.conn_info = ConnInfo(instance.transport)\n\n await instance.sanic_app.dispatch(\n \"http.lifecycle.request\",\n inline=True,\n context={\"request\": instance.request},\n fail_not_found=False,\n )\n\n return instance\n\n async def read(self) -> Optional[bytes]:\n \"\"\"\n Read and stream the body in chunks from an incoming ASGI message.\n \"\"\"\n if self.stage is Stage.IDLE:\n self.stage = Stage.REQUEST\n message = await self.transport.receive()\n body = message.get(\"body\", b\"\")\n if not message.get(\"more_body\", False):\n self.request_body = False\n if not body:\n return None\n return body\n\n async def __aiter__(self):\n while self.request_body:\n data = await self.read()\n if data:\n yield data\n\n def respond(self, response: BaseHTTPResponse):\n if self.stage is not Stage.HANDLER:\n self.stage = Stage.FAILED\n raise RuntimeError(\"Response already started\")\n if self.response is not None:\n self.response.stream = None\n response.stream, self.response = self, response\n return response\n\n async def send(self, data, end_stream):\n self.stage = Stage.IDLE if end_stream else Stage.RESPONSE\n if self.response:\n response, self.response = self.response, None\n await self.transport.send(\n {\n \"type\": \"http.response.start\",\n \"status\": response.status,\n \"headers\": response.processed_headers,\n }\n )\n response_body = getattr(response, \"body\", None)\n if response_body:\n data = response_body + data if data else response_body\n await self.transport.send(\n {\n \"type\": \"http.response.body\",\n \"body\": data.encode() if hasattr(data, \"encode\") else data,\n \"more_body\": not end_stream,\n }\n )\n\n _asgi_single_callable = True # We conform to ASGI 3.0 single-callable\n\n async def __call__(self) -> None:\n \"\"\"\n Handle the incoming request.\n \"\"\"\n try:\n self.stage = Stage.HANDLER\n await self.sanic_app.handle_request(self.request)\n except Exception as e:\n try:\n await self.sanic_app.handle_exception(self.request, e)\n except Exception as exc:\n await self.sanic_app.handle_exception(self.request, exc, False)\n", "path": "sanic/asgi.py"}]}
| 3,797 | 316 |
gh_patches_debug_17458 | rasdani/github-patches | git_diff | WeblateOrg__weblate-9806 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
read-only flagged translation strings trigger reused QA check for empty translations
### Describe the issue
In a project using gettext po files I have marked some strings as `read-only` because they are included in the translation files for technical reasons but should actually stay unchanged. Now these strings also trigger the "reused" warning together with the other read-only strings, as they all share the empty translation.
As a workaround I can of course use `ignore-reused` on those strings.
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar issues in this repository.
### Steps to reproduce the behavior
1. Add read-only to multiple not yet translated strings
### Expected behavior
Setting a string to `read-only` should not trigger the reused check if its translation is empty.
### Screenshots
_No response_
### Exception traceback
_No response_
### How do you run Weblate?
self-hosted
### Weblate versions
Weblate 4.18.2
### Weblate deploy checks
_No response_
### Additional context
_No response_
</issue>
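The collision being reported is easy to picture by grouping translated units by their target text: every read-only string that was never translated carries the same empty target, so they all land in one "reused" bucket together. The snippet below is a framework-free illustration with made-up strings, not Weblate's actual ORM query:

```python
from collections import defaultdict

# (source, target) pairs; read-only strings that stay untranslated keep "".
units = [
    ("INTERNAL_MARKER_A", ""),      # read-only, never translated
    ("INTERNAL_MARKER_B", ""),      # read-only, never translated
    ("Save", "Speichern"),
    ("Cancel", "Abbrechen"),
]

sources_by_target = defaultdict(set)
for source, target in units:
    sources_by_target[target].add(source)

# A target shared by more than one distinct source counts as "reused".
reused = {t: s for t, s in sources_by_target.items() if len(s) > 1}
print(reused)  # {'': {'INTERNAL_MARKER_A', 'INTERNAL_MARKER_B'}}  (set order may vary)
```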
<code>
[start of weblate/checks/consistency.py]
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from __future__ import annotations
6
7 from functools import reduce
8
9 from django.db.models import Count, Prefetch, Q
10 from django.utils.translation import gettext, gettext_lazy, ngettext
11
12 from weblate.checks.base import TargetCheck
13 from weblate.utils.state import STATE_TRANSLATED
14
15
16 class PluralsCheck(TargetCheck):
17 """Check for incomplete plural forms."""
18
19 check_id = "plurals"
20 name = gettext_lazy("Missing plurals")
21 description = gettext_lazy("Some plural forms are untranslated")
22
23 def should_skip(self, unit):
24 if unit.translation.component.is_multivalue:
25 return True
26 return super().should_skip(unit)
27
28 def check_target_unit(self, sources, targets, unit):
29 # Is this plural?
30 if len(sources) == 1:
31 return False
32 # Is at least something translated?
33 if targets == len(targets) * [""]:
34 return False
35 # Check for empty translation
36 return "" in targets
37
38 def check_single(self, source, target, unit):
39 """Target strings are checked in check_target_unit."""
40 return False
41
42
43 class SamePluralsCheck(TargetCheck):
44 """Check for same plural forms."""
45
46 check_id = "same-plurals"
47 name = gettext_lazy("Same plurals")
48 description = gettext_lazy("Some plural forms are translated in the same way")
49
50 def check_target_unit(self, sources, targets, unit):
51 # Is this plural?
52 if len(sources) == 1 or len(targets) == 1:
53 return False
54 if not targets[0]:
55 return False
56 return len(set(targets)) == 1
57
58 def check_single(self, source, target, unit):
59 """Target strings are checked in check_target_unit."""
60 return False
61
62
63 class ConsistencyCheck(TargetCheck):
64 """Check for inconsistent translations."""
65
66 check_id = "inconsistent"
67 name = gettext_lazy("Inconsistent")
68 description = gettext_lazy(
69 "This string has more than one translation in this project "
70 "or is untranslated in some components."
71 )
72 ignore_untranslated = False
73 propagates = True
74 batch_project_wide = True
75 skip_suggestions = True
76
77 def get_propagated_value(self, unit):
78 return unit.target
79
80 def get_propagated_units(self, unit, target: str | None = None):
81 return unit.same_source_units
82
83 def check_target_unit(self, sources, targets, unit):
84 component = unit.translation.component
85 if not component.allow_translation_propagation:
86 return False
87
88 # Use last result if checks are batched
89 if component.batch_checks:
90 return self.handle_batch(unit, component)
91
92 for other in self.get_propagated_units(unit):
93 if unit.target == other.target:
94 continue
95 if unit.translated or other.translated:
96 return True
97 return False
98
99 def check_single(self, source, target, unit):
100 """Target strings are checked in check_target_unit."""
101 return False
102
103 def check_component(self, component):
104 from weblate.trans.models import Unit
105
106 units = Unit.objects.filter(
107 translation__component__project=component.project,
108 translation__component__allow_translation_propagation=True,
109 )
110
111 # List strings with different targets
112 # Limit this to 100 strings, otherwise the resulting query is way too complex
113 matches = (
114 units.values("id_hash", "translation__language", "translation__plural")
115 .annotate(Count("target", distinct=True))
116 .filter(target__count__gt=1)
117 .order_by("id_hash")[:100]
118 )
119
120 if not matches:
121 return []
122
123 return (
124 units.filter(
125 reduce(
126 lambda x, y: x
127 | (
128 Q(id_hash=y["id_hash"])
129 & Q(translation__language=y["translation__language"])
130 & Q(translation__plural=y["translation__plural"])
131 ),
132 matches,
133 Q(),
134 )
135 )
136 .prefetch()
137 .prefetch_bulk()
138 )
139
140
141 class ReusedCheck(TargetCheck):
142 """
143 Check for reused translations.
144
145 This is skipped for languages with a single plural form as that causes too
146 many false positives, see https://github.com/WeblateOrg/weblate/issues/9450
147 """
148
149 check_id = "reused"
150 name = gettext_lazy("Reused translation")
151 description = gettext_lazy("Different strings are translated the same.")
152 propagates = True
153 batch_project_wide = True
154 skip_suggestions = True
155
156 def get_propagated_value(self, unit):
157 return unit.source
158
159 def get_propagated_units(self, unit, target: str | None = None):
160 from weblate.trans.models import Unit
161
162 if target is None:
163 return unit.same_target_units
164 return Unit.objects.same_target(unit, target)
165
166 def should_skip(self, unit):
167 if unit.translation.plural.number <= 1:
168 return True
169 return super().should_skip(unit)
170
171 def check_target_unit(self, sources, targets, unit):
172 translation = unit.translation
173 component = translation.component
174
175 # Use last result if checks are batched
176 if component.batch_checks:
177 return self.handle_batch(unit, component)
178
179 return self.get_propagated_units(unit).exists()
180
181 def get_description(self, check_obj):
182 other_sources = (
183 self.get_propagated_units(check_obj.unit)
184 .values_list("source", flat=True)
185 .distinct()
186 )
187
188 return ngettext(
189 "Other source string: %s", "Other source strings: %s", len(other_sources)
190 ) % ", ".join(gettext("“%s”") % source for source in other_sources)
191
192 def check_single(self, source, target, unit):
193 """Target strings are checked in check_target_unit."""
194 return False
195
196 def check_component(self, component):
197 from weblate.trans.models import Unit
198
199 units = Unit.objects.filter(
200 translation__component__project=component.project,
201 translation__component__allow_translation_propagation=True,
202 state__gte=STATE_TRANSLATED,
203 )
204
205 # List strings with different sources
206 # Limit this to 100 strings, otherwise the resulting query is way too complex
207 matches = (
208 units.values("target__md5", "translation__language", "translation__plural")
209 .annotate(source__count=Count("source", distinct=True))
210 .filter(source__count__gt=1)
211 .order_by("target__md5")[:100]
212 )
213
214 if not matches:
215 return []
216
217 return (
218 units.filter(
219 reduce(
220 lambda x, y: x
221 | (
222 Q(target__md5=y["target__md5"])
223 & Q(translation__language=y["translation__language"])
224 & Q(translation__plural=y["translation__plural"])
225 ),
226 matches,
227 Q(),
228 )
229 )
230 .prefetch()
231 .prefetch_bulk()
232 )
233
234
235 class TranslatedCheck(TargetCheck):
236 """Check for inconsistent translations."""
237
238 check_id = "translated"
239 name = gettext_lazy("Has been translated")
240 description = gettext_lazy("This string has been translated in the past")
241 ignore_untranslated = False
242 skip_suggestions = True
243
244 def get_description(self, check_obj):
245 unit = check_obj.unit
246 target = self.check_target_unit(unit.source, unit.target, unit)
247 if not target:
248 return super().get_description(check_obj)
249 return gettext('Previous translation was "%s".') % target
250
251 def should_skip_change(self, change, unit):
252 from weblate.trans.models import Change
253
254 # Skip automatic translation entries adding needs editing string
255 return (
256 change.action == Change.ACTION_AUTO
257 and change.details.get("state", STATE_TRANSLATED) < STATE_TRANSLATED
258 )
259
260 @staticmethod
261 def should_break_changes(change):
262 from weblate.trans.models import Change
263
264         # Stop changes processing on source string change or on
265 # intentional marking as needing edit
266 return change.action in (Change.ACTION_SOURCE_CHANGE, Change.ACTION_MARKED_EDIT)
267
268 def check_target_unit(self, sources, targets, unit):
269 if unit.translated:
270 return False
271
272 component = unit.translation.component
273
274 if component.batch_checks:
275 return self.handle_batch(unit, component)
276
277 from weblate.trans.models import Change
278
279 changes = unit.change_set.filter(action__in=Change.ACTIONS_CONTENT).order()
280
281 for change in changes:
282 if self.should_break_changes(change):
283 break
284 if self.should_skip_change(change, unit):
285 continue
286 if change.target and change.target != unit.target:
287 return change.target
288
289 return False
290
291 def check_single(self, source, target, unit):
292 """Target strings are checked in check_target_unit."""
293 return False
294
295 def get_fixup(self, unit):
296 target = self.check_target_unit(unit.source, unit.target, unit)
297 if not target:
298 return None
299 return [(".*", target, "u")]
300
301 def check_component(self, component):
302 from weblate.trans.models import Change, Unit
303
304 units = (
305 Unit.objects.filter(
306 translation__component=component,
307 change__action__in=Change.ACTIONS_CONTENT,
308 state__lt=STATE_TRANSLATED,
309 )
310 .prefetch_related(
311 Prefetch(
312 "change_set",
313 queryset=Change.objects.filter(
314 action__in=Change.ACTIONS_CONTENT,
315 ).order(),
316 to_attr="recent_consistency_changes",
317 )
318 )
319 .prefetch()
320 .prefetch_bulk()
321 )
322
323 for unit in units:
324 for change in unit.recent_consistency_changes:
325 if self.should_break_changes(change):
326 break
327 if self.should_skip_change(change, unit):
328 continue
329 if change.target:
330 yield unit
331
[end of weblate/checks/consistency.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/weblate/checks/consistency.py b/weblate/checks/consistency.py
--- a/weblate/checks/consistency.py
+++ b/weblate/checks/consistency.py
@@ -164,7 +164,7 @@
return Unit.objects.same_target(unit, target)
def should_skip(self, unit):
- if unit.translation.plural.number <= 1:
+ if unit.translation.plural.number <= 1 or not any(unit.get_target_plurals()):
return True
return super().should_skip(unit)
@@ -201,6 +201,7 @@
translation__component__allow_translation_propagation=True,
state__gte=STATE_TRANSLATED,
)
+ units = units.exclude(target="")
# List strings with different sources
# Limit this to 100 strings, otherwise the resulting query is way too complex
|
{"golden_diff": "diff --git a/weblate/checks/consistency.py b/weblate/checks/consistency.py\n--- a/weblate/checks/consistency.py\n+++ b/weblate/checks/consistency.py\n@@ -164,7 +164,7 @@\n return Unit.objects.same_target(unit, target)\n \n def should_skip(self, unit):\n- if unit.translation.plural.number <= 1:\n+ if unit.translation.plural.number <= 1 or not any(unit.get_target_plurals()):\n return True\n return super().should_skip(unit)\n \n@@ -201,6 +201,7 @@\n translation__component__allow_translation_propagation=True,\n state__gte=STATE_TRANSLATED,\n )\n+ units = units.exclude(target=\"\")\n \n # List strings with different sources\n # Limit this to 100 strings, otherwise the resulting query is way too complex\n", "issue": "read-only flagged translation strings trigger reused QA check for empty translations\n### Describe the issue\r\n\r\nIn a project using gettext po files I have marked some strings that are for technical reasons included in the translation files but should actually stay unchanged as `read-only`. Now these string also consider the \"reused\" warning with other read-only strings as they all share the empty translation.\r\n\r\nAs a workaround I can use of course `ignore-reused` on those strings.\r\n\r\n\r\n### I already tried\r\n\r\n- [X] I've read and searched [the documentation](https://docs.weblate.org/).\r\n- [X] I've searched for similar issues in this repository.\r\n\r\n### Steps to reproduce the behavior\r\n\r\n1. Add read-only to multiple not yet translated strings\r\n\r\n### Expected behavior\r\n\r\nSetting to `read-only` should not trigger reused if empty.\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Exception traceback\r\n\r\n_No response_\r\n\r\n### How do you run Weblate?\r\n\r\nself-hosted\r\n\r\n### Weblate versions\r\n\r\nWeblate 4.18.2\r\n\r\n### Weblate deploy checks\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import annotations\n\nfrom functools import reduce\n\nfrom django.db.models import Count, Prefetch, Q\nfrom django.utils.translation import gettext, gettext_lazy, ngettext\n\nfrom weblate.checks.base import TargetCheck\nfrom weblate.utils.state import STATE_TRANSLATED\n\n\nclass PluralsCheck(TargetCheck):\n \"\"\"Check for incomplete plural forms.\"\"\"\n\n check_id = \"plurals\"\n name = gettext_lazy(\"Missing plurals\")\n description = gettext_lazy(\"Some plural forms are untranslated\")\n\n def should_skip(self, unit):\n if unit.translation.component.is_multivalue:\n return True\n return super().should_skip(unit)\n\n def check_target_unit(self, sources, targets, unit):\n # Is this plural?\n if len(sources) == 1:\n return False\n # Is at least something translated?\n if targets == len(targets) * [\"\"]:\n return False\n # Check for empty translation\n return \"\" in targets\n\n def check_single(self, source, target, unit):\n \"\"\"Target strings are checked in check_target_unit.\"\"\"\n return False\n\n\nclass SamePluralsCheck(TargetCheck):\n \"\"\"Check for same plural forms.\"\"\"\n\n check_id = \"same-plurals\"\n name = gettext_lazy(\"Same plurals\")\n description = gettext_lazy(\"Some plural forms are translated in the same way\")\n\n def check_target_unit(self, sources, targets, unit):\n # Is this plural?\n if len(sources) == 1 or len(targets) == 1:\n return False\n if not targets[0]:\n return False\n return len(set(targets)) == 1\n\n def 
check_single(self, source, target, unit):\n \"\"\"Target strings are checked in check_target_unit.\"\"\"\n return False\n\n\nclass ConsistencyCheck(TargetCheck):\n \"\"\"Check for inconsistent translations.\"\"\"\n\n check_id = \"inconsistent\"\n name = gettext_lazy(\"Inconsistent\")\n description = gettext_lazy(\n \"This string has more than one translation in this project \"\n \"or is untranslated in some components.\"\n )\n ignore_untranslated = False\n propagates = True\n batch_project_wide = True\n skip_suggestions = True\n\n def get_propagated_value(self, unit):\n return unit.target\n\n def get_propagated_units(self, unit, target: str | None = None):\n return unit.same_source_units\n\n def check_target_unit(self, sources, targets, unit):\n component = unit.translation.component\n if not component.allow_translation_propagation:\n return False\n\n # Use last result if checks are batched\n if component.batch_checks:\n return self.handle_batch(unit, component)\n\n for other in self.get_propagated_units(unit):\n if unit.target == other.target:\n continue\n if unit.translated or other.translated:\n return True\n return False\n\n def check_single(self, source, target, unit):\n \"\"\"Target strings are checked in check_target_unit.\"\"\"\n return False\n\n def check_component(self, component):\n from weblate.trans.models import Unit\n\n units = Unit.objects.filter(\n translation__component__project=component.project,\n translation__component__allow_translation_propagation=True,\n )\n\n # List strings with different targets\n # Limit this to 100 strings, otherwise the resulting query is way too complex\n matches = (\n units.values(\"id_hash\", \"translation__language\", \"translation__plural\")\n .annotate(Count(\"target\", distinct=True))\n .filter(target__count__gt=1)\n .order_by(\"id_hash\")[:100]\n )\n\n if not matches:\n return []\n\n return (\n units.filter(\n reduce(\n lambda x, y: x\n | (\n Q(id_hash=y[\"id_hash\"])\n & Q(translation__language=y[\"translation__language\"])\n & Q(translation__plural=y[\"translation__plural\"])\n ),\n matches,\n Q(),\n )\n )\n .prefetch()\n .prefetch_bulk()\n )\n\n\nclass ReusedCheck(TargetCheck):\n \"\"\"\n Check for reused translations.\n\n This is skipped for languages with a single plural form as that causes too\n many false positives, see https://github.com/WeblateOrg/weblate/issues/9450\n \"\"\"\n\n check_id = \"reused\"\n name = gettext_lazy(\"Reused translation\")\n description = gettext_lazy(\"Different strings are translated the same.\")\n propagates = True\n batch_project_wide = True\n skip_suggestions = True\n\n def get_propagated_value(self, unit):\n return unit.source\n\n def get_propagated_units(self, unit, target: str | None = None):\n from weblate.trans.models import Unit\n\n if target is None:\n return unit.same_target_units\n return Unit.objects.same_target(unit, target)\n\n def should_skip(self, unit):\n if unit.translation.plural.number <= 1:\n return True\n return super().should_skip(unit)\n\n def check_target_unit(self, sources, targets, unit):\n translation = unit.translation\n component = translation.component\n\n # Use last result if checks are batched\n if component.batch_checks:\n return self.handle_batch(unit, component)\n\n return self.get_propagated_units(unit).exists()\n\n def get_description(self, check_obj):\n other_sources = (\n self.get_propagated_units(check_obj.unit)\n .values_list(\"source\", flat=True)\n .distinct()\n )\n\n return ngettext(\n \"Other source string: %s\", \"Other source strings: %s\", 
len(other_sources)\n ) % \", \".join(gettext(\"\u201c%s\u201d\") % source for source in other_sources)\n\n def check_single(self, source, target, unit):\n \"\"\"Target strings are checked in check_target_unit.\"\"\"\n return False\n\n def check_component(self, component):\n from weblate.trans.models import Unit\n\n units = Unit.objects.filter(\n translation__component__project=component.project,\n translation__component__allow_translation_propagation=True,\n state__gte=STATE_TRANSLATED,\n )\n\n # List strings with different sources\n # Limit this to 100 strings, otherwise the resulting query is way too complex\n matches = (\n units.values(\"target__md5\", \"translation__language\", \"translation__plural\")\n .annotate(source__count=Count(\"source\", distinct=True))\n .filter(source__count__gt=1)\n .order_by(\"target__md5\")[:100]\n )\n\n if not matches:\n return []\n\n return (\n units.filter(\n reduce(\n lambda x, y: x\n | (\n Q(target__md5=y[\"target__md5\"])\n & Q(translation__language=y[\"translation__language\"])\n & Q(translation__plural=y[\"translation__plural\"])\n ),\n matches,\n Q(),\n )\n )\n .prefetch()\n .prefetch_bulk()\n )\n\n\nclass TranslatedCheck(TargetCheck):\n \"\"\"Check for inconsistent translations.\"\"\"\n\n check_id = \"translated\"\n name = gettext_lazy(\"Has been translated\")\n description = gettext_lazy(\"This string has been translated in the past\")\n ignore_untranslated = False\n skip_suggestions = True\n\n def get_description(self, check_obj):\n unit = check_obj.unit\n target = self.check_target_unit(unit.source, unit.target, unit)\n if not target:\n return super().get_description(check_obj)\n return gettext('Previous translation was \"%s\".') % target\n\n def should_skip_change(self, change, unit):\n from weblate.trans.models import Change\n\n # Skip automatic translation entries adding needs editing string\n return (\n change.action == Change.ACTION_AUTO\n and change.details.get(\"state\", STATE_TRANSLATED) < STATE_TRANSLATED\n )\n\n @staticmethod\n def should_break_changes(change):\n from weblate.trans.models import Change\n\n # Stop changes processin on source string change or on\n # intentional marking as needing edit\n return change.action in (Change.ACTION_SOURCE_CHANGE, Change.ACTION_MARKED_EDIT)\n\n def check_target_unit(self, sources, targets, unit):\n if unit.translated:\n return False\n\n component = unit.translation.component\n\n if component.batch_checks:\n return self.handle_batch(unit, component)\n\n from weblate.trans.models import Change\n\n changes = unit.change_set.filter(action__in=Change.ACTIONS_CONTENT).order()\n\n for change in changes:\n if self.should_break_changes(change):\n break\n if self.should_skip_change(change, unit):\n continue\n if change.target and change.target != unit.target:\n return change.target\n\n return False\n\n def check_single(self, source, target, unit):\n \"\"\"Target strings are checked in check_target_unit.\"\"\"\n return False\n\n def get_fixup(self, unit):\n target = self.check_target_unit(unit.source, unit.target, unit)\n if not target:\n return None\n return [(\".*\", target, \"u\")]\n\n def check_component(self, component):\n from weblate.trans.models import Change, Unit\n\n units = (\n Unit.objects.filter(\n translation__component=component,\n change__action__in=Change.ACTIONS_CONTENT,\n state__lt=STATE_TRANSLATED,\n )\n .prefetch_related(\n Prefetch(\n \"change_set\",\n queryset=Change.objects.filter(\n action__in=Change.ACTIONS_CONTENT,\n ).order(),\n to_attr=\"recent_consistency_changes\",\n )\n 
)\n .prefetch()\n .prefetch_bulk()\n )\n\n for unit in units:\n for change in unit.recent_consistency_changes:\n if self.should_break_changes(change):\n break\n if self.should_skip_change(change, unit):\n continue\n if change.target:\n yield unit\n", "path": "weblate/checks/consistency.py"}]}
| 3,850 | 200 |
gh_patches_debug_14685 | rasdani/github-patches | git_diff | pre-commit__pre-commit-370 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ruby hooks failing with rbenv installed
Pre-commit has been failing for the past few weeks.
https://gist.github.com/ThatGerber/d6533155848076b25e5e0d5cb02e20eb
Seems to be an issue with the ruby (rbenv) environment.
Tried running `pre-commit clean && pre-commit` but it returns the same issue. Setting `rbenv global 2.2.4` and `rbenv shell 2.2.4` does not help either.
</issue>
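The failure above is consistent with `RBENV_VERSION` being exported even when no ruby version was requested, so rbenv refuses to fall back to the system ruby. The sketch below shows that guard in isolation; the function name and the `'default'` sentinel are assumptions for illustration, not pre-commit's actual API.

```python
# Sketch: only pin RBENV_VERSION when a concrete version was requested.
# With no pin, rbenv uses whatever ruby the system provides instead of
# aborting with "version `default' is not installed".
import os


def build_env_patch(venv, language_version):
    """Return (name, value) pairs to export before running ruby hooks."""
    patches = [
        ("GEM_HOME", os.path.join(venv, "gems")),
        ("RBENV_ROOT", venv),
    ]
    if language_version != "default":  # assumption: 'default' means "unpinned"
        patches.append(("RBENV_VERSION", language_version))
    return patches


print(build_env_patch("/tmp/rbenv-env", "default"))  # no RBENV_VERSION exported
print(build_env_patch("/tmp/rbenv-env", "2.2.4"))    # pinned version exported
```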
<code>
[start of pre_commit/languages/ruby.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import io
5 import os.path
6 import shutil
7
8 from pre_commit.envcontext import envcontext
9 from pre_commit.envcontext import Var
10 from pre_commit.languages import helpers
11 from pre_commit.util import CalledProcessError
12 from pre_commit.util import clean_path_on_failure
13 from pre_commit.util import resource_filename
14 from pre_commit.util import tarfile_open
15 from pre_commit.xargs import xargs
16
17
18 ENVIRONMENT_DIR = 'rbenv'
19
20
21 def get_env_patch(venv, language_version):
22 return (
23 ('GEM_HOME', os.path.join(venv, 'gems')),
24 ('RBENV_ROOT', venv),
25 ('RBENV_VERSION', language_version),
26 ('PATH', (
27 os.path.join(venv, 'gems', 'bin'), os.pathsep,
28 os.path.join(venv, 'shims'), os.pathsep,
29 os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),
30 )),
31 )
32
33
34 @contextlib.contextmanager
35 def in_env(repo_cmd_runner, language_version):
36 envdir = os.path.join(
37 repo_cmd_runner.prefix_dir,
38 helpers.environment_dir(ENVIRONMENT_DIR, language_version),
39 )
40 with envcontext(get_env_patch(envdir, language_version)):
41 yield
42
43
44 def _install_rbenv(repo_cmd_runner, version='default'):
45 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
46
47 with tarfile_open(resource_filename('rbenv.tar.gz')) as tf:
48 tf.extractall(repo_cmd_runner.path('.'))
49 shutil.move(
50 repo_cmd_runner.path('rbenv'), repo_cmd_runner.path(directory),
51 )
52
53 # Only install ruby-build if the version is specified
54 if version != 'default':
55 # ruby-download
56 with tarfile_open(resource_filename('ruby-download.tar.gz')) as tf:
57 tf.extractall(repo_cmd_runner.path(directory, 'plugins'))
58
59 # ruby-build
60 with tarfile_open(resource_filename('ruby-build.tar.gz')) as tf:
61 tf.extractall(repo_cmd_runner.path(directory, 'plugins'))
62
63 activate_path = repo_cmd_runner.path(directory, 'bin', 'activate')
64 with io.open(activate_path, 'w') as activate_file:
65 # This is similar to how you would install rbenv to your home directory
66 # However we do a couple things to make the executables exposed and
67 # configure it to work in our directory.
68 # We also modify the PS1 variable for manual debugging sake.
69 activate_file.write(
70 '#!/usr/bin/env bash\n'
71 "export RBENV_ROOT='{0}'\n"
72 'export PATH="$RBENV_ROOT/bin:$PATH"\n'
73 'eval "$(rbenv init -)"\n'
74 'export PS1="(rbenv)$PS1"\n'
75 # This lets us install gems in an isolated and repeatable
76 # directory
77 "export GEM_HOME='{0}/gems'\n"
78 'export PATH="$GEM_HOME/bin:$PATH"\n'
79 '\n'.format(repo_cmd_runner.path(directory))
80 )
81
82 # If we aren't using the system ruby, add a version here
83 if version != 'default':
84 activate_file.write('export RBENV_VERSION="{0}"\n'.format(version))
85
86
87 def _install_ruby(runner, version):
88 try:
89 helpers.run_setup_cmd(runner, ('rbenv', 'download', version))
90 except CalledProcessError: # pragma: no cover (usually find with download)
91 # Failed to download from mirror for some reason, build it instead
92 helpers.run_setup_cmd(runner, ('rbenv', 'install', version))
93
94
95 def install_environment(
96 repo_cmd_runner,
97 version='default',
98 additional_dependencies=(),
99 ):
100 additional_dependencies = tuple(additional_dependencies)
101 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
102 with clean_path_on_failure(repo_cmd_runner.path(directory)):
103 # TODO: this currently will fail if there's no version specified and
104 # there's no system ruby installed. Is this ok?
105 _install_rbenv(repo_cmd_runner, version=version)
106 with in_env(repo_cmd_runner, version):
107 # Need to call this before installing so rbenv's directories are
108 # set up
109 helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'init', '-'))
110 if version != 'default':
111 _install_ruby(repo_cmd_runner, version)
112 # Need to call this after installing to set up the shims
113 helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'rehash'))
114 helpers.run_setup_cmd(
115 repo_cmd_runner,
116 ('gem', 'build') + repo_cmd_runner.star('.gemspec'),
117 )
118 helpers.run_setup_cmd(
119 repo_cmd_runner,
120 (
121 ('gem', 'install', '--no-ri', '--no-rdoc') +
122 repo_cmd_runner.star('.gem') + additional_dependencies
123 ),
124 )
125
126
127 def run_hook(repo_cmd_runner, hook, file_args):
128 with in_env(repo_cmd_runner, hook['language_version']):
129 return xargs((hook['entry'],) + tuple(hook['args']), file_args)
130
[end of pre_commit/languages/ruby.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py
--- a/pre_commit/languages/ruby.py
+++ b/pre_commit/languages/ruby.py
@@ -19,16 +19,18 @@
def get_env_patch(venv, language_version):
- return (
+ patches = (
('GEM_HOME', os.path.join(venv, 'gems')),
('RBENV_ROOT', venv),
- ('RBENV_VERSION', language_version),
('PATH', (
os.path.join(venv, 'gems', 'bin'), os.pathsep,
os.path.join(venv, 'shims'), os.pathsep,
os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),
)),
)
+ if language_version != 'default':
+ patches += (('RBENV_VERSION', language_version),)
+ return patches
@contextlib.contextmanager
|
{"golden_diff": "diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py\n--- a/pre_commit/languages/ruby.py\n+++ b/pre_commit/languages/ruby.py\n@@ -19,16 +19,18 @@\n \n \n def get_env_patch(venv, language_version):\n- return (\n+ patches = (\n ('GEM_HOME', os.path.join(venv, 'gems')),\n ('RBENV_ROOT', venv),\n- ('RBENV_VERSION', language_version),\n ('PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n os.path.join(venv, 'shims'), os.pathsep,\n os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),\n )),\n )\n+ if language_version != 'default':\n+ patches += (('RBENV_VERSION', language_version),)\n+ return patches\n \n \n @contextlib.contextmanager\n", "issue": "Ruby hooks failing with rbenv installed\nPre-commit has been failing for the past few weeks.\n\nhttps://gist.github.com/ThatGerber/d6533155848076b25e5e0d5cb02e20eb\n\nSeems to be an issue with the ruby (rbenv) environment.\n\nTried running `pre-commit clean && pre-commit` but it returns the same issue. Setting `rbenv global 2.2.4` and `rbenv shell 2.2.4` does help either.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport os.path\nimport shutil\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import resource_filename\nfrom pre_commit.util import tarfile_open\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'rbenv'\n\n\ndef get_env_patch(venv, language_version):\n return (\n ('GEM_HOME', os.path.join(venv, 'gems')),\n ('RBENV_ROOT', venv),\n ('RBENV_VERSION', language_version),\n ('PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n os.path.join(venv, 'shims'), os.pathsep,\n os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),\n )),\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n envdir = os.path.join(\n repo_cmd_runner.prefix_dir,\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir, language_version)):\n yield\n\n\ndef _install_rbenv(repo_cmd_runner, version='default'):\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n with tarfile_open(resource_filename('rbenv.tar.gz')) as tf:\n tf.extractall(repo_cmd_runner.path('.'))\n shutil.move(\n repo_cmd_runner.path('rbenv'), repo_cmd_runner.path(directory),\n )\n\n # Only install ruby-build if the version is specified\n if version != 'default':\n # ruby-download\n with tarfile_open(resource_filename('ruby-download.tar.gz')) as tf:\n tf.extractall(repo_cmd_runner.path(directory, 'plugins'))\n\n # ruby-build\n with tarfile_open(resource_filename('ruby-build.tar.gz')) as tf:\n tf.extractall(repo_cmd_runner.path(directory, 'plugins'))\n\n activate_path = repo_cmd_runner.path(directory, 'bin', 'activate')\n with io.open(activate_path, 'w') as activate_file:\n # This is similar to how you would install rbenv to your home directory\n # However we do a couple things to make the executables exposed and\n # configure it to work in our directory.\n # We also modify the PS1 variable for manual debugging sake.\n activate_file.write(\n '#!/usr/bin/env bash\\n'\n \"export RBENV_ROOT='{0}'\\n\"\n 'export PATH=\"$RBENV_ROOT/bin:$PATH\"\\n'\n 'eval \"$(rbenv init -)\"\\n'\n 'export PS1=\"(rbenv)$PS1\"\\n'\n # This lets us install gems in an isolated and repeatable\n # directory\n \"export 
GEM_HOME='{0}/gems'\\n\"\n 'export PATH=\"$GEM_HOME/bin:$PATH\"\\n'\n '\\n'.format(repo_cmd_runner.path(directory))\n )\n\n # If we aren't using the system ruby, add a version here\n if version != 'default':\n activate_file.write('export RBENV_VERSION=\"{0}\"\\n'.format(version))\n\n\ndef _install_ruby(runner, version):\n try:\n helpers.run_setup_cmd(runner, ('rbenv', 'download', version))\n except CalledProcessError: # pragma: no cover (usually find with download)\n # Failed to download from mirror for some reason, build it instead\n helpers.run_setup_cmd(runner, ('rbenv', 'install', version))\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n):\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n # TODO: this currently will fail if there's no version specified and\n # there's no system ruby installed. Is this ok?\n _install_rbenv(repo_cmd_runner, version=version)\n with in_env(repo_cmd_runner, version):\n # Need to call this before installing so rbenv's directories are\n # set up\n helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'init', '-'))\n if version != 'default':\n _install_ruby(repo_cmd_runner, version)\n # Need to call this after installing to set up the shims\n helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'rehash'))\n helpers.run_setup_cmd(\n repo_cmd_runner,\n ('gem', 'build') + repo_cmd_runner.star('.gemspec'),\n )\n helpers.run_setup_cmd(\n repo_cmd_runner,\n (\n ('gem', 'install', '--no-ri', '--no-rdoc') +\n repo_cmd_runner.star('.gem') + additional_dependencies\n ),\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']):\n return xargs((hook['entry'],) + tuple(hook['args']), file_args)\n", "path": "pre_commit/languages/ruby.py"}]}
| 2,041 | 206 |
gh_patches_debug_7386
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-3499
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CSRF Failed Error
## Description
The error message
"CSRF Failed: Origin checking failed - https://internal.mathesar.org/ does not match any trusted origins."
is appearing when performing certain actions in Mathesar like creating a new internal db or saving a record.
## Additional context
This happened while testing internal.mathesar.org
</issue>
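For context, this class of error usually comes from Django rejecting the request's `Origin` when the request reaches it over plain HTTP behind a TLS-terminating proxy. The fragment below is a minimal sketch of the two settings that typically address it, assuming a Django 4.x settings module; the hostname is taken from the error message and is only an example.

```python
# Sketch of a production settings fragment (assumed names/values for illustration).
# Trust the proxy's X-Forwarded-Proto header so Django treats the request as HTTPS,
# and/or list the public origin explicitly for CSRF origin checking.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
CSRF_TRUSTED_ORIGINS = ["https://internal.mathesar.org"]
```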
<code>
[start of config/settings/production.py]
1 from config.settings.common_settings import * # noqa
2
3 # Override default settings
4 DEBUG = False
5 MATHESAR_MODE = 'PRODUCTION'
6 # Use a local.py module for settings that shouldn't be version tracked
7 try:
8 from .local import * # noqa
9 except ImportError:
10 pass
11
[end of config/settings/production.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/config/settings/production.py b/config/settings/production.py
--- a/config/settings/production.py
+++ b/config/settings/production.py
@@ -3,6 +3,14 @@
# Override default settings
DEBUG = False
MATHESAR_MODE = 'PRODUCTION'
+
+'''
+This tells Django to trust the X-Forwarded-Proto header that comes from our proxy,
+and any time its value is 'https', then the request is guaranteed to be secure
+(i.e., it originally came in via HTTPS).
+'''
+SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
+
# Use a local.py module for settings that shouldn't be version tracked
try:
from .local import * # noqa
|
{"golden_diff": "diff --git a/config/settings/production.py b/config/settings/production.py\n--- a/config/settings/production.py\n+++ b/config/settings/production.py\n@@ -3,6 +3,14 @@\n # Override default settings\n DEBUG = False\n MATHESAR_MODE = 'PRODUCTION'\n+\n+'''\n+This tells Django to trust the X-Forwarded-Proto header that comes from our proxy,\n+and any time its value is 'https', then the request is guaranteed to be secure\n+(i.e., it originally came in via HTTPS).\n+'''\n+SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')\n+\n # Use a local.py module for settings that shouldn't be version tracked\n try:\n from .local import * # noqa\n", "issue": "CSRF Failed Error\n## Description\r\nThe error message \r\n\"CSRF Failed: Origin checking failed - https://internal.mathesar.org/ does not match any trusted origins.\"\r\nis appearing when performing certain actions in Mathesar like creating a new internal db or saving a record. \r\n\r\n## Additional context\r\nThis happened while testing internal.mathesar.org\n", "before_files": [{"content": "from config.settings.common_settings import * # noqa\n\n# Override default settings\nDEBUG = False\nMATHESAR_MODE = 'PRODUCTION'\n# Use a local.py module for settings that shouldn't be version tracked\ntry:\n from .local import * # noqa \nexcept ImportError:\n pass\n", "path": "config/settings/production.py"}]}
| 679 | 161 |
gh_patches_debug_3509
|
rasdani/github-patches
|
git_diff
|
psf__black-2791
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Black specifies a requirement for click>=7.1.2 but actually requires click>=8.0
**Describe the bug**
Installing `black` from the head of the `main` branch and `click<8.0` results in a stack trace whenever the tool is run.
**To Reproduce**
```bash
python3 -m venv black-click-compat-bug
source black-click-compat-bug/bin/activate
python -m pip install \
"click<8.0" \
git+https://github.com/psf/black
black --help
```
Resulting traceback:
```
Traceback (most recent call last):
File "/home/enpaul/black-click-compat-bug/bin/black", line 5, in <module>
from black import patched_main
File "/home/enpaul/black-click-compat-bug/lib64/python3.10/site-packages/black/__init__.py", line 34, in <module>
from click.core import ParameterSource
ImportError: cannot import name 'ParameterSource' from 'click.core' (/home/enpaul/black-click-compat-bug/lib64/python3.10/site-packages/click/core.py)
```
**Expected behavior**
Black should specify a requirement for `click>=8.0.0` ([actual specification](https://github.com/psf/black/blob/main/setup.py#L100) is for `>=7.1.2`)
**Environment**
<!-- Please complete the following information: -->
- Black's version: `21.12b1.dev40+g565f9c9`
- OS: Linux (Fedora 35)
- Python: `Python 3.10.1`
</issue>
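A quick way to see the mismatch without running black at all is to attempt the same import the traceback points at; `ParameterSource` only exists in `click.core` from click 8.0 onward. The snippet below is illustrative only and is not how black itself handles the situation.

```python
# Reproduces the incompatibility in isolation: on click < 8.0 this import fails.
try:
    from click.core import ParameterSource  # present only in click >= 8.0
except ImportError:
    raise SystemExit("click is older than 8.0; black's import of ParameterSource fails")

print("click exposes ParameterSource:", ParameterSource is not None)
```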
<code>
[start of setup.py]
1 # Copyright (C) 2020 Łukasz Langa
2 from setuptools import setup, find_packages
3 import sys
4 import os
5
6 assert sys.version_info >= (3, 6, 2), "black requires Python 3.6.2+"
7 from pathlib import Path # noqa E402
8 from typing import List # noqa: E402
9
10 CURRENT_DIR = Path(__file__).parent
11 sys.path.insert(0, str(CURRENT_DIR)) # for setuptools.build_meta
12
13
14 def get_long_description() -> str:
15 return (
16 (CURRENT_DIR / "README.md").read_text(encoding="utf8")
17 + "\n\n"
18 + (CURRENT_DIR / "CHANGES.md").read_text(encoding="utf8")
19 )
20
21
22 def find_python_files(base: Path) -> List[Path]:
23 files = []
24 for entry in base.iterdir():
25 if entry.is_file() and entry.suffix == ".py":
26 files.append(entry)
27 elif entry.is_dir():
28 files.extend(find_python_files(entry))
29
30 return files
31
32
33 USE_MYPYC = False
34 # To compile with mypyc, a mypyc checkout must be present on the PYTHONPATH
35 if len(sys.argv) > 1 and sys.argv[1] == "--use-mypyc":
36 sys.argv.pop(1)
37 USE_MYPYC = True
38 if os.getenv("BLACK_USE_MYPYC", None) == "1":
39 USE_MYPYC = True
40
41 if USE_MYPYC:
42 from mypyc.build import mypycify
43
44 src = CURRENT_DIR / "src"
45 # TIP: filepaths are normalized to use forward slashes and are relative to ./src/
46 # before being checked against.
47 blocklist = [
48 # Not performance sensitive, so save bytes + compilation time:
49 "blib2to3/__init__.py",
50 "blib2to3/pgen2/__init__.py",
51 "black/output.py",
52 "black/concurrency.py",
53 "black/files.py",
54 "black/report.py",
55 # Breaks the test suite when compiled (and is also useless):
56 "black/debug.py",
57 # Compiled modules can't be run directly and that's a problem here:
58 "black/__main__.py",
59 ]
60 discovered = []
61 # black-primer and blackd have no good reason to be compiled.
62 discovered.extend(find_python_files(src / "black"))
63 discovered.extend(find_python_files(src / "blib2to3"))
64 mypyc_targets = [
65 str(p) for p in discovered if p.relative_to(src).as_posix() not in blocklist
66 ]
67
68 opt_level = os.getenv("MYPYC_OPT_LEVEL", "3")
69 ext_modules = mypycify(mypyc_targets, opt_level=opt_level, verbose=True)
70 else:
71 ext_modules = []
72
73 setup(
74 name="black",
75 use_scm_version={
76 "write_to": "src/_black_version.py",
77 "write_to_template": 'version = "{version}"\n',
78 },
79 description="The uncompromising code formatter.",
80 long_description=get_long_description(),
81 long_description_content_type="text/markdown",
82 keywords="automation formatter yapf autopep8 pyfmt gofmt rustfmt",
83 author="Łukasz Langa",
84 author_email="[email protected]",
85 url="https://github.com/psf/black",
86 project_urls={"Changelog": "https://github.com/psf/black/blob/main/CHANGES.md"},
87 license="MIT",
88 py_modules=["_black_version"],
89 ext_modules=ext_modules,
90 packages=find_packages(where="src"),
91 package_dir={"": "src"},
92 package_data={
93 "blib2to3": ["*.txt"],
94 "black": ["py.typed"],
95 "black_primer": ["primer.json"],
96 },
97 python_requires=">=3.6.2",
98 zip_safe=False,
99 install_requires=[
100 "click>=7.1.2",
101 "platformdirs>=2",
102 "tomli>=1.1.0",
103 "typed-ast>=1.4.2; python_version < '3.8' and implementation_name == 'cpython'",
104 "pathspec>=0.9.0",
105 "dataclasses>=0.6; python_version < '3.7'",
106 "typing_extensions>=3.10.0.0; python_version < '3.10'",
107 "mypy_extensions>=0.4.3",
108 ],
109 extras_require={
110 "d": ["aiohttp>=3.7.4"],
111 "colorama": ["colorama>=0.4.3"],
112 "uvloop": ["uvloop>=0.15.2"],
113 "jupyter": ["ipython>=7.8.0", "tokenize-rt>=3.2.0"],
114 },
115 test_suite="tests.test_black",
116 classifiers=[
117 "Development Status :: 4 - Beta",
118 "Environment :: Console",
119 "Intended Audience :: Developers",
120 "License :: OSI Approved :: MIT License",
121 "Operating System :: OS Independent",
122 "Programming Language :: Python",
123 "Programming Language :: Python :: 3.6",
124 "Programming Language :: Python :: 3.7",
125 "Programming Language :: Python :: 3.8",
126 "Programming Language :: Python :: 3.9",
127 "Programming Language :: Python :: 3.10",
128 "Programming Language :: Python :: 3 :: Only",
129 "Topic :: Software Development :: Libraries :: Python Modules",
130 "Topic :: Software Development :: Quality Assurance",
131 ],
132 entry_points={
133 "console_scripts": [
134 "black=black:patched_main",
135 "blackd=blackd:patched_main [d]",
136 "black-primer=black_primer.cli:main",
137 ]
138 },
139 )
140
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -97,7 +97,7 @@
python_requires=">=3.6.2",
zip_safe=False,
install_requires=[
- "click>=7.1.2",
+ "click>=8.0.0",
"platformdirs>=2",
"tomli>=1.1.0",
"typed-ast>=1.4.2; python_version < '3.8' and implementation_name == 'cpython'",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -97,7 +97,7 @@\n python_requires=\">=3.6.2\",\n zip_safe=False,\n install_requires=[\n- \"click>=7.1.2\",\n+ \"click>=8.0.0\",\n \"platformdirs>=2\",\n \"tomli>=1.1.0\",\n \"typed-ast>=1.4.2; python_version < '3.8' and implementation_name == 'cpython'\",\n", "issue": "Black specifies a requirement for click>=7.1.2 but actually requires click>=8.0\n**Describe the bug**\r\n\r\nInstalling `black` from the head of the `main` branch and `click<8.0` results in a stack trace whenever the tool is run.\r\n\r\n**To Reproduce**\r\n\r\n```bash\r\npython3 -m venv black-click-compat-bug\r\nsource black-click-compat-bug/bin/activate\r\npython -m pip install \\\r\n \"click<8.0\" \\\r\n git+https://github.com/psf/black\r\nblack --help\r\n```\r\n\r\nResulting traceback:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/enpaul/black-click-compat-bug/bin/black\", line 5, in <module>\r\n from black import patched_main\r\n File \"/home/enpaul/black-click-compat-bug/lib64/python3.10/site-packages/black/__init__.py\", line 34, in <module>\r\n from click.core import ParameterSource\r\nImportError: cannot import name 'ParameterSource' from 'click.core' (/home/enpaul/black-click-compat-bug/lib64/python3.10/site-packages/click/core.py)\r\n```\r\n\r\n**Expected behavior**\r\n\r\nBlack should specify a requirement for `click>=8.0.0` ([actual specification](https://github.com/psf/black/blob/main/setup.py#L100) is for `>=7.1.2`)\r\n\r\n**Environment**\r\n\r\n<!-- Please complete the following information: -->\r\n\r\n- Black's version: `21.12b1.dev40+g565f9c9`\r\n- OS: Linux (Fedora 35)\r\n- Python: `Python 3.10.1`\r\n\n", "before_files": [{"content": "# Copyright (C) 2020 \u0141ukasz Langa\nfrom setuptools import setup, find_packages\nimport sys\nimport os\n\nassert sys.version_info >= (3, 6, 2), \"black requires Python 3.6.2+\"\nfrom pathlib import Path # noqa E402\nfrom typing import List # noqa: E402\n\nCURRENT_DIR = Path(__file__).parent\nsys.path.insert(0, str(CURRENT_DIR)) # for setuptools.build_meta\n\n\ndef get_long_description() -> str:\n return (\n (CURRENT_DIR / \"README.md\").read_text(encoding=\"utf8\")\n + \"\\n\\n\"\n + (CURRENT_DIR / \"CHANGES.md\").read_text(encoding=\"utf8\")\n )\n\n\ndef find_python_files(base: Path) -> List[Path]:\n files = []\n for entry in base.iterdir():\n if entry.is_file() and entry.suffix == \".py\":\n files.append(entry)\n elif entry.is_dir():\n files.extend(find_python_files(entry))\n\n return files\n\n\nUSE_MYPYC = False\n# To compile with mypyc, a mypyc checkout must be present on the PYTHONPATH\nif len(sys.argv) > 1 and sys.argv[1] == \"--use-mypyc\":\n sys.argv.pop(1)\n USE_MYPYC = True\nif os.getenv(\"BLACK_USE_MYPYC\", None) == \"1\":\n USE_MYPYC = True\n\nif USE_MYPYC:\n from mypyc.build import mypycify\n\n src = CURRENT_DIR / \"src\"\n # TIP: filepaths are normalized to use forward slashes and are relative to ./src/\n # before being checked against.\n blocklist = [\n # Not performance sensitive, so save bytes + compilation time:\n \"blib2to3/__init__.py\",\n \"blib2to3/pgen2/__init__.py\",\n \"black/output.py\",\n \"black/concurrency.py\",\n \"black/files.py\",\n \"black/report.py\",\n # Breaks the test suite when compiled (and is also useless):\n \"black/debug.py\",\n # Compiled modules can't be run directly and that's a problem here:\n \"black/__main__.py\",\n ]\n discovered = []\n # black-primer and blackd have no good reason to be compiled.\n 
discovered.extend(find_python_files(src / \"black\"))\n discovered.extend(find_python_files(src / \"blib2to3\"))\n mypyc_targets = [\n str(p) for p in discovered if p.relative_to(src).as_posix() not in blocklist\n ]\n\n opt_level = os.getenv(\"MYPYC_OPT_LEVEL\", \"3\")\n ext_modules = mypycify(mypyc_targets, opt_level=opt_level, verbose=True)\nelse:\n ext_modules = []\n\nsetup(\n name=\"black\",\n use_scm_version={\n \"write_to\": \"src/_black_version.py\",\n \"write_to_template\": 'version = \"{version}\"\\n',\n },\n description=\"The uncompromising code formatter.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n keywords=\"automation formatter yapf autopep8 pyfmt gofmt rustfmt\",\n author=\"\u0141ukasz Langa\",\n author_email=\"[email protected]\",\n url=\"https://github.com/psf/black\",\n project_urls={\"Changelog\": \"https://github.com/psf/black/blob/main/CHANGES.md\"},\n license=\"MIT\",\n py_modules=[\"_black_version\"],\n ext_modules=ext_modules,\n packages=find_packages(where=\"src\"),\n package_dir={\"\": \"src\"},\n package_data={\n \"blib2to3\": [\"*.txt\"],\n \"black\": [\"py.typed\"],\n \"black_primer\": [\"primer.json\"],\n },\n python_requires=\">=3.6.2\",\n zip_safe=False,\n install_requires=[\n \"click>=7.1.2\",\n \"platformdirs>=2\",\n \"tomli>=1.1.0\",\n \"typed-ast>=1.4.2; python_version < '3.8' and implementation_name == 'cpython'\",\n \"pathspec>=0.9.0\",\n \"dataclasses>=0.6; python_version < '3.7'\",\n \"typing_extensions>=3.10.0.0; python_version < '3.10'\",\n \"mypy_extensions>=0.4.3\",\n ],\n extras_require={\n \"d\": [\"aiohttp>=3.7.4\"],\n \"colorama\": [\"colorama>=0.4.3\"],\n \"uvloop\": [\"uvloop>=0.15.2\"],\n \"jupyter\": [\"ipython>=7.8.0\", \"tokenize-rt>=3.2.0\"],\n },\n test_suite=\"tests.test_black\",\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"Topic :: Software Development :: Quality Assurance\",\n ],\n entry_points={\n \"console_scripts\": [\n \"black=black:patched_main\",\n \"blackd=blackd:patched_main [d]\",\n \"black-primer=black_primer.cli:main\",\n ]\n },\n)\n", "path": "setup.py"}]}
| 2,480 | 120 |
gh_patches_debug_20541
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-1554
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
opengl\items\GLScatterPlotItem.py, Argument pos
### Description
pos is defined as an **empty list** in line 17 instead of an array, which causes exceptions when `pos.shape`, `pos.transpose()`, and `pos.size` are used.
### Behavior
```
|==============================>>
| Traceback (most recent call last):
| File "filename.py", line 358, in <module>
| sys.exit(app.exec_())
| File "...\AppData\Roaming\Python\Python38\site-packages\pyqtgraph\opengl\GLViewWidget.py", line 193, in paintGL
| self.drawItemTree(useItemNames=useItemNames)
| File "...\AppData\Roaming\Python\Python38\site-packages\pyqtgraph\opengl\GLViewWidget.py", line 233, in drawItemTree
| self.drawItemTree(i, useItemNames=useItemNames)
| File "...\AppData\Roaming\Python\Python38\site-packages\pyqtgraph\opengl\GLViewWidget.py", line 214, in drawItemTree
| debug.printExc()
| --- exception caught here ---
| File "...\AppData\Roaming\Python\Python38\site-packages\pyqtgraph\opengl\GLViewWidget.py", line 211, in drawItemTree
| i.paint()
| File "...\AppData\Roaming\Python\Python38\site-packages\pyqtgraph\opengl\items\GLScatterPlotItem.py", line 152, in paint
| glDrawArrays(GL_POINTS, 0, int(pos.size / pos.shape[-1]))
| AttributeError: 'list' object has no attribute 'size'
|==============================<<
Error while drawing item <pyqtgraph.opengl.items.GLScatterPlotItem.GLScatterPlotItem(0xf20aa30) at 0x101A63A8>.
```
### Additional context
I honestly have no idea how to fix it since I tried some of the trivial fixes, and they did not solve the problem. (newbie)
</issue>
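The crash happens because `self.pos` starts life as a plain Python list, so the array-only attributes used in `paint()` are missing until `setData(pos=...)` is called with an ndarray. The standalone sketch below shows the guard idea (skip drawing until real data arrives); it is an illustration, not pyqtgraph's actual fix.

```python
import numpy as np


class ScatterState:
    """Toy stand-in for the item's data handling, for illustration only."""

    def __init__(self):
        self.pos = None  # no data yet

    def set_data(self, pos):
        self.pos = np.asarray(pos, dtype=np.float32).reshape(-1, 3)

    def point_count(self):
        # Mirrors the glDrawArrays count; returns 0 instead of raising
        # AttributeError when no positions have been supplied yet.
        if self.pos is None or self.pos.size == 0:
            return 0
        return int(self.pos.size / self.pos.shape[-1])


state = ScatterState()
assert state.point_count() == 0
state.set_data([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
assert state.point_count() == 2
```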
<code>
[start of pyqtgraph/opengl/items/GLScatterPlotItem.py]
1 from OpenGL.GL import *
2 from OpenGL.arrays import vbo
3 from .. GLGraphicsItem import GLGraphicsItem
4 from .. import shaders
5 from ... import QtGui
6 import numpy as np
7
8 __all__ = ['GLScatterPlotItem']
9
10 class GLScatterPlotItem(GLGraphicsItem):
11 """Draws points at a list of 3D positions."""
12
13 def __init__(self, **kwds):
14 GLGraphicsItem.__init__(self)
15 glopts = kwds.pop('glOptions', 'additive')
16 self.setGLOptions(glopts)
17 self.pos = []
18 self.size = 10
19 self.color = [1.0,1.0,1.0,0.5]
20 self.pxMode = True
21 #self.vbo = {} ## VBO does not appear to improve performance very much.
22 self.setData(**kwds)
23 self.shader = None
24
25 def setData(self, **kwds):
26 """
27 Update the data displayed by this item. All arguments are optional;
28 for example it is allowed to update spot positions while leaving
29 colors unchanged, etc.
30
31 ==================== ==================================================
32 **Arguments:**
33 pos (N,3) array of floats specifying point locations.
34 color (N,4) array of floats (0.0-1.0) specifying
35 spot colors OR a tuple of floats specifying
36 a single color for all spots.
37 size (N,) array of floats specifying spot sizes or
38 a single value to apply to all spots.
39 pxMode If True, spot sizes are expressed in pixels.
40 Otherwise, they are expressed in item coordinates.
41 ==================== ==================================================
42 """
43 args = ['pos', 'color', 'size', 'pxMode']
44 for k in kwds.keys():
45 if k not in args:
46 raise Exception('Invalid keyword argument: %s (allowed arguments are %s)' % (k, str(args)))
47
48 args.remove('pxMode')
49 for arg in args:
50 if arg in kwds:
51 setattr(self, arg, kwds[arg])
52 #self.vbo.pop(arg, None)
53
54 self.pxMode = kwds.get('pxMode', self.pxMode)
55 self.update()
56
57 def initializeGL(self):
58 if self.shader is not None:
59 return
60
61 ## Generate texture for rendering points
62 w = 64
63 def fn(x,y):
64 r = ((x-(w-1)/2.)**2 + (y-(w-1)/2.)**2) ** 0.5
65 return 255 * (w/2. - np.clip(r, w/2.-1.0, w/2.))
66 pData = np.empty((w, w, 4))
67 pData[:] = 255
68 pData[:,:,3] = np.fromfunction(fn, pData.shape[:2])
69 #print pData.shape, pData.min(), pData.max()
70 pData = pData.astype(np.ubyte)
71
72 if getattr(self, "pointTexture", None) is None:
73 self.pointTexture = glGenTextures(1)
74 glActiveTexture(GL_TEXTURE0)
75 glEnable(GL_TEXTURE_2D)
76 glBindTexture(GL_TEXTURE_2D, self.pointTexture)
77 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pData.shape[0], pData.shape[1], 0, GL_RGBA, GL_UNSIGNED_BYTE, pData)
78
79 self.shader = shaders.getShaderProgram('pointSprite')
80
81 #def getVBO(self, name):
82 #if name not in self.vbo:
83 #self.vbo[name] = vbo.VBO(getattr(self, name).astype('f'))
84 #return self.vbo[name]
85
86 #def setupGLState(self):
87 #"""Prepare OpenGL state for drawing. This function is called immediately before painting."""
88 ##glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) ## requires z-sorting to render properly.
89 #glBlendFunc(GL_SRC_ALPHA, GL_ONE)
90 #glEnable( GL_BLEND )
91 #glEnable( GL_ALPHA_TEST )
92 #glDisable( GL_DEPTH_TEST )
93
94 ##glEnable( GL_POINT_SMOOTH )
95
96 ##glHint(GL_POINT_SMOOTH_HINT, GL_NICEST)
97 ##glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, (0, 0, -1e-3))
98 ##glPointParameterfv(GL_POINT_SIZE_MAX, (65500,))
99 ##glPointParameterfv(GL_POINT_SIZE_MIN, (0,))
100
101 def paint(self):
102 self.setupGLState()
103
104 glEnable(GL_POINT_SPRITE)
105
106 glActiveTexture(GL_TEXTURE0)
107 glEnable( GL_TEXTURE_2D )
108 glBindTexture(GL_TEXTURE_2D, self.pointTexture)
109
110 glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE)
111 #glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE) ## use texture color exactly
112 #glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE ) ## texture modulates current color
113 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
114 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
115 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
116 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
117 glEnable(GL_PROGRAM_POINT_SIZE)
118
119
120 with self.shader:
121 #glUniform1i(self.shader.uniform('texture'), 0) ## inform the shader which texture to use
122 glEnableClientState(GL_VERTEX_ARRAY)
123 try:
124 pos = self.pos
125 #if pos.ndim > 2:
126 #pos = pos.reshape((-1, pos.shape[-1]))
127 glVertexPointerf(pos)
128
129 if isinstance(self.color, np.ndarray):
130 glEnableClientState(GL_COLOR_ARRAY)
131 glColorPointerf(self.color)
132 else:
133 if isinstance(self.color, QtGui.QColor):
134 glColor4f(*fn.glColor(self.color))
135 else:
136 glColor4f(*self.color)
137
138 if not self.pxMode or isinstance(self.size, np.ndarray):
139 glEnableClientState(GL_NORMAL_ARRAY)
140 norm = np.empty(pos.shape)
141 if self.pxMode:
142 norm[...,0] = self.size
143 else:
144 gpos = self.mapToView(pos.transpose()).transpose()
145 pxSize = self.view().pixelSize(gpos)
146 norm[...,0] = self.size / pxSize
147
148 glNormalPointerf(norm)
149 else:
150 glNormal3f(self.size, 0, 0) ## vertex shader uses norm.x to determine point size
151 #glPointSize(self.size)
152 glDrawArrays(GL_POINTS, 0, int(pos.size / pos.shape[-1]))
153 finally:
154 glDisableClientState(GL_NORMAL_ARRAY)
155 glDisableClientState(GL_VERTEX_ARRAY)
156 glDisableClientState(GL_COLOR_ARRAY)
157 #posVBO.unbind()
158 ##fixes #145
159 glDisable( GL_TEXTURE_2D )
160
161 #for i in range(len(self.pos)):
162 #pos = self.pos[i]
163
164 #if isinstance(self.color, np.ndarray):
165 #color = self.color[i]
166 #else:
167 #color = self.color
168 #if isinstance(self.color, QtGui.QColor):
169 #color = fn.glColor(self.color)
170
171 #if isinstance(self.size, np.ndarray):
172 #size = self.size[i]
173 #else:
174 #size = self.size
175
176 #pxSize = self.view().pixelSize(QtGui.QVector3D(*pos))
177
178 #glPointSize(size / pxSize)
179 #glBegin( GL_POINTS )
180 #glColor4f(*color) # x is blue
181 ##glNormal3f(size, 0, 0)
182 #glVertex3f(*pos)
183 #glEnd()
184
185
186
187
188
189
[end of pyqtgraph/opengl/items/GLScatterPlotItem.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/opengl/items/GLScatterPlotItem.py b/pyqtgraph/opengl/items/GLScatterPlotItem.py
--- a/pyqtgraph/opengl/items/GLScatterPlotItem.py
+++ b/pyqtgraph/opengl/items/GLScatterPlotItem.py
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
from OpenGL.GL import *
from OpenGL.arrays import vbo
from .. GLGraphicsItem import GLGraphicsItem
@@ -14,7 +15,7 @@
GLGraphicsItem.__init__(self)
glopts = kwds.pop('glOptions', 'additive')
self.setGLOptions(glopts)
- self.pos = []
+ self.pos = None
self.size = 10
self.color = [1.0,1.0,1.0,0.5]
self.pxMode = True
@@ -99,6 +100,9 @@
##glPointParameterfv(GL_POINT_SIZE_MIN, (0,))
def paint(self):
+ if self.pos is None:
+ return
+
self.setupGLState()
glEnable(GL_POINT_SPRITE)
|
{"golden_diff": "diff --git a/pyqtgraph/opengl/items/GLScatterPlotItem.py b/pyqtgraph/opengl/items/GLScatterPlotItem.py\n--- a/pyqtgraph/opengl/items/GLScatterPlotItem.py\n+++ b/pyqtgraph/opengl/items/GLScatterPlotItem.py\n@@ -1,3 +1,4 @@\n+# -*- coding: utf-8 -*-\n from OpenGL.GL import *\n from OpenGL.arrays import vbo\n from .. GLGraphicsItem import GLGraphicsItem\n@@ -14,7 +15,7 @@\n GLGraphicsItem.__init__(self)\n glopts = kwds.pop('glOptions', 'additive')\n self.setGLOptions(glopts)\n- self.pos = []\n+ self.pos = None\n self.size = 10\n self.color = [1.0,1.0,1.0,0.5]\n self.pxMode = True\n@@ -99,6 +100,9 @@\n ##glPointParameterfv(GL_POINT_SIZE_MIN, (0,))\n \n def paint(self):\n+ if self.pos is None:\n+ return\n+\n self.setupGLState()\n \n glEnable(GL_POINT_SPRITE)\n", "issue": "opengl\\items\\GLScatterPlotItem.py, Argument pos\n### Description\r\npos is defined as an **empty list** in line 17 instead of an array which causes for exceptions when `pose.shape`, `pos.transpose()`, and `pos.size` are used.\r\n\r\n\r\n### Behavior\r\n```\r\n |==============================>>\r\n | Traceback (most recent call last):\r\n | File \"filename.py\", line 358, in <module>\r\n | sys.exit(app.exec_())\r\n | File \"...\\AppData\\Roaming\\Python\\Python38\\site-packages\\pyqtgraph\\opengl\\GLViewWidget.py\", line 193, in paintGL \r\n | self.drawItemTree(useItemNames=useItemNames)\r\n | File \"...\\AppData\\Roaming\\Python\\Python38\\site-packages\\pyqtgraph\\opengl\\GLViewWidget.py\", line 233, in drawItemTree \r\n | self.drawItemTree(i, useItemNames=useItemNames)\r\n | File \"...\\AppData\\Roaming\\Python\\Python38\\site-packages\\pyqtgraph\\opengl\\GLViewWidget.py\", line 214, in drawItemTree \r\n | debug.printExc()\r\n | --- exception caught here ---\r\n | File \"...\\AppData\\Roaming\\Python\\Python38\\site-packages\\pyqtgraph\\opengl\\GLViewWidget.py\", line 211, in drawItemTree \r\n | i.paint()\r\n | File \"...\\AppData\\Roaming\\Python\\Python38\\site-packages\\pyqtgraph\\opengl\\items\\GLScatterPlotItem.py\", line 152, in paint\r\n | glDrawArrays(GL_POINTS, 0, int(pos.size / pos.shape[-1]))\r\n | AttributeError: 'list' object has no attribute 'size'\r\n |==============================<<\r\nError while drawing item <pyqtgraph.opengl.items.GLScatterPlotItem.GLScatterPlotItem(0xf20aa30) at 0x101A63A8>.\r\n```\r\n\r\n### Additional context\r\nI honestly have no idea how to fix it since I tried some of the trivia fixes, and they did not solve the problem. (\r\nnewbie)\n", "before_files": [{"content": "from OpenGL.GL import *\nfrom OpenGL.arrays import vbo\nfrom .. GLGraphicsItem import GLGraphicsItem\nfrom .. import shaders\nfrom ... import QtGui\nimport numpy as np\n\n__all__ = ['GLScatterPlotItem']\n\nclass GLScatterPlotItem(GLGraphicsItem):\n \"\"\"Draws points at a list of 3D positions.\"\"\"\n \n def __init__(self, **kwds):\n GLGraphicsItem.__init__(self)\n glopts = kwds.pop('glOptions', 'additive')\n self.setGLOptions(glopts)\n self.pos = []\n self.size = 10\n self.color = [1.0,1.0,1.0,0.5]\n self.pxMode = True\n #self.vbo = {} ## VBO does not appear to improve performance very much.\n self.setData(**kwds)\n self.shader = None\n \n def setData(self, **kwds):\n \"\"\"\n Update the data displayed by this item. 
All arguments are optional; \n for example it is allowed to update spot positions while leaving \n colors unchanged, etc.\n \n ==================== ==================================================\n **Arguments:**\n pos (N,3) array of floats specifying point locations.\n color (N,4) array of floats (0.0-1.0) specifying\n spot colors OR a tuple of floats specifying\n a single color for all spots.\n size (N,) array of floats specifying spot sizes or \n a single value to apply to all spots.\n pxMode If True, spot sizes are expressed in pixels. \n Otherwise, they are expressed in item coordinates.\n ==================== ==================================================\n \"\"\"\n args = ['pos', 'color', 'size', 'pxMode']\n for k in kwds.keys():\n if k not in args:\n raise Exception('Invalid keyword argument: %s (allowed arguments are %s)' % (k, str(args)))\n \n args.remove('pxMode')\n for arg in args:\n if arg in kwds:\n setattr(self, arg, kwds[arg])\n #self.vbo.pop(arg, None)\n \n self.pxMode = kwds.get('pxMode', self.pxMode)\n self.update()\n\n def initializeGL(self):\n if self.shader is not None:\n return\n \n ## Generate texture for rendering points\n w = 64\n def fn(x,y):\n r = ((x-(w-1)/2.)**2 + (y-(w-1)/2.)**2) ** 0.5\n return 255 * (w/2. - np.clip(r, w/2.-1.0, w/2.))\n pData = np.empty((w, w, 4))\n pData[:] = 255\n pData[:,:,3] = np.fromfunction(fn, pData.shape[:2])\n #print pData.shape, pData.min(), pData.max()\n pData = pData.astype(np.ubyte)\n \n if getattr(self, \"pointTexture\", None) is None:\n self.pointTexture = glGenTextures(1)\n glActiveTexture(GL_TEXTURE0)\n glEnable(GL_TEXTURE_2D)\n glBindTexture(GL_TEXTURE_2D, self.pointTexture)\n glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pData.shape[0], pData.shape[1], 0, GL_RGBA, GL_UNSIGNED_BYTE, pData)\n \n self.shader = shaders.getShaderProgram('pointSprite')\n \n #def getVBO(self, name):\n #if name not in self.vbo:\n #self.vbo[name] = vbo.VBO(getattr(self, name).astype('f'))\n #return self.vbo[name]\n \n #def setupGLState(self):\n #\"\"\"Prepare OpenGL state for drawing. 
This function is called immediately before painting.\"\"\"\n ##glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) ## requires z-sorting to render properly.\n #glBlendFunc(GL_SRC_ALPHA, GL_ONE)\n #glEnable( GL_BLEND )\n #glEnable( GL_ALPHA_TEST )\n #glDisable( GL_DEPTH_TEST )\n \n ##glEnable( GL_POINT_SMOOTH )\n\n ##glHint(GL_POINT_SMOOTH_HINT, GL_NICEST)\n ##glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, (0, 0, -1e-3))\n ##glPointParameterfv(GL_POINT_SIZE_MAX, (65500,))\n ##glPointParameterfv(GL_POINT_SIZE_MIN, (0,))\n \n def paint(self):\n self.setupGLState()\n \n glEnable(GL_POINT_SPRITE)\n \n glActiveTexture(GL_TEXTURE0)\n glEnable( GL_TEXTURE_2D )\n glBindTexture(GL_TEXTURE_2D, self.pointTexture)\n \n glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE)\n #glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE) ## use texture color exactly\n #glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE ) ## texture modulates current color\n glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)\n glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)\n glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)\n glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)\n glEnable(GL_PROGRAM_POINT_SIZE)\n \n \n with self.shader:\n #glUniform1i(self.shader.uniform('texture'), 0) ## inform the shader which texture to use\n glEnableClientState(GL_VERTEX_ARRAY)\n try:\n pos = self.pos\n #if pos.ndim > 2:\n #pos = pos.reshape((-1, pos.shape[-1]))\n glVertexPointerf(pos)\n \n if isinstance(self.color, np.ndarray):\n glEnableClientState(GL_COLOR_ARRAY)\n glColorPointerf(self.color)\n else:\n if isinstance(self.color, QtGui.QColor):\n glColor4f(*fn.glColor(self.color))\n else:\n glColor4f(*self.color)\n \n if not self.pxMode or isinstance(self.size, np.ndarray):\n glEnableClientState(GL_NORMAL_ARRAY)\n norm = np.empty(pos.shape)\n if self.pxMode:\n norm[...,0] = self.size\n else:\n gpos = self.mapToView(pos.transpose()).transpose()\n pxSize = self.view().pixelSize(gpos)\n norm[...,0] = self.size / pxSize\n \n glNormalPointerf(norm)\n else:\n glNormal3f(self.size, 0, 0) ## vertex shader uses norm.x to determine point size\n #glPointSize(self.size)\n glDrawArrays(GL_POINTS, 0, int(pos.size / pos.shape[-1]))\n finally:\n glDisableClientState(GL_NORMAL_ARRAY)\n glDisableClientState(GL_VERTEX_ARRAY)\n glDisableClientState(GL_COLOR_ARRAY)\n #posVBO.unbind()\n ##fixes #145\n glDisable( GL_TEXTURE_2D )\n \n #for i in range(len(self.pos)):\n #pos = self.pos[i]\n \n #if isinstance(self.color, np.ndarray):\n #color = self.color[i]\n #else:\n #color = self.color\n #if isinstance(self.color, QtGui.QColor):\n #color = fn.glColor(self.color)\n \n #if isinstance(self.size, np.ndarray):\n #size = self.size[i]\n #else:\n #size = self.size\n \n #pxSize = self.view().pixelSize(QtGui.QVector3D(*pos))\n \n #glPointSize(size / pxSize)\n #glBegin( GL_POINTS )\n #glColor4f(*color) # x is blue\n ##glNormal3f(size, 0, 0)\n #glVertex3f(*pos)\n #glEnd()\n\n \n \n \n \n", "path": "pyqtgraph/opengl/items/GLScatterPlotItem.py"}]}
| 3,216 | 256 |
gh_patches_debug_17401
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-1778
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Expand sqlalchemy pool.name to follow the semantic conventions
Spec ref: https://github.com/open-telemetry/opentelemetry-specification/pull/3050
</issue>
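The convention referenced above amounts to giving every pool a stable, human-readable name, falling back to something derived from the connection URL when no explicit `logging_name` is configured. The helper below sketches that fallback rule; the exact format string is an assumption based on the spec discussion, not the instrumentation's verbatim code.

```python
# Sketch: derive a pool.name attribute when SQLAlchemy's pool has no logging_name.
def pool_name(logging_name=None, drivername="", host="", port="", database=""):
    if logging_name:
        return logging_name
    return f"{drivername}://{host}:{port}/{database}"


print(pool_name(logging_name="primary-pool"))
print(pool_name(drivername="postgresql", host="db.internal", port=5432, database="orders"))
```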
<code>
[start of instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import os
15 import re
16
17 from sqlalchemy.event import ( # pylint: disable=no-name-in-module
18 listen,
19 remove,
20 )
21
22 from opentelemetry import trace
23 from opentelemetry.instrumentation.sqlalchemy.version import __version__
24 from opentelemetry.instrumentation.sqlcommenter_utils import _add_sql_comment
25 from opentelemetry.instrumentation.utils import _get_opentelemetry_values
26 from opentelemetry.semconv.trace import NetTransportValues, SpanAttributes
27 from opentelemetry.trace.status import Status, StatusCode
28
29
30 def _normalize_vendor(vendor):
31 """Return a canonical name for a type of database."""
32 if not vendor:
33 return "db" # should this ever happen?
34
35 if "sqlite" in vendor:
36 return "sqlite"
37
38 if "postgres" in vendor or vendor == "psycopg2":
39 return "postgresql"
40
41 return vendor
42
43
44 def _wrap_create_async_engine(
45 tracer, connections_usage, enable_commenter=False
46 ):
47 # pylint: disable=unused-argument
48 def _wrap_create_async_engine_internal(func, module, args, kwargs):
49 """Trace the SQLAlchemy engine, creating an `EngineTracer`
50 object that will listen to SQLAlchemy events.
51 """
52 engine = func(*args, **kwargs)
53 EngineTracer(
54 tracer, engine.sync_engine, connections_usage, enable_commenter
55 )
56 return engine
57
58 return _wrap_create_async_engine_internal
59
60
61 def _wrap_create_engine(tracer, connections_usage, enable_commenter=False):
62 def _wrap_create_engine_internal(func, _module, args, kwargs):
63 """Trace the SQLAlchemy engine, creating an `EngineTracer`
64 object that will listen to SQLAlchemy events.
65 """
66 engine = func(*args, **kwargs)
67 EngineTracer(tracer, engine, connections_usage, enable_commenter)
68 return engine
69
70 return _wrap_create_engine_internal
71
72
73 def _wrap_connect(tracer):
74 # pylint: disable=unused-argument
75 def _wrap_connect_internal(func, module, args, kwargs):
76 with tracer.start_as_current_span(
77 "connect", kind=trace.SpanKind.CLIENT
78 ) as span:
79 if span.is_recording():
80 attrs, _ = _get_attributes_from_url(module.url)
81 span.set_attributes(attrs)
82 span.set_attribute(
83 SpanAttributes.DB_SYSTEM, _normalize_vendor(module.name)
84 )
85 return func(*args, **kwargs)
86
87 return _wrap_connect_internal
88
89
90 class EngineTracer:
91 _remove_event_listener_params = []
92
93 def __init__(
94 self,
95 tracer,
96 engine,
97 connections_usage,
98 enable_commenter=False,
99 commenter_options=None,
100 ):
101 self.tracer = tracer
102 self.engine = engine
103 self.connections_usage = connections_usage
104 self.vendor = _normalize_vendor(engine.name)
105 self.enable_commenter = enable_commenter
106 self.commenter_options = commenter_options if commenter_options else {}
107 self._leading_comment_remover = re.compile(r"^/\*.*?\*/")
108
109 self._register_event_listener(
110 engine, "before_cursor_execute", self._before_cur_exec, retval=True
111 )
112 self._register_event_listener(
113 engine, "after_cursor_execute", _after_cur_exec
114 )
115 self._register_event_listener(engine, "handle_error", _handle_error)
116 self._register_event_listener(engine, "connect", self._pool_connect)
117 self._register_event_listener(engine, "close", self._pool_close)
118 self._register_event_listener(engine, "checkin", self._pool_checkin)
119 self._register_event_listener(engine, "checkout", self._pool_checkout)
120
121 def _get_pool_name(self):
122 return self.engine.pool.logging_name or ""
123
124 def _add_idle_to_connection_usage(self, value):
125 self.connections_usage.add(
126 value,
127 attributes={
128 "pool.name": self._get_pool_name(),
129 "state": "idle",
130 },
131 )
132
133 def _add_used_to_connection_usage(self, value):
134 self.connections_usage.add(
135 value,
136 attributes={
137 "pool.name": self._get_pool_name(),
138 "state": "used",
139 },
140 )
141
142 def _pool_connect(self, _dbapi_connection, _connection_record):
143 self._add_idle_to_connection_usage(1)
144
145 def _pool_close(self, _dbapi_connection, _connection_record):
146 self._add_idle_to_connection_usage(-1)
147
148 # Called when a connection returns to the pool.
149 def _pool_checkin(self, _dbapi_connection, _connection_record):
150 self._add_used_to_connection_usage(-1)
151 self._add_idle_to_connection_usage(1)
152
153 # Called when a connection is retrieved from the Pool.
154 def _pool_checkout(
155 self, _dbapi_connection, _connection_record, _connection_proxy
156 ):
157 self._add_idle_to_connection_usage(-1)
158 self._add_used_to_connection_usage(1)
159
160 @classmethod
161 def _register_event_listener(cls, target, identifier, func, *args, **kw):
162 listen(target, identifier, func, *args, **kw)
163 cls._remove_event_listener_params.append((target, identifier, func))
164
165 @classmethod
166 def remove_all_event_listeners(cls):
167 for remove_params in cls._remove_event_listener_params:
168 remove(*remove_params)
169 cls._remove_event_listener_params.clear()
170
171 def _operation_name(self, db_name, statement):
172 parts = []
173 if isinstance(statement, str):
174 # otel spec recommends against parsing SQL queries. We are not trying to parse SQL
175 # but simply truncating the statement to the first word. This covers probably >95%
176 # use cases and uses the SQL statement in span name correctly as per the spec.
177 # For some very special cases it might not record the correct statement if the SQL
178 # dialect is too weird but in any case it shouldn't break anything.
179 # Strip leading comments so we get the operation name.
180 parts.append(
181 self._leading_comment_remover.sub("", statement).split()[0]
182 )
183 if db_name:
184 parts.append(db_name)
185 if not parts:
186 return self.vendor
187 return " ".join(parts)
188
189 def _before_cur_exec(
190 self, conn, cursor, statement, params, context, _executemany
191 ):
192 attrs, found = _get_attributes_from_url(conn.engine.url)
193 if not found:
194 attrs = _get_attributes_from_cursor(self.vendor, cursor, attrs)
195
196 db_name = attrs.get(SpanAttributes.DB_NAME, "")
197 span = self.tracer.start_span(
198 self._operation_name(db_name, statement),
199 kind=trace.SpanKind.CLIENT,
200 )
201 with trace.use_span(span, end_on_exit=False):
202 if span.is_recording():
203 span.set_attribute(SpanAttributes.DB_STATEMENT, statement)
204 span.set_attribute(SpanAttributes.DB_SYSTEM, self.vendor)
205 for key, value in attrs.items():
206 span.set_attribute(key, value)
207 if self.enable_commenter:
208 commenter_data = dict(
209 db_driver=conn.engine.driver,
210 # Driver/framework centric information.
211 db_framework=f"sqlalchemy:{__version__}",
212 )
213
214 if self.commenter_options.get("opentelemetry_values", True):
215 commenter_data.update(**_get_opentelemetry_values())
216
217 # Filter down to just the requested attributes.
218 commenter_data = {
219 k: v
220 for k, v in commenter_data.items()
221 if self.commenter_options.get(k, True)
222 }
223
224 statement = _add_sql_comment(statement, **commenter_data)
225
226 context._otel_span = span
227
228 return statement, params
229
230
231 # pylint: disable=unused-argument
232 def _after_cur_exec(conn, cursor, statement, params, context, executemany):
233 span = getattr(context, "_otel_span", None)
234 if span is None:
235 return
236
237 span.end()
238
239
240 def _handle_error(context):
241 span = getattr(context.execution_context, "_otel_span", None)
242 if span is None:
243 return
244
245 if span.is_recording():
246 span.set_status(
247 Status(
248 StatusCode.ERROR,
249 str(context.original_exception),
250 )
251 )
252 span.end()
253
254
255 def _get_attributes_from_url(url):
256 """Set connection tags from the url. return true if successful."""
257 attrs = {}
258 if url.host:
259 attrs[SpanAttributes.NET_PEER_NAME] = url.host
260 if url.port:
261 attrs[SpanAttributes.NET_PEER_PORT] = url.port
262 if url.database:
263 attrs[SpanAttributes.DB_NAME] = url.database
264 if url.username:
265 attrs[SpanAttributes.DB_USER] = url.username
266 return attrs, bool(url.host)
267
268
269 def _get_attributes_from_cursor(vendor, cursor, attrs):
270 """Attempt to set db connection attributes by introspecting the cursor."""
271 if vendor == "postgresql":
272 info = getattr(getattr(cursor, "connection", None), "info", None)
273 if not info:
274 return attrs
275
276 attrs[SpanAttributes.DB_NAME] = info.dbname
277 is_unix_socket = info.host and info.host.startswith("/")
278
279 if is_unix_socket:
280 attrs[SpanAttributes.NET_TRANSPORT] = NetTransportValues.UNIX.value
281 if info.port:
282 # postgresql enforces this pattern on all socket names
283 attrs[SpanAttributes.NET_PEER_NAME] = os.path.join(
284 info.host, f".s.PGSQL.{info.port}"
285 )
286 else:
287 attrs[
288 SpanAttributes.NET_TRANSPORT
289 ] = NetTransportValues.IP_TCP.value
290 attrs[SpanAttributes.NET_PEER_NAME] = info.host
291 if info.port:
292 attrs[SpanAttributes.NET_PEER_PORT] = int(info.port)
293 return attrs
294
[end of instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py b/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py
--- a/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py
+++ b/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py
@@ -118,8 +118,17 @@
self._register_event_listener(engine, "checkin", self._pool_checkin)
self._register_event_listener(engine, "checkout", self._pool_checkout)
+ def _get_connection_string(self):
+ drivername = self.engine.url.drivername or ""
+ host = self.engine.url.host or ""
+ port = self.engine.url.port or ""
+ database = self.engine.url.database or ""
+ return f"{drivername}://{host}:{port}/{database}"
+
def _get_pool_name(self):
- return self.engine.pool.logging_name or ""
+ if self.engine.pool.logging_name is not None:
+ return self.engine.pool.logging_name
+ return self._get_connection_string()
def _add_idle_to_connection_usage(self, value):
self.connections_usage.add(
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py b/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py\n--- a/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py\n+++ b/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py\n@@ -118,8 +118,17 @@\n self._register_event_listener(engine, \"checkin\", self._pool_checkin)\n self._register_event_listener(engine, \"checkout\", self._pool_checkout)\n \n+ def _get_connection_string(self):\n+ drivername = self.engine.url.drivername or \"\"\n+ host = self.engine.url.host or \"\"\n+ port = self.engine.url.port or \"\"\n+ database = self.engine.url.database or \"\"\n+ return f\"{drivername}://{host}:{port}/{database}\"\n+\n def _get_pool_name(self):\n- return self.engine.pool.logging_name or \"\"\n+ if self.engine.pool.logging_name is not None:\n+ return self.engine.pool.logging_name\n+ return self._get_connection_string()\n \n def _add_idle_to_connection_usage(self, value):\n self.connections_usage.add(\n", "issue": "Expand sqlalchemy pool.name to follow the semantic conventions\nSpec ref: https://github.com/open-telemetry/opentelemetry-specification/pull/3050\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nimport re\n\nfrom sqlalchemy.event import ( # pylint: disable=no-name-in-module\n listen,\n remove,\n)\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.sqlalchemy.version import __version__\nfrom opentelemetry.instrumentation.sqlcommenter_utils import _add_sql_comment\nfrom opentelemetry.instrumentation.utils import _get_opentelemetry_values\nfrom opentelemetry.semconv.trace import NetTransportValues, SpanAttributes\nfrom opentelemetry.trace.status import Status, StatusCode\n\n\ndef _normalize_vendor(vendor):\n \"\"\"Return a canonical name for a type of database.\"\"\"\n if not vendor:\n return \"db\" # should this ever happen?\n\n if \"sqlite\" in vendor:\n return \"sqlite\"\n\n if \"postgres\" in vendor or vendor == \"psycopg2\":\n return \"postgresql\"\n\n return vendor\n\n\ndef _wrap_create_async_engine(\n tracer, connections_usage, enable_commenter=False\n):\n # pylint: disable=unused-argument\n def _wrap_create_async_engine_internal(func, module, args, kwargs):\n \"\"\"Trace the SQLAlchemy engine, creating an `EngineTracer`\n object that will listen to SQLAlchemy events.\n \"\"\"\n engine = func(*args, **kwargs)\n EngineTracer(\n tracer, engine.sync_engine, connections_usage, enable_commenter\n )\n return engine\n\n return _wrap_create_async_engine_internal\n\n\ndef _wrap_create_engine(tracer, connections_usage, enable_commenter=False):\n def _wrap_create_engine_internal(func, _module, args, kwargs):\n \"\"\"Trace the SQLAlchemy engine, creating an `EngineTracer`\n object 
that will listen to SQLAlchemy events.\n \"\"\"\n engine = func(*args, **kwargs)\n EngineTracer(tracer, engine, connections_usage, enable_commenter)\n return engine\n\n return _wrap_create_engine_internal\n\n\ndef _wrap_connect(tracer):\n # pylint: disable=unused-argument\n def _wrap_connect_internal(func, module, args, kwargs):\n with tracer.start_as_current_span(\n \"connect\", kind=trace.SpanKind.CLIENT\n ) as span:\n if span.is_recording():\n attrs, _ = _get_attributes_from_url(module.url)\n span.set_attributes(attrs)\n span.set_attribute(\n SpanAttributes.DB_SYSTEM, _normalize_vendor(module.name)\n )\n return func(*args, **kwargs)\n\n return _wrap_connect_internal\n\n\nclass EngineTracer:\n _remove_event_listener_params = []\n\n def __init__(\n self,\n tracer,\n engine,\n connections_usage,\n enable_commenter=False,\n commenter_options=None,\n ):\n self.tracer = tracer\n self.engine = engine\n self.connections_usage = connections_usage\n self.vendor = _normalize_vendor(engine.name)\n self.enable_commenter = enable_commenter\n self.commenter_options = commenter_options if commenter_options else {}\n self._leading_comment_remover = re.compile(r\"^/\\*.*?\\*/\")\n\n self._register_event_listener(\n engine, \"before_cursor_execute\", self._before_cur_exec, retval=True\n )\n self._register_event_listener(\n engine, \"after_cursor_execute\", _after_cur_exec\n )\n self._register_event_listener(engine, \"handle_error\", _handle_error)\n self._register_event_listener(engine, \"connect\", self._pool_connect)\n self._register_event_listener(engine, \"close\", self._pool_close)\n self._register_event_listener(engine, \"checkin\", self._pool_checkin)\n self._register_event_listener(engine, \"checkout\", self._pool_checkout)\n\n def _get_pool_name(self):\n return self.engine.pool.logging_name or \"\"\n\n def _add_idle_to_connection_usage(self, value):\n self.connections_usage.add(\n value,\n attributes={\n \"pool.name\": self._get_pool_name(),\n \"state\": \"idle\",\n },\n )\n\n def _add_used_to_connection_usage(self, value):\n self.connections_usage.add(\n value,\n attributes={\n \"pool.name\": self._get_pool_name(),\n \"state\": \"used\",\n },\n )\n\n def _pool_connect(self, _dbapi_connection, _connection_record):\n self._add_idle_to_connection_usage(1)\n\n def _pool_close(self, _dbapi_connection, _connection_record):\n self._add_idle_to_connection_usage(-1)\n\n # Called when a connection returns to the pool.\n def _pool_checkin(self, _dbapi_connection, _connection_record):\n self._add_used_to_connection_usage(-1)\n self._add_idle_to_connection_usage(1)\n\n # Called when a connection is retrieved from the Pool.\n def _pool_checkout(\n self, _dbapi_connection, _connection_record, _connection_proxy\n ):\n self._add_idle_to_connection_usage(-1)\n self._add_used_to_connection_usage(1)\n\n @classmethod\n def _register_event_listener(cls, target, identifier, func, *args, **kw):\n listen(target, identifier, func, *args, **kw)\n cls._remove_event_listener_params.append((target, identifier, func))\n\n @classmethod\n def remove_all_event_listeners(cls):\n for remove_params in cls._remove_event_listener_params:\n remove(*remove_params)\n cls._remove_event_listener_params.clear()\n\n def _operation_name(self, db_name, statement):\n parts = []\n if isinstance(statement, str):\n # otel spec recommends against parsing SQL queries. We are not trying to parse SQL\n # but simply truncating the statement to the first word. 
This covers probably >95%\n # use cases and uses the SQL statement in span name correctly as per the spec.\n # For some very special cases it might not record the correct statement if the SQL\n # dialect is too weird but in any case it shouldn't break anything.\n # Strip leading comments so we get the operation name.\n parts.append(\n self._leading_comment_remover.sub(\"\", statement).split()[0]\n )\n if db_name:\n parts.append(db_name)\n if not parts:\n return self.vendor\n return \" \".join(parts)\n\n def _before_cur_exec(\n self, conn, cursor, statement, params, context, _executemany\n ):\n attrs, found = _get_attributes_from_url(conn.engine.url)\n if not found:\n attrs = _get_attributes_from_cursor(self.vendor, cursor, attrs)\n\n db_name = attrs.get(SpanAttributes.DB_NAME, \"\")\n span = self.tracer.start_span(\n self._operation_name(db_name, statement),\n kind=trace.SpanKind.CLIENT,\n )\n with trace.use_span(span, end_on_exit=False):\n if span.is_recording():\n span.set_attribute(SpanAttributes.DB_STATEMENT, statement)\n span.set_attribute(SpanAttributes.DB_SYSTEM, self.vendor)\n for key, value in attrs.items():\n span.set_attribute(key, value)\n if self.enable_commenter:\n commenter_data = dict(\n db_driver=conn.engine.driver,\n # Driver/framework centric information.\n db_framework=f\"sqlalchemy:{__version__}\",\n )\n\n if self.commenter_options.get(\"opentelemetry_values\", True):\n commenter_data.update(**_get_opentelemetry_values())\n\n # Filter down to just the requested attributes.\n commenter_data = {\n k: v\n for k, v in commenter_data.items()\n if self.commenter_options.get(k, True)\n }\n\n statement = _add_sql_comment(statement, **commenter_data)\n\n context._otel_span = span\n\n return statement, params\n\n\n# pylint: disable=unused-argument\ndef _after_cur_exec(conn, cursor, statement, params, context, executemany):\n span = getattr(context, \"_otel_span\", None)\n if span is None:\n return\n\n span.end()\n\n\ndef _handle_error(context):\n span = getattr(context.execution_context, \"_otel_span\", None)\n if span is None:\n return\n\n if span.is_recording():\n span.set_status(\n Status(\n StatusCode.ERROR,\n str(context.original_exception),\n )\n )\n span.end()\n\n\ndef _get_attributes_from_url(url):\n \"\"\"Set connection tags from the url. 
return true if successful.\"\"\"\n attrs = {}\n if url.host:\n attrs[SpanAttributes.NET_PEER_NAME] = url.host\n if url.port:\n attrs[SpanAttributes.NET_PEER_PORT] = url.port\n if url.database:\n attrs[SpanAttributes.DB_NAME] = url.database\n if url.username:\n attrs[SpanAttributes.DB_USER] = url.username\n return attrs, bool(url.host)\n\n\ndef _get_attributes_from_cursor(vendor, cursor, attrs):\n \"\"\"Attempt to set db connection attributes by introspecting the cursor.\"\"\"\n if vendor == \"postgresql\":\n info = getattr(getattr(cursor, \"connection\", None), \"info\", None)\n if not info:\n return attrs\n\n attrs[SpanAttributes.DB_NAME] = info.dbname\n is_unix_socket = info.host and info.host.startswith(\"/\")\n\n if is_unix_socket:\n attrs[SpanAttributes.NET_TRANSPORT] = NetTransportValues.UNIX.value\n if info.port:\n # postgresql enforces this pattern on all socket names\n attrs[SpanAttributes.NET_PEER_NAME] = os.path.join(\n info.host, f\".s.PGSQL.{info.port}\"\n )\n else:\n attrs[\n SpanAttributes.NET_TRANSPORT\n ] = NetTransportValues.IP_TCP.value\n attrs[SpanAttributes.NET_PEER_NAME] = info.host\n if info.port:\n attrs[SpanAttributes.NET_PEER_PORT] = int(info.port)\n return attrs\n", "path": "instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py"}]}
| 3,581 | 288 |
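
Annotation for the row above: the golden diff makes the `pool.name` metric attribute fall back to a connection-string-style value whenever the SQLAlchemy pool has no `logging_name`. A minimal, self-contained sketch of that fallback follows; the `SimpleNamespace` object stands in for `engine.url`, and the host/database values are made up for illustration.

```python
from types import SimpleNamespace

# Stand-in for a SQLAlchemy `engine.url`; attribute names match what the diff reads.
url = SimpleNamespace(drivername="postgresql", host="dbhost", port=5432, database="sales")

def pool_name(logging_name, url):
    # Mirrors the patched _get_pool_name: prefer the explicit logging_name,
    # otherwise compose "<drivername>://<host>:<port>/<database>".
    if logging_name is not None:
        return logging_name
    return f"{url.drivername or ''}://{url.host or ''}:{url.port or ''}/{url.database or ''}"

print(pool_name(None, url))        # postgresql://dbhost:5432/sales
print(pool_name("my-pool", url))   # my-pool
```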
gh_patches_debug_17630
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-1164
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
error 404 when searching for a package name with underscores
when I try to search for a package name which contains underscores I get a 404 error
example: https://pypi.io/search/?q=test_test
</issue>
<code>
[start of warehouse/utils/paginate.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from paginate import Page
14
15
16 class _ElasticsearchWrapper:
17
18 max_results = 10000
19
20 def __init__(self, query):
21 self.query = query
22 self.results = None
23 self.best_guess = None
24
25 def __getitem__(self, range):
26 # If we're asking for a range that extends past our maximum results,
27 # then we need to clamp the start of our slice to our maximum results
28 # size, and make sure that the end of our slice >= to that to ensure a
29 # consistent slice.
30 if range.start > self.max_results:
31 range = slice(
32 self.max_results,
33 max(range.stop, self.max_results),
34 range.step,
35 )
36
37 # If we're being asked for a range that extends past our maximum result
38 # then we'll clamp it to the maximum result size and stop there.
39 if range.stop > self.max_results:
40 range = slice(range.start, self.max_results, range.step)
41
42 if self.results is not None:
43 raise RuntimeError("Cannot reslice after having already sliced.")
44 self.results = self.query[range].execute()
45
46 if hasattr(self.results, "suggest"):
47 suggestion = self.results.suggest.name_suggestion[0]
48 if suggestion.options:
49 self.best_guess = suggestion.options[0]
50
51 return list(self.results)
52
53 def __len__(self):
54 if self.results is None:
55 raise RuntimeError("Cannot get length until a slice.")
56 return min(self.results.hits.total, self.max_results)
57
58
59 def ElasticsearchPage(*args, **kwargs): # noqa
60 kwargs.setdefault("wrapper_class", _ElasticsearchWrapper)
61 return Page(*args, **kwargs)
62
63
64 def paginate_url_factory(request, query_arg="page"):
65 def make_url(page):
66 query_seq = [
67 (k, v)
68 for k, vs in request.GET.dict_of_lists().items()
69 for v in vs
70 if k != query_arg
71 ]
72 query_seq += [(query_arg, page)]
73 return request.current_route_path(_query=query_seq)
74 return make_url
75
[end of warehouse/utils/paginate.py]
[start of warehouse/views.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import collections
14
15 from pyramid.httpexceptions import (
16 HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,
17 )
18 from pyramid.view import (
19 notfound_view_config, forbidden_view_config, view_config,
20 )
21 from sqlalchemy import func
22 from sqlalchemy.orm import aliased, joinedload
23
24 from warehouse.accounts import REDIRECT_FIELD_NAME
25 from warehouse.accounts.models import User
26 from warehouse.cache.origin import origin_cache
27 from warehouse.cache.http import cache_control
28 from warehouse.classifiers.models import Classifier
29 from warehouse.packaging.models import Project, Release, File
30 from warehouse.utils.row_counter import RowCount
31 from warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory
32
33
34 @view_config(context=HTTPException)
35 @notfound_view_config(append_slash=HTTPMovedPermanently)
36 def httpexception_view(exc, request):
37 return exc
38
39
40 @forbidden_view_config()
41 def forbidden(exc, request):
42 # If the forbidden error is because the user isn't logged in, then we'll
43 # redirect them to the log in page.
44 if request.authenticated_userid is None:
45 url = request.route_url(
46 "accounts.login",
47 _query={REDIRECT_FIELD_NAME: request.path_qs},
48 )
49 return HTTPSeeOther(url)
50
51 # If we've reached here, then the user is logged in and they are genuinely
52 # not allowed to access this page.
53 # TODO: Style the forbidden page.
54 return exc
55
56
57 @view_config(
58 route_name="robots.txt",
59 renderer="robots.txt",
60 decorator=[
61 cache_control(1 * 24 * 60 * 60), # 1 day
62 origin_cache(
63 1 * 24 * 60 * 60, # 1 day
64 stale_while_revalidate=6 * 60 * 60, # 6 hours
65 stale_if_error=1 * 24 * 60 * 60, # 1 day
66 ),
67 ],
68 )
69 def robotstxt(request):
70 request.response.content_type = "text/plain"
71 return {}
72
73
74 @view_config(
75 route_name="index",
76 renderer="index.html",
77 decorator=[
78 origin_cache(
79 1 * 60 * 60, # 1 hour
80 stale_while_revalidate=10 * 60, # 10 minutes
81 stale_if_error=1 * 24 * 60 * 60, # 1 day
82 keys=["all-projects"],
83 ),
84 ]
85 )
86 def index(request):
87 project_names = [
88 r[0] for r in (
89 request.db.query(File.name)
90 .group_by(File.name)
91 .order_by(func.sum(File.downloads).desc())
92 .limit(5)
93 .all())
94 ]
95 release_a = aliased(
96 Release,
97 request.db.query(Release)
98 .distinct(Release.name)
99 .filter(Release.name.in_(project_names))
100 .order_by(Release.name, Release._pypi_ordering.desc())
101 .subquery(),
102 )
103 top_projects = (
104 request.db.query(release_a)
105 .options(joinedload(release_a.project),
106 joinedload(release_a.uploader))
107 .order_by(func.array_idx(project_names, release_a.name))
108 .all()
109 )
110
111 latest_releases = (
112 request.db.query(Release)
113 .options(joinedload(Release.project),
114 joinedload(Release.uploader))
115 .order_by(Release.created.desc())
116 .limit(5)
117 .all()
118 )
119
120 counts = dict(
121 request.db.query(RowCount.table_name, RowCount.count)
122 .filter(
123 RowCount.table_name.in_([
124 Project.__tablename__,
125 Release.__tablename__,
126 File.__tablename__,
127 User.__tablename__,
128 ]))
129 .all()
130 )
131
132 return {
133 "latest_releases": latest_releases,
134 "top_projects": top_projects,
135 "num_projects": counts.get(Project.__tablename__, 0),
136 "num_releases": counts.get(Release.__tablename__, 0),
137 "num_files": counts.get(File.__tablename__, 0),
138 "num_users": counts.get(User.__tablename__, 0),
139 }
140
141
142 @view_config(
143 route_name="search",
144 renderer="search/results.html",
145 decorator=[
146 origin_cache(
147 1 * 60 * 60, # 1 hour
148 stale_while_revalidate=10 * 60, # 10 minutes
149 stale_if_error=1 * 24 * 60 * 60, # 1 day
150 keys=["all-projects"],
151 )
152 ],
153 )
154 def search(request):
155 if request.params.get("q"):
156 query = request.es.query(
157 "multi_match",
158 query=request.params["q"],
159 fields=[
160 "name^2", "version", "author", "author_email", "maintainer",
161 "maintainer_email", "home_page", "license", "summary",
162 "description", "keywords", "platform", "download_url",
163 ],
164 ).suggest(
165 name="name_suggestion",
166 text=request.params["q"],
167 term={"field": "name"}
168 )
169 else:
170 query = request.es.query()
171
172 if request.params.get("o"):
173 query = query.sort(request.params["o"])
174
175 if request.params.getall("c"):
176 query = query.filter('terms', classifiers=request.params.getall("c"))
177
178 page_num = int(request.params.get("page", 1))
179 page = ElasticsearchPage(
180 query,
181 page=page_num,
182 url_maker=paginate_url_factory(request),
183 )
184
185 if page_num > page.page_count:
186 raise HTTPNotFound
187
188 available_filters = collections.defaultdict(list)
189
190 for cls in request.db.query(Classifier).order_by(Classifier.classifier):
191 first, *_ = cls.classifier.split(' :: ')
192 available_filters[first].append(cls.classifier)
193
194 return {
195 "page": page,
196 "term": request.params.get("q", ''),
197 "order": request.params.get("o", ''),
198 "available_filters": sorted(available_filters.items()),
199 "applied_filters": request.params.getall("c"),
200 }
201
202
203 @view_config(
204 route_name="includes.current-user-indicator",
205 renderer="includes/current-user-indicator.html",
206 uses_session=True,
207 )
208 def current_user_indicator(request):
209 return {}
210
211
212 @view_config(route_name="health", renderer="string")
213 def health(request):
214 # This will ensure that we can access the database and run queries against
215 # it without doing anything that will take a lock or block other queries.
216 request.db.execute("SELECT 1")
217
218 # Nothing will actually check this, but it's a little nicer to have
219 # something to return besides an empty body.
220 return "OK"
221
[end of warehouse/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/utils/paginate.py b/warehouse/utils/paginate.py
--- a/warehouse/utils/paginate.py
+++ b/warehouse/utils/paginate.py
@@ -44,9 +44,10 @@
self.results = self.query[range].execute()
if hasattr(self.results, "suggest"):
- suggestion = self.results.suggest.name_suggestion[0]
- if suggestion.options:
- self.best_guess = suggestion.options[0]
+ if self.results.suggest.name_suggestion:
+ suggestion = self.results.suggest.name_suggestion[0]
+ if suggestion.options:
+ self.best_guess = suggestion.options[0]
return list(self.results)
diff --git a/warehouse/views.py b/warehouse/views.py
--- a/warehouse/views.py
+++ b/warehouse/views.py
@@ -182,7 +182,7 @@
url_maker=paginate_url_factory(request),
)
- if page_num > page.page_count:
+ if page.page_count and page_num > page.page_count:
raise HTTPNotFound
available_filters = collections.defaultdict(list)
|
{"golden_diff": "diff --git a/warehouse/utils/paginate.py b/warehouse/utils/paginate.py\n--- a/warehouse/utils/paginate.py\n+++ b/warehouse/utils/paginate.py\n@@ -44,9 +44,10 @@\n self.results = self.query[range].execute()\n \n if hasattr(self.results, \"suggest\"):\n- suggestion = self.results.suggest.name_suggestion[0]\n- if suggestion.options:\n- self.best_guess = suggestion.options[0]\n+ if self.results.suggest.name_suggestion:\n+ suggestion = self.results.suggest.name_suggestion[0]\n+ if suggestion.options:\n+ self.best_guess = suggestion.options[0]\n \n return list(self.results)\n \ndiff --git a/warehouse/views.py b/warehouse/views.py\n--- a/warehouse/views.py\n+++ b/warehouse/views.py\n@@ -182,7 +182,7 @@\n url_maker=paginate_url_factory(request),\n )\n \n- if page_num > page.page_count:\n+ if page.page_count and page_num > page.page_count:\n raise HTTPNotFound\n \n available_filters = collections.defaultdict(list)\n", "issue": "error 404 when searching for a package name with underscores\nwhen I try to search for a package name which contains underscores I get a 404 error\n\nexample: https://pypi.io/search/?q=test_test\n\nerror 404 when searching for a package name with underscores\nwhen I try to search for a package name which contains underscores I get a 404 error\n\nexample: https://pypi.io/search/?q=test_test\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom paginate import Page\n\n\nclass _ElasticsearchWrapper:\n\n max_results = 10000\n\n def __init__(self, query):\n self.query = query\n self.results = None\n self.best_guess = None\n\n def __getitem__(self, range):\n # If we're asking for a range that extends past our maximum results,\n # then we need to clamp the start of our slice to our maximum results\n # size, and make sure that the end of our slice >= to that to ensure a\n # consistent slice.\n if range.start > self.max_results:\n range = slice(\n self.max_results,\n max(range.stop, self.max_results),\n range.step,\n )\n\n # If we're being asked for a range that extends past our maximum result\n # then we'll clamp it to the maximum result size and stop there.\n if range.stop > self.max_results:\n range = slice(range.start, self.max_results, range.step)\n\n if self.results is not None:\n raise RuntimeError(\"Cannot reslice after having already sliced.\")\n self.results = self.query[range].execute()\n\n if hasattr(self.results, \"suggest\"):\n suggestion = self.results.suggest.name_suggestion[0]\n if suggestion.options:\n self.best_guess = suggestion.options[0]\n\n return list(self.results)\n\n def __len__(self):\n if self.results is None:\n raise RuntimeError(\"Cannot get length until a slice.\")\n return min(self.results.hits.total, self.max_results)\n\n\ndef ElasticsearchPage(*args, **kwargs): # noqa\n kwargs.setdefault(\"wrapper_class\", _ElasticsearchWrapper)\n return Page(*args, **kwargs)\n\n\ndef paginate_url_factory(request, query_arg=\"page\"):\n def make_url(page):\n query_seq = [\n (k, v)\n for k, vs in 
request.GET.dict_of_lists().items()\n for v in vs\n if k != query_arg\n ]\n query_seq += [(query_arg, page)]\n return request.current_route_path(_query=query_seq)\n return make_url\n", "path": "warehouse/utils/paginate.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\n\nfrom pyramid.httpexceptions import (\n HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,\n)\nfrom pyramid.view import (\n notfound_view_config, forbidden_view_config, view_config,\n)\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import aliased, joinedload\n\nfrom warehouse.accounts import REDIRECT_FIELD_NAME\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.cache.http import cache_control\nfrom warehouse.classifiers.models import Classifier\nfrom warehouse.packaging.models import Project, Release, File\nfrom warehouse.utils.row_counter import RowCount\nfrom warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory\n\n\n@view_config(context=HTTPException)\n@notfound_view_config(append_slash=HTTPMovedPermanently)\ndef httpexception_view(exc, request):\n return exc\n\n\n@forbidden_view_config()\ndef forbidden(exc, request):\n # If the forbidden error is because the user isn't logged in, then we'll\n # redirect them to the log in page.\n if request.authenticated_userid is None:\n url = request.route_url(\n \"accounts.login\",\n _query={REDIRECT_FIELD_NAME: request.path_qs},\n )\n return HTTPSeeOther(url)\n\n # If we've reached here, then the user is logged in and they are genuinely\n # not allowed to access this page.\n # TODO: Style the forbidden page.\n return exc\n\n\n@view_config(\n route_name=\"robots.txt\",\n renderer=\"robots.txt\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=6 * 60 * 60, # 6 hours\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef robotstxt(request):\n request.response.content_type = \"text/plain\"\n return {}\n\n\n@view_config(\n route_name=\"index\",\n renderer=\"index.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n ),\n ]\n)\ndef index(request):\n project_names = [\n r[0] for r in (\n request.db.query(File.name)\n .group_by(File.name)\n .order_by(func.sum(File.downloads).desc())\n .limit(5)\n .all())\n ]\n release_a = aliased(\n Release,\n request.db.query(Release)\n .distinct(Release.name)\n .filter(Release.name.in_(project_names))\n .order_by(Release.name, Release._pypi_ordering.desc())\n .subquery(),\n )\n top_projects = (\n request.db.query(release_a)\n .options(joinedload(release_a.project),\n joinedload(release_a.uploader))\n .order_by(func.array_idx(project_names, release_a.name))\n .all()\n )\n\n latest_releases = (\n request.db.query(Release)\n .options(joinedload(Release.project),\n joinedload(Release.uploader))\n 
.order_by(Release.created.desc())\n .limit(5)\n .all()\n )\n\n counts = dict(\n request.db.query(RowCount.table_name, RowCount.count)\n .filter(\n RowCount.table_name.in_([\n Project.__tablename__,\n Release.__tablename__,\n File.__tablename__,\n User.__tablename__,\n ]))\n .all()\n )\n\n return {\n \"latest_releases\": latest_releases,\n \"top_projects\": top_projects,\n \"num_projects\": counts.get(Project.__tablename__, 0),\n \"num_releases\": counts.get(Release.__tablename__, 0),\n \"num_files\": counts.get(File.__tablename__, 0),\n \"num_users\": counts.get(User.__tablename__, 0),\n }\n\n\n@view_config(\n route_name=\"search\",\n renderer=\"search/results.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n )\n ],\n)\ndef search(request):\n if request.params.get(\"q\"):\n query = request.es.query(\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n \"name^2\", \"version\", \"author\", \"author_email\", \"maintainer\",\n \"maintainer_email\", \"home_page\", \"license\", \"summary\",\n \"description\", \"keywords\", \"platform\", \"download_url\",\n ],\n ).suggest(\n name=\"name_suggestion\",\n text=request.params[\"q\"],\n term={\"field\": \"name\"}\n )\n else:\n query = request.es.query()\n\n if request.params.get(\"o\"):\n query = query.sort(request.params[\"o\"])\n\n if request.params.getall(\"c\"):\n query = query.filter('terms', classifiers=request.params.getall(\"c\"))\n\n page_num = int(request.params.get(\"page\", 1))\n page = ElasticsearchPage(\n query,\n page=page_num,\n url_maker=paginate_url_factory(request),\n )\n\n if page_num > page.page_count:\n raise HTTPNotFound\n\n available_filters = collections.defaultdict(list)\n\n for cls in request.db.query(Classifier).order_by(Classifier.classifier):\n first, *_ = cls.classifier.split(' :: ')\n available_filters[first].append(cls.classifier)\n\n return {\n \"page\": page,\n \"term\": request.params.get(\"q\", ''),\n \"order\": request.params.get(\"o\", ''),\n \"available_filters\": sorted(available_filters.items()),\n \"applied_filters\": request.params.getall(\"c\"),\n }\n\n\n@view_config(\n route_name=\"includes.current-user-indicator\",\n renderer=\"includes/current-user-indicator.html\",\n uses_session=True,\n)\ndef current_user_indicator(request):\n return {}\n\n\n@view_config(route_name=\"health\", renderer=\"string\")\ndef health(request):\n # This will ensure that we can access the database and run queries against\n # it without doing anything that will take a lock or block other queries.\n request.db.execute(\"SELECT 1\")\n\n # Nothing will actually check this, but it's a little nicer to have\n # something to return besides an empty body.\n return \"OK\"\n", "path": "warehouse/views.py"}]}
| 3,508 | 248 |
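
Annotation for the row above: the golden diff adds two guards — skip the best-guess lookup when the Elasticsearch suggester returned nothing, and only raise 404 when there is a non-zero page count to compare against. A stand-alone sketch of those guards (plain functions and dicts instead of the real Pyramid/Elasticsearch objects; inputs are illustrative):

```python
# Sketch of the two guards from the golden diff; in the real code `options`
# is attribute access on a suggester result, dicts are used here to stay runnable.

def best_guess(name_suggestion):
    # Only inspect options when the suggester actually returned a suggestion.
    if name_suggestion:
        suggestion = name_suggestion[0]
        if suggestion["options"]:
            return suggestion["options"][0]
    return None

def should_raise_404(page_num, page_count):
    # An empty result set (page_count == 0) no longer 404s the search page.
    return bool(page_count) and page_num > page_count

print(best_guess([]))                                   # None
print(best_guess([{"options": ["test-test"]}]))         # test-test
print(should_raise_404(1, 0), should_raise_404(3, 2))   # False True
```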
gh_patches_debug_23677
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-2429
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OTLP Exporter: Add support for Histogram point types
</issue>
<code>
[start of exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 import logging
15 from os import environ
16 from typing import Optional, Sequence
17 from grpc import ChannelCredentials, Compression
18 from opentelemetry.exporter.otlp.proto.grpc.exporter import (
19 OTLPExporterMixin,
20 get_resource_data,
21 )
22 from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (
23 ExportMetricsServiceRequest,
24 )
25 from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2_grpc import (
26 MetricsServiceStub,
27 )
28 from opentelemetry.proto.common.v1.common_pb2 import InstrumentationLibrary
29 from opentelemetry.proto.metrics.v1 import metrics_pb2 as pb2
30 from opentelemetry.sdk.environment_variables import (
31 OTEL_EXPORTER_OTLP_METRICS_INSECURE,
32 )
33 from opentelemetry.sdk._metrics.point import (
34 Gauge,
35 Histogram,
36 Metric,
37 Sum,
38 )
39
40 from opentelemetry.sdk._metrics.export import (
41 MetricExporter,
42 MetricExportResult,
43 )
44
45 logger = logging.getLogger(__name__)
46
47
48 class OTLPMetricExporter(
49 MetricExporter,
50 OTLPExporterMixin[Metric, ExportMetricsServiceRequest, MetricExportResult],
51 ):
52 _result = MetricExportResult
53 _stub = MetricsServiceStub
54
55 def __init__(
56 self,
57 endpoint: Optional[str] = None,
58 insecure: Optional[bool] = None,
59 credentials: Optional[ChannelCredentials] = None,
60 headers: Optional[Sequence] = None,
61 timeout: Optional[int] = None,
62 compression: Optional[Compression] = None,
63 ):
64
65 if insecure is None:
66 insecure = environ.get(OTEL_EXPORTER_OTLP_METRICS_INSECURE)
67 if insecure is not None:
68 insecure = insecure.lower() == "true"
69
70 super().__init__(
71 **{
72 "endpoint": endpoint,
73 "insecure": insecure,
74 "credentials": credentials,
75 "headers": headers,
76 "timeout": timeout,
77 "compression": compression,
78 }
79 )
80
81 def _translate_data(
82 self, data: Sequence[Metric]
83 ) -> ExportMetricsServiceRequest:
84 sdk_resource_instrumentation_library_metrics = {}
85
86 for metric in data:
87 resource = metric.resource
88 instrumentation_library_map = (
89 sdk_resource_instrumentation_library_metrics.get(resource, {})
90 )
91 if not instrumentation_library_map:
92 sdk_resource_instrumentation_library_metrics[
93 resource
94 ] = instrumentation_library_map
95
96 instrumentation_library_metrics = instrumentation_library_map.get(
97 metric.instrumentation_info
98 )
99
100 if not instrumentation_library_metrics:
101 if metric.instrumentation_info is not None:
102 instrumentation_library_map[
103 metric.instrumentation_info
104 ] = pb2.InstrumentationLibraryMetrics(
105 instrumentation_library=InstrumentationLibrary(
106 name=metric.instrumentation_info.name,
107 version=metric.instrumentation_info.version,
108 )
109 )
110 else:
111 instrumentation_library_map[
112 metric.instrumentation_info
113 ] = pb2.InstrumentationLibraryMetrics()
114
115 instrumentation_library_metrics = instrumentation_library_map.get(
116 metric.instrumentation_info
117 )
118
119 pbmetric = pb2.Metric(
120 name=metric.name,
121 description=metric.description,
122 unit=metric.unit,
123 )
124 if isinstance(metric.point, Gauge):
125 pt = pb2.NumberDataPoint(
126 attributes=self._translate_attributes(metric.attributes),
127 time_unix_nano=metric.point.time_unix_nano,
128 )
129 if isinstance(metric.point.value, int):
130 pt.as_int = metric.point.value
131 else:
132 pt.as_double = metric.point.value
133 pbmetric.gauge.data_points.append(pt)
134 elif isinstance(metric.point, Histogram):
135 # TODO: implement histogram
136 pbmetric.histogram = pb2.Histogram(
137 data_points=[],
138 )
139 elif isinstance(metric.point, Sum):
140 pt = pb2.NumberDataPoint(
141 attributes=self._translate_attributes(metric.attributes),
142 start_time_unix_nano=metric.point.start_time_unix_nano,
143 time_unix_nano=metric.point.time_unix_nano,
144 )
145 if isinstance(metric.point.value, int):
146 pt.as_int = metric.point.value
147 else:
148 pt.as_double = metric.point.value
149 # note that because sum is a message type, the fields must be
150 # set individually rather than instantiating a pb2.Sum and setting
151 # it once
152 pbmetric.sum.aggregation_temporality = (
153 metric.point.aggregation_temporality
154 )
155 pbmetric.sum.is_monotonic = metric.point.is_monotonic
156 pbmetric.sum.data_points.append(pt)
157 else:
158 logger.warn("unsupported datapoint type %s", metric.point)
159 continue
160
161 instrumentation_library_metrics.metrics.append(
162 pbmetric,
163 )
164 return ExportMetricsServiceRequest(
165 resource_metrics=get_resource_data(
166 sdk_resource_instrumentation_library_metrics,
167 pb2.ResourceMetrics,
168 "metrics",
169 )
170 )
171
172 def export(self, metrics: Sequence[Metric]) -> MetricExportResult:
173 return self._export(metrics)
174
175 def shutdown(self):
176 pass
177
[end of exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py
--- a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py
@@ -132,10 +132,18 @@
pt.as_double = metric.point.value
pbmetric.gauge.data_points.append(pt)
elif isinstance(metric.point, Histogram):
- # TODO: implement histogram
- pbmetric.histogram = pb2.Histogram(
- data_points=[],
+ pt = pb2.HistogramDataPoint(
+ attributes=self._translate_attributes(metric.attributes),
+ time_unix_nano=metric.point.time_unix_nano,
+ start_time_unix_nano=metric.point.start_time_unix_nano,
+ count=sum(metric.point.bucket_counts),
+ bucket_counts=metric.point.bucket_counts,
+ explicit_bounds=metric.point.explicit_bounds,
+ )
+ pbmetric.histogram.aggregation_temporality = (
+ metric.point.aggregation_temporality
)
+ pbmetric.histogram.data_points.append(pt)
elif isinstance(metric.point, Sum):
pt = pb2.NumberDataPoint(
attributes=self._translate_attributes(metric.attributes),
|
{"golden_diff": "diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py\n--- a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py\n+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py\n@@ -132,10 +132,18 @@\n pt.as_double = metric.point.value\n pbmetric.gauge.data_points.append(pt)\n elif isinstance(metric.point, Histogram):\n- # TODO: implement histogram\n- pbmetric.histogram = pb2.Histogram(\n- data_points=[],\n+ pt = pb2.HistogramDataPoint(\n+ attributes=self._translate_attributes(metric.attributes),\n+ time_unix_nano=metric.point.time_unix_nano,\n+ start_time_unix_nano=metric.point.start_time_unix_nano,\n+ count=sum(metric.point.bucket_counts),\n+ bucket_counts=metric.point.bucket_counts,\n+ explicit_bounds=metric.point.explicit_bounds,\n+ )\n+ pbmetric.histogram.aggregation_temporality = (\n+ metric.point.aggregation_temporality\n )\n+ pbmetric.histogram.data_points.append(pt)\n elif isinstance(metric.point, Sum):\n pt = pb2.NumberDataPoint(\n attributes=self._translate_attributes(metric.attributes),\n", "issue": "OTLP Exporter: Add support for Histogram point types\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom os import environ\nfrom typing import Optional, Sequence\nfrom grpc import ChannelCredentials, Compression\nfrom opentelemetry.exporter.otlp.proto.grpc.exporter import (\n OTLPExporterMixin,\n get_resource_data,\n)\nfrom opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (\n ExportMetricsServiceRequest,\n)\nfrom opentelemetry.proto.collector.metrics.v1.metrics_service_pb2_grpc import (\n MetricsServiceStub,\n)\nfrom opentelemetry.proto.common.v1.common_pb2 import InstrumentationLibrary\nfrom opentelemetry.proto.metrics.v1 import metrics_pb2 as pb2\nfrom opentelemetry.sdk.environment_variables import (\n OTEL_EXPORTER_OTLP_METRICS_INSECURE,\n)\nfrom opentelemetry.sdk._metrics.point import (\n Gauge,\n Histogram,\n Metric,\n Sum,\n)\n\nfrom opentelemetry.sdk._metrics.export import (\n MetricExporter,\n MetricExportResult,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass OTLPMetricExporter(\n MetricExporter,\n OTLPExporterMixin[Metric, ExportMetricsServiceRequest, MetricExportResult],\n):\n _result = MetricExportResult\n _stub = MetricsServiceStub\n\n def __init__(\n self,\n endpoint: Optional[str] = None,\n insecure: Optional[bool] = None,\n credentials: Optional[ChannelCredentials] = None,\n headers: Optional[Sequence] = None,\n timeout: Optional[int] = None,\n compression: Optional[Compression] = None,\n ):\n\n if insecure is None:\n insecure = environ.get(OTEL_EXPORTER_OTLP_METRICS_INSECURE)\n if insecure is not None:\n 
insecure = insecure.lower() == \"true\"\n\n super().__init__(\n **{\n \"endpoint\": endpoint,\n \"insecure\": insecure,\n \"credentials\": credentials,\n \"headers\": headers,\n \"timeout\": timeout,\n \"compression\": compression,\n }\n )\n\n def _translate_data(\n self, data: Sequence[Metric]\n ) -> ExportMetricsServiceRequest:\n sdk_resource_instrumentation_library_metrics = {}\n\n for metric in data:\n resource = metric.resource\n instrumentation_library_map = (\n sdk_resource_instrumentation_library_metrics.get(resource, {})\n )\n if not instrumentation_library_map:\n sdk_resource_instrumentation_library_metrics[\n resource\n ] = instrumentation_library_map\n\n instrumentation_library_metrics = instrumentation_library_map.get(\n metric.instrumentation_info\n )\n\n if not instrumentation_library_metrics:\n if metric.instrumentation_info is not None:\n instrumentation_library_map[\n metric.instrumentation_info\n ] = pb2.InstrumentationLibraryMetrics(\n instrumentation_library=InstrumentationLibrary(\n name=metric.instrumentation_info.name,\n version=metric.instrumentation_info.version,\n )\n )\n else:\n instrumentation_library_map[\n metric.instrumentation_info\n ] = pb2.InstrumentationLibraryMetrics()\n\n instrumentation_library_metrics = instrumentation_library_map.get(\n metric.instrumentation_info\n )\n\n pbmetric = pb2.Metric(\n name=metric.name,\n description=metric.description,\n unit=metric.unit,\n )\n if isinstance(metric.point, Gauge):\n pt = pb2.NumberDataPoint(\n attributes=self._translate_attributes(metric.attributes),\n time_unix_nano=metric.point.time_unix_nano,\n )\n if isinstance(metric.point.value, int):\n pt.as_int = metric.point.value\n else:\n pt.as_double = metric.point.value\n pbmetric.gauge.data_points.append(pt)\n elif isinstance(metric.point, Histogram):\n # TODO: implement histogram\n pbmetric.histogram = pb2.Histogram(\n data_points=[],\n )\n elif isinstance(metric.point, Sum):\n pt = pb2.NumberDataPoint(\n attributes=self._translate_attributes(metric.attributes),\n start_time_unix_nano=metric.point.start_time_unix_nano,\n time_unix_nano=metric.point.time_unix_nano,\n )\n if isinstance(metric.point.value, int):\n pt.as_int = metric.point.value\n else:\n pt.as_double = metric.point.value\n # note that because sum is a message type, the fields must be\n # set individually rather than instantiating a pb2.Sum and setting\n # it once\n pbmetric.sum.aggregation_temporality = (\n metric.point.aggregation_temporality\n )\n pbmetric.sum.is_monotonic = metric.point.is_monotonic\n pbmetric.sum.data_points.append(pt)\n else:\n logger.warn(\"unsupported datapoint type %s\", metric.point)\n continue\n\n instrumentation_library_metrics.metrics.append(\n pbmetric,\n )\n return ExportMetricsServiceRequest(\n resource_metrics=get_resource_data(\n sdk_resource_instrumentation_library_metrics,\n pb2.ResourceMetrics,\n \"metrics\",\n )\n )\n\n def export(self, metrics: Sequence[Metric]) -> MetricExportResult:\n return self._export(metrics)\n\n def shutdown(self):\n pass\n", "path": "exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py"}]}
| 2,206 | 356 |
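
Annotation for the row above: the golden diff replaces the histogram TODO by building a `HistogramDataPoint` from the SDK point, deriving `count` as the sum of `bucket_counts`. A protobuf-free sketch of that field mapping (plain dicts stand in for the SDK point and the pb2 message; timestamps and bucket values are made up):

```python
# SDK Histogram point, as a plain dict for illustration.
point = {
    "start_time_unix_nano": 1_641_946_016_000_000_000,
    "time_unix_nano": 1_641_946_026_000_000_000,
    "bucket_counts": [1, 4, 0],
    "explicit_bounds": [5.0, 10.0],
    "aggregation_temporality": 2,  # CUMULATIVE in the proto enum
}

# Field-for-field mapping performed by the patched exporter.
histogram_data_point = {
    "start_time_unix_nano": point["start_time_unix_nano"],
    "time_unix_nano": point["time_unix_nano"],
    "count": sum(point["bucket_counts"]),   # count is derived, not copied
    "bucket_counts": point["bucket_counts"],
    "explicit_bounds": point["explicit_bounds"],
}

print(histogram_data_point["count"])  # 5
```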
gh_patches_debug_18113
|
rasdani/github-patches
|
git_diff
|
nltk__nltk-2608
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong result for word more'n in Word Tokenizer and PTB Tokenizer
Take the sentence: `It's more'n enough`
If I'm not mistaken, the PTB Tokenizer should produce something like this:
`["It", "'s", "more", "'n", "enough"]`
But it doesn't. It returns:
`["It", "'s", "more", "'", "n", "enough"]`
Since the Word Tokenizer tries to implement the PTB contractions, the result should be like that, right?
**PS**: The Word Tokenizer contraction is `mor'n`, while the original PTB contraction is `more'n`. Needs clarification.
</issue>
<code>
[start of nltk/tokenize/destructive.py]
1 # Natural Language Toolkit: NLTK's very own tokenizer.
2 #
3 # Copyright (C) 2001-2020 NLTK Project
4 # Author:
5 # URL: <http://nltk.sourceforge.net>
6 # For license information, see LICENSE.TXT
7
8
9 import re
10 from nltk.tokenize.api import TokenizerI
11
12
13 class MacIntyreContractions:
14 """
15 List of contractions adapted from Robert MacIntyre's tokenizer.
16 """
17
18 CONTRACTIONS2 = [
19 r"(?i)\b(can)(?#X)(not)\b",
20 r"(?i)\b(d)(?#X)('ye)\b",
21 r"(?i)\b(gim)(?#X)(me)\b",
22 r"(?i)\b(gon)(?#X)(na)\b",
23 r"(?i)\b(got)(?#X)(ta)\b",
24 r"(?i)\b(lem)(?#X)(me)\b",
25 r"(?i)\b(mor)(?#X)('n)\b",
26 r"(?i)\b(wan)(?#X)(na)\s",
27 ]
28 CONTRACTIONS3 = [r"(?i) ('t)(?#X)(is)\b", r"(?i) ('t)(?#X)(was)\b"]
29 CONTRACTIONS4 = [r"(?i)\b(whad)(dd)(ya)\b", r"(?i)\b(wha)(t)(cha)\b"]
30
31
32 class NLTKWordTokenizer(TokenizerI):
33 """
34 The NLTK tokenizer that has improved upon the TreebankWordTokenizer.
35
36 The tokenizer is "destructive" such that the regexes applied will munge the
37 input string to a state beyond re-construction. It is possible to apply
38 `TreebankWordDetokenizer.detokenize` to the tokenized outputs of
39 `NLTKDestructiveWordTokenizer.tokenize` but there's no guarantees to
40 revert to the original string.
41 """
42
43 # Starting quotes.
44 STARTING_QUOTES = [
45 (re.compile(u"([«“‘„]|[`]+)", re.U), r" \1 "),
46 (re.compile(r"^\""), r"``"),
47 (re.compile(r"(``)"), r" \1 "),
48 (re.compile(r"([ \(\[{<])(\"|\'{2})"), r"\1 `` "),
49 (re.compile(r"(?i)(\')(?!re|ve|ll|m|t|s|d)(\w)\b", re.U), r"\1 \2"),
50 ]
51
52 # Ending quotes.
53 ENDING_QUOTES = [
54 (re.compile(u"([»”’])", re.U), r" \1 "),
55 (re.compile(r'"'), " '' "),
56 (re.compile(r"(\S)(\'\')"), r"\1 \2 "),
57 (re.compile(r"([^' ])('[sS]|'[mM]|'[dD]|') "), r"\1 \2 "),
58 (re.compile(r"([^' ])('ll|'LL|'re|'RE|'ve|'VE|n't|N'T) "), r"\1 \2 "),
59 ]
60
61 # For improvements for starting/closing quotes from TreebankWordTokenizer,
62 # see discussion on https://github.com/nltk/nltk/pull/1437
63 # Adding to TreebankWordTokenizer, nltk.word_tokenize now splits on
64 # - chervon quotes u'\xab' and u'\xbb' .
65 # - unicode quotes u'\u2018', u'\u2019', u'\u201c' and u'\u201d'
66 # See https://github.com/nltk/nltk/issues/1995#issuecomment-376741608
67 # Also, behavior of splitting on clitics now follows Stanford CoreNLP
68 # - clitics covered (?!re|ve|ll|m|t|s|d)(\w)\b
69
70 # Punctuation.
71 PUNCTUATION = [
72 (re.compile(r'([^\.])(\.)([\]\)}>"\'' u"»”’ " r"]*)\s*$", re.U), r"\1 \2 \3 "),
73 (re.compile(r"([:,])([^\d])"), r" \1 \2"),
74 (re.compile(r"([:,])$"), r" \1 "),
75 (re.compile(r"\.{2,}", re.U), r" \g<0> "), # See https://github.com/nltk/nltk/pull/2322
76 (re.compile(r"[;@#$%&]"), r" \g<0> "),
77 (
78 re.compile(r'([^\.])(\.)([\]\)}>"\']*)\s*$'),
79 r"\1 \2\3 ",
80 ), # Handles the final period.
81 (re.compile(r"[?!]"), r" \g<0> "),
82 (re.compile(r"([^'])' "), r"\1 ' "),
83 (re.compile(r"[*]", re.U), r" \g<0> "), # See https://github.com/nltk/nltk/pull/2322
84 ]
85
86 # Pads parentheses
87 PARENS_BRACKETS = (re.compile(r"[\]\[\(\)\{\}\<\>]"), r" \g<0> ")
88
89 # Optionally: Convert parentheses, brackets and converts them to PTB symbols.
90 CONVERT_PARENTHESES = [
91 (re.compile(r"\("), "-LRB-"),
92 (re.compile(r"\)"), "-RRB-"),
93 (re.compile(r"\["), "-LSB-"),
94 (re.compile(r"\]"), "-RSB-"),
95 (re.compile(r"\{"), "-LCB-"),
96 (re.compile(r"\}"), "-RCB-"),
97 ]
98
99 DOUBLE_DASHES = (re.compile(r"--"), r" -- ")
100
101 # List of contractions adapted from Robert MacIntyre's tokenizer.
102 _contractions = MacIntyreContractions()
103 CONTRACTIONS2 = list(map(re.compile, _contractions.CONTRACTIONS2))
104 CONTRACTIONS3 = list(map(re.compile, _contractions.CONTRACTIONS3))
105
106 def tokenize(self, text, convert_parentheses=False, return_str=False):
107 for regexp, substitution in self.STARTING_QUOTES:
108 text = regexp.sub(substitution, text)
109
110 for regexp, substitution in self.PUNCTUATION:
111 text = regexp.sub(substitution, text)
112
113 # Handles parentheses.
114 regexp, substitution = self.PARENS_BRACKETS
115 text = regexp.sub(substitution, text)
116 # Optionally convert parentheses
117 if convert_parentheses:
118 for regexp, substitution in self.CONVERT_PARENTHESES:
119 text = regexp.sub(substitution, text)
120
121 # Handles double dash.
122 regexp, substitution = self.DOUBLE_DASHES
123 text = regexp.sub(substitution, text)
124
125 # add extra space to make things easier
126 text = " " + text + " "
127
128 for regexp, substitution in self.ENDING_QUOTES:
129 text = regexp.sub(substitution, text)
130
131 for regexp in self.CONTRACTIONS2:
132 text = regexp.sub(r" \1 \2 ", text)
133 for regexp in self.CONTRACTIONS3:
134 text = regexp.sub(r" \1 \2 ", text)
135
136 # We are not using CONTRACTIONS4 since
137 # they are also commented out in the SED scripts
138 # for regexp in self._contractions.CONTRACTIONS4:
139 # text = regexp.sub(r' \1 \2 \3 ', text)
140
141 return text if return_str else text.split()
142
[end of nltk/tokenize/destructive.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nltk/tokenize/destructive.py b/nltk/tokenize/destructive.py
--- a/nltk/tokenize/destructive.py
+++ b/nltk/tokenize/destructive.py
@@ -22,7 +22,7 @@
r"(?i)\b(gon)(?#X)(na)\b",
r"(?i)\b(got)(?#X)(ta)\b",
r"(?i)\b(lem)(?#X)(me)\b",
- r"(?i)\b(mor)(?#X)('n)\b",
+ r"(?i)\b(more)(?#X)('n)\b",
r"(?i)\b(wan)(?#X)(na)\s",
]
CONTRACTIONS3 = [r"(?i) ('t)(?#X)(is)\b", r"(?i) ('t)(?#X)(was)\b"]
@@ -46,7 +46,7 @@
(re.compile(r"^\""), r"``"),
(re.compile(r"(``)"), r" \1 "),
(re.compile(r"([ \(\[{<])(\"|\'{2})"), r"\1 `` "),
- (re.compile(r"(?i)(\')(?!re|ve|ll|m|t|s|d)(\w)\b", re.U), r"\1 \2"),
+ (re.compile(r"(?i)(\')(?!re|ve|ll|m|t|s|d|n)(\w)\b", re.U), r"\1 \2"),
]
# Ending quotes.
|
{"golden_diff": "diff --git a/nltk/tokenize/destructive.py b/nltk/tokenize/destructive.py\n--- a/nltk/tokenize/destructive.py\n+++ b/nltk/tokenize/destructive.py\n@@ -22,7 +22,7 @@\n r\"(?i)\\b(gon)(?#X)(na)\\b\",\n r\"(?i)\\b(got)(?#X)(ta)\\b\",\n r\"(?i)\\b(lem)(?#X)(me)\\b\",\n- r\"(?i)\\b(mor)(?#X)('n)\\b\",\n+ r\"(?i)\\b(more)(?#X)('n)\\b\",\n r\"(?i)\\b(wan)(?#X)(na)\\s\",\n ]\n CONTRACTIONS3 = [r\"(?i) ('t)(?#X)(is)\\b\", r\"(?i) ('t)(?#X)(was)\\b\"]\n@@ -46,7 +46,7 @@\n (re.compile(r\"^\\\"\"), r\"``\"),\n (re.compile(r\"(``)\"), r\" \\1 \"),\n (re.compile(r\"([ \\(\\[{<])(\\\"|\\'{2})\"), r\"\\1 `` \"),\n- (re.compile(r\"(?i)(\\')(?!re|ve|ll|m|t|s|d)(\\w)\\b\", re.U), r\"\\1 \\2\"),\n+ (re.compile(r\"(?i)(\\')(?!re|ve|ll|m|t|s|d|n)(\\w)\\b\", re.U), r\"\\1 \\2\"),\n ]\n \n # Ending quotes.\n", "issue": "Wrong result for word more'n in Word Tokenizer and PTB Tokenizer\nLet sentence: `It's more'n enough`\r\n\r\nIf I'm not mistaken, the PTB Tokenizer should result something like this:\r\n\r\n`[\"It\", \"'s\", \"more\", \"'n\", \"enough\"]`\r\n\r\nBut, it's not. It returns:\r\n\r\n`[\"It\", \"'s\", \"more\", \"'\", \"n\", \"enough\"]`\r\n\r\nSince Word Tokenizer trying to implement PTB contraction, the result should be like that, right?\r\n\r\n**PS**: Word Tokenizer contraction is `mor'n`, while original PTB contraction is `more'n`. Need clarification.\n", "before_files": [{"content": "# Natural Language Toolkit: NLTK's very own tokenizer.\n#\n# Copyright (C) 2001-2020 NLTK Project\n# Author:\n# URL: <http://nltk.sourceforge.net>\n# For license information, see LICENSE.TXT\n\n\nimport re\nfrom nltk.tokenize.api import TokenizerI\n\n\nclass MacIntyreContractions:\n \"\"\"\n List of contractions adapted from Robert MacIntyre's tokenizer.\n \"\"\"\n\n CONTRACTIONS2 = [\n r\"(?i)\\b(can)(?#X)(not)\\b\",\n r\"(?i)\\b(d)(?#X)('ye)\\b\",\n r\"(?i)\\b(gim)(?#X)(me)\\b\",\n r\"(?i)\\b(gon)(?#X)(na)\\b\",\n r\"(?i)\\b(got)(?#X)(ta)\\b\",\n r\"(?i)\\b(lem)(?#X)(me)\\b\",\n r\"(?i)\\b(mor)(?#X)('n)\\b\",\n r\"(?i)\\b(wan)(?#X)(na)\\s\",\n ]\n CONTRACTIONS3 = [r\"(?i) ('t)(?#X)(is)\\b\", r\"(?i) ('t)(?#X)(was)\\b\"]\n CONTRACTIONS4 = [r\"(?i)\\b(whad)(dd)(ya)\\b\", r\"(?i)\\b(wha)(t)(cha)\\b\"]\n\n\nclass NLTKWordTokenizer(TokenizerI):\n \"\"\"\n The NLTK tokenizer that has improved upon the TreebankWordTokenizer.\n\n The tokenizer is \"destructive\" such that the regexes applied will munge the\n input string to a state beyond re-construction. 
It is possible to apply\n `TreebankWordDetokenizer.detokenize` to the tokenized outputs of\n `NLTKDestructiveWordTokenizer.tokenize` but there's no guarantees to\n revert to the original string.\n \"\"\"\n\n # Starting quotes.\n STARTING_QUOTES = [\n (re.compile(u\"([\u00ab\u201c\u2018\u201e]|[`]+)\", re.U), r\" \\1 \"),\n (re.compile(r\"^\\\"\"), r\"``\"),\n (re.compile(r\"(``)\"), r\" \\1 \"),\n (re.compile(r\"([ \\(\\[{<])(\\\"|\\'{2})\"), r\"\\1 `` \"),\n (re.compile(r\"(?i)(\\')(?!re|ve|ll|m|t|s|d)(\\w)\\b\", re.U), r\"\\1 \\2\"),\n ]\n\n # Ending quotes.\n ENDING_QUOTES = [\n (re.compile(u\"([\u00bb\u201d\u2019])\", re.U), r\" \\1 \"),\n (re.compile(r'\"'), \" '' \"),\n (re.compile(r\"(\\S)(\\'\\')\"), r\"\\1 \\2 \"),\n (re.compile(r\"([^' ])('[sS]|'[mM]|'[dD]|') \"), r\"\\1 \\2 \"),\n (re.compile(r\"([^' ])('ll|'LL|'re|'RE|'ve|'VE|n't|N'T) \"), r\"\\1 \\2 \"),\n ]\n\n # For improvements for starting/closing quotes from TreebankWordTokenizer,\n # see discussion on https://github.com/nltk/nltk/pull/1437\n # Adding to TreebankWordTokenizer, nltk.word_tokenize now splits on\n # - chervon quotes u'\\xab' and u'\\xbb' .\n # - unicode quotes u'\\u2018', u'\\u2019', u'\\u201c' and u'\\u201d'\n # See https://github.com/nltk/nltk/issues/1995#issuecomment-376741608\n # Also, behavior of splitting on clitics now follows Stanford CoreNLP\n # - clitics covered (?!re|ve|ll|m|t|s|d)(\\w)\\b\n\n # Punctuation.\n PUNCTUATION = [\n (re.compile(r'([^\\.])(\\.)([\\]\\)}>\"\\'' u\"\u00bb\u201d\u2019 \" r\"]*)\\s*$\", re.U), r\"\\1 \\2 \\3 \"),\n (re.compile(r\"([:,])([^\\d])\"), r\" \\1 \\2\"),\n (re.compile(r\"([:,])$\"), r\" \\1 \"),\n (re.compile(r\"\\.{2,}\", re.U), r\" \\g<0> \"), # See https://github.com/nltk/nltk/pull/2322\n (re.compile(r\"[;@#$%&]\"), r\" \\g<0> \"),\n (\n re.compile(r'([^\\.])(\\.)([\\]\\)}>\"\\']*)\\s*$'),\n r\"\\1 \\2\\3 \",\n ), # Handles the final period.\n (re.compile(r\"[?!]\"), r\" \\g<0> \"),\n (re.compile(r\"([^'])' \"), r\"\\1 ' \"),\n (re.compile(r\"[*]\", re.U), r\" \\g<0> \"), # See https://github.com/nltk/nltk/pull/2322\n ]\n\n # Pads parentheses\n PARENS_BRACKETS = (re.compile(r\"[\\]\\[\\(\\)\\{\\}\\<\\>]\"), r\" \\g<0> \")\n\n # Optionally: Convert parentheses, brackets and converts them to PTB symbols.\n CONVERT_PARENTHESES = [\n (re.compile(r\"\\(\"), \"-LRB-\"),\n (re.compile(r\"\\)\"), \"-RRB-\"),\n (re.compile(r\"\\[\"), \"-LSB-\"),\n (re.compile(r\"\\]\"), \"-RSB-\"),\n (re.compile(r\"\\{\"), \"-LCB-\"),\n (re.compile(r\"\\}\"), \"-RCB-\"),\n ]\n\n DOUBLE_DASHES = (re.compile(r\"--\"), r\" -- \")\n\n # List of contractions adapted from Robert MacIntyre's tokenizer.\n _contractions = MacIntyreContractions()\n CONTRACTIONS2 = list(map(re.compile, _contractions.CONTRACTIONS2))\n CONTRACTIONS3 = list(map(re.compile, _contractions.CONTRACTIONS3))\n\n def tokenize(self, text, convert_parentheses=False, return_str=False):\n for regexp, substitution in self.STARTING_QUOTES:\n text = regexp.sub(substitution, text)\n\n for regexp, substitution in self.PUNCTUATION:\n text = regexp.sub(substitution, text)\n\n # Handles parentheses.\n regexp, substitution = self.PARENS_BRACKETS\n text = regexp.sub(substitution, text)\n # Optionally convert parentheses\n if convert_parentheses:\n for regexp, substitution in self.CONVERT_PARENTHESES:\n text = regexp.sub(substitution, text)\n\n # Handles double dash.\n regexp, substitution = self.DOUBLE_DASHES\n text = regexp.sub(substitution, text)\n\n # add extra space to make things easier\n text = \" \" + text + \" \"\n\n for regexp, 
substitution in self.ENDING_QUOTES:\n text = regexp.sub(substitution, text)\n\n for regexp in self.CONTRACTIONS2:\n text = regexp.sub(r\" \\1 \\2 \", text)\n for regexp in self.CONTRACTIONS3:\n text = regexp.sub(r\" \\1 \\2 \", text)\n\n # We are not using CONTRACTIONS4 since\n # they are also commented out in the SED scripts\n # for regexp in self._contractions.CONTRACTIONS4:\n # text = regexp.sub(r' \\1 \\2 \\3 ', text)\n\n return text if return_str else text.split()\n", "path": "nltk/tokenize/destructive.py"}]}
| 2,679 | 361 |
gh_patches_debug_2722
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-13420
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
standard_gamma
</issue>
<code>
[start of ivy/functional/frontends/numpy/random/functions.py]
1 # local
2 import ivy
3 from ivy.functional.frontends.numpy.func_wrapper import (
4 to_ivy_arrays_and_back,
5 from_zero_dim_arrays_to_scalar,
6 )
7
8
9 @to_ivy_arrays_and_back
10 @from_zero_dim_arrays_to_scalar
11 def random_sample(size=None):
12 return ivy.random_uniform(low=0.0, high=1.0, shape=size, dtype="float64")
13
14
15 @to_ivy_arrays_and_back
16 @from_zero_dim_arrays_to_scalar
17 def dirichlet(alpha, size=None):
18 return ivy.dirichlet(alpha, size=size)
19
20
21 @to_ivy_arrays_and_back
22 @from_zero_dim_arrays_to_scalar
23 def uniform(low=0.0, high=1.0, size=None):
24 return ivy.random_uniform(low=low, high=high, shape=size, dtype="float64")
25
26
27 @to_ivy_arrays_and_back
28 @from_zero_dim_arrays_to_scalar
29 def geometric(p, size=None):
30 if p < 0 or p > 1:
31 raise ValueError("p must be in the interval [0, 1]")
32 oneMinusP = ivy.subtract(1, p)
33 sizeMinusOne = ivy.subtract(size, 1)
34
35 return ivy.multiply(ivy.pow(oneMinusP, sizeMinusOne), p)
36
37
38 @to_ivy_arrays_and_back
39 @from_zero_dim_arrays_to_scalar
40 def normal(loc=0.0, scale=1.0, size=None):
41 return ivy.random_normal(mean=loc, std=scale, shape=size, dtype="float64")
42
43
44 @to_ivy_arrays_and_back
45 @from_zero_dim_arrays_to_scalar
46 def poisson(lam=1.0, size=None):
47 return ivy.poisson(lam=lam, shape=size)
48
49
50 @to_ivy_arrays_and_back
51 @from_zero_dim_arrays_to_scalar
52 def multinomial(n, pvals, size=None):
53 assert not ivy.exists(size) or (len(size) > 0 and len(size) < 3)
54 batch_size = 1
55 if ivy.exists(size):
56 if len(size) == 2:
57 batch_size = size[0]
58 num_samples = size[1]
59 else:
60 num_samples = size[0]
61 else:
62 num_samples = len(pvals)
63 return ivy.multinomial(n, num_samples, batch_size=batch_size, probs=pvals)
64
65
66 @to_ivy_arrays_and_back
67 @from_zero_dim_arrays_to_scalar
68 def permutation(x, /):
69 if isinstance(x, int):
70 x = ivy.arange(x)
71 return ivy.shuffle(x)
72
73
74 @to_ivy_arrays_and_back
75 @from_zero_dim_arrays_to_scalar
76 def beta(a, b, size=None):
77 return ivy.beta(a, b, shape=size)
78
79
80 @to_ivy_arrays_and_back
81 @from_zero_dim_arrays_to_scalar
82 def shuffle(x, /):
83 if isinstance(x, int):
84 x = ivy.arange(x)
85 return ivy.shuffle(x)
86
87
88 @to_ivy_arrays_and_back
89 @from_zero_dim_arrays_to_scalar
90 def standard_normal(size=None):
91 return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype="float64")
92
[end of ivy/functional/frontends/numpy/random/functions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ivy/functional/frontends/numpy/random/functions.py b/ivy/functional/frontends/numpy/random/functions.py
--- a/ivy/functional/frontends/numpy/random/functions.py
+++ b/ivy/functional/frontends/numpy/random/functions.py
@@ -89,3 +89,9 @@
@from_zero_dim_arrays_to_scalar
def standard_normal(size=None):
return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype="float64")
+
+
+@to_ivy_arrays_and_back
+@from_zero_dim_arrays_to_scalar
+def standard_gamma(alpha):
+ return ivy.gamma(alpha, beta=1.0, dtype="float64")
|
{"golden_diff": "diff --git a/ivy/functional/frontends/numpy/random/functions.py b/ivy/functional/frontends/numpy/random/functions.py\n--- a/ivy/functional/frontends/numpy/random/functions.py\n+++ b/ivy/functional/frontends/numpy/random/functions.py\n@@ -89,3 +89,9 @@\n @from_zero_dim_arrays_to_scalar\n def standard_normal(size=None):\n return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype=\"float64\")\n+\n+\n+@to_ivy_arrays_and_back\n+@from_zero_dim_arrays_to_scalar\n+def standard_gamma(alpha):\n+ return ivy.gamma(alpha, beta=1.0, dtype=\"float64\")\n", "issue": "standard_gamma\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n from_zero_dim_arrays_to_scalar,\n)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef random_sample(size=None):\n return ivy.random_uniform(low=0.0, high=1.0, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef dirichlet(alpha, size=None):\n return ivy.dirichlet(alpha, size=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef uniform(low=0.0, high=1.0, size=None):\n return ivy.random_uniform(low=low, high=high, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef geometric(p, size=None):\n if p < 0 or p > 1:\n raise ValueError(\"p must be in the interval [0, 1]\")\n oneMinusP = ivy.subtract(1, p)\n sizeMinusOne = ivy.subtract(size, 1)\n\n return ivy.multiply(ivy.pow(oneMinusP, sizeMinusOne), p)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef normal(loc=0.0, scale=1.0, size=None):\n return ivy.random_normal(mean=loc, std=scale, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef poisson(lam=1.0, size=None):\n return ivy.poisson(lam=lam, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef multinomial(n, pvals, size=None):\n assert not ivy.exists(size) or (len(size) > 0 and len(size) < 3)\n batch_size = 1\n if ivy.exists(size):\n if len(size) == 2:\n batch_size = size[0]\n num_samples = size[1]\n else:\n num_samples = size[0]\n else:\n num_samples = len(pvals)\n return ivy.multinomial(n, num_samples, batch_size=batch_size, probs=pvals)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef permutation(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef beta(a, b, size=None):\n return ivy.beta(a, b, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef shuffle(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef standard_normal(size=None):\n return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype=\"float64\")\n", "path": "ivy/functional/frontends/numpy/random/functions.py"}]}
| 1,427 | 155 |
gh_patches_debug_3576
|
rasdani/github-patches
|
git_diff
|
ultrabug__py3status-1795
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
xkblayout-state should prefer variant if available
Hi,
I am using xkblayout-state, and after an update it is now my preferred command. I switch between us and dvorak often, so in order to get this working with my setup I need to have
`xkblayout-state print "%E"`
when it's querying the keymaps before setting them.
This issue is a follow-up to #1780
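
For illustration, a minimal sketch of the change being requested in `_set_xkblayout` (an assumption about the shape of the fix, not the project's actual patch; per the report, querying with `%E` takes the configured variant into account, whereas `%S` does not):

```python
# Hedged sketch: ask xkblayout-state for "%E" so the dvorak variant in the
# reporter's setup can be matched in the list before switching to it.
def _set_xkblayout(self):
    layout = self._layouts[self._active]
    layout_pos = (
        self.py3.command_output(["xkblayout-state", "print", "%E"])
        .split()
        .index(layout)
    )
    self.py3.command_run("xkblayout-state set {}".format(layout_pos))
```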
</issue>
<code>
[start of py3status/modules/keyboard_layout.py]
1 # -*- coding: utf-8 -*-
2 """
3 Display keyboard layout.
4
5 Configuration parameters:
6 button_next: mouse button to cycle next layout (default 4)
7 button_prev: mouse button to cycle previous layout (default 5)
8 cache_timeout: refresh interval for this module (default 10)
9 format: display format for this module (default '{layout}')
10 layouts: specify a list of layouts to use (default None)
11
12 Format placeholders:
13 {layout} keyboard layout
14
15 Color options:
16 color_<layout>: colorize the layout. eg color_fr = '#729FCF'
17
18 Requires:
19 xkblayout-state:
20 or
21 setxkbmap: and `xset` (works for the first two predefined layouts. overrides `XkbLayout` when switching layout.)
22
23 Examples:
24 ```
25 # define keyboard layouts that can be switched between
26 keyboard_layout {
27 layouts = ['gb', 'fr', 'dvorak']
28 }
29 ```
30
31 @author shadowprince, tuxitop
32 @license Eclipse Public License
33
34 SAMPLE OUTPUT
35 {'full_text': 'gb'}
36
37 fr
38 {'color': '#268BD2', 'full_text': 'fr'}
39
40 ru
41 {'color': '#F75252', 'full_text': 'ru'}
42
43 ua
44 {'color': '#FCE94F', 'full_text': 'ua'}
45
46 us
47 {'color': '#729FCF', 'full_text': 'us'}
48
49 """
50
51 import re
52
53 LAYOUTS_RE = re.compile(r".*layout:\s*((\w+,?)+).*", flags=re.DOTALL)
54 LEDMASK_RE = re.compile(r".*LED\smask:\s*\d{4}([01])\d{3}.*", flags=re.DOTALL)
55 VARIANTS_RE = re.compile(r".*variant:\s*(([\w-]+,?)+).*", flags=re.DOTALL)
56
57
58 class Py3status:
59 """
60 """
61
62 # available configuration parameters
63 button_next = 4
64 button_prev = 5
65 cache_timeout = 10
66 format = "{layout}"
67 layouts = None
68
69 def post_config_hook(self):
70 self.colors = getattr(self, "colors", None) # old config
71
72 avail_command = {
73 "xkblayout-state": (self._set_xkblayout, self._get_xkblayout),
74 "setxkbmap": (self._set_setxkbmap, self._get_setxkbmap),
75 }
76 self._set_command, self._get_command = avail_command[
77 self.py3.check_commands(avail_command.keys())
78 ]
79
80 if not self.layouts:
81 self.layouts = []
82 # We use a copy of layouts so that we can add extra layouts without
83 # affecting the original list
84 self._layouts = self.layouts[:]
85 self._last_layout = None
86
87 self.colors_dict = {}
88 # old compatibility: set default values
89 self.defaults = {
90 "fr": "#268BD2",
91 "ru": "#F75252",
92 "ua": "#FCE94F",
93 "us": "#729FCF",
94 }
95
96 def keyboard_layout(self):
97 layout, variant = self._get_command()
98 # If the current layout is not in our layouts list we need to add it
99 if layout not in self._layouts:
100 self._layouts = [layout] + self.layouts
101 self._active = 0
102 # show new layout if it has been changed externally
103 if layout != self._last_layout:
104 self._active = self._layouts.index(layout)
105 self._last_layout = layout
106 lang = self._layouts[self._active]
107
108 response = {
109 "cached_until": self.py3.time_in(self.cache_timeout),
110 "full_text": self.py3.safe_format(
111 self.format, {"layout": lang, "variant": variant}
112 ),
113 }
114
115 if self.colors and not self.colors_dict:
116 self.colors_dict = dict(
117 (k.strip(), v.strip())
118 for k, v in (layout.split("=") for layout in self.colors.split(","))
119 )
120
121 # colorize languages containing spaces and/or dashes too
122 language = lang.upper()
123 for character in " -":
124 if character in language:
125 language = language.replace(character, "_")
126
127 lang_color = getattr(self.py3, "COLOR_%s" % language)
128 if not lang_color:
129 lang_color = self.colors_dict.get(lang)
130 if not lang_color: # old compatibility: try default value
131 lang_color = self.defaults.get(lang)
132 if lang_color:
133 response["color"] = lang_color
134
135 return response
136
137 def _get_xkblayout(self):
138 layout, variant = [
139 x.strip()
140 for x in self.py3.command_output(
141 ["xkblayout-state", "print", "%s|SEPARATOR|%v"]
142 ).split("|SEPARATOR|")
143 ]
144 return layout, variant
145
146 def _get_setxkbmap(self):
147 # this method works only for the first two predefined layouts.
148 out = self.py3.command_output(["setxkbmap", "-query"])
149 layouts = re.match(LAYOUTS_RE, out).group(1).split(",")
150 if len(layouts) == 1:
151 variant = re.match(VARIANTS_RE, out)
152 if variant:
153 variant = variant.group(1)
154 return "{} {}".format(layouts[0], variant), variant
155 else:
156 return layouts[0], ""
157
158 xset_output = self.py3.command_output(["xset", "-q"])
159 led_mask = re.match(LEDMASK_RE, xset_output).groups(0)[0]
160 return layouts[int(led_mask)], ""
161
162 def _set_setxkbmap(self):
163 layout = self._layouts[self._active]
164 # Note: This will override user-defined layout, keyboard shortcut won't work
165 self.py3.command_run("setxkbmap -layout {}".format(layout))
166
167 def _set_xkblayout(self):
168 layout = self._layouts[self._active]
169 layout_pos = (
170 self.py3.command_output(["xkblayout-state", "print", "%S"])
171 .split()
172 .index(layout)
173 )
174 self.py3.command_run("xkblayout-state set {}".format(layout_pos))
175
176 def _set_active(self, delta):
177 self._active += delta
178 self._active = self._active % len(self._layouts)
179 self._set_command()
180
181 def on_click(self, event):
182 button = event["button"]
183 if button == self.button_next:
184 self._set_active(1)
185 if button == self.button_prev:
186 self._set_active(-1)
187
188
189 if __name__ == "__main__":
190 """
191 Run module in test mode.
192 """
193 from py3status.module_test import module_test
194
195 module_test(Py3status)
196
[end of py3status/modules/keyboard_layout.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/py3status/modules/keyboard_layout.py b/py3status/modules/keyboard_layout.py
--- a/py3status/modules/keyboard_layout.py
+++ b/py3status/modules/keyboard_layout.py
@@ -167,7 +167,7 @@
def _set_xkblayout(self):
layout = self._layouts[self._active]
layout_pos = (
- self.py3.command_output(["xkblayout-state", "print", "%S"])
+ self.py3.command_output(["xkblayout-state", "print", "%E"])
.split()
.index(layout)
)
|
{"golden_diff": "diff --git a/py3status/modules/keyboard_layout.py b/py3status/modules/keyboard_layout.py\n--- a/py3status/modules/keyboard_layout.py\n+++ b/py3status/modules/keyboard_layout.py\n@@ -167,7 +167,7 @@\n def _set_xkblayout(self):\n layout = self._layouts[self._active]\n layout_pos = (\n- self.py3.command_output([\"xkblayout-state\", \"print\", \"%S\"])\n+ self.py3.command_output([\"xkblayout-state\", \"print\", \"%E\"])\n .split()\n .index(layout)\n )\n", "issue": "xkblayout-state should prefer variant if available\nHi,\r\n\r\nI am using xkblayout-state and when I did an update, xkblayout-state is now my preferred command. I switch between us and dvorak often so in order to get it working with my setup I need to have \r\n\r\n\"xkblayout-state print \"%E\"\r\n\r\nwhen its querying the keymaps before setting them.\r\n\r\nThis issue is a follow-up to #1780 \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDisplay keyboard layout.\n\nConfiguration parameters:\n button_next: mouse button to cycle next layout (default 4)\n button_prev: mouse button to cycle previous layout (default 5)\n cache_timeout: refresh interval for this module (default 10)\n format: display format for this module (default '{layout}')\n layouts: specify a list of layouts to use (default None)\n\nFormat placeholders:\n {layout} keyboard layout\n\nColor options:\n color_<layout>: colorize the layout. eg color_fr = '#729FCF'\n\nRequires:\n xkblayout-state:\n or\n setxkbmap: and `xset` (works for the first two predefined layouts. overrides `XkbLayout` when switching layout.)\n\nExamples:\n```\n# define keyboard layouts that can be switched between\nkeyboard_layout {\n layouts = ['gb', 'fr', 'dvorak']\n}\n```\n\n@author shadowprince, tuxitop\n@license Eclipse Public License\n\nSAMPLE OUTPUT\n{'full_text': 'gb'}\n\nfr\n{'color': '#268BD2', 'full_text': 'fr'}\n\nru\n{'color': '#F75252', 'full_text': 'ru'}\n\nua\n{'color': '#FCE94F', 'full_text': 'ua'}\n\nus\n{'color': '#729FCF', 'full_text': 'us'}\n\n\"\"\"\n\nimport re\n\nLAYOUTS_RE = re.compile(r\".*layout:\\s*((\\w+,?)+).*\", flags=re.DOTALL)\nLEDMASK_RE = re.compile(r\".*LED\\smask:\\s*\\d{4}([01])\\d{3}.*\", flags=re.DOTALL)\nVARIANTS_RE = re.compile(r\".*variant:\\s*(([\\w-]+,?)+).*\", flags=re.DOTALL)\n\n\nclass Py3status:\n \"\"\"\n \"\"\"\n\n # available configuration parameters\n button_next = 4\n button_prev = 5\n cache_timeout = 10\n format = \"{layout}\"\n layouts = None\n\n def post_config_hook(self):\n self.colors = getattr(self, \"colors\", None) # old config\n\n avail_command = {\n \"xkblayout-state\": (self._set_xkblayout, self._get_xkblayout),\n \"setxkbmap\": (self._set_setxkbmap, self._get_setxkbmap),\n }\n self._set_command, self._get_command = avail_command[\n self.py3.check_commands(avail_command.keys())\n ]\n\n if not self.layouts:\n self.layouts = []\n # We use a copy of layouts so that we can add extra layouts without\n # affecting the original list\n self._layouts = self.layouts[:]\n self._last_layout = None\n\n self.colors_dict = {}\n # old compatibility: set default values\n self.defaults = {\n \"fr\": \"#268BD2\",\n \"ru\": \"#F75252\",\n \"ua\": \"#FCE94F\",\n \"us\": \"#729FCF\",\n }\n\n def keyboard_layout(self):\n layout, variant = self._get_command()\n # If the current layout is not in our layouts list we need to add it\n if layout not in self._layouts:\n self._layouts = [layout] + self.layouts\n self._active = 0\n # show new layout if it has been changed externally\n if layout != self._last_layout:\n self._active 
= self._layouts.index(layout)\n self._last_layout = layout\n lang = self._layouts[self._active]\n\n response = {\n \"cached_until\": self.py3.time_in(self.cache_timeout),\n \"full_text\": self.py3.safe_format(\n self.format, {\"layout\": lang, \"variant\": variant}\n ),\n }\n\n if self.colors and not self.colors_dict:\n self.colors_dict = dict(\n (k.strip(), v.strip())\n for k, v in (layout.split(\"=\") for layout in self.colors.split(\",\"))\n )\n\n # colorize languages containing spaces and/or dashes too\n language = lang.upper()\n for character in \" -\":\n if character in language:\n language = language.replace(character, \"_\")\n\n lang_color = getattr(self.py3, \"COLOR_%s\" % language)\n if not lang_color:\n lang_color = self.colors_dict.get(lang)\n if not lang_color: # old compatibility: try default value\n lang_color = self.defaults.get(lang)\n if lang_color:\n response[\"color\"] = lang_color\n\n return response\n\n def _get_xkblayout(self):\n layout, variant = [\n x.strip()\n for x in self.py3.command_output(\n [\"xkblayout-state\", \"print\", \"%s|SEPARATOR|%v\"]\n ).split(\"|SEPARATOR|\")\n ]\n return layout, variant\n\n def _get_setxkbmap(self):\n # this method works only for the first two predefined layouts.\n out = self.py3.command_output([\"setxkbmap\", \"-query\"])\n layouts = re.match(LAYOUTS_RE, out).group(1).split(\",\")\n if len(layouts) == 1:\n variant = re.match(VARIANTS_RE, out)\n if variant:\n variant = variant.group(1)\n return \"{} {}\".format(layouts[0], variant), variant\n else:\n return layouts[0], \"\"\n\n xset_output = self.py3.command_output([\"xset\", \"-q\"])\n led_mask = re.match(LEDMASK_RE, xset_output).groups(0)[0]\n return layouts[int(led_mask)], \"\"\n\n def _set_setxkbmap(self):\n layout = self._layouts[self._active]\n # Note: This will override user-defined layout, keyboard shortcut won't work\n self.py3.command_run(\"setxkbmap -layout {}\".format(layout))\n\n def _set_xkblayout(self):\n layout = self._layouts[self._active]\n layout_pos = (\n self.py3.command_output([\"xkblayout-state\", \"print\", \"%S\"])\n .split()\n .index(layout)\n )\n self.py3.command_run(\"xkblayout-state set {}\".format(layout_pos))\n\n def _set_active(self, delta):\n self._active += delta\n self._active = self._active % len(self._layouts)\n self._set_command()\n\n def on_click(self, event):\n button = event[\"button\"]\n if button == self.button_next:\n self._set_active(1)\n if button == self.button_prev:\n self._set_active(-1)\n\n\nif __name__ == \"__main__\":\n \"\"\"\n Run module in test mode.\n \"\"\"\n from py3status.module_test import module_test\n\n module_test(Py3status)\n", "path": "py3status/modules/keyboard_layout.py"}]}
| 2,620 | 134 |
gh_patches_debug_26368
|
rasdani/github-patches
|
git_diff
|
huggingface__trl-934
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dpo_llama2.py: peft on top of peft?
https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py#L127
https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py#L186
It looks like the model for DPO training is a PeftModel built upon another PeftModel without weight merging. Is this expected?
```
(Pdb) PeftModelForCausalLM(
(base_model): LoraModel(
(model): PeftModelForCausalLM(
(base_model): LoraModel(
(model): LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096)
(layers): ModuleList(
......
```
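
For context, one hedged sketch of how the nesting could be avoided: fold the SFT adapter into the base weights first and let `DPOTrainer` attach a single fresh adapter through `peft_config`. The checkpoint path is the one used in the script; treat the rest as an assumption rather than the repository's actual fix:

```python
import torch
from peft import AutoPeftModelForCausalLM

# Load the SFT checkpoint, then merge its LoRA weights into the base model so
# the object handed to DPOTrainer is a plain transformers model, not a PeftModel.
sft = AutoPeftModelForCausalLM.from_pretrained(
    "../sft/results/final_checkpoint",
    torch_dtype=torch.float16,
)
model = sft.merge_and_unload()  # no nested PeftModel afterwards

# With a peft_config passed to DPOTrainer, the reference model can be left as
# None and the trainer wraps `model` with exactly one LoRA adapter.
model_ref = None
```

Used this way, printing the trained model should show a single `LoraModel` wrapping `LlamaForCausalLM` instead of two stacked `PeftModelForCausalLM` levels.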
</issue>
<code>
[start of examples/research_projects/stack_llama_2/scripts/dpo_llama2.py]
1 # 0. imports
2 import os
3 from dataclasses import dataclass, field
4 from typing import Dict, Optional
5
6 import torch
7 from datasets import Dataset, load_dataset
8 from peft import AutoPeftModelForCausalLM, LoraConfig
9 from transformers import AutoTokenizer, HfArgumentParser, TrainingArguments
10
11 from trl import DPOTrainer
12
13
14 # Define and parse arguments.
15 @dataclass
16 class ScriptArguments:
17 """
18 The arguments for the DPO training script.
19 """
20
21 # data parameters
22 beta: Optional[float] = field(default=0.1, metadata={"help": "the beta parameter for DPO loss"})
23
24 # training parameters
25 model_name_or_path: Optional[str] = field(
26 default="../sft/results/final_checkpoint",
27 metadata={"help": "the location of the SFT model name or path"},
28 )
29 learning_rate: Optional[float] = field(default=5e-4, metadata={"help": "optimizer learning rate"})
30 lr_scheduler_type: Optional[str] = field(default="cosine", metadata={"help": "the lr scheduler type"})
31 warmup_steps: Optional[int] = field(default=100, metadata={"help": "the number of warmup steps"})
32 weight_decay: Optional[float] = field(default=0.05, metadata={"help": "the weight decay"})
33 optimizer_type: Optional[str] = field(default="paged_adamw_32bit", metadata={"help": "the optimizer type"})
34
35 per_device_train_batch_size: Optional[int] = field(default=4, metadata={"help": "train batch size per device"})
36 per_device_eval_batch_size: Optional[int] = field(default=1, metadata={"help": "eval batch size per device"})
37 gradient_accumulation_steps: Optional[int] = field(
38 default=4, metadata={"help": "the number of gradient accumulation steps"}
39 )
40 gradient_checkpointing: Optional[bool] = field(
41 default=True, metadata={"help": "whether to use gradient checkpointing"}
42 )
43
44 lora_alpha: Optional[float] = field(default=16, metadata={"help": "the lora alpha parameter"})
45 lora_dropout: Optional[float] = field(default=0.05, metadata={"help": "the lora dropout parameter"})
46 lora_r: Optional[int] = field(default=8, metadata={"help": "the lora r parameter"})
47
48 max_prompt_length: Optional[int] = field(default=512, metadata={"help": "the maximum prompt length"})
49 max_length: Optional[int] = field(default=1024, metadata={"help": "the maximum sequence length"})
50 max_steps: Optional[int] = field(default=1000, metadata={"help": "max number of training steps"})
51 logging_steps: Optional[int] = field(default=10, metadata={"help": "the logging frequency"})
52 save_steps: Optional[int] = field(default=100, metadata={"help": "the saving frequency"})
53 eval_steps: Optional[int] = field(default=100, metadata={"help": "the evaluation frequency"})
54
55 output_dir: Optional[str] = field(default="./results", metadata={"help": "the output directory"})
56 log_freq: Optional[int] = field(default=1, metadata={"help": "the logging frequency"})
57
58 # instrumentation
59 sanity_check: Optional[bool] = field(default=False, metadata={"help": "only train on 1000 samples"})
60 report_to: Optional[str] = field(
61 default="wandb",
62 metadata={
63 "help": 'The list of integrations to report the results and logs to. Supported platforms are `"azure_ml"`,'
64 '`"comet_ml"`, `"mlflow"`, `"neptune"`, `"tensorboard"`,`"clearml"` and `"wandb"`. '
65 'Use `"all"` to report to all integrations installed, `"none"` for no integrations.'
66 },
67 )
68 # debug argument for distributed training
69 ignore_bias_buffers: Optional[bool] = field(
70 default=False,
71 metadata={
72 "help": "fix for DDP issues with LM bias/mask buffers - invalid scalar type,`inplace operation. See"
73 "https://github.com/huggingface/transformers/issues/22482#issuecomment-1595790992"
74 },
75 )
76
77
78 def get_stack_exchange_paired(
79 data_dir: str = "data/rl",
80 sanity_check: bool = False,
81 cache_dir: str = None,
82 num_proc=24,
83 ) -> Dataset:
84 """Load the stack-exchange-paired dataset from Hugging Face and convert it to the necessary format.
85
86 The dataset is converted to a dictionary with the following structure:
87 {
88 'prompt': List[str],
89 'chosen': List[str],
90 'rejected': List[str],
91 }
92
93 Prompts are structured as follows:
94 "Question: " + <prompt> + "\n\nAnswer: "
95 """
96 dataset = load_dataset(
97 "lvwerra/stack-exchange-paired",
98 split="train",
99 cache_dir=cache_dir,
100 data_dir=data_dir,
101 )
102 original_columns = dataset.column_names
103
104 if sanity_check:
105 dataset = dataset.select(range(min(len(dataset), 1000)))
106
107 def return_prompt_and_responses(samples) -> Dict[str, str]:
108 return {
109 "prompt": ["Question: " + question + "\n\nAnswer: " for question in samples["question"]],
110 "chosen": samples["response_j"],
111 "rejected": samples["response_k"],
112 }
113
114 return dataset.map(
115 return_prompt_and_responses,
116 batched=True,
117 num_proc=num_proc,
118 remove_columns=original_columns,
119 )
120
121
122 if __name__ == "__main__":
123 parser = HfArgumentParser(ScriptArguments)
124 script_args = parser.parse_args_into_dataclasses()[0]
125
126 # 1. load a pretrained model
127 model = AutoPeftModelForCausalLM.from_pretrained(
128 script_args.model_name_or_path,
129 low_cpu_mem_usage=True,
130 torch_dtype=torch.float16,
131 load_in_4bit=True,
132 )
133 model.config.use_cache = False
134
135 if script_args.ignore_bias_buffers:
136 # torch distributed hack
137 model._ddp_params_and_buffers_to_ignore = [
138 name for name, buffer in model.named_buffers() if buffer.dtype == torch.bool
139 ]
140
141 model_ref = AutoPeftModelForCausalLM.from_pretrained(
142 script_args.model_name_or_path,
143 low_cpu_mem_usage=True,
144 torch_dtype=torch.float16,
145 load_in_4bit=True,
146 )
147 tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
148 tokenizer.pad_token = tokenizer.eos_token
149
150 # 2. Load the Stack-exchange paired dataset
151 train_dataset = get_stack_exchange_paired(data_dir="data/rl", sanity_check=script_args.sanity_check)
152 train_dataset = train_dataset.filter(
153 lambda x: len(x["prompt"]) + len(x["chosen"]) <= script_args.max_length
154 and len(x["prompt"]) + len(x["rejected"]) <= script_args.max_length
155 )
156
157 # 3. Load evaluation dataset
158 eval_dataset = get_stack_exchange_paired(data_dir="data/evaluation", sanity_check=True)
159 eval_dataset = eval_dataset.filter(
160 lambda x: len(x["prompt"]) + len(x["chosen"]) <= script_args.max_length
161 and len(x["prompt"]) + len(x["rejected"]) <= script_args.max_length
162 )
163
164 # 4. initialize training arguments:
165 training_args = TrainingArguments(
166 per_device_train_batch_size=script_args.per_device_train_batch_size,
167 per_device_eval_batch_size=script_args.per_device_eval_batch_size,
168 max_steps=script_args.max_steps,
169 logging_steps=script_args.logging_steps,
170 save_steps=script_args.save_steps,
171 gradient_accumulation_steps=script_args.gradient_accumulation_steps,
172 gradient_checkpointing=script_args.gradient_checkpointing,
173 learning_rate=script_args.learning_rate,
174 evaluation_strategy="steps",
175 eval_steps=script_args.eval_steps,
176 output_dir=script_args.output_dir,
177 report_to=script_args.report_to,
178 lr_scheduler_type=script_args.lr_scheduler_type,
179 warmup_steps=script_args.warmup_steps,
180 optim=script_args.optimizer_type,
181 bf16=True,
182 remove_unused_columns=False,
183 run_name="dpo_llama2",
184 )
185
186 peft_config = LoraConfig(
187 r=script_args.lora_r,
188 lora_alpha=script_args.lora_alpha,
189 lora_dropout=script_args.lora_dropout,
190 target_modules=[
191 "q_proj",
192 "v_proj",
193 "k_proj",
194 "out_proj",
195 "fc_in",
196 "fc_out",
197 "wte",
198 ],
199 bias="none",
200 task_type="CAUSAL_LM",
201 )
202
203 # 5. initialize the DPO trainer
204 dpo_trainer = DPOTrainer(
205 model,
206 model_ref,
207 args=training_args,
208 beta=script_args.beta,
209 train_dataset=train_dataset,
210 eval_dataset=eval_dataset,
211 tokenizer=tokenizer,
212 peft_config=peft_config,
213 max_prompt_length=script_args.max_prompt_length,
214 max_length=script_args.max_length,
215 )
216
217 # 6. train
218 dpo_trainer.train()
219 dpo_trainer.save_model(script_args.output_dir)
220
221 # 7. save
222 output_dir = os.path.join(script_args.output_dir, "final_checkpoint")
223 dpo_trainer.model.save_pretrained(output_dir)
224
[end of examples/research_projects/stack_llama_2/scripts/dpo_llama2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py b/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py
--- a/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py
+++ b/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py
@@ -5,8 +5,8 @@
import torch
from datasets import Dataset, load_dataset
-from peft import AutoPeftModelForCausalLM, LoraConfig
-from transformers import AutoTokenizer, HfArgumentParser, TrainingArguments
+from peft import LoraConfig
+from transformers import AutoModelForCausalLM, AutoTokenizer, HfArgumentParser, TrainingArguments
from trl import DPOTrainer
@@ -124,7 +124,7 @@
script_args = parser.parse_args_into_dataclasses()[0]
# 1. load a pretrained model
- model = AutoPeftModelForCausalLM.from_pretrained(
+ model = AutoModelForCausalLM.from_pretrained(
script_args.model_name_or_path,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
@@ -138,7 +138,7 @@
name for name, buffer in model.named_buffers() if buffer.dtype == torch.bool
]
- model_ref = AutoPeftModelForCausalLM.from_pretrained(
+ model_ref = AutoModelForCausalLM.from_pretrained(
script_args.model_name_or_path,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
|
{"golden_diff": "diff --git a/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py b/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py\n--- a/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py\n+++ b/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py\n@@ -5,8 +5,8 @@\n \n import torch\n from datasets import Dataset, load_dataset\n-from peft import AutoPeftModelForCausalLM, LoraConfig\n-from transformers import AutoTokenizer, HfArgumentParser, TrainingArguments\n+from peft import LoraConfig\n+from transformers import AutoModelForCausalLM, AutoTokenizer, HfArgumentParser, TrainingArguments\n \n from trl import DPOTrainer\n \n@@ -124,7 +124,7 @@\n script_args = parser.parse_args_into_dataclasses()[0]\n \n # 1. load a pretrained model\n- model = AutoPeftModelForCausalLM.from_pretrained(\n+ model = AutoModelForCausalLM.from_pretrained(\n script_args.model_name_or_path,\n low_cpu_mem_usage=True,\n torch_dtype=torch.float16,\n@@ -138,7 +138,7 @@\n name for name, buffer in model.named_buffers() if buffer.dtype == torch.bool\n ]\n \n- model_ref = AutoPeftModelForCausalLM.from_pretrained(\n+ model_ref = AutoModelForCausalLM.from_pretrained(\n script_args.model_name_or_path,\n low_cpu_mem_usage=True,\n torch_dtype=torch.float16,\n", "issue": "dpo_llama2.py: peft on top of peft?\nhttps://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py#L127\r\nhttps://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/dpo_llama2.py#L186\r\n\r\nIt looks like the model for DPO training is a PeftModel built upon another PeftModel without weight merging. Is this expected? \r\n\r\n```\r\n(Pdb) PeftModelForCausalLM(\r\n (base_model): LoraModel(\r\n (model): PeftModelForCausalLM(\r\n (base_model): LoraModel(\r\n (model): LlamaForCausalLM(\r\n (model): LlamaModel(\r\n (embed_tokens): Embedding(32000, 4096)\r\n (layers): ModuleList(\r\n ......\r\n```\n", "before_files": [{"content": "# 0. 
imports\nimport os\nfrom dataclasses import dataclass, field\nfrom typing import Dict, Optional\n\nimport torch\nfrom datasets import Dataset, load_dataset\nfrom peft import AutoPeftModelForCausalLM, LoraConfig\nfrom transformers import AutoTokenizer, HfArgumentParser, TrainingArguments\n\nfrom trl import DPOTrainer\n\n\n# Define and parse arguments.\n@dataclass\nclass ScriptArguments:\n \"\"\"\n The arguments for the DPO training script.\n \"\"\"\n\n # data parameters\n beta: Optional[float] = field(default=0.1, metadata={\"help\": \"the beta parameter for DPO loss\"})\n\n # training parameters\n model_name_or_path: Optional[str] = field(\n default=\"../sft/results/final_checkpoint\",\n metadata={\"help\": \"the location of the SFT model name or path\"},\n )\n learning_rate: Optional[float] = field(default=5e-4, metadata={\"help\": \"optimizer learning rate\"})\n lr_scheduler_type: Optional[str] = field(default=\"cosine\", metadata={\"help\": \"the lr scheduler type\"})\n warmup_steps: Optional[int] = field(default=100, metadata={\"help\": \"the number of warmup steps\"})\n weight_decay: Optional[float] = field(default=0.05, metadata={\"help\": \"the weight decay\"})\n optimizer_type: Optional[str] = field(default=\"paged_adamw_32bit\", metadata={\"help\": \"the optimizer type\"})\n\n per_device_train_batch_size: Optional[int] = field(default=4, metadata={\"help\": \"train batch size per device\"})\n per_device_eval_batch_size: Optional[int] = field(default=1, metadata={\"help\": \"eval batch size per device\"})\n gradient_accumulation_steps: Optional[int] = field(\n default=4, metadata={\"help\": \"the number of gradient accumulation steps\"}\n )\n gradient_checkpointing: Optional[bool] = field(\n default=True, metadata={\"help\": \"whether to use gradient checkpointing\"}\n )\n\n lora_alpha: Optional[float] = field(default=16, metadata={\"help\": \"the lora alpha parameter\"})\n lora_dropout: Optional[float] = field(default=0.05, metadata={\"help\": \"the lora dropout parameter\"})\n lora_r: Optional[int] = field(default=8, metadata={\"help\": \"the lora r parameter\"})\n\n max_prompt_length: Optional[int] = field(default=512, metadata={\"help\": \"the maximum prompt length\"})\n max_length: Optional[int] = field(default=1024, metadata={\"help\": \"the maximum sequence length\"})\n max_steps: Optional[int] = field(default=1000, metadata={\"help\": \"max number of training steps\"})\n logging_steps: Optional[int] = field(default=10, metadata={\"help\": \"the logging frequency\"})\n save_steps: Optional[int] = field(default=100, metadata={\"help\": \"the saving frequency\"})\n eval_steps: Optional[int] = field(default=100, metadata={\"help\": \"the evaluation frequency\"})\n\n output_dir: Optional[str] = field(default=\"./results\", metadata={\"help\": \"the output directory\"})\n log_freq: Optional[int] = field(default=1, metadata={\"help\": \"the logging frequency\"})\n\n # instrumentation\n sanity_check: Optional[bool] = field(default=False, metadata={\"help\": \"only train on 1000 samples\"})\n report_to: Optional[str] = field(\n default=\"wandb\",\n metadata={\n \"help\": 'The list of integrations to report the results and logs to. Supported platforms are `\"azure_ml\"`,'\n '`\"comet_ml\"`, `\"mlflow\"`, `\"neptune\"`, `\"tensorboard\"`,`\"clearml\"` and `\"wandb\"`. 
'\n 'Use `\"all\"` to report to all integrations installed, `\"none\"` for no integrations.'\n },\n )\n # debug argument for distributed training\n ignore_bias_buffers: Optional[bool] = field(\n default=False,\n metadata={\n \"help\": \"fix for DDP issues with LM bias/mask buffers - invalid scalar type,`inplace operation. See\"\n \"https://github.com/huggingface/transformers/issues/22482#issuecomment-1595790992\"\n },\n )\n\n\ndef get_stack_exchange_paired(\n data_dir: str = \"data/rl\",\n sanity_check: bool = False,\n cache_dir: str = None,\n num_proc=24,\n) -> Dataset:\n \"\"\"Load the stack-exchange-paired dataset from Hugging Face and convert it to the necessary format.\n\n The dataset is converted to a dictionary with the following structure:\n {\n 'prompt': List[str],\n 'chosen': List[str],\n 'rejected': List[str],\n }\n\n Prompts are structured as follows:\n \"Question: \" + <prompt> + \"\\n\\nAnswer: \"\n \"\"\"\n dataset = load_dataset(\n \"lvwerra/stack-exchange-paired\",\n split=\"train\",\n cache_dir=cache_dir,\n data_dir=data_dir,\n )\n original_columns = dataset.column_names\n\n if sanity_check:\n dataset = dataset.select(range(min(len(dataset), 1000)))\n\n def return_prompt_and_responses(samples) -> Dict[str, str]:\n return {\n \"prompt\": [\"Question: \" + question + \"\\n\\nAnswer: \" for question in samples[\"question\"]],\n \"chosen\": samples[\"response_j\"],\n \"rejected\": samples[\"response_k\"],\n }\n\n return dataset.map(\n return_prompt_and_responses,\n batched=True,\n num_proc=num_proc,\n remove_columns=original_columns,\n )\n\n\nif __name__ == \"__main__\":\n parser = HfArgumentParser(ScriptArguments)\n script_args = parser.parse_args_into_dataclasses()[0]\n\n # 1. load a pretrained model\n model = AutoPeftModelForCausalLM.from_pretrained(\n script_args.model_name_or_path,\n low_cpu_mem_usage=True,\n torch_dtype=torch.float16,\n load_in_4bit=True,\n )\n model.config.use_cache = False\n\n if script_args.ignore_bias_buffers:\n # torch distributed hack\n model._ddp_params_and_buffers_to_ignore = [\n name for name, buffer in model.named_buffers() if buffer.dtype == torch.bool\n ]\n\n model_ref = AutoPeftModelForCausalLM.from_pretrained(\n script_args.model_name_or_path,\n low_cpu_mem_usage=True,\n torch_dtype=torch.float16,\n load_in_4bit=True,\n )\n tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\")\n tokenizer.pad_token = tokenizer.eos_token\n\n # 2. Load the Stack-exchange paired dataset\n train_dataset = get_stack_exchange_paired(data_dir=\"data/rl\", sanity_check=script_args.sanity_check)\n train_dataset = train_dataset.filter(\n lambda x: len(x[\"prompt\"]) + len(x[\"chosen\"]) <= script_args.max_length\n and len(x[\"prompt\"]) + len(x[\"rejected\"]) <= script_args.max_length\n )\n\n # 3. Load evaluation dataset\n eval_dataset = get_stack_exchange_paired(data_dir=\"data/evaluation\", sanity_check=True)\n eval_dataset = eval_dataset.filter(\n lambda x: len(x[\"prompt\"]) + len(x[\"chosen\"]) <= script_args.max_length\n and len(x[\"prompt\"]) + len(x[\"rejected\"]) <= script_args.max_length\n )\n\n # 4. 
initialize training arguments:\n training_args = TrainingArguments(\n per_device_train_batch_size=script_args.per_device_train_batch_size,\n per_device_eval_batch_size=script_args.per_device_eval_batch_size,\n max_steps=script_args.max_steps,\n logging_steps=script_args.logging_steps,\n save_steps=script_args.save_steps,\n gradient_accumulation_steps=script_args.gradient_accumulation_steps,\n gradient_checkpointing=script_args.gradient_checkpointing,\n learning_rate=script_args.learning_rate,\n evaluation_strategy=\"steps\",\n eval_steps=script_args.eval_steps,\n output_dir=script_args.output_dir,\n report_to=script_args.report_to,\n lr_scheduler_type=script_args.lr_scheduler_type,\n warmup_steps=script_args.warmup_steps,\n optim=script_args.optimizer_type,\n bf16=True,\n remove_unused_columns=False,\n run_name=\"dpo_llama2\",\n )\n\n peft_config = LoraConfig(\n r=script_args.lora_r,\n lora_alpha=script_args.lora_alpha,\n lora_dropout=script_args.lora_dropout,\n target_modules=[\n \"q_proj\",\n \"v_proj\",\n \"k_proj\",\n \"out_proj\",\n \"fc_in\",\n \"fc_out\",\n \"wte\",\n ],\n bias=\"none\",\n task_type=\"CAUSAL_LM\",\n )\n\n # 5. initialize the DPO trainer\n dpo_trainer = DPOTrainer(\n model,\n model_ref,\n args=training_args,\n beta=script_args.beta,\n train_dataset=train_dataset,\n eval_dataset=eval_dataset,\n tokenizer=tokenizer,\n peft_config=peft_config,\n max_prompt_length=script_args.max_prompt_length,\n max_length=script_args.max_length,\n )\n\n # 6. train\n dpo_trainer.train()\n dpo_trainer.save_model(script_args.output_dir)\n\n # 7. save\n output_dir = os.path.join(script_args.output_dir, \"final_checkpoint\")\n dpo_trainer.model.save_pretrained(output_dir)\n", "path": "examples/research_projects/stack_llama_2/scripts/dpo_llama2.py"}]}
| 3,419 | 354 |
gh_patches_debug_28159
|
rasdani/github-patches
|
git_diff
|
dask__distributed-6093
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
psutil causes Nanny to crash
<!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**What happened**:
We are occasionally getting the following exceptions from our workers, eventually causing the whole process to stall:
```
FileNotFoundError: [Errno 2] No such file or directory: '/proc/12/statm'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/tornado/ioloop.py", line 905, in _run
return self.callback()
File "/usr/local/lib/python3.8/site-packages/distributed/system_monitor.py", line 121, in update
read_bytes_disk = (disk_ioc.read_bytes - last_disk.read_bytes) / (
AttributeError: 'NoneType' object has no attribute 'read_bytes'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/psutil/_common.py", line 441, in wrapper
ret = self._cache[fun]
AttributeError: 'Process' object has no attribute '_cache'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/tornado/ioloop.py", line 905, in _run
return self.callback()
File "/usr/local/lib/python3.8/site-packages/distributed/worker_memory.py", line 322, in memory_monitor
memory = proc.memory_info().rss
File "/usr/local/lib/python3.8/site-packages/psutil/_common.py", line 444, in wrapper
return fun(self)
File "/usr/local/lib/python3.8/site-packages/psutil/__init__.py", line 1061, in memory_info
return self._proc.memory_info()
File "/usr/local/lib/python3.8/site-packages/psutil/_pslinux.py", line 1661, in wrapper
return fun(self, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/psutil/_pslinux.py", line 1895, in memory_info
with open_binary("%s/%s/statm" % (self._procfs_path, self.pid)) as f:
File "/usr/local/lib/python3.8/site-packages/psutil/_common.py", line 711, in open_binary
return open(fname, "rb", **kwargs)
```
If I understand the code correctly the following is happening:
`psutil.disk_io_counters()` returns a named tuple:
https://github.com/dask/distributed/blob/2022.04.0/distributed/system_monitor.py#L39
so the internal `self._collect_disk_io_counters` is _not_ set to `False`, but `psutil.disk_io_counters()` then returns `None` instead of the expected named tuple when it is called again inside the update method:
https://github.com/dask/distributed/blob/2022.04.0/distributed/system_monitor.py#L115
which later causes the Nanny to crash.
It seems to be an issue with `psutil` in the first place, but I think the `SystemMonitor` could be more resilient when that happens.
**What you expected to happen**:
`SystemMonitor` should not raise an exception if `psutil.disk_io_counters()` returns `None`: https://github.com/dask/distributed/blob/034b4d4c5cfad2fe3b672425eb96b826d79d9415/distributed/system_monitor.py#L115
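
For illustration, a minimal sketch of the kind of guard meant here, applied to the disk block of `SystemMonitor.update` (an assumption about the shape of a fix, not necessarily what the maintainers will ship):

```python
# Hedged sketch: treat a None result the same as a failed call, so a
# transiently unavailable counter from psutil cannot crash the monitor callback.
if self._collect_disk_io_counters:
    try:
        disk_ioc = psutil.disk_io_counters()
    except Exception:
        disk_ioc = None
    if disk_ioc is not None:
        last_disk = self._last_disk_io_counters
        duration_disk = now - self.last_time_disk
        read_bytes_disk = (disk_ioc.read_bytes - last_disk.read_bytes) / (
            duration_disk or 0.5
        )
        write_bytes_disk = (disk_ioc.write_bytes - last_disk.write_bytes) / (
            duration_disk or 0.5
        )
        self.last_time_disk = now
        self._last_disk_io_counters = disk_ioc
        self.read_bytes_disk.append(read_bytes_disk)
        self.write_bytes_disk.append(write_bytes_disk)
        result["read_bytes_disk"] = read_bytes_disk
        result["write_bytes_disk"] = write_bytes_disk
```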
**Minimal Complete Verifiable Example**:
```python
# Put your MCVE code here
```
**Anything else we need to know?**:
Should I prepare a PR with the suggested changes?
**Environment**:
- Dask version: 2022.4.0
- Python version: 3.8
- Operating System: Debian GNU/Linux 10 (buster)
- Install method (conda, pip, source): pip
<!-- If you are reporting an issue such as scale stability, cluster deadlock.
Please provide a cluster dump state with this issue, by running client.dump_cluster_state()
https://distributed.dask.org/en/stable/api.html?highlight=dump_cluster_state#distributed.Client.dump_cluster_state
-->
<details>
<summary>Cluster Dump State:</summary>
</details>
</issue>
<code>
[start of distributed/system_monitor.py]
1 from collections import deque
2
3 import psutil
4
5 from distributed.compatibility import WINDOWS
6 from distributed.metrics import time
7
8 try:
9 from distributed.diagnostics import nvml
10 except Exception:
11 nvml = None # type: ignore
12
13
14 class SystemMonitor:
15 def __init__(self, n=10000):
16 self.proc = psutil.Process()
17
18 self.time = deque(maxlen=n)
19 self.cpu = deque(maxlen=n)
20 self.memory = deque(maxlen=n)
21 self.count = 0
22
23 self.quantities = {"cpu": self.cpu, "memory": self.memory, "time": self.time}
24
25 try:
26 ioc = psutil.net_io_counters()
27 except Exception:
28 self._collect_net_io_counters = False
29 else:
30 self.last_time = time()
31 self.read_bytes = deque(maxlen=n)
32 self.write_bytes = deque(maxlen=n)
33 self.quantities["read_bytes"] = self.read_bytes
34 self.quantities["write_bytes"] = self.write_bytes
35 self._last_io_counters = ioc
36 self._collect_net_io_counters = True
37
38 try:
39 disk_ioc = psutil.disk_io_counters()
40 except Exception:
41 self._collect_disk_io_counters = False
42 else:
43 if disk_ioc is None: # diskless machine
44 self._collect_disk_io_counters = False
45 else:
46 self.last_time_disk = time()
47 self.read_bytes_disk = deque(maxlen=n)
48 self.write_bytes_disk = deque(maxlen=n)
49 self.quantities["read_bytes_disk"] = self.read_bytes_disk
50 self.quantities["write_bytes_disk"] = self.write_bytes_disk
51 self._last_disk_io_counters = disk_ioc
52 self._collect_disk_io_counters = True
53
54 if not WINDOWS:
55 self.num_fds = deque(maxlen=n)
56 self.quantities["num_fds"] = self.num_fds
57
58 if nvml.device_get_count() > 0:
59 gpu_extra = nvml.one_time()
60 self.gpu_name = gpu_extra["name"]
61 self.gpu_memory_total = gpu_extra["memory-total"]
62 self.gpu_utilization = deque(maxlen=n)
63 self.gpu_memory_used = deque(maxlen=n)
64 self.quantities["gpu_utilization"] = self.gpu_utilization
65 self.quantities["gpu_memory_used"] = self.gpu_memory_used
66
67 self.update()
68
69 def recent(self):
70 try:
71 return {k: v[-1] for k, v in self.quantities.items()}
72 except IndexError:
73 return {k: None for k, v in self.quantities.items()}
74
75 def get_process_memory(self) -> int:
76 """Sample process memory, as reported by the OS.
77 This one-liner function exists so that it can be easily mocked in unit tests,
78 as the OS allocating and releasing memory is highly volatile and a constant
79 source of flakiness.
80 """
81 return self.proc.memory_info().rss
82
83 def update(self):
84 with self.proc.oneshot():
85 cpu = self.proc.cpu_percent()
86 memory = self.get_process_memory()
87 now = time()
88
89 self.cpu.append(cpu)
90 self.memory.append(memory)
91 self.time.append(now)
92 self.count += 1
93
94 result = {"cpu": cpu, "memory": memory, "time": now, "count": self.count}
95
96 if self._collect_net_io_counters:
97 try:
98 ioc = psutil.net_io_counters()
99 except Exception:
100 pass
101 else:
102 last = self._last_io_counters
103 duration = now - self.last_time
104 read_bytes = (ioc.bytes_recv - last.bytes_recv) / (duration or 0.5)
105 write_bytes = (ioc.bytes_sent - last.bytes_sent) / (duration or 0.5)
106 self.last_time = now
107 self._last_io_counters = ioc
108 self.read_bytes.append(read_bytes)
109 self.write_bytes.append(write_bytes)
110 result["read_bytes"] = read_bytes
111 result["write_bytes"] = write_bytes
112
113 if self._collect_disk_io_counters:
114 try:
115 disk_ioc = psutil.disk_io_counters()
116 except Exception:
117 pass
118 else:
119 last_disk = self._last_disk_io_counters
120 duration_disk = now - self.last_time_disk
121 read_bytes_disk = (disk_ioc.read_bytes - last_disk.read_bytes) / (
122 duration_disk or 0.5
123 )
124 write_bytes_disk = (disk_ioc.write_bytes - last_disk.write_bytes) / (
125 duration_disk or 0.5
126 )
127 self.last_time_disk = now
128 self._last_disk_io_counters = disk_ioc
129 self.read_bytes_disk.append(read_bytes_disk)
130 self.write_bytes_disk.append(write_bytes_disk)
131 result["read_bytes_disk"] = read_bytes_disk
132 result["write_bytes_disk"] = write_bytes_disk
133
134 if not WINDOWS:
135 num_fds = self.proc.num_fds()
136 self.num_fds.append(num_fds)
137 result["num_fds"] = num_fds
138
139 if nvml.device_get_count() > 0:
140 gpu_metrics = nvml.real_time()
141 self.gpu_utilization.append(gpu_metrics["utilization"])
142 self.gpu_memory_used.append(gpu_metrics["memory-used"])
143 result["gpu_utilization"] = gpu_metrics["utilization"]
144 result["gpu_memory_used"] = gpu_metrics["memory-used"]
145
146 return result
147
148 def __repr__(self):
149 return "<SystemMonitor: cpu: %d memory: %d MB fds: %d>" % (
150 self.cpu[-1],
151 self.memory[-1] / 1e6,
152 -1 if WINDOWS else self.num_fds[-1],
153 )
154
155 def range_query(self, start):
156 if start == self.count:
157 return {k: [] for k in self.quantities}
158
159 istart = start - (self.count - len(self.cpu))
160 istart = max(0, istart)
161
162 seq = [i for i in range(istart, len(self.cpu))]
163
164 d = {k: [v[i] for i in seq] for k, v in self.quantities.items()}
165 return d
166
[end of distributed/system_monitor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/distributed/system_monitor.py b/distributed/system_monitor.py
--- a/distributed/system_monitor.py
+++ b/distributed/system_monitor.py
@@ -116,20 +116,23 @@
except Exception:
pass
else:
- last_disk = self._last_disk_io_counters
- duration_disk = now - self.last_time_disk
- read_bytes_disk = (disk_ioc.read_bytes - last_disk.read_bytes) / (
- duration_disk or 0.5
- )
- write_bytes_disk = (disk_ioc.write_bytes - last_disk.write_bytes) / (
- duration_disk or 0.5
- )
- self.last_time_disk = now
- self._last_disk_io_counters = disk_ioc
- self.read_bytes_disk.append(read_bytes_disk)
- self.write_bytes_disk.append(write_bytes_disk)
- result["read_bytes_disk"] = read_bytes_disk
- result["write_bytes_disk"] = write_bytes_disk
+ if disk_ioc is None: # diskless machine
+ self._collect_disk_io_counters = False
+ else:
+ last_disk = self._last_disk_io_counters
+ duration_disk = now - self.last_time_disk
+ read_bytes_disk = (disk_ioc.read_bytes - last_disk.read_bytes) / (
+ duration_disk or 0.5
+ )
+ write_bytes_disk = (
+ disk_ioc.write_bytes - last_disk.write_bytes
+ ) / (duration_disk or 0.5)
+ self.last_time_disk = now
+ self._last_disk_io_counters = disk_ioc
+ self.read_bytes_disk.append(read_bytes_disk)
+ self.write_bytes_disk.append(write_bytes_disk)
+ result["read_bytes_disk"] = read_bytes_disk
+ result["write_bytes_disk"] = write_bytes_disk
if not WINDOWS:
num_fds = self.proc.num_fds()
|
{"golden_diff": "diff --git a/distributed/system_monitor.py b/distributed/system_monitor.py\n--- a/distributed/system_monitor.py\n+++ b/distributed/system_monitor.py\n@@ -116,20 +116,23 @@\n except Exception:\n pass\n else:\n- last_disk = self._last_disk_io_counters\n- duration_disk = now - self.last_time_disk\n- read_bytes_disk = (disk_ioc.read_bytes - last_disk.read_bytes) / (\n- duration_disk or 0.5\n- )\n- write_bytes_disk = (disk_ioc.write_bytes - last_disk.write_bytes) / (\n- duration_disk or 0.5\n- )\n- self.last_time_disk = now\n- self._last_disk_io_counters = disk_ioc\n- self.read_bytes_disk.append(read_bytes_disk)\n- self.write_bytes_disk.append(write_bytes_disk)\n- result[\"read_bytes_disk\"] = read_bytes_disk\n- result[\"write_bytes_disk\"] = write_bytes_disk\n+ if disk_ioc is None: # diskless machine\n+ self._collect_disk_io_counters = False\n+ else:\n+ last_disk = self._last_disk_io_counters\n+ duration_disk = now - self.last_time_disk\n+ read_bytes_disk = (disk_ioc.read_bytes - last_disk.read_bytes) / (\n+ duration_disk or 0.5\n+ )\n+ write_bytes_disk = (\n+ disk_ioc.write_bytes - last_disk.write_bytes\n+ ) / (duration_disk or 0.5)\n+ self.last_time_disk = now\n+ self._last_disk_io_counters = disk_ioc\n+ self.read_bytes_disk.append(read_bytes_disk)\n+ self.write_bytes_disk.append(write_bytes_disk)\n+ result[\"read_bytes_disk\"] = read_bytes_disk\n+ result[\"write_bytes_disk\"] = write_bytes_disk\n \n if not WINDOWS:\n num_fds = self.proc.num_fds()\n", "issue": "psutil causes Nanny to crash\n<!-- Please include a self-contained copy-pastable example that generates the issue if possible.\r\n\r\nPlease be concise with code posted. See guidelines below on how to provide a good bug report:\r\n\r\n- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports\r\n- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve\r\n\r\nBug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.\r\n-->\r\n\r\n**What happened**:\r\n\r\nWe are getting the following exceptions occasionally from our workers resulting the whole process to stall eventually:\r\n\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: '/proc/12/statm'\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/tornado/ioloop.py\", line 905, in _run\r\n return self.callback()\r\n File \"/usr/local/lib/python3.8/site-packages/distributed/system_monitor.py\", line 121, in update\r\n read_bytes_disk = (disk_ioc.read_bytes - last_disk.read_bytes) / (\r\nAttributeError: 'NoneType' object has no attribute 'read_bytes'\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/psutil/_common.py\", line 441, in wrapper\r\n ret = self._cache[fun]\r\nAttributeError: 'Process' object has no attribute '_cache'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/tornado/ioloop.py\", line 905, in _run\r\n return self.callback()\r\n File \"/usr/local/lib/python3.8/site-packages/distributed/worker_memory.py\", line 322, in memory_monitor\r\n memory = proc.memory_info().rss\r\n File \"/usr/local/lib/python3.8/site-packages/psutil/_common.py\", line 444, in wrapper\r\n return fun(self)\r\n File \"/usr/local/lib/python3.8/site-packages/psutil/__init__.py\", line 1061, in memory_info\r\n return self._proc.memory_info()\r\n File 
\"/usr/local/lib/python3.8/site-packages/psutil/_pslinux.py\", line 1661, in wrapper\r\n return fun(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.8/site-packages/psutil/_pslinux.py\", line 1895, in memory_info\r\n with open_binary(\"%s/%s/statm\" % (self._procfs_path, self.pid)) as f:\r\n File \"/usr/local/lib/python3.8/site-packages/psutil/_common.py\", line 711, in open_binary\r\n return open(fname, \"rb\", **kwargs)\r\n```\r\n\r\nIf I understand the code correctly the following is happening:\r\n\r\n`psutil.disk_io_counters()` returns a named tuple:\r\n\r\nhttps://github.com/dask/distributed/blob/2022.04.0/distributed/system_monitor.py#L39\r\n\r\nso that the internal `self._collect_disk_io_counters` is _not_ set to `False` but `psutil.disk_io_counters()` returns `None` instead of an expected named tuple when called within the update method:\r\n\r\nhttps://github.com/dask/distributed/blob/2022.04.0/distributed/system_monitor.py#L115\r\n\r\nlater thus causing the Nanny to crash.\r\n\r\nIt seems to be an issue of `psutil` in the first place but I think the `SystemMonitor` could be more resilient if that happens.\r\n\r\n**What you expected to happen**:\r\n\r\n`SystemMonitor` should not raise an exception if `psutil.disk_io_counters()` returns `None`: https://github.com/dask/distributed/blob/034b4d4c5cfad2fe3b672425eb96b826d79d9415/distributed/system_monitor.py#L115\r\n\r\n**Minimal Complete Verifiable Example**:\r\n\r\n```python\r\n# Put your MCVE code here\r\n```\r\n\r\n**Anything else we need to know?**:\r\n\r\nShould I prepare a PR with the suggested changes?\r\n\r\n**Environment**:\r\n\r\n- Dask version: 2022.4.0\r\n- Python version: 3.8\r\n- Operating System: Debian GNU/Linux 10 (buster)\r\n- Install method (conda, pip, source): pip\r\n\r\n<!-- If you are reporting an issue such as scale stability, cluster deadlock.\r\nPlease provide a cluster dump state with this issue, by running client.dump_cluster_state()\r\n\r\nhttps://distributed.dask.org/en/stable/api.html?highlight=dump_cluster_state#distributed.Client.dump_cluster_state\r\n\r\n-->\r\n\r\n<details>\r\n<summary>Cluster Dump State:</summary>\r\n\r\n</details>\n", "before_files": [{"content": "from collections import deque\n\nimport psutil\n\nfrom distributed.compatibility import WINDOWS\nfrom distributed.metrics import time\n\ntry:\n from distributed.diagnostics import nvml\nexcept Exception:\n nvml = None # type: ignore\n\n\nclass SystemMonitor:\n def __init__(self, n=10000):\n self.proc = psutil.Process()\n\n self.time = deque(maxlen=n)\n self.cpu = deque(maxlen=n)\n self.memory = deque(maxlen=n)\n self.count = 0\n\n self.quantities = {\"cpu\": self.cpu, \"memory\": self.memory, \"time\": self.time}\n\n try:\n ioc = psutil.net_io_counters()\n except Exception:\n self._collect_net_io_counters = False\n else:\n self.last_time = time()\n self.read_bytes = deque(maxlen=n)\n self.write_bytes = deque(maxlen=n)\n self.quantities[\"read_bytes\"] = self.read_bytes\n self.quantities[\"write_bytes\"] = self.write_bytes\n self._last_io_counters = ioc\n self._collect_net_io_counters = True\n\n try:\n disk_ioc = psutil.disk_io_counters()\n except Exception:\n self._collect_disk_io_counters = False\n else:\n if disk_ioc is None: # diskless machine\n self._collect_disk_io_counters = False\n else:\n self.last_time_disk = time()\n self.read_bytes_disk = deque(maxlen=n)\n self.write_bytes_disk = deque(maxlen=n)\n self.quantities[\"read_bytes_disk\"] = self.read_bytes_disk\n self.quantities[\"write_bytes_disk\"] = self.write_bytes_disk\n 
self._last_disk_io_counters = disk_ioc\n self._collect_disk_io_counters = True\n\n if not WINDOWS:\n self.num_fds = deque(maxlen=n)\n self.quantities[\"num_fds\"] = self.num_fds\n\n if nvml.device_get_count() > 0:\n gpu_extra = nvml.one_time()\n self.gpu_name = gpu_extra[\"name\"]\n self.gpu_memory_total = gpu_extra[\"memory-total\"]\n self.gpu_utilization = deque(maxlen=n)\n self.gpu_memory_used = deque(maxlen=n)\n self.quantities[\"gpu_utilization\"] = self.gpu_utilization\n self.quantities[\"gpu_memory_used\"] = self.gpu_memory_used\n\n self.update()\n\n def recent(self):\n try:\n return {k: v[-1] for k, v in self.quantities.items()}\n except IndexError:\n return {k: None for k, v in self.quantities.items()}\n\n def get_process_memory(self) -> int:\n \"\"\"Sample process memory, as reported by the OS.\n This one-liner function exists so that it can be easily mocked in unit tests,\n as the OS allocating and releasing memory is highly volatile and a constant\n source of flakiness.\n \"\"\"\n return self.proc.memory_info().rss\n\n def update(self):\n with self.proc.oneshot():\n cpu = self.proc.cpu_percent()\n memory = self.get_process_memory()\n now = time()\n\n self.cpu.append(cpu)\n self.memory.append(memory)\n self.time.append(now)\n self.count += 1\n\n result = {\"cpu\": cpu, \"memory\": memory, \"time\": now, \"count\": self.count}\n\n if self._collect_net_io_counters:\n try:\n ioc = psutil.net_io_counters()\n except Exception:\n pass\n else:\n last = self._last_io_counters\n duration = now - self.last_time\n read_bytes = (ioc.bytes_recv - last.bytes_recv) / (duration or 0.5)\n write_bytes = (ioc.bytes_sent - last.bytes_sent) / (duration or 0.5)\n self.last_time = now\n self._last_io_counters = ioc\n self.read_bytes.append(read_bytes)\n self.write_bytes.append(write_bytes)\n result[\"read_bytes\"] = read_bytes\n result[\"write_bytes\"] = write_bytes\n\n if self._collect_disk_io_counters:\n try:\n disk_ioc = psutil.disk_io_counters()\n except Exception:\n pass\n else:\n last_disk = self._last_disk_io_counters\n duration_disk = now - self.last_time_disk\n read_bytes_disk = (disk_ioc.read_bytes - last_disk.read_bytes) / (\n duration_disk or 0.5\n )\n write_bytes_disk = (disk_ioc.write_bytes - last_disk.write_bytes) / (\n duration_disk or 0.5\n )\n self.last_time_disk = now\n self._last_disk_io_counters = disk_ioc\n self.read_bytes_disk.append(read_bytes_disk)\n self.write_bytes_disk.append(write_bytes_disk)\n result[\"read_bytes_disk\"] = read_bytes_disk\n result[\"write_bytes_disk\"] = write_bytes_disk\n\n if not WINDOWS:\n num_fds = self.proc.num_fds()\n self.num_fds.append(num_fds)\n result[\"num_fds\"] = num_fds\n\n if nvml.device_get_count() > 0:\n gpu_metrics = nvml.real_time()\n self.gpu_utilization.append(gpu_metrics[\"utilization\"])\n self.gpu_memory_used.append(gpu_metrics[\"memory-used\"])\n result[\"gpu_utilization\"] = gpu_metrics[\"utilization\"]\n result[\"gpu_memory_used\"] = gpu_metrics[\"memory-used\"]\n\n return result\n\n def __repr__(self):\n return \"<SystemMonitor: cpu: %d memory: %d MB fds: %d>\" % (\n self.cpu[-1],\n self.memory[-1] / 1e6,\n -1 if WINDOWS else self.num_fds[-1],\n )\n\n def range_query(self, start):\n if start == self.count:\n return {k: [] for k in self.quantities}\n\n istart = start - (self.count - len(self.cpu))\n istart = max(0, istart)\n\n seq = [i for i in range(istart, len(self.cpu))]\n\n d = {k: [v[i] for i in seq] for k, v in self.quantities.items()}\n return d\n", "path": "distributed/system_monitor.py"}]}
| 3,297 | 422 |
gh_patches_debug_17898
|
rasdani/github-patches
|
git_diff
|
OCA__social-531
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[12.0][mail_tracking] many MailTracking email not found warnings
This code snippet is polluting my logs with warning messages.
https://github.com/OCA/social/blob/8d985d8da9fa864113f87cd59a2b3173f7f89193/mail_tracking/controllers/main.py#L69-L78
How can this work if the state will already be marked 'opened' after the first open? Each successive open will trigger the above warning, because the state is now 'opened' and no longer matches the domain `('state', 'in', ['sent', 'delivered'])`.
Is it intended to generate warnings each time a mail is opened? Or what am I missing?
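For illustration, something along these lines would only warn when the record is genuinely missing and stay silent on repeated opens (a sketch reusing the identifiers from the controller code below, not a proposed patch):
```python
tracking_email = env['mail.tracking.email'].search([
    ('id', '=', tracking_email_id),
    ('token', '=', token),  # no state filter in the search itself
])
if not tracking_email:
    _logger.warning("MailTracking email '%s' not found", tracking_email_id)
elif tracking_email.state in ('sent', 'delivered'):
    tracking_email.event_create('open', metadata)
# any other state (e.g. already 'opened') is ignored without a warning
```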
</issue>
<code>
[start of mail_tracking/controllers/main.py]
1 # Copyright 2016 Antonio Espinosa - <[email protected]>
2 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
3
4 import werkzeug
5 import odoo
6 from contextlib import contextmanager
7 from odoo import api, http, SUPERUSER_ID
8
9 from odoo.addons.mail.controllers.main import MailController
10 import logging
11 import base64
12 _logger = logging.getLogger(__name__)
13
14 BLANK = 'R0lGODlhAQABAIAAANvf7wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw=='
15
16
17 @contextmanager
18 def db_env(dbname):
19 if not http.db_filter([dbname]):
20 raise werkzeug.exceptions.BadRequest()
21 cr = None
22 if dbname == http.request.db:
23 cr = http.request.cr
24 if not cr:
25 cr = odoo.sql_db.db_connect(dbname).cursor()
26 with api.Environment.manage():
27 yield api.Environment(cr, SUPERUSER_ID, {})
28
29
30 class MailTrackingController(MailController):
31
32 def _request_metadata(self):
33 """Prepare remote info metadata"""
34 request = http.request.httprequest
35 return {
36 'ip': request.remote_addr or False,
37 'user_agent': request.user_agent or False,
38 'os_family': request.user_agent.platform or False,
39 'ua_family': request.user_agent.browser or False,
40 }
41
42 @http.route(['/mail/tracking/all/<string:db>',
43 '/mail/tracking/event/<string:db>/<string:event_type>'],
44 type='http', auth='none', csrf=False)
45 def mail_tracking_event(self, db, event_type=None, **kw):
46 """Route used by external mail service"""
47 metadata = self._request_metadata()
48 res = None
49 with db_env(db) as env:
50 try:
51 res = env['mail.tracking.email'].event_process(
52 http.request, kw, metadata, event_type=event_type)
53 except Exception:
54 pass
55 if not res or res == 'NOT FOUND':
56 return werkzeug.exceptions.NotAcceptable()
57 return res
58
59 @http.route(['/mail/tracking/open/<string:db>'
60 '/<int:tracking_email_id>/blank.gif',
61 '/mail/tracking/open/<string:db>'
62 '/<int:tracking_email_id>/<string:token>/blank.gif'],
63 type='http', auth='none', methods=['GET'])
64 def mail_tracking_open(self, db, tracking_email_id, token=False, **kw):
65 """Route used to track mail openned (With & Without Token)"""
66 metadata = self._request_metadata()
67 with db_env(db) as env:
68 try:
69 tracking_email = env['mail.tracking.email'].search([
70 ('id', '=', tracking_email_id),
71 ('state', 'in', ['sent', 'delivered']),
72 ('token', '=', token),
73 ])
74 if tracking_email:
75 tracking_email.event_create('open', metadata)
76 else:
77 _logger.warning(
78 "MailTracking email '%s' not found", tracking_email_id)
79 except Exception:
80 pass
81
82 # Always return GIF blank image
83 response = werkzeug.wrappers.Response()
84 response.mimetype = 'image/gif'
85 response.data = base64.b64decode(BLANK)
86 return response
87
88 @http.route()
89 def mail_init_messaging(self):
90 """Route used to initial values of Discuss app"""
91 values = super().mail_init_messaging()
92 values.update({
93 'failed_counter':
94 http.request.env['mail.message'].get_failed_count(),
95 })
96 return values
97
[end of mail_tracking/controllers/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mail_tracking/controllers/main.py b/mail_tracking/controllers/main.py
--- a/mail_tracking/controllers/main.py
+++ b/mail_tracking/controllers/main.py
@@ -68,14 +68,13 @@
try:
tracking_email = env['mail.tracking.email'].search([
('id', '=', tracking_email_id),
- ('state', 'in', ['sent', 'delivered']),
('token', '=', token),
])
- if tracking_email:
- tracking_email.event_create('open', metadata)
- else:
+ if not tracking_email:
_logger.warning(
"MailTracking email '%s' not found", tracking_email_id)
+ elif tracking_email.state in ('sent', 'delivered'):
+ tracking_email.event_create('open', metadata)
except Exception:
pass
|
{"golden_diff": "diff --git a/mail_tracking/controllers/main.py b/mail_tracking/controllers/main.py\n--- a/mail_tracking/controllers/main.py\n+++ b/mail_tracking/controllers/main.py\n@@ -68,14 +68,13 @@\n try:\n tracking_email = env['mail.tracking.email'].search([\n ('id', '=', tracking_email_id),\n- ('state', 'in', ['sent', 'delivered']),\n ('token', '=', token),\n ])\n- if tracking_email:\n- tracking_email.event_create('open', metadata)\n- else:\n+ if not tracking_email:\n _logger.warning(\n \"MailTracking email '%s' not found\", tracking_email_id)\n+ elif tracking_email.state in ('sent', 'delivered'):\n+ tracking_email.event_create('open', metadata)\n except Exception:\n pass\n", "issue": "[12.0][mail_tracking] many MailTracking email not found warnings\nThis code snippet is polluting my logs with warning messages.\r\n\r\nhttps://github.com/OCA/social/blob/8d985d8da9fa864113f87cd59a2b3173f7f89193/mail_tracking/controllers/main.py#L69-L78\r\n\r\nHow can this work if state will be marked 'opened' the first time? Each successive open will trigger above warning, because state is now 'opened' and will not match the above domain `('state', 'in', ['sent', 'delivered'])`.\r\n\r\n Is it intended to generate warnings each time mail is opened? Or what am I missing?\n", "before_files": [{"content": "# Copyright 2016 Antonio Espinosa - <[email protected]>\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\n\nimport werkzeug\nimport odoo\nfrom contextlib import contextmanager\nfrom odoo import api, http, SUPERUSER_ID\n\nfrom odoo.addons.mail.controllers.main import MailController\nimport logging\nimport base64\n_logger = logging.getLogger(__name__)\n\nBLANK = 'R0lGODlhAQABAIAAANvf7wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw=='\n\n\n@contextmanager\ndef db_env(dbname):\n if not http.db_filter([dbname]):\n raise werkzeug.exceptions.BadRequest()\n cr = None\n if dbname == http.request.db:\n cr = http.request.cr\n if not cr:\n cr = odoo.sql_db.db_connect(dbname).cursor()\n with api.Environment.manage():\n yield api.Environment(cr, SUPERUSER_ID, {})\n\n\nclass MailTrackingController(MailController):\n\n def _request_metadata(self):\n \"\"\"Prepare remote info metadata\"\"\"\n request = http.request.httprequest\n return {\n 'ip': request.remote_addr or False,\n 'user_agent': request.user_agent or False,\n 'os_family': request.user_agent.platform or False,\n 'ua_family': request.user_agent.browser or False,\n }\n\n @http.route(['/mail/tracking/all/<string:db>',\n '/mail/tracking/event/<string:db>/<string:event_type>'],\n type='http', auth='none', csrf=False)\n def mail_tracking_event(self, db, event_type=None, **kw):\n \"\"\"Route used by external mail service\"\"\"\n metadata = self._request_metadata()\n res = None\n with db_env(db) as env:\n try:\n res = env['mail.tracking.email'].event_process(\n http.request, kw, metadata, event_type=event_type)\n except Exception:\n pass\n if not res or res == 'NOT FOUND':\n return werkzeug.exceptions.NotAcceptable()\n return res\n\n @http.route(['/mail/tracking/open/<string:db>'\n '/<int:tracking_email_id>/blank.gif',\n '/mail/tracking/open/<string:db>'\n '/<int:tracking_email_id>/<string:token>/blank.gif'],\n type='http', auth='none', methods=['GET'])\n def mail_tracking_open(self, db, tracking_email_id, token=False, **kw):\n \"\"\"Route used to track mail openned (With & Without Token)\"\"\"\n metadata = self._request_metadata()\n with db_env(db) as env:\n try:\n tracking_email = env['mail.tracking.email'].search([\n ('id', '=', tracking_email_id),\n 
('state', 'in', ['sent', 'delivered']),\n ('token', '=', token),\n ])\n if tracking_email:\n tracking_email.event_create('open', metadata)\n else:\n _logger.warning(\n \"MailTracking email '%s' not found\", tracking_email_id)\n except Exception:\n pass\n\n # Always return GIF blank image\n response = werkzeug.wrappers.Response()\n response.mimetype = 'image/gif'\n response.data = base64.b64decode(BLANK)\n return response\n\n @http.route()\n def mail_init_messaging(self):\n \"\"\"Route used to initial values of Discuss app\"\"\"\n values = super().mail_init_messaging()\n values.update({\n 'failed_counter':\n http.request.env['mail.message'].get_failed_count(),\n })\n return values\n", "path": "mail_tracking/controllers/main.py"}]}
| 1,658 | 175 |
gh_patches_debug_19190
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-9709
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] Using filter list with Jinja on metrics and metric input measures causes error when running `dbt parse`
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
When you define a metric or metric input measure with a filter list and include Jinja in the list, `dbt parse` triggers an error. Example YAML:
```
metrics:
- name: new_customer
description: Unique count of new customers.
label: New Customers
type: simple
type_params:
measure: customers_with_orders
filter:
- "{{ Dimension('customer__customer_type') }} = 'new'"
```
Running `dbt parse` in a repo with that metric definition triggers this error:
```
Compilation Error
Could not render {{ Dimension('customer__customer_type') }} = 'new': 'Dimension' is undefined
```
Meanwhile, if you apply the same filter as a string (example below), no error will be triggered.
```
metrics:
- name: new_customer
description: Unique count of new customers.
label: New Customers
type: simple
type_params:
measure: customers_with_orders
filter: "{{ Dimension('customer__customer_type') }} = 'new'"
```
### Expected Behavior
Applying the filter in a list should have the same impact as applying it as a string. Currently this limitation will prevent users from using multiple filters on metrics and metric input measures.
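In terms of the renderer shown below, that means a keypath ending in `filter`, and a keypath pointing at an element inside a `filter` list, should both be skipped. A sketch of that check follows; the helper name is illustrative and the exact keypath shapes are an assumption:
```python
def is_metric_filter(keypath) -> bool:
    """Sketch: True for both a string filter and any element of a filter list."""
    return keypath[-1] == "filter" or (len(keypath) > 1 and keypath[-2] == "filter")
```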
### Steps To Reproduce
1. Add a list-type filter on a metric or a metric input measure in a dbt project
2. Run `dbt parse` in the repo
### Relevant log output
```shell
Output when running dbt parse:
21:05:56 Running with dbt=1.7.3
21:05:56 Registered adapter: duckdb=1.7.0
21:05:56 Encountered an error:
Compilation Error
Could not render {{ Dimension('customer__customer_type') }} = 'new': 'Dimension' is undefined
```
### Environment
```markdown
- OS: Mac OSX 13.4.1
- Python: 3.9.16
- dbt: 1.7.3
```
### Which database adapter are you using with dbt?
other (mention it in "Additional Context")
### Additional Context
DuckDB
</issue>
<code>
[start of core/dbt/parser/schema_renderer.py]
1 from typing import Dict, Any
2
3 from dbt.config.renderer import BaseRenderer, Keypath
4
5
6 # This class renders dictionaries derived from "schema" yaml files.
7 # It calls Jinja on strings (in deep_map_render), except for certain
8 # keys which are skipped because they need to be rendered later
9 # (tests and description). Test configs are rendered in the
10 # generic test builder code, but skips the keyword args. The test
11 # keyword args are rendered to capture refs in render_test_update.
12 # Keyword args are finally rendered at compilation time.
13 # Descriptions are not rendered until 'process_docs'.
14 class SchemaYamlRenderer(BaseRenderer):
15 def __init__(self, context: Dict[str, Any], key: str) -> None:
16 super().__init__(context)
17 self.key = key
18
19 @property
20 def name(self):
21 return "Rendering yaml"
22
23 def _is_norender_key(self, keypath: Keypath) -> bool:
24 """
25 models:
26 - name: blah
27 description: blah
28 data_tests: ...
29 columns:
30 - name:
31 description: blah
32 data_tests: ...
33
34 Return True if it's tests, data_tests or description - those aren't rendered now
35 because they're rendered later in parse_generic_tests or process_docs.
36 "tests" and "data_tests" are both currently supported but "tests" has been deprecated
37 """
38 # top level descriptions and data_tests
39 if len(keypath) >= 1 and keypath[0] in ("tests", "data_tests", "description"):
40 return True
41
42 # columns descriptions and data_tests
43 if len(keypath) == 2 and keypath[1] in ("tests", "data_tests", "description"):
44 return True
45
46 # versions
47 if len(keypath) == 5 and keypath[4] == "description":
48 return True
49
50 if (
51 len(keypath) >= 3
52 and keypath[0] in ("columns", "dimensions", "measures", "entities")
53 and keypath[2] in ("tests", "data_tests", "description")
54 ):
55 return True
56
57 return False
58
59 # don't render descriptions or test keyword arguments
60 def should_render_keypath(self, keypath: Keypath) -> bool:
61 if len(keypath) < 1:
62 return True
63
64 if self.key == "sources":
65 if keypath[0] == "description":
66 return False
67 if keypath[0] == "tables":
68 if self._is_norender_key(keypath[2:]):
69 return False
70 elif self.key == "macros":
71 if keypath[0] == "arguments":
72 if self._is_norender_key(keypath[1:]):
73 return False
74 elif self._is_norender_key(keypath[0:]):
75 return False
76 elif self.key == "metrics":
77 # This ensures all key paths that end in 'filter' for a metric are skipped
78 if keypath[-1] == "filter":
79 return False
80 elif self._is_norender_key(keypath[0:]):
81 return False
82 elif self.key == "saved_queries":
83 if keypath[0] == "query_params" and len(keypath) > 1 and keypath[1] == "where":
84 return False
85 elif self._is_norender_key(keypath[0:]):
86 return False
87 else: # models, seeds, snapshots, analyses
88 if self._is_norender_key(keypath[0:]):
89 return False
90 return True
91
[end of core/dbt/parser/schema_renderer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/dbt/parser/schema_renderer.py b/core/dbt/parser/schema_renderer.py
--- a/core/dbt/parser/schema_renderer.py
+++ b/core/dbt/parser/schema_renderer.py
@@ -74,12 +74,13 @@
elif self._is_norender_key(keypath[0:]):
return False
elif self.key == "metrics":
- # This ensures all key paths that end in 'filter' for a metric are skipped
- if keypath[-1] == "filter":
+ # This ensures that metric filters are skipped
+ if keypath[-1] == "filter" or len(keypath) > 1 and keypath[-2] == "filter":
return False
elif self._is_norender_key(keypath[0:]):
return False
elif self.key == "saved_queries":
+ # This ensures that saved query filters are skipped
if keypath[0] == "query_params" and len(keypath) > 1 and keypath[1] == "where":
return False
elif self._is_norender_key(keypath[0:]):
|
{"golden_diff": "diff --git a/core/dbt/parser/schema_renderer.py b/core/dbt/parser/schema_renderer.py\n--- a/core/dbt/parser/schema_renderer.py\n+++ b/core/dbt/parser/schema_renderer.py\n@@ -74,12 +74,13 @@\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"metrics\":\n- # This ensures all key paths that end in 'filter' for a metric are skipped\n- if keypath[-1] == \"filter\":\n+ # This ensures that metric filters are skipped\n+ if keypath[-1] == \"filter\" or len(keypath) > 1 and keypath[-2] == \"filter\":\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"saved_queries\":\n+ # This ensures that saved query filters are skipped\n if keypath[0] == \"query_params\" and len(keypath) > 1 and keypath[1] == \"where\":\n return False\n elif self._is_norender_key(keypath[0:]):\n", "issue": "[Bug] Using filter list with Jinja on metrics and metric input measures causes error when running `dbt parse`\n### Is this a new bug in dbt-core?\n\n- [X] I believe this is a new bug in dbt-core\n- [X] I have searched the existing issues, and I could not find an existing issue for this bug\n\n### Current Behavior\n\nWhen you define a metric or metric input measure with a filter list and include Jinja in the list, `dbt parse` triggers an error. Example YAML:\r\n```\r\nmetrics:\r\n - name: new_customer\r\n description: Unique count of new customers.\r\n label: New Customers\r\n type: simple\r\n type_params:\r\n measure: customers_with_orders\r\n filter:\r\n - \"{{ Dimension('customer__customer_type') }} = 'new'\"\r\n```\r\nRunning `dbt parse` in a repo with that metric definition triggers this error:\r\n```\r\nCompilation Error\r\n Could not render {{ Dimension('customer__customer_type') }} = 'new': 'Dimension' is undefined\r\n```\r\nMeanwhile, if you apply the same filter as a string (example below), no error will be triggered.\r\n```\r\nmetrics:\r\n - name: new_customer\r\n description: Unique count of new customers.\r\n label: New Customers\r\n type: simple\r\n type_params:\r\n measure: customers_with_orders\r\n filter: \"{{ Dimension('customer__customer_type') }} = 'new'\"\r\n```\n\n### Expected Behavior\n\nApplying the filter in a list should have the same impact as applying it as a string. Currently this limitation will prevent users from using multiple filters on metrics and metric input measures.\n\n### Steps To Reproduce\n\n1. Add a list-type filter on a metric or a metric input measure in a dbt project\r\n2. Run `dbt parse` in the repo\n\n### Relevant log output\n\n```shell\nOutput when running dbt parse:\r\n\r\n21:05:56 Running with dbt=1.7.3\r\n21:05:56 Registered adapter: duckdb=1.7.0\r\n21:05:56 Encountered an error:\r\nCompilation Error\r\n Could not render {{ Dimension('customer__customer_type') }} = 'new': 'Dimension' is undefined\n```\n\n\n### Environment\n\n```markdown\n- OS: Mac OSX 13.4.1\r\n- Python: 3.9.16\r\n- dbt: 1.7.3\n```\n\n\n### Which database adapter are you using with dbt?\n\nother (mention it in \"Additional Context\")\n\n### Additional Context\n\nDuckDB\n", "before_files": [{"content": "from typing import Dict, Any\n\nfrom dbt.config.renderer import BaseRenderer, Keypath\n\n\n# This class renders dictionaries derived from \"schema\" yaml files.\n# It calls Jinja on strings (in deep_map_render), except for certain\n# keys which are skipped because they need to be rendered later\n# (tests and description). Test configs are rendered in the\n# generic test builder code, but skips the keyword args. 
The test\n# keyword args are rendered to capture refs in render_test_update.\n# Keyword args are finally rendered at compilation time.\n# Descriptions are not rendered until 'process_docs'.\nclass SchemaYamlRenderer(BaseRenderer):\n def __init__(self, context: Dict[str, Any], key: str) -> None:\n super().__init__(context)\n self.key = key\n\n @property\n def name(self):\n return \"Rendering yaml\"\n\n def _is_norender_key(self, keypath: Keypath) -> bool:\n \"\"\"\n models:\n - name: blah\n description: blah\n data_tests: ...\n columns:\n - name:\n description: blah\n data_tests: ...\n\n Return True if it's tests, data_tests or description - those aren't rendered now\n because they're rendered later in parse_generic_tests or process_docs.\n \"tests\" and \"data_tests\" are both currently supported but \"tests\" has been deprecated\n \"\"\"\n # top level descriptions and data_tests\n if len(keypath) >= 1 and keypath[0] in (\"tests\", \"data_tests\", \"description\"):\n return True\n\n # columns descriptions and data_tests\n if len(keypath) == 2 and keypath[1] in (\"tests\", \"data_tests\", \"description\"):\n return True\n\n # versions\n if len(keypath) == 5 and keypath[4] == \"description\":\n return True\n\n if (\n len(keypath) >= 3\n and keypath[0] in (\"columns\", \"dimensions\", \"measures\", \"entities\")\n and keypath[2] in (\"tests\", \"data_tests\", \"description\")\n ):\n return True\n\n return False\n\n # don't render descriptions or test keyword arguments\n def should_render_keypath(self, keypath: Keypath) -> bool:\n if len(keypath) < 1:\n return True\n\n if self.key == \"sources\":\n if keypath[0] == \"description\":\n return False\n if keypath[0] == \"tables\":\n if self._is_norender_key(keypath[2:]):\n return False\n elif self.key == \"macros\":\n if keypath[0] == \"arguments\":\n if self._is_norender_key(keypath[1:]):\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"metrics\":\n # This ensures all key paths that end in 'filter' for a metric are skipped\n if keypath[-1] == \"filter\":\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"saved_queries\":\n if keypath[0] == \"query_params\" and len(keypath) > 1 and keypath[1] == \"where\":\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n else: # models, seeds, snapshots, analyses\n if self._is_norender_key(keypath[0:]):\n return False\n return True\n", "path": "core/dbt/parser/schema_renderer.py"}]}
| 2,053 | 247 |
gh_patches_debug_1138
|
rasdani/github-patches
|
git_diff
|
DataBiosphere__toil-4011
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
We're getting the new Werkzeug without asking for it
CI for the merge https://github.com/DataBiosphere/toil/commit/0e256d63cb974a87b8f6b807bf7d23bc9a12fb76 failed at the lint stage, because the merge commit ends up installing a different Werkzeug than the PR's test run did, and the new one has type hints, which upsets MyPy because we now have an unused ignore.
This is because the `connexion` devs finally got access to the `connexion` PyPI package again, and published the current release there. So we started picking up connexion 2.10 instead of 2.5, which is now compatible with Flask 2. So we started installing Flask 2 and Werkzeug 2.
If we're going to import out of Werkzeug, we need to depend on a particular major version of it, so it can't be changed out from under us by pip.
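For illustration only, since the exact file and version bounds are assumptions rather than the project's actual pins, that means declaring the dependency explicitly wherever the install requirements live, along the lines of:
```python
# Hypothetical pin, e.g. in setup.py's install_requires; bounds are illustrative.
install_requires = [
    "connexion>=2.5,<3",
    "werkzeug>=2,<3",  # we import secure_filename from werkzeug directly, so pin it
]
```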
┆Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-1130)
┆friendlyId: TOIL-1130
</issue>
<code>
[start of src/toil/server/wes/abstract_backend.py]
1 # Modified from: https://github.com/common-workflow-language/workflow-service
2 import functools
3 import json
4 import os
5 import logging
6 import tempfile
7 from abc import abstractmethod
8 from typing import Optional, List, Dict, Any, Tuple, Callable
9 from urllib.parse import urldefrag
10
11 import connexion # type: ignore
12 from werkzeug.utils import secure_filename # type: ignore
13
14 logger = logging.getLogger(__name__)
15
16
17 class VersionNotImplementedException(Exception):
18 """
19 Raised when the requested workflow version is not implemented.
20 """
21 def __init__(self,
22 wf_type: str, version: Optional[str] = None, supported_versions: Optional[List[str]] = None) -> None:
23 if version:
24 message = ("workflow_type '{}' requires 'workflow_type_version' to be one of '{}'. "
25 "Got '{}' instead.".format(wf_type, str(supported_versions), version))
26 else:
27 message = f"workflow_type '{wf_type}' is not supported."
28
29 super(VersionNotImplementedException, self).__init__(message)
30
31
32 class MalformedRequestException(Exception):
33 """
34 Raised when the request is malformed.
35 """
36 def __init__(self, message: str) -> None:
37 super(MalformedRequestException, self).__init__(message)
38
39
40 class WorkflowNotFoundException(Exception):
41 """
42 Raised when the requested run ID is not found.
43 """
44 def __init__(self) -> None:
45 super(WorkflowNotFoundException, self).__init__("The requested workflow run wasn't found.")
46
47
48 class WorkflowConflictException(Exception):
49 """
50 Raised when the requested workflow is not in the expected state.
51 """
52 def __init__(self, run_id: str):
53 super(WorkflowConflictException, self).__init__(f"Workflow {run_id} exists when it shouldn't.")
54
55
56 class OperationForbidden(Exception):
57 """
58 Raised when the request is forbidden.
59 """
60 def __init__(self, message: str) -> None:
61 super(OperationForbidden, self).__init__(message)
62
63
64 class WorkflowExecutionException(Exception):
65 """
66 Raised when an internal error occurred during the execution of the workflow.
67 """
68 def __init__(self, message: str) -> None:
69 super(WorkflowExecutionException, self).__init__(message)
70
71
72 def handle_errors(func: Callable[..., Any]) -> Callable[..., Any]:
73 """
74 This decorator catches errors from the wrapped function and returns a JSON
75 formatted error message with the appropriate status code defined by the
76 GA4GH WES spec.
77 """
78
79 def error(msg: Any, code: int = 500) -> Tuple[Dict[str, Any], int]:
80 logger.warning(f"Exception raised when calling '{func.__name__}()':", exc_info=True)
81 return {"msg": str(msg), "status_code": code}, code
82
83 @functools.wraps(func)
84 def wrapper(*args: Any, **kwargs: Any) -> Any:
85 try:
86 return func(*args, **kwargs)
87 except MalformedRequestException as e:
88 return error(e, code=400)
89 except VersionNotImplementedException as e:
90 return error(e, code=400)
91 except OperationForbidden as e:
92 return error(e, code=403)
93 except (FileNotFoundError, WorkflowNotFoundException) as e:
94 return error(e, code=404)
95 except WorkflowConflictException as e:
96 return error(e, code=400)
97 except WorkflowExecutionException as e:
98 return error(e, code=500)
99 except Exception as e:
100 return error(e, code=500)
101
102 return wrapper
103
104
105 class WESBackend:
106 """
107 A class to represent a GA4GH Workflow Execution Service (WES) API backend.
108 Intended to be inherited. Subclasses should implement all abstract methods
109 to handle user requests when they hit different endpoints.
110 """
111
112 def __init__(self, options: List[str]):
113 """
114 :param options: A list of default engine options to use when executing
115 a workflow. Example options:
116 ["--logLevel=CRITICAL","--workDir=/path/to/dir",
117 "--tag=Name=default", "--tag=Owner=shared", ...]
118 """
119 self.options = options or []
120
121 def resolve_operation_id(self, operation_id: str) -> Any:
122 """
123 Map an operationId defined in the OpenAPI or swagger yaml file to a
124 function.
125
126 :param operation_id: The operation ID defined in the specification.
127 :returns: A function that should be called when the given endpoint is
128 reached.
129 """
130 return getattr(self, operation_id.split(".")[-1])
131
132 @abstractmethod
133 def get_service_info(self) -> Dict[str, Any]:
134 """
135 Get information about the Workflow Execution Service.
136
137 GET /service-info
138 """
139 raise NotImplementedError
140
141 @abstractmethod
142 def list_runs(self, page_size: Optional[int] = None, page_token: Optional[str] = None) -> Dict[str, Any]:
143 """
144 List the workflow runs.
145
146 GET /runs
147 """
148 raise NotImplementedError
149
150 @abstractmethod
151 def run_workflow(self) -> Dict[str, str]:
152 """
153 Run a workflow. This endpoint creates a new workflow run and returns
154 a `RunId` to monitor its progress.
155
156 POST /runs
157 """
158 raise NotImplementedError
159
160 @abstractmethod
161 def get_run_log(self, run_id: str) -> Dict[str, Any]:
162 """
163 Get detailed info about a workflow run.
164
165 GET /runs/{run_id}
166 """
167 raise NotImplementedError
168
169 @abstractmethod
170 def cancel_run(self, run_id: str) -> Dict[str, str]:
171 """
172 Cancel a running workflow.
173
174 POST /runs/{run_id}/cancel
175 """
176 raise NotImplementedError
177
178 @abstractmethod
179 def get_run_status(self, run_id: str) -> Dict[str, str]:
180 """
181 Get quick status info about a workflow run, returning a simple result
182 with the overall state of the workflow run.
183
184 GET /runs/{run_id}/status
185 """
186 raise NotImplementedError
187
188 @staticmethod
189 def log_for_run(run_id: Optional[str], message: str) -> None:
190 if run_id:
191 logging.info("Workflow %s: %s", run_id, message)
192 else:
193 logging.info(message)
194
195 @staticmethod
196 def secure_path(path: str) -> str:
197 return os.path.join(*[str(secure_filename(p)) for p in path.split("/") if p not in ("", ".", "..")])
198
199 def collect_attachments(self, run_id: Optional[str], temp_dir: Optional[str]) -> Tuple[str, Dict[str, Any]]:
200 """
201 Collect attachments from the current request by staging uploaded files
202 to temp_dir, and return the temp_dir and parsed body of the request.
203
204 :param run_id: The run ID for logging.
205 :param temp_dir: The directory where uploaded files should be staged.
206 If None, a temporary directory is created.
207 """
208 if not temp_dir:
209 temp_dir = tempfile.mkdtemp()
210 body = {}
211 has_attachments = False
212 for key, ls in connexion.request.files.lists():
213 try:
214 for value in ls:
215 # uploaded files that are required to execute the workflow
216 if key == "workflow_attachment":
217 # guard against maliciously constructed filenames
218 dest = os.path.join(temp_dir, self.secure_path(value.filename))
219 if not os.path.isdir(os.path.dirname(dest)):
220 os.makedirs(os.path.dirname(dest))
221 self.log_for_run(run_id, f"Staging attachment '{value.filename}' to '{dest}'")
222 value.save(dest)
223 has_attachments = True
224 body[key] = f"file://{temp_dir}" # Reference to temp working dir.
225
226 elif key in ("workflow_params", "tags", "workflow_engine_parameters"):
227 content = value.read()
228 body[key] = json.loads(content.decode("utf-8"))
229 else:
230 body[key] = value.read().decode()
231 except Exception as e:
232 raise MalformedRequestException(f"Error reading parameter '{key}': {e}")
233
234 for key, ls in connexion.request.form.lists():
235 try:
236 for value in ls:
237 if not value:
238 continue
239 if key in ("workflow_params", "tags", "workflow_engine_parameters"):
240 body[key] = json.loads(value)
241 else:
242 body[key] = value
243 except Exception as e:
244 raise MalformedRequestException(f"Error reading parameter '{key}': {e}")
245
246 if "workflow_url" in body:
247 url, ref = urldefrag(body["workflow_url"])
248 if ":" not in url:
249 if not has_attachments:
250 raise MalformedRequestException("Relative 'workflow_url' but missing 'workflow_attachment'")
251 body["workflow_url"] = self.secure_path(url) # keep this relative
252 if ref:
253 # append "#ref" after the url
254 body["workflow_url"] += "#" + self.secure_path(ref)
255 self.log_for_run(run_id, "Using workflow_url '%s'" % body.get("workflow_url"))
256 else:
257 raise MalformedRequestException("Missing 'workflow_url' in submission")
258 if "workflow_params" not in body:
259 raise MalformedRequestException("Missing 'workflow_params' in submission")
260
261 return temp_dir, body
262
[end of src/toil/server/wes/abstract_backend.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/toil/server/wes/abstract_backend.py b/src/toil/server/wes/abstract_backend.py
--- a/src/toil/server/wes/abstract_backend.py
+++ b/src/toil/server/wes/abstract_backend.py
@@ -9,7 +9,7 @@
from urllib.parse import urldefrag
import connexion # type: ignore
-from werkzeug.utils import secure_filename # type: ignore
+from werkzeug.utils import secure_filename
logger = logging.getLogger(__name__)
|
{"golden_diff": "diff --git a/src/toil/server/wes/abstract_backend.py b/src/toil/server/wes/abstract_backend.py\n--- a/src/toil/server/wes/abstract_backend.py\n+++ b/src/toil/server/wes/abstract_backend.py\n@@ -9,7 +9,7 @@\n from urllib.parse import urldefrag\n \n import connexion # type: ignore\n-from werkzeug.utils import secure_filename # type: ignore\n+from werkzeug.utils import secure_filename\n \n logger = logging.getLogger(__name__)\n", "issue": "We're getting the new Werkzeug without asking for it\nCI for the merge https://github.com/DataBiosphere/toil/commit/0e256d63cb974a87b8f6b807bf7d23bc9a12fb76 failed at the lint stage, because the merge commit ends up installing a different Werkzeug than the PR's test run did, and the new one has type hints, which upsets MyPy because we now have an unused ignore.\n\nThis is because the `connexion` devs finally got access to the `connexion` PyPI package again, and published the current release there. So we started picking up connexion 2.10 instead of 2.5, which is now compatible with Flask 2. So we started installing Flask 2 and Werkzeug 2.\n\nIf we're going to import out of Werkzeug, we need to depend on a particular major version of it, so it can't be changed put from under us by pip.\n\n\u2506Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-1130)\n\u2506friendlyId: TOIL-1130\n\n", "before_files": [{"content": "# Modified from: https://github.com/common-workflow-language/workflow-service\nimport functools\nimport json\nimport os\nimport logging\nimport tempfile\nfrom abc import abstractmethod\nfrom typing import Optional, List, Dict, Any, Tuple, Callable\nfrom urllib.parse import urldefrag\n\nimport connexion # type: ignore\nfrom werkzeug.utils import secure_filename # type: ignore\n\nlogger = logging.getLogger(__name__)\n\n\nclass VersionNotImplementedException(Exception):\n \"\"\"\n Raised when the requested workflow version is not implemented.\n \"\"\"\n def __init__(self,\n wf_type: str, version: Optional[str] = None, supported_versions: Optional[List[str]] = None) -> None:\n if version:\n message = (\"workflow_type '{}' requires 'workflow_type_version' to be one of '{}'. 
\"\n \"Got '{}' instead.\".format(wf_type, str(supported_versions), version))\n else:\n message = f\"workflow_type '{wf_type}' is not supported.\"\n\n super(VersionNotImplementedException, self).__init__(message)\n\n\nclass MalformedRequestException(Exception):\n \"\"\"\n Raised when the request is malformed.\n \"\"\"\n def __init__(self, message: str) -> None:\n super(MalformedRequestException, self).__init__(message)\n\n\nclass WorkflowNotFoundException(Exception):\n \"\"\"\n Raised when the requested run ID is not found.\n \"\"\"\n def __init__(self) -> None:\n super(WorkflowNotFoundException, self).__init__(\"The requested workflow run wasn't found.\")\n\n\nclass WorkflowConflictException(Exception):\n \"\"\"\n Raised when the requested workflow is not in the expected state.\n \"\"\"\n def __init__(self, run_id: str):\n super(WorkflowConflictException, self).__init__(f\"Workflow {run_id} exists when it shouldn't.\")\n\n\nclass OperationForbidden(Exception):\n \"\"\"\n Raised when the request is forbidden.\n \"\"\"\n def __init__(self, message: str) -> None:\n super(OperationForbidden, self).__init__(message)\n\n\nclass WorkflowExecutionException(Exception):\n \"\"\"\n Raised when an internal error occurred during the execution of the workflow.\n \"\"\"\n def __init__(self, message: str) -> None:\n super(WorkflowExecutionException, self).__init__(message)\n\n\ndef handle_errors(func: Callable[..., Any]) -> Callable[..., Any]:\n \"\"\"\n This decorator catches errors from the wrapped function and returns a JSON\n formatted error message with the appropriate status code defined by the\n GA4GH WES spec.\n \"\"\"\n\n def error(msg: Any, code: int = 500) -> Tuple[Dict[str, Any], int]:\n logger.warning(f\"Exception raised when calling '{func.__name__}()':\", exc_info=True)\n return {\"msg\": str(msg), \"status_code\": code}, code\n\n @functools.wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n try:\n return func(*args, **kwargs)\n except MalformedRequestException as e:\n return error(e, code=400)\n except VersionNotImplementedException as e:\n return error(e, code=400)\n except OperationForbidden as e:\n return error(e, code=403)\n except (FileNotFoundError, WorkflowNotFoundException) as e:\n return error(e, code=404)\n except WorkflowConflictException as e:\n return error(e, code=400)\n except WorkflowExecutionException as e:\n return error(e, code=500)\n except Exception as e:\n return error(e, code=500)\n\n return wrapper\n\n\nclass WESBackend:\n \"\"\"\n A class to represent a GA4GH Workflow Execution Service (WES) API backend.\n Intended to be inherited. Subclasses should implement all abstract methods\n to handle user requests when they hit different endpoints.\n \"\"\"\n\n def __init__(self, options: List[str]):\n \"\"\"\n :param options: A list of default engine options to use when executing\n a workflow. 
Example options:\n [\"--logLevel=CRITICAL\",\"--workDir=/path/to/dir\",\n \"--tag=Name=default\", \"--tag=Owner=shared\", ...]\n \"\"\"\n self.options = options or []\n\n def resolve_operation_id(self, operation_id: str) -> Any:\n \"\"\"\n Map an operationId defined in the OpenAPI or swagger yaml file to a\n function.\n\n :param operation_id: The operation ID defined in the specification.\n :returns: A function that should be called when the given endpoint is\n reached.\n \"\"\"\n return getattr(self, operation_id.split(\".\")[-1])\n\n @abstractmethod\n def get_service_info(self) -> Dict[str, Any]:\n \"\"\"\n Get information about the Workflow Execution Service.\n\n GET /service-info\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n def list_runs(self, page_size: Optional[int] = None, page_token: Optional[str] = None) -> Dict[str, Any]:\n \"\"\"\n List the workflow runs.\n\n GET /runs\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n def run_workflow(self) -> Dict[str, str]:\n \"\"\"\n Run a workflow. This endpoint creates a new workflow run and returns\n a `RunId` to monitor its progress.\n\n POST /runs\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n def get_run_log(self, run_id: str) -> Dict[str, Any]:\n \"\"\"\n Get detailed info about a workflow run.\n\n GET /runs/{run_id}\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n def cancel_run(self, run_id: str) -> Dict[str, str]:\n \"\"\"\n Cancel a running workflow.\n\n POST /runs/{run_id}/cancel\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n def get_run_status(self, run_id: str) -> Dict[str, str]:\n \"\"\"\n Get quick status info about a workflow run, returning a simple result\n with the overall state of the workflow run.\n\n GET /runs/{run_id}/status\n \"\"\"\n raise NotImplementedError\n\n @staticmethod\n def log_for_run(run_id: Optional[str], message: str) -> None:\n if run_id:\n logging.info(\"Workflow %s: %s\", run_id, message)\n else:\n logging.info(message)\n\n @staticmethod\n def secure_path(path: str) -> str:\n return os.path.join(*[str(secure_filename(p)) for p in path.split(\"/\") if p not in (\"\", \".\", \"..\")])\n\n def collect_attachments(self, run_id: Optional[str], temp_dir: Optional[str]) -> Tuple[str, Dict[str, Any]]:\n \"\"\"\n Collect attachments from the current request by staging uploaded files\n to temp_dir, and return the temp_dir and parsed body of the request.\n\n :param run_id: The run ID for logging.\n :param temp_dir: The directory where uploaded files should be staged.\n If None, a temporary directory is created.\n \"\"\"\n if not temp_dir:\n temp_dir = tempfile.mkdtemp()\n body = {}\n has_attachments = False\n for key, ls in connexion.request.files.lists():\n try:\n for value in ls:\n # uploaded files that are required to execute the workflow\n if key == \"workflow_attachment\":\n # guard against maliciously constructed filenames\n dest = os.path.join(temp_dir, self.secure_path(value.filename))\n if not os.path.isdir(os.path.dirname(dest)):\n os.makedirs(os.path.dirname(dest))\n self.log_for_run(run_id, f\"Staging attachment '{value.filename}' to '{dest}'\")\n value.save(dest)\n has_attachments = True\n body[key] = f\"file://{temp_dir}\" # Reference to temp working dir.\n\n elif key in (\"workflow_params\", \"tags\", \"workflow_engine_parameters\"):\n content = value.read()\n body[key] = json.loads(content.decode(\"utf-8\"))\n else:\n body[key] = value.read().decode()\n except Exception as e:\n raise MalformedRequestException(f\"Error reading parameter 
'{key}': {e}\")\n\n for key, ls in connexion.request.form.lists():\n try:\n for value in ls:\n if not value:\n continue\n if key in (\"workflow_params\", \"tags\", \"workflow_engine_parameters\"):\n body[key] = json.loads(value)\n else:\n body[key] = value\n except Exception as e:\n raise MalformedRequestException(f\"Error reading parameter '{key}': {e}\")\n\n if \"workflow_url\" in body:\n url, ref = urldefrag(body[\"workflow_url\"])\n if \":\" not in url:\n if not has_attachments:\n raise MalformedRequestException(\"Relative 'workflow_url' but missing 'workflow_attachment'\")\n body[\"workflow_url\"] = self.secure_path(url) # keep this relative\n if ref:\n # append \"#ref\" after the url\n body[\"workflow_url\"] += \"#\" + self.secure_path(ref)\n self.log_for_run(run_id, \"Using workflow_url '%s'\" % body.get(\"workflow_url\"))\n else:\n raise MalformedRequestException(\"Missing 'workflow_url' in submission\")\n if \"workflow_params\" not in body:\n raise MalformedRequestException(\"Missing 'workflow_params' in submission\")\n\n return temp_dir, body\n", "path": "src/toil/server/wes/abstract_backend.py"}]}
| 3,497 | 107 |
gh_patches_debug_56475
|
rasdani/github-patches
|
git_diff
|
mantl__mantl-1028
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
upgrade path from 0.5.1 to 0.6
0.6 will include updated versions of marathon (0.13), mesos (0.25), and consul (0.6.3). We want to support an upgrade of an existing 0.5.1 cluster.
- [x] mesos/marathon: https://github.com/CiscoCloud/microservices-infrastructure/blob/master/playbooks/upgrade-mesos-marathon.yml
- [ ] consul upgrade
- [ ] vault (depending if https://github.com/CiscoCloud/microservices-infrastructure/tree/feature/vault-ssl makes it to this release)
- [ ] document upgrade process
</issue>
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Microservices Infrastructure documentation build configuration file, created by
4 # sphinx-quickstart on Wed Feb 4 06:59:14 2015.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import sys
16 import os
17
18 # If extensions (or modules to document with autodoc) are in another directory,
19 # add these directories to sys.path here. If the directory is relative to the
20 # documentation root, use os.path.abspath to make it absolute, like shown here.
21 #sys.path.insert(0, os.path.abspath('.'))
22
23 # -- General configuration ------------------------------------------------
24
25 # If your documentation needs a minimal Sphinx version, state it here.
26 #needs_sphinx = '1.0'
27
28 # Add any Sphinx extension module names here, as strings. They can be
29 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
30 # ones.
31 extensions = [
32 'sphinx.ext.autodoc',
33 'sphinx.ext.intersphinx',
34 'sphinx.ext.viewcode',
35 'sphinx.ext.todo',
36 ]
37
38 # Add any paths that contain templates here, relative to this directory.
39 templates_path = ['_templates']
40
41 # The suffix of source filenames.
42 source_suffix = '.rst'
43
44 # The encoding of source files.
45 #source_encoding = 'utf-8-sig'
46
47 # The master toctree document.
48 master_doc = 'index'
49
50 # General information about the project.
51 project = u'Microservices Infrastructure'
52 copyright = u'2015, Cisco Systems, Incorporated'
53
54 # The version info for the project you're documenting, acts as replacement for
55 # |version| and |release|, also used in various other places throughout the
56 # built documents.
57 #
58 # The short X.Y version.
59 version = '0.3'
60 # The full version, including alpha/beta/rc tags.
61 release = '0.3'
62
63 # The language for content autogenerated by Sphinx. Refer to documentation
64 # for a list of supported languages.
65 #language = None
66
67 # There are two options for replacing |today|: either, you set today to some
68 # non-false value, then it is used:
69 #today = ''
70 # Else, today_fmt is used as the format for a strftime call.
71 #today_fmt = '%B %d, %Y'
72
73 # List of patterns, relative to source directory, that match files and
74 # directories to ignore when looking for source files.
75 exclude_patterns = ['_build']
76
77 # The reST default role (used for this markup: `text`) to use for all
78 # documents.
79 #default_role = None
80
81 # If true, '()' will be appended to :func: etc. cross-reference text.
82 #add_function_parentheses = True
83
84 # If true, the current module name will be prepended to all description
85 # unit titles (such as .. function::).
86 #add_module_names = True
87
88 # If true, sectionauthor and moduleauthor directives will be shown in the
89 # output. They are ignored by default.
90 #show_authors = False
91
92 # The name of the Pygments (syntax highlighting) style to use.
93 pygments_style = 'sphinx'
94
95 # A list of ignored prefixes for module index sorting.
96 #modindex_common_prefix = []
97
98 # If true, keep warnings as "system message" paragraphs in the built documents.
99 #keep_warnings = False
100
101
102 # -- Options for HTML output ----------------------------------------------
103
104 import alabaster
105
106 # The theme to use for HTML and HTML Help pages. See the documentation for
107 # a list of builtin themes.
108 html_theme = 'alabaster'
109
110 # Theme options are theme-specific and customize the look and feel of a theme
111 # further. For a list of options available for each theme, see the
112 # documentation.
113 extensions += ['alabaster']
114 html_theme_options = {
115 'github_user': 'ciscocloud',
116 'github_repo': 'microservices-infrastructure',
117 'logo': 'cisco.png',
118 'logo_name': True,
119 }
120
121 # Add any paths that contain custom themes here, relative to this directory.
122 html_theme_path = [alabaster.get_path()]
123
124 # The name for this set of Sphinx documents. If None, it defaults to
125 # "<project> v<release> documentation".
126 #html_title = None
127
128 # A shorter title for the navigation bar. Default is the same as html_title.
129 #html_short_title = None
130
131 # The name of an image file (relative to this directory) to place at the top
132 # of the sidebar.
133 # html_logo = None
134
135 # The name of an image file (within the static path) to use as favicon of the
136 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
137 # pixels large.
138 #html_favicon = None
139
140 # Add any paths that contain custom static files (such as style sheets) here,
141 # relative to this directory. They are copied after the builtin static files,
142 # so a file named "default.css" will overwrite the builtin "default.css".
143 html_static_path = ['_static']
144
145 # Add any extra paths that contain custom files (such as robots.txt or
146 # .htaccess) here, relative to this directory. These files are copied
147 # directly to the root of the documentation.
148 #html_extra_path = []
149
150 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
151 # using the given strftime format.
152 html_last_updated_fmt = '%b %d, %Y'
153
154 # If true, SmartyPants will be used to convert quotes and dashes to
155 # typographically correct entities.
156 #html_use_smartypants = True
157
158 # Custom sidebar templates, maps document names to template names.
159 html_sidebars = {
160 '**': [
161 'about.html', 'navigation.html', 'searchbox.html'
162 ]
163 }
164
165 # Additional templates that should be rendered to pages, maps page names to
166 # template names.
167 #html_additional_pages = {}
168
169 # If false, no module index is generated.
170 html_domain_indices = True
171
172 # If false, no index is generated.
173 html_use_index = True
174
175 # If true, the index is split into individual pages for each letter.
176 #html_split_index = False
177
178 # If true, links to the reST sources are added to the pages.
179 html_show_sourcelink = True
180
181 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
182 html_show_sphinx = False
183
184 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
185 #html_show_copyright = True
186
187 # If true, an OpenSearch description file will be output, and all pages will
188 # contain a <link> tag referring to it. The value of this option must be the
189 # base URL from which the finished HTML is served.
190 #html_use_opensearch = ''
191
192 # This is the file name suffix for HTML files (e.g. ".xhtml").
193 #html_file_suffix = None
194
195 # Output file base name for HTML help builder.
196 htmlhelp_basename = 'MicroservicesInfrastructuredoc'
197
198
199 # -- Options for LaTeX output ---------------------------------------------
200
201 latex_elements = {
202 # The paper size ('letterpaper' or 'a4paper').
203 #'papersize': 'letterpaper',
204
205 # The font size ('10pt', '11pt' or '12pt').
206 #'pointsize': '10pt',
207
208 # Additional stuff for the LaTeX preamble.
209 #'preamble': '',
210 }
211
212 # Grouping the document tree into LaTeX files. List of tuples
213 # (source start file, target name, title,
214 # author, documentclass [howto, manual, or own class]).
215 latex_documents = [
216 ('index', 'MicroservicesInfrastructure.tex', u'Microservices Infrastructure Documentation',
217 u'Cisco Systems, Incorporated', 'manual'),
218 ]
219
220 # The name of an image file (relative to this directory) to place at the top of
221 # the title page.
222 #latex_logo = None
223
224 # For "manual" documents, if this is true, then toplevel headings are parts,
225 # not chapters.
226 #latex_use_parts = False
227
228 # If true, show page references after internal links.
229 #latex_show_pagerefs = False
230
231 # If true, show URL addresses after external links.
232 latex_show_urls = 'footnote'
233
234 # Documents to append as an appendix to all manuals.
235 #latex_appendices = []
236
237 # If false, no module index is generated.
238 #latex_domain_indices = True
239
240
241 # -- Options for manual page output ---------------------------------------
242
243 # One entry per manual page. List of tuples
244 # (source start file, name, description, authors, manual section).
245 man_pages = [
246 ('index', 'microservicesinfrastructure', u'Microservices Infrastructure Documentation',
247 [u'Cisco Systems, Incorporated'], 1)
248 ]
249
250 # If true, show URL addresses after external links.
251 #man_show_urls = False
252
253
254 # -- Options for Texinfo output -------------------------------------------
255
256 # Grouping the document tree into Texinfo files. List of tuples
257 # (source start file, target name, title, author,
258 # dir menu entry, description, category)
259 texinfo_documents = [
260 ('index', 'MicroservicesInfrastructure', u'Microservices Infrastructure Documentation',
261 u'Cisco Systems, Incorporated', 'MicroservicesInfrastructure', 'One line description of project.',
262 'Miscellaneous'),
263 ]
264
265 # Documents to append as an appendix to all manuals.
266 #texinfo_appendices = []
267
268 # If false, no module index is generated.
269 #texinfo_domain_indices = True
270
271 # How to display URL addresses: 'footnote', 'no', or 'inline'.
272 #texinfo_show_urls = 'footnote'
273
274 # If true, do not generate a @detailmenu in the "Top" node's menu.
275 #texinfo_no_detailmenu = False
276
277
278 # Example configuration for intersphinx: refer to the Python standard library.
279 intersphinx_mapping = {
280 'python': ('http://docs.python.org/', None),
281 'ansible': ('http://docs.ansible.com/', None),
282 }
283
284 # -- Options for todo ext ------------------------------------------------
285 todo_include_todos = os.getenv('INCLUDE_TODOS', '0') == '1' or version != release
286
287 # -- setup ---------------------------------------------------------------
288 def setup(app):
289 from sphinx.util.texescape import tex_replacements
290 tex_replacements.extend([
291 (u'☐', u'[ ]'),
292 (u'☑', u'[x]'),
293 ])
294
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -56,9 +56,9 @@
# built documents.
#
# The short X.Y version.
-version = '0.3'
+version = '0.6'
# The full version, including alpha/beta/rc tags.
-release = '0.3'
+release = '0.6'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -56,9 +56,9 @@\n # built documents.\n #\n # The short X.Y version.\n-version = '0.3'\n+version = '0.6'\n # The full version, including alpha/beta/rc tags.\n-release = '0.3'\n+release = '0.6'\n \n # The language for content autogenerated by Sphinx. Refer to documentation\n # for a list of supported languages.\n", "issue": "upgrade path from 0.5.1 to 0.6\n0.6 will include updated versions of marathon (0.13), mesos (0.25), and consul (0.6.3). We want to support an upgrade of an existing 0.5.1 cluster.\n- [x] mesos/marathon: https://github.com/CiscoCloud/microservices-infrastructure/blob/master/playbooks/upgrade-mesos-marathon.yml\n- [ ] consul upgrade\n- [ ] vault (depending if https://github.com/CiscoCloud/microservices-infrastructure/tree/feature/vault-ssl makes it to this release)\n- [ ] document upgrade process\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Microservices Infrastructure documentation build configuration file, created by\n# sphinx-quickstart on Wed Feb 4 06:59:14 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.todo',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Microservices Infrastructure'\ncopyright = u'2015, Cisco Systems, Incorporated'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.3'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.3'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\nimport alabaster\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nextensions += ['alabaster']\nhtml_theme_options = {\n 'github_user': 'ciscocloud',\n 'github_repo': 'microservices-infrastructure',\n 'logo': 'cisco.png',\n 'logo_name': True,\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = [alabaster.get_path()]\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n '**': [\n 'about.html', 'navigation.html', 'searchbox.html'\n ]\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\nhtml_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\nhtml_show_sphinx = False\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'MicroservicesInfrastructuredoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'MicroservicesInfrastructure.tex', u'Microservices Infrastructure Documentation',\n u'Cisco Systems, Incorporated', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\nlatex_show_urls = 'footnote'\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'microservicesinfrastructure', u'Microservices Infrastructure Documentation',\n [u'Cisco Systems, Incorporated'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'MicroservicesInfrastructure', u'Microservices Infrastructure Documentation',\n u'Cisco Systems, Incorporated', 'MicroservicesInfrastructure', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n 'python': ('http://docs.python.org/', None),\n 'ansible': ('http://docs.ansible.com/', None),\n}\n\n# -- Options for todo ext ------------------------------------------------\ntodo_include_todos = os.getenv('INCLUDE_TODOS', '0') == '1' or version != release\n\n# -- setup ---------------------------------------------------------------\ndef setup(app):\n from sphinx.util.texescape import tex_replacements\n tex_replacements.extend([\n (u'\u2610', u'[ ]'),\n (u'\u2611', u'[x]'),\n ])\n", "path": "docs/conf.py"}]}
| 3,750 | 112 |
gh_patches_debug_29976
|
rasdani/github-patches
|
git_diff
|
opensearch-project__opensearch-build-1275
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add release meta issue template for 1.1.1
Signed-off-by: Peter Zhu <[email protected]>
### Description
Add release meta issue template for 1.1.1
### Issues Resolved
[List any issues this PR will resolve]
### Check List
- [x] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check [here](https://github.com/opensearch-project/OpenSearch/blob/main/CONTRIBUTING.md#developer-certificate-of-origin).
</issue>
<code>
[start of src/ci_workflow/ci_input_manifest.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import logging
8
9 from ci_workflow.ci_check_lists import CiCheckLists
10 from ci_workflow.ci_manifest import CiManifest
11 from ci_workflow.ci_target import CiTarget
12 from manifests.input_manifest import InputManifest
13 from system.temporary_directory import TemporaryDirectory
14
15
16 class CiInputManifest(CiManifest):
17 def __init__(self, file, args):
18 super().__init__(InputManifest.from_file(file), args)
19
20 def __check__(self):
21
22 target = CiTarget(version=self.manifest.build.version, snapshot=self.args.snapshot)
23
24 with TemporaryDirectory(keep=self.args.keep, chdir=True) as work_dir:
25 logging.info(f"Sanity-testing in {work_dir.name}")
26
27 logging.info(f"Sanity testing {self.manifest.build.name}")
28
29 for component in self.manifest.components.select(focus=self.args.component):
30 logging.info(f"Sanity testing {component.name}")
31
32 ci_check_list = CiCheckLists.from_component(component, target)
33 ci_check_list.checkout(work_dir.name)
34 ci_check_list.check()
35 logging.info("Done.")
36
[end of src/ci_workflow/ci_input_manifest.py]
[start of src/ci_workflow/ci_target.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7
8 class CiTarget:
9 version: str
10 snapshot: bool
11
12 def __init__(self, version, snapshot=True):
13 self.version = version
14 self.snapshot = snapshot
15
16 @property
17 def opensearch_version(self):
18 return self.version + "-SNAPSHOT" if self.snapshot else self.version
19
20 @property
21 def component_version(self):
22 # BUG: the 4th digit is dictated by the component, it's not .0, this will break for 1.1.0.1
23 return self.version + ".0-SNAPSHOT" if self.snapshot else f"{self.version}.0"
24
[end of src/ci_workflow/ci_target.py]
[start of src/ci_workflow/ci_check_manifest_component.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import logging
8
9 from build_workflow.build_args import BuildArgs
10 from ci_workflow.ci_check import CiCheckDist
11 from manifests.build_manifest import BuildManifest
12
13
14 class CiCheckManifestComponent(CiCheckDist):
15 class MissingComponentError(Exception):
16 def __init__(self, component, url):
17 super().__init__(f"Missing {component} in {url}.")
18
19 def check(self):
20 for architecture in BuildArgs.SUPPORTED_ARCHITECTURES:
21 url = "/".join([self.component.dist, architecture, "manifest.yml"])
22 self.build_manifest = BuildManifest.from_url(url)
23 if self.component.name in self.build_manifest.components:
24 logging.info(f"Found {self.component.name} in {url}.")
25 else:
26 raise CiCheckManifestComponent.MissingComponentError(self.component.name, url)
27
[end of src/ci_workflow/ci_check_manifest_component.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/ci_workflow/ci_check_manifest_component.py b/src/ci_workflow/ci_check_manifest_component.py
--- a/src/ci_workflow/ci_check_manifest_component.py
+++ b/src/ci_workflow/ci_check_manifest_component.py
@@ -18,7 +18,9 @@
def check(self):
for architecture in BuildArgs.SUPPORTED_ARCHITECTURES:
- url = "/".join([self.component.dist, architecture, "manifest.yml"])
+ # Since we only have 'linux' builds now we hard code it to 'linux'
+ # Once we have all platform builds on S3 we can then add a second loop for 'BuildArgs.SUPPORTED_PLATFORMS'
+ url = "/".join([self.component.dist, "linux", architecture, "builds", self.target.name, "manifest.yml"])
self.build_manifest = BuildManifest.from_url(url)
if self.component.name in self.build_manifest.components:
logging.info(f"Found {self.component.name} in {url}.")
diff --git a/src/ci_workflow/ci_input_manifest.py b/src/ci_workflow/ci_input_manifest.py
--- a/src/ci_workflow/ci_input_manifest.py
+++ b/src/ci_workflow/ci_input_manifest.py
@@ -19,7 +19,7 @@
def __check__(self):
- target = CiTarget(version=self.manifest.build.version, snapshot=self.args.snapshot)
+ target = CiTarget(version=self.manifest.build.version, name=self.manifest.build.filename, snapshot=self.args.snapshot)
with TemporaryDirectory(keep=self.args.keep, chdir=True) as work_dir:
logging.info(f"Sanity-testing in {work_dir.name}")
diff --git a/src/ci_workflow/ci_target.py b/src/ci_workflow/ci_target.py
--- a/src/ci_workflow/ci_target.py
+++ b/src/ci_workflow/ci_target.py
@@ -7,10 +7,12 @@
class CiTarget:
version: str
+ name: str
snapshot: bool
- def __init__(self, version, snapshot=True):
+ def __init__(self, version, name, snapshot=True):
self.version = version
+ self.name = name
self.snapshot = snapshot
@property
|
{"golden_diff": "diff --git a/src/ci_workflow/ci_check_manifest_component.py b/src/ci_workflow/ci_check_manifest_component.py\n--- a/src/ci_workflow/ci_check_manifest_component.py\n+++ b/src/ci_workflow/ci_check_manifest_component.py\n@@ -18,7 +18,9 @@\n \n def check(self):\n for architecture in BuildArgs.SUPPORTED_ARCHITECTURES:\n- url = \"/\".join([self.component.dist, architecture, \"manifest.yml\"])\n+ # Since we only have 'linux' builds now we hard code it to 'linux'\n+ # Once we have all platform builds on S3 we can then add a second loop for 'BuildArgs.SUPPORTED_PLATFORMS'\n+ url = \"/\".join([self.component.dist, \"linux\", architecture, \"builds\", self.target.name, \"manifest.yml\"])\n self.build_manifest = BuildManifest.from_url(url)\n if self.component.name in self.build_manifest.components:\n logging.info(f\"Found {self.component.name} in {url}.\")\ndiff --git a/src/ci_workflow/ci_input_manifest.py b/src/ci_workflow/ci_input_manifest.py\n--- a/src/ci_workflow/ci_input_manifest.py\n+++ b/src/ci_workflow/ci_input_manifest.py\n@@ -19,7 +19,7 @@\n \n def __check__(self):\n \n- target = CiTarget(version=self.manifest.build.version, snapshot=self.args.snapshot)\n+ target = CiTarget(version=self.manifest.build.version, name=self.manifest.build.filename, snapshot=self.args.snapshot)\n \n with TemporaryDirectory(keep=self.args.keep, chdir=True) as work_dir:\n logging.info(f\"Sanity-testing in {work_dir.name}\")\ndiff --git a/src/ci_workflow/ci_target.py b/src/ci_workflow/ci_target.py\n--- a/src/ci_workflow/ci_target.py\n+++ b/src/ci_workflow/ci_target.py\n@@ -7,10 +7,12 @@\n \n class CiTarget:\n version: str\n+ name: str\n snapshot: bool\n \n- def __init__(self, version, snapshot=True):\n+ def __init__(self, version, name, snapshot=True):\n self.version = version\n+ self.name = name\n self.snapshot = snapshot\n \n @property\n", "issue": "Add release meta issue template for 1.1.1\nSigned-off-by: Peter Zhu <[email protected]>\r\n\r\n### Description\r\nAdd release meta issue template for 1.1.1\r\n \r\n### Issues Resolved\r\n[List any issues this PR will resolve]\r\n \r\n### Check List\r\n- [x] Commits are signed per the DCO using --signoff \r\n\r\nBy submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.\r\nFor more information on following Developer Certificate of Origin and signing off your commits, please check [here](https://github.com/opensearch-project/OpenSearch/blob/main/CONTRIBUTING.md#developer-certificate-of-origin).\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\n\nfrom ci_workflow.ci_check_lists import CiCheckLists\nfrom ci_workflow.ci_manifest import CiManifest\nfrom ci_workflow.ci_target import CiTarget\nfrom manifests.input_manifest import InputManifest\nfrom system.temporary_directory import TemporaryDirectory\n\n\nclass CiInputManifest(CiManifest):\n def __init__(self, file, args):\n super().__init__(InputManifest.from_file(file), args)\n\n def __check__(self):\n\n target = CiTarget(version=self.manifest.build.version, snapshot=self.args.snapshot)\n\n with TemporaryDirectory(keep=self.args.keep, chdir=True) as work_dir:\n logging.info(f\"Sanity-testing in {work_dir.name}\")\n\n logging.info(f\"Sanity testing {self.manifest.build.name}\")\n\n for component in 
self.manifest.components.select(focus=self.args.component):\n logging.info(f\"Sanity testing {component.name}\")\n\n ci_check_list = CiCheckLists.from_component(component, target)\n ci_check_list.checkout(work_dir.name)\n ci_check_list.check()\n logging.info(\"Done.\")\n", "path": "src/ci_workflow/ci_input_manifest.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\n\nclass CiTarget:\n version: str\n snapshot: bool\n\n def __init__(self, version, snapshot=True):\n self.version = version\n self.snapshot = snapshot\n\n @property\n def opensearch_version(self):\n return self.version + \"-SNAPSHOT\" if self.snapshot else self.version\n\n @property\n def component_version(self):\n # BUG: the 4th digit is dictated by the component, it's not .0, this will break for 1.1.0.1\n return self.version + \".0-SNAPSHOT\" if self.snapshot else f\"{self.version}.0\"\n", "path": "src/ci_workflow/ci_target.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\n\nfrom build_workflow.build_args import BuildArgs\nfrom ci_workflow.ci_check import CiCheckDist\nfrom manifests.build_manifest import BuildManifest\n\n\nclass CiCheckManifestComponent(CiCheckDist):\n class MissingComponentError(Exception):\n def __init__(self, component, url):\n super().__init__(f\"Missing {component} in {url}.\")\n\n def check(self):\n for architecture in BuildArgs.SUPPORTED_ARCHITECTURES:\n url = \"/\".join([self.component.dist, architecture, \"manifest.yml\"])\n self.build_manifest = BuildManifest.from_url(url)\n if self.component.name in self.build_manifest.components:\n logging.info(f\"Found {self.component.name} in {url}.\")\n else:\n raise CiCheckManifestComponent.MissingComponentError(self.component.name, url)\n", "path": "src/ci_workflow/ci_check_manifest_component.py"}]}
| 1,557 | 489 |
gh_patches_debug_29114
|
rasdani/github-patches
|
git_diff
|
python-trio__trio-228
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Need to fix bpo-29692
Someone recently found some bugs in some subtle `contextlib.contextmanager` corner cases. The fixes are here: https://github.com/python/cpython/pull/949/files ([discussion](https://bugs.python.org/issue29692))
`trio._util.acontextmanager` is a fork of `contextlib.contextmanager`, and has the same bugs. We should import those fixes.
</issue>
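For context, one of the corner cases fixed in that CPython PR, translated to the async form used here — a minimal sketch, not taken from the trio test suite, and assuming `acontextmanager` accepts a native async generator and that `trio.run` drives it. When the body of the `async with` block raises and the generator re-raises a chained `RuntimeError` during cleanup, the pre-fix `__aexit__` logic discards the chained error and lets the original exception escape instead:

```python
import trio
from trio._util import acontextmanager

@acontextmanager
async def woohoo():
    try:
        yield
    except Exception as exc:
        # Cleanup failure that deliberately chains the original exception.
        raise RuntimeError("issue29692: chained") from exc

async def main():
    try:
        async with woohoo():
            raise ZeroDivisionError("boom")
    except BaseException as exc:
        # With the buggy __aexit__, the bare ZeroDivisionError escapes and the
        # chained RuntimeError from the generator is silently dropped; once the
        # CPython fix is imported, the RuntimeError propagates here instead.
        print(type(exc).__name__, exc)

trio.run(main)
```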
<code>
[start of trio/_util.py]
1 # Little utilities we use internally
2
3 import os
4 import sys
5 from functools import wraps
6
7 import async_generator
8
9 # There's a dependency loop here... _core is allowed to use this file (in fact
10 # it's the *only* file in the main trio/ package it's allowed to use), but
11 # UnLock needs yield_briefly so it also has to import _core. Possibly we
12 # should split this file into two: one for true generic low-level utility
13 # code, and one for higher level helpers?
14 from . import _core
15
16 __all__ = ["signal_raise", "aiter_compat", "acontextmanager", "UnLock"]
17
18 # Equivalent to the C function raise(), which Python doesn't wrap
19 if os.name == "nt":
20 # On windows, os.kill exists but is really weird.
21 #
22 # If you give it CTRL_C_EVENT or CTRL_BREAK_EVENT, it tries to deliver
23 # those using GenerateConsoleCtrlEvent. But I found that when I tried
24 # to run my test normally, it would freeze waiting... unless I added
25 # print statements, in which case the test suddenly worked. So I guess
26 # these signals are only delivered if/when you access the console? I
27 # don't really know what was going on there. From reading the
28 # GenerateConsoleCtrlEvent docs I don't know how it worked at all.
29 #
30 # I later spent a bunch of time trying to make GenerateConsoleCtrlEvent
31 # work for creating synthetic control-C events, and... failed
32 # utterly. There are lots of details in the code and comments
33 # removed/added at this commit:
34 # https://github.com/python-trio/trio/commit/95843654173e3e826c34d70a90b369ba6edf2c23
35 #
36 # OTOH, if you pass os.kill any *other* signal number... then CPython
37 # just calls TerminateProcess (wtf).
38 #
39 # So, anyway, os.kill is not so useful for testing purposes. Instead
40 # we use raise():
41 #
42 # https://msdn.microsoft.com/en-us/library/dwwzkt4c.aspx
43 #
44 # Have to import cffi inside the 'if os.name' block because we don't
45 # depend on cffi on non-Windows platforms. (It would be easy to switch
46 # this to ctypes though if we ever remove the cffi dependency.)
47 #
48 # Some more information:
49 # https://bugs.python.org/issue26350
50 #
51 # Anyway, we use this for two things:
52 # - redelivering unhandled signals
53 # - generating synthetic signals for tests
54 # and for both of those purposes, 'raise' works fine.
55 import cffi
56 _ffi = cffi.FFI()
57 _ffi.cdef("int raise(int);")
58 _lib = _ffi.dlopen("api-ms-win-crt-runtime-l1-1-0.dll")
59 signal_raise = getattr(_lib, "raise")
60 else:
61 def signal_raise(signum):
62 os.kill(os.getpid(), signum)
63
64
65 # Decorator to handle the change to __aiter__ in 3.5.2
66 def aiter_compat(aiter_impl):
67 if sys.version_info < (3, 5, 2):
68 @wraps(aiter_impl)
69 async def __aiter__(*args, **kwargs):
70 return aiter_impl(*args, **kwargs)
71 return __aiter__
72 else:
73 return aiter_impl
74
75
76 # Very much derived from the one in contextlib, by copy/pasting and then
77 # asyncifying everything. (Also I dropped the obscure support for using
78 # context managers as function decorators. It could be re-added; I just
79 # couldn't be bothered.)
80 # So this is a derivative work licensed under the PSF License, which requires
81 # the following notice:
82 #
83 # Copyright © 2001-2017 Python Software Foundation; All Rights Reserved
84 class _AsyncGeneratorContextManager:
85 def __init__(self, func, args, kwds):
86 self._func_name = func.__name__
87 self._agen = func(*args, **kwds).__aiter__()
88
89 async def __aenter__(self):
90 if sys.version_info < (3, 5, 2):
91 self._agen = await self._agen
92 try:
93 return await self._agen.asend(None)
94 except StopAsyncIteration:
95 raise RuntimeError("async generator didn't yield") from None
96
97 async def __aexit__(self, type, value, traceback):
98 if type is None:
99 try:
100 await self._agen.asend(None)
101 except StopAsyncIteration:
102 return
103 else:
104 raise RuntimeError("async generator didn't stop")
105 else:
106 if value is None:
107 # Need to force instantiation so we can reliably
108 # tell if we get the same exception back
109 value = type()
110 try:
111 await self._agen.athrow(type, value, traceback)
112 raise RuntimeError("async generator didn't stop after athrow()")
113 except StopAsyncIteration as exc:
114 # Suppress StopIteration *unless* it's the same exception that
115 # was passed to throw(). This prevents a StopIteration
116 # raised inside the "with" statement from being suppressed.
117 return (exc is not value)
118 except RuntimeError as exc:
119 # Don't re-raise the passed in exception. (issue27112)
120 if exc is value:
121 return False
122 # Likewise, avoid suppressing if a StopIteration exception
123 # was passed to throw() and later wrapped into a RuntimeError
124 # (see PEP 479).
125 if exc.__cause__ is value:
126 return False
127 raise
128 except:
129 # only re-raise if it's *not* the exception that was
130 # passed to throw(), because __exit__() must not raise
131 # an exception unless __exit__() itself failed. But throw()
132 # has to raise the exception to signal propagation, so this
133 # fixes the impedance mismatch between the throw() protocol
134 # and the __exit__() protocol.
135 #
136 if sys.exc_info()[1] is not value:
137 raise
138
139 def __enter__(self):
140 raise RuntimeError("use 'async with {func_name}(...)', not 'with {func_name}(...)'".format(func_name=self._func_name))
141
142 def __exit__(self): # pragma: no cover
143 assert False, """Never called, but should be defined"""
144
145
146 def acontextmanager(func):
147 """Like @contextmanager, but async."""
148 if not async_generator.isasyncgenfunction(func):
149 raise TypeError(
150 "must be an async generator (native or from async_generator; "
151 "if using @async_generator then @acontextmanager must be on top.")
152 @wraps(func)
153 def helper(*args, **kwds):
154 return _AsyncGeneratorContextManager(func, args, kwds)
155 # A hint for sphinxcontrib-trio:
156 helper.__returns_acontextmanager__ = True
157 return helper
158
159
160 class _UnLockSync:
161 def __init__(self, exc, *args):
162 self._exc = exc
163 self._args = args
164 self._held = False
165
166 def __enter__(self):
167 if self._held:
168 raise self._exc(*self._args)
169 else:
170 self._held = True
171
172 def __exit__(self, *args):
173 self._held = False
174
175
176 class UnLock:
177 """An unnecessary lock.
178
179 Use as an async context manager; if two tasks enter it at the same
180 time then the second one raises an error. You can use it when there are
181 two pieces of code that *would* collide and need a lock if they ever were
182 called at the same time, but that should never happen.
183
184 We use this in particular for things like, making sure that two different
185 tasks don't call sendall simultaneously on the same stream.
186
187 This executes a checkpoint on entry. That's the only reason it's async.
188
189 """
190 def __init__(self, exc, *args):
191 self.sync = _UnLockSync(exc, *args)
192
193 async def __aenter__(self):
194 await _core.yield_briefly()
195 return self.sync.__enter__()
196
197 async def __aexit__(self, *args):
198 return self.sync.__exit__()
199
200
201 def async_wraps(cls, wrapped_cls, attr_name):
202 """Similar to wraps, but for async wrappers of non-async functions.
203
204 """
205 def decorator(func):
206 func.__name__ = attr_name
207 func.__qualname__ = '.'.join((cls.__qualname__,
208 attr_name))
209
210 func.__doc__ = """Like :meth:`~{}.{}.{}`, but async.
211
212 """.format(wrapped_cls.__module__,
213 wrapped_cls.__qualname__,
214 attr_name)
215
216 return func
217 return decorator
218
[end of trio/_util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/trio/_util.py b/trio/_util.py
--- a/trio/_util.py
+++ b/trio/_util.py
@@ -99,7 +99,7 @@
try:
await self._agen.asend(None)
except StopAsyncIteration:
- return
+ return False
else:
raise RuntimeError("async generator didn't stop")
else:
@@ -122,7 +122,7 @@
# Likewise, avoid suppressing if a StopIteration exception
# was passed to throw() and later wrapped into a RuntimeError
# (see PEP 479).
- if exc.__cause__ is value:
+ if isinstance(value, (StopIteration, StopAsyncIteration)) and exc.__cause__ is value:
return False
raise
except:
@@ -133,8 +133,9 @@
# fixes the impedance mismatch between the throw() protocol
# and the __exit__() protocol.
#
- if sys.exc_info()[1] is not value:
- raise
+ if sys.exc_info()[1] is value:
+ return False
+ raise
def __enter__(self):
raise RuntimeError("use 'async with {func_name}(...)', not 'with {func_name}(...)'".format(func_name=self._func_name))
|
{"golden_diff": "diff --git a/trio/_util.py b/trio/_util.py\n--- a/trio/_util.py\n+++ b/trio/_util.py\n@@ -99,7 +99,7 @@\n try:\n await self._agen.asend(None)\n except StopAsyncIteration:\n- return\n+ return False\n else:\n raise RuntimeError(\"async generator didn't stop\")\n else:\n@@ -122,7 +122,7 @@\n # Likewise, avoid suppressing if a StopIteration exception\n # was passed to throw() and later wrapped into a RuntimeError\n # (see PEP 479).\n- if exc.__cause__ is value:\n+ if isinstance(value, (StopIteration, StopAsyncIteration)) and exc.__cause__ is value:\n return False\n raise\n except:\n@@ -133,8 +133,9 @@\n # fixes the impedance mismatch between the throw() protocol\n # and the __exit__() protocol.\n #\n- if sys.exc_info()[1] is not value:\n- raise\n+ if sys.exc_info()[1] is value:\n+ return False\n+ raise\n \n def __enter__(self):\n raise RuntimeError(\"use 'async with {func_name}(...)', not 'with {func_name}(...)'\".format(func_name=self._func_name))\n", "issue": "Need to fix bpo-29692\nSomeone recently found some bugs in some subtle `contextlib.contextmanager` corner cases. The fixes are here: https://github.com/python/cpython/pull/949/files ([discussion](https://bugs.python.org/issue29692))\r\n\r\n`trio._util.acontextmanager` is a fork of `contextlib.contextmanager`, and has the same bugs. We should import those fixes.\n", "before_files": [{"content": "# Little utilities we use internally\n\nimport os\nimport sys\nfrom functools import wraps\n\nimport async_generator\n\n# There's a dependency loop here... _core is allowed to use this file (in fact\n# it's the *only* file in the main trio/ package it's allowed to use), but\n# UnLock needs yield_briefly so it also has to import _core. Possibly we\n# should split this file into two: one for true generic low-level utility\n# code, and one for higher level helpers?\nfrom . import _core\n\n__all__ = [\"signal_raise\", \"aiter_compat\", \"acontextmanager\", \"UnLock\"]\n\n# Equivalent to the C function raise(), which Python doesn't wrap\nif os.name == \"nt\":\n # On windows, os.kill exists but is really weird.\n #\n # If you give it CTRL_C_EVENT or CTRL_BREAK_EVENT, it tries to deliver\n # those using GenerateConsoleCtrlEvent. But I found that when I tried\n # to run my test normally, it would freeze waiting... unless I added\n # print statements, in which case the test suddenly worked. So I guess\n # these signals are only delivered if/when you access the console? I\n # don't really know what was going on there. From reading the\n # GenerateConsoleCtrlEvent docs I don't know how it worked at all.\n #\n # I later spent a bunch of time trying to make GenerateConsoleCtrlEvent\n # work for creating synthetic control-C events, and... failed\n # utterly. There are lots of details in the code and comments\n # removed/added at this commit:\n # https://github.com/python-trio/trio/commit/95843654173e3e826c34d70a90b369ba6edf2c23\n #\n # OTOH, if you pass os.kill any *other* signal number... then CPython\n # just calls TerminateProcess (wtf).\n #\n # So, anyway, os.kill is not so useful for testing purposes. Instead\n # we use raise():\n #\n # https://msdn.microsoft.com/en-us/library/dwwzkt4c.aspx\n #\n # Have to import cffi inside the 'if os.name' block because we don't\n # depend on cffi on non-Windows platforms. 
(It would be easy to switch\n # this to ctypes though if we ever remove the cffi dependency.)\n #\n # Some more information:\n # https://bugs.python.org/issue26350\n #\n # Anyway, we use this for two things:\n # - redelivering unhandled signals\n # - generating synthetic signals for tests\n # and for both of those purposes, 'raise' works fine.\n import cffi\n _ffi = cffi.FFI()\n _ffi.cdef(\"int raise(int);\")\n _lib = _ffi.dlopen(\"api-ms-win-crt-runtime-l1-1-0.dll\")\n signal_raise = getattr(_lib, \"raise\")\nelse:\n def signal_raise(signum):\n os.kill(os.getpid(), signum)\n\n\n# Decorator to handle the change to __aiter__ in 3.5.2\ndef aiter_compat(aiter_impl):\n if sys.version_info < (3, 5, 2):\n @wraps(aiter_impl)\n async def __aiter__(*args, **kwargs):\n return aiter_impl(*args, **kwargs)\n return __aiter__\n else:\n return aiter_impl\n\n\n# Very much derived from the one in contextlib, by copy/pasting and then\n# asyncifying everything. (Also I dropped the obscure support for using\n# context managers as function decorators. It could be re-added; I just\n# couldn't be bothered.)\n# So this is a derivative work licensed under the PSF License, which requires\n# the following notice:\n#\n# Copyright \u00a9 2001-2017 Python Software Foundation; All Rights Reserved\nclass _AsyncGeneratorContextManager:\n def __init__(self, func, args, kwds):\n self._func_name = func.__name__\n self._agen = func(*args, **kwds).__aiter__()\n\n async def __aenter__(self):\n if sys.version_info < (3, 5, 2):\n self._agen = await self._agen\n try:\n return await self._agen.asend(None)\n except StopAsyncIteration:\n raise RuntimeError(\"async generator didn't yield\") from None\n\n async def __aexit__(self, type, value, traceback):\n if type is None:\n try:\n await self._agen.asend(None)\n except StopAsyncIteration:\n return\n else:\n raise RuntimeError(\"async generator didn't stop\")\n else:\n if value is None:\n # Need to force instantiation so we can reliably\n # tell if we get the same exception back\n value = type()\n try:\n await self._agen.athrow(type, value, traceback)\n raise RuntimeError(\"async generator didn't stop after athrow()\")\n except StopAsyncIteration as exc:\n # Suppress StopIteration *unless* it's the same exception that\n # was passed to throw(). This prevents a StopIteration\n # raised inside the \"with\" statement from being suppressed.\n return (exc is not value)\n except RuntimeError as exc:\n # Don't re-raise the passed in exception. (issue27112)\n if exc is value:\n return False\n # Likewise, avoid suppressing if a StopIteration exception\n # was passed to throw() and later wrapped into a RuntimeError\n # (see PEP 479).\n if exc.__cause__ is value:\n return False\n raise\n except:\n # only re-raise if it's *not* the exception that was\n # passed to throw(), because __exit__() must not raise\n # an exception unless __exit__() itself failed. 
But throw()\n # has to raise the exception to signal propagation, so this\n # fixes the impedance mismatch between the throw() protocol\n # and the __exit__() protocol.\n #\n if sys.exc_info()[1] is not value:\n raise\n\n def __enter__(self):\n raise RuntimeError(\"use 'async with {func_name}(...)', not 'with {func_name}(...)'\".format(func_name=self._func_name))\n\n def __exit__(self): # pragma: no cover\n assert False, \"\"\"Never called, but should be defined\"\"\"\n\n\ndef acontextmanager(func):\n \"\"\"Like @contextmanager, but async.\"\"\"\n if not async_generator.isasyncgenfunction(func):\n raise TypeError(\n \"must be an async generator (native or from async_generator; \"\n \"if using @async_generator then @acontextmanager must be on top.\")\n @wraps(func)\n def helper(*args, **kwds):\n return _AsyncGeneratorContextManager(func, args, kwds)\n # A hint for sphinxcontrib-trio:\n helper.__returns_acontextmanager__ = True\n return helper\n\n\nclass _UnLockSync:\n def __init__(self, exc, *args):\n self._exc = exc\n self._args = args\n self._held = False\n\n def __enter__(self):\n if self._held:\n raise self._exc(*self._args)\n else:\n self._held = True\n\n def __exit__(self, *args):\n self._held = False\n\n\nclass UnLock:\n \"\"\"An unnecessary lock.\n\n Use as an async context manager; if two tasks enter it at the same\n time then the second one raises an error. You can use it when there are\n two pieces of code that *would* collide and need a lock if they ever were\n called at the same time, but that should never happen.\n\n We use this in particular for things like, making sure that two different\n tasks don't call sendall simultaneously on the same stream.\n\n This executes a checkpoint on entry. That's the only reason it's async.\n\n \"\"\"\n def __init__(self, exc, *args):\n self.sync = _UnLockSync(exc, *args)\n\n async def __aenter__(self):\n await _core.yield_briefly()\n return self.sync.__enter__()\n\n async def __aexit__(self, *args):\n return self.sync.__exit__()\n\n\ndef async_wraps(cls, wrapped_cls, attr_name):\n \"\"\"Similar to wraps, but for async wrappers of non-async functions.\n\n \"\"\"\n def decorator(func):\n func.__name__ = attr_name\n func.__qualname__ = '.'.join((cls.__qualname__,\n attr_name))\n\n func.__doc__ = \"\"\"Like :meth:`~{}.{}.{}`, but async.\n\n \"\"\".format(wrapped_cls.__module__,\n wrapped_cls.__qualname__,\n attr_name)\n\n return func\n return decorator\n", "path": "trio/_util.py"}]}
| 3,152 | 294 |
gh_patches_debug_16670
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-2917
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Restore the Registry(*args, **kw) API
In the course of #2891, the Registry API was changed in a backwards-incompatible way. While the author of the change (PR 2893) couldn't imagine anyone using the keywords API, I immediately ran into the incompatibility with a number of customer projects. Since the change back to the old API shouldn't introduce another backwards incompatibility, I suggest reverting that change as soon as possible.
</issue>
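For reference, the kind of construction the passthrough signature is meant to allow — a hypothetical sketch, since the issue does not show the affected customer code; the `bases` keyword used here is simply what `zope.interface`'s `Components.__init__` accepts, not something named in the report:

```python
from pyramid.registry import Registry

# A parent registry to inherit component registrations from.
base = Registry('base')

# With the Registry(*args, **kw) passthrough restored, extra arguments are
# forwarded to Components.__init__, so keyword use like this keeps working;
# with the current keyword-less signature it raises TypeError.
registry = Registry('myapp', bases=(base,))
```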
<code>
[start of pyramid/registry.py]
1 import operator
2 import threading
3
4 from zope.interface import implementer
5
6 from zope.interface.registry import Components
7
8 from pyramid.compat import text_
9 from pyramid.decorator import reify
10
11 from pyramid.interfaces import (
12 IIntrospector,
13 IIntrospectable,
14 ISettings,
15 )
16
17 from pyramid.path import (
18 CALLER_PACKAGE,
19 caller_package,
20 )
21
22 empty = text_('')
23
24 class Registry(Components, dict):
25 """ A registry object is an :term:`application registry`.
26
27 It is used by the framework itself to perform mappings of URLs to view
28 callables, as well as servicing other various framework duties. A registry
29 has its own internal API, but this API is rarely used by Pyramid
30 application developers (it's usually only used by developers of the
31 Pyramid framework and Pyramid addons). But it has a number of attributes
32 that may be useful to application developers within application code,
33 such as ``settings``, which is a dictionary containing application
34 deployment settings.
35
36 For information about the purpose and usage of the application registry,
37 see :ref:`zca_chapter`.
38
39 The registry may be used both as an :class:`pyramid.interfaces.IDict` and
40 as a Zope component registry.
41 These two ways of storing configuration are independent.
42 Applications will tend to prefer to store information as key-values
43 whereas addons may prefer to use the component registry to avoid naming
44 conflicts and to provide more complex lookup mechanisms.
45
46 The application registry is usually accessed as ``request.registry`` in
47 application code. By the time a registry is used to handle requests it
48 should be considered frozen and read-only. Any changes to its internal
49 state should be done with caution and concern for thread-safety.
50
51 """
52
53 # for optimization purposes, if no listeners are listening, don't try
54 # to notify them
55 has_listeners = False
56
57 _settings = None
58
59 def __init__(self, package_name=CALLER_PACKAGE):
60 # add a registry-instance-specific lock, which is used when the lookup
61 # cache is mutated
62 self._lock = threading.Lock()
63 # add a view lookup cache
64 self._clear_view_lookup_cache()
65 if package_name is CALLER_PACKAGE:
66 package_name = caller_package().__name__
67 Components.__init__(self, package_name)
68 dict.__init__(self)
69
70 def _clear_view_lookup_cache(self):
71 self._view_lookup_cache = {}
72
73 def __nonzero__(self):
74 # defeat bool determination via dict.__len__
75 return True
76
77 @reify
78 def package_name(self):
79 return self.__name__
80
81 def registerSubscriptionAdapter(self, *arg, **kw):
82 result = Components.registerSubscriptionAdapter(self, *arg, **kw)
83 self.has_listeners = True
84 return result
85
86 def registerSelfAdapter(self, required=None, provided=None, name=empty,
87 info=empty, event=True):
88 # registerAdapter analogue which always returns the object itself
89 # when required is matched
90 return self.registerAdapter(lambda x: x, required=required,
91 provided=provided, name=name,
92 info=info, event=event)
93
94 def queryAdapterOrSelf(self, object, interface, default=None):
95 # queryAdapter analogue which returns the object if it implements
96 # the interface, otherwise it will return an adaptation to the
97 # interface
98 if not interface.providedBy(object):
99 return self.queryAdapter(object, interface, default=default)
100 return object
101
102 def registerHandler(self, *arg, **kw):
103 result = Components.registerHandler(self, *arg, **kw)
104 self.has_listeners = True
105 return result
106
107 def notify(self, *events):
108 if self.has_listeners:
109 # iterating over subscribers assures they get executed
110 [ _ for _ in self.subscribers(events, None) ]
111
112 # backwards compatibility for code that wants to look up a settings
113 # object via ``registry.getUtility(ISettings)``
114 def _get_settings(self):
115 return self._settings
116
117 def _set_settings(self, settings):
118 self.registerUtility(settings, ISettings)
119 self._settings = settings
120
121 settings = property(_get_settings, _set_settings)
122
123 @implementer(IIntrospector)
124 class Introspector(object):
125 def __init__(self):
126 self._refs = {}
127 self._categories = {}
128 self._counter = 0
129
130 def add(self, intr):
131 category = self._categories.setdefault(intr.category_name, {})
132 category[intr.discriminator] = intr
133 category[intr.discriminator_hash] = intr
134 intr.order = self._counter
135 self._counter += 1
136
137 def get(self, category_name, discriminator, default=None):
138 category = self._categories.setdefault(category_name, {})
139 intr = category.get(discriminator, default)
140 return intr
141
142 def get_category(self, category_name, default=None, sort_key=None):
143 if sort_key is None:
144 sort_key = operator.attrgetter('order')
145 category = self._categories.get(category_name)
146 if category is None:
147 return default
148 values = category.values()
149 values = sorted(set(values), key=sort_key)
150 return [
151 {'introspectable': intr,
152 'related': self.related(intr)}
153 for intr in values
154 ]
155
156 def categorized(self, sort_key=None):
157 L = []
158 for category_name in self.categories():
159 L.append((category_name, self.get_category(category_name,
160 sort_key=sort_key)))
161 return L
162
163 def categories(self):
164 return sorted(self._categories.keys())
165
166 def remove(self, category_name, discriminator):
167 intr = self.get(category_name, discriminator)
168 if intr is None:
169 return
170 L = self._refs.pop(intr, [])
171 for d in L:
172 L2 = self._refs[d]
173 L2.remove(intr)
174 category = self._categories[intr.category_name]
175 del category[intr.discriminator]
176 del category[intr.discriminator_hash]
177
178 def _get_intrs_by_pairs(self, pairs):
179 introspectables = []
180 for pair in pairs:
181 category_name, discriminator = pair
182 intr = self._categories.get(category_name, {}).get(discriminator)
183 if intr is None:
184 raise KeyError((category_name, discriminator))
185 introspectables.append(intr)
186 return introspectables
187
188 def relate(self, *pairs):
189 introspectables = self._get_intrs_by_pairs(pairs)
190 relatable = ((x,y) for x in introspectables for y in introspectables)
191 for x, y in relatable:
192 L = self._refs.setdefault(x, [])
193 if x is not y and y not in L:
194 L.append(y)
195
196 def unrelate(self, *pairs):
197 introspectables = self._get_intrs_by_pairs(pairs)
198 relatable = ((x,y) for x in introspectables for y in introspectables)
199 for x, y in relatable:
200 L = self._refs.get(x, [])
201 if y in L:
202 L.remove(y)
203
204 def related(self, intr):
205 category_name, discriminator = intr.category_name, intr.discriminator
206 intr = self._categories.get(category_name, {}).get(discriminator)
207 if intr is None:
208 raise KeyError((category_name, discriminator))
209 return self._refs.get(intr, [])
210
211 @implementer(IIntrospectable)
212 class Introspectable(dict):
213
214 order = 0 # mutated by introspector.add
215 action_info = None # mutated by self.register
216
217 def __init__(self, category_name, discriminator, title, type_name):
218 self.category_name = category_name
219 self.discriminator = discriminator
220 self.title = title
221 self.type_name = type_name
222 self._relations = []
223
224 def relate(self, category_name, discriminator):
225 self._relations.append((True, category_name, discriminator))
226
227 def unrelate(self, category_name, discriminator):
228 self._relations.append((False, category_name, discriminator))
229
230 def _assert_resolved(self):
231 assert undefer(self.discriminator) is self.discriminator
232
233 @property
234 def discriminator_hash(self):
235 self._assert_resolved()
236 return hash(self.discriminator)
237
238 def __hash__(self):
239 self._assert_resolved()
240 return hash((self.category_name,) + (self.discriminator,))
241
242 def __repr__(self):
243 self._assert_resolved()
244 return '<%s category %r, discriminator %r>' % (self.__class__.__name__,
245 self.category_name,
246 self.discriminator)
247
248 def __nonzero__(self):
249 return True
250
251 __bool__ = __nonzero__ # py3
252
253 def register(self, introspector, action_info):
254 self.discriminator = undefer(self.discriminator)
255 self.action_info = action_info
256 introspector.add(self)
257 for relate, category_name, discriminator in self._relations:
258 discriminator = undefer(discriminator)
259 if relate:
260 method = introspector.relate
261 else:
262 method = introspector.unrelate
263 method(
264 (self.category_name, self.discriminator),
265 (category_name, discriminator)
266 )
267
268 class Deferred(object):
269 """ Can be used by a third-party configuration extender to wrap a
270 :term:`discriminator` during configuration if an immediately hashable
271 discriminator cannot be computed because it relies on unresolved values.
272 The function should accept no arguments and should return a hashable
273 discriminator."""
274 def __init__(self, func):
275 self.func = func
276
277 @reify
278 def value(self):
279 return self.func()
280
281 def resolve(self):
282 return self.value
283
284 def undefer(v):
285 """ Function which accepts an object and returns it unless it is a
286 :class:`pyramid.registry.Deferred` instance. If it is an instance of
287 that class, its ``resolve`` method is called, and the result of the
288 method is returned."""
289 if isinstance(v, Deferred):
290 v = v.resolve()
291 return v
292
293 class predvalseq(tuple):
294 """ A subtype of tuple used to represent a sequence of predicate values """
295 pass
296
297 global_registry = Registry('global')
298
[end of pyramid/registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyramid/registry.py b/pyramid/registry.py
--- a/pyramid/registry.py
+++ b/pyramid/registry.py
@@ -56,7 +56,7 @@
_settings = None
- def __init__(self, package_name=CALLER_PACKAGE):
+ def __init__(self, package_name=CALLER_PACKAGE, *args, **kw):
# add a registry-instance-specific lock, which is used when the lookup
# cache is mutated
self._lock = threading.Lock()
@@ -64,7 +64,7 @@
self._clear_view_lookup_cache()
if package_name is CALLER_PACKAGE:
package_name = caller_package().__name__
- Components.__init__(self, package_name)
+ Components.__init__(self, package_name, *args, **kw)
dict.__init__(self)
def _clear_view_lookup_cache(self):
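With the forwarding restored, construction sketches like the following are accepted again (illustrative calls, assuming `bases` is a keyword `Components.__init__` understands; not taken from the affected projects):

```python
from pyramid.registry import Registry, global_registry

r1 = Registry("myapp", bases=(global_registry,))   # keyword form
r2 = Registry("myapp", (global_registry,))         # positional form, forwarded as-is
```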
|
{"golden_diff": "diff --git a/pyramid/registry.py b/pyramid/registry.py\n--- a/pyramid/registry.py\n+++ b/pyramid/registry.py\n@@ -56,7 +56,7 @@\n \n _settings = None\n \n- def __init__(self, package_name=CALLER_PACKAGE):\n+ def __init__(self, package_name=CALLER_PACKAGE, *args, **kw):\n # add a registry-instance-specific lock, which is used when the lookup\n # cache is mutated\n self._lock = threading.Lock()\n@@ -64,7 +64,7 @@\n self._clear_view_lookup_cache()\n if package_name is CALLER_PACKAGE:\n package_name = caller_package().__name__\n- Components.__init__(self, package_name)\n+ Components.__init__(self, package_name, *args, **kw)\n dict.__init__(self)\n \n def _clear_view_lookup_cache(self):\n", "issue": "Restore the Registry(*args, **kw) API\nIn the course of #2891, the Registry API was changed in a backwards-incompatible way. While the author of the change (PR 2893) couldn't imagine anyone using the keywords API, I immediately ran into the incompatibility with a number of customer projects. Since the change back to the old API shouldn't introduce another backwards incompatibility, I suggest reverting that change as soon as possible.\n", "before_files": [{"content": "import operator\nimport threading\n\nfrom zope.interface import implementer\n\nfrom zope.interface.registry import Components\n\nfrom pyramid.compat import text_\nfrom pyramid.decorator import reify\n\nfrom pyramid.interfaces import (\n IIntrospector,\n IIntrospectable,\n ISettings,\n )\n\nfrom pyramid.path import (\n CALLER_PACKAGE,\n caller_package,\n)\n\nempty = text_('')\n\nclass Registry(Components, dict):\n \"\"\" A registry object is an :term:`application registry`.\n\n It is used by the framework itself to perform mappings of URLs to view\n callables, as well as servicing other various framework duties. A registry\n has its own internal API, but this API is rarely used by Pyramid\n application developers (it's usually only used by developers of the\n Pyramid framework and Pyramid addons). But it has a number of attributes\n that may be useful to application developers within application code,\n such as ``settings``, which is a dictionary containing application\n deployment settings.\n\n For information about the purpose and usage of the application registry,\n see :ref:`zca_chapter`.\n\n The registry may be used both as an :class:`pyramid.interfaces.IDict` and\n as a Zope component registry.\n These two ways of storing configuration are independent.\n Applications will tend to prefer to store information as key-values\n whereas addons may prefer to use the component registry to avoid naming\n conflicts and to provide more complex lookup mechanisms.\n\n The application registry is usually accessed as ``request.registry`` in\n application code. By the time a registry is used to handle requests it\n should be considered frozen and read-only. 
Any changes to its internal\n state should be done with caution and concern for thread-safety.\n\n \"\"\"\n\n # for optimization purposes, if no listeners are listening, don't try\n # to notify them\n has_listeners = False\n\n _settings = None\n\n def __init__(self, package_name=CALLER_PACKAGE):\n # add a registry-instance-specific lock, which is used when the lookup\n # cache is mutated\n self._lock = threading.Lock()\n # add a view lookup cache\n self._clear_view_lookup_cache()\n if package_name is CALLER_PACKAGE:\n package_name = caller_package().__name__\n Components.__init__(self, package_name)\n dict.__init__(self)\n\n def _clear_view_lookup_cache(self):\n self._view_lookup_cache = {}\n\n def __nonzero__(self):\n # defeat bool determination via dict.__len__\n return True\n\n @reify\n def package_name(self):\n return self.__name__\n\n def registerSubscriptionAdapter(self, *arg, **kw):\n result = Components.registerSubscriptionAdapter(self, *arg, **kw)\n self.has_listeners = True\n return result\n\n def registerSelfAdapter(self, required=None, provided=None, name=empty,\n info=empty, event=True):\n # registerAdapter analogue which always returns the object itself\n # when required is matched\n return self.registerAdapter(lambda x: x, required=required,\n provided=provided, name=name,\n info=info, event=event)\n\n def queryAdapterOrSelf(self, object, interface, default=None):\n # queryAdapter analogue which returns the object if it implements\n # the interface, otherwise it will return an adaptation to the\n # interface\n if not interface.providedBy(object):\n return self.queryAdapter(object, interface, default=default)\n return object\n\n def registerHandler(self, *arg, **kw):\n result = Components.registerHandler(self, *arg, **kw)\n self.has_listeners = True\n return result\n\n def notify(self, *events):\n if self.has_listeners:\n # iterating over subscribers assures they get executed\n [ _ for _ in self.subscribers(events, None) ]\n\n # backwards compatibility for code that wants to look up a settings\n # object via ``registry.getUtility(ISettings)``\n def _get_settings(self):\n return self._settings\n\n def _set_settings(self, settings):\n self.registerUtility(settings, ISettings)\n self._settings = settings\n\n settings = property(_get_settings, _set_settings)\n\n@implementer(IIntrospector)\nclass Introspector(object):\n def __init__(self):\n self._refs = {}\n self._categories = {}\n self._counter = 0\n\n def add(self, intr):\n category = self._categories.setdefault(intr.category_name, {})\n category[intr.discriminator] = intr\n category[intr.discriminator_hash] = intr\n intr.order = self._counter\n self._counter += 1\n\n def get(self, category_name, discriminator, default=None):\n category = self._categories.setdefault(category_name, {})\n intr = category.get(discriminator, default)\n return intr\n\n def get_category(self, category_name, default=None, sort_key=None):\n if sort_key is None:\n sort_key = operator.attrgetter('order')\n category = self._categories.get(category_name)\n if category is None:\n return default\n values = category.values()\n values = sorted(set(values), key=sort_key)\n return [\n {'introspectable': intr,\n 'related': self.related(intr)}\n for intr in values\n ]\n\n def categorized(self, sort_key=None):\n L = []\n for category_name in self.categories():\n L.append((category_name, self.get_category(category_name,\n sort_key=sort_key)))\n return L\n\n def categories(self):\n return sorted(self._categories.keys())\n\n def remove(self, category_name, 
discriminator):\n intr = self.get(category_name, discriminator)\n if intr is None:\n return\n L = self._refs.pop(intr, [])\n for d in L:\n L2 = self._refs[d]\n L2.remove(intr)\n category = self._categories[intr.category_name]\n del category[intr.discriminator]\n del category[intr.discriminator_hash]\n\n def _get_intrs_by_pairs(self, pairs):\n introspectables = []\n for pair in pairs:\n category_name, discriminator = pair\n intr = self._categories.get(category_name, {}).get(discriminator)\n if intr is None:\n raise KeyError((category_name, discriminator))\n introspectables.append(intr)\n return introspectables\n\n def relate(self, *pairs):\n introspectables = self._get_intrs_by_pairs(pairs)\n relatable = ((x,y) for x in introspectables for y in introspectables)\n for x, y in relatable:\n L = self._refs.setdefault(x, [])\n if x is not y and y not in L:\n L.append(y)\n\n def unrelate(self, *pairs):\n introspectables = self._get_intrs_by_pairs(pairs)\n relatable = ((x,y) for x in introspectables for y in introspectables)\n for x, y in relatable:\n L = self._refs.get(x, [])\n if y in L:\n L.remove(y)\n\n def related(self, intr):\n category_name, discriminator = intr.category_name, intr.discriminator\n intr = self._categories.get(category_name, {}).get(discriminator)\n if intr is None:\n raise KeyError((category_name, discriminator))\n return self._refs.get(intr, [])\n\n@implementer(IIntrospectable)\nclass Introspectable(dict):\n\n order = 0 # mutated by introspector.add\n action_info = None # mutated by self.register\n\n def __init__(self, category_name, discriminator, title, type_name):\n self.category_name = category_name\n self.discriminator = discriminator\n self.title = title\n self.type_name = type_name\n self._relations = []\n\n def relate(self, category_name, discriminator):\n self._relations.append((True, category_name, discriminator))\n\n def unrelate(self, category_name, discriminator):\n self._relations.append((False, category_name, discriminator))\n\n def _assert_resolved(self):\n assert undefer(self.discriminator) is self.discriminator\n\n @property\n def discriminator_hash(self):\n self._assert_resolved()\n return hash(self.discriminator)\n\n def __hash__(self):\n self._assert_resolved()\n return hash((self.category_name,) + (self.discriminator,))\n\n def __repr__(self):\n self._assert_resolved()\n return '<%s category %r, discriminator %r>' % (self.__class__.__name__,\n self.category_name,\n self.discriminator)\n\n def __nonzero__(self):\n return True\n\n __bool__ = __nonzero__ # py3\n\n def register(self, introspector, action_info):\n self.discriminator = undefer(self.discriminator)\n self.action_info = action_info\n introspector.add(self)\n for relate, category_name, discriminator in self._relations:\n discriminator = undefer(discriminator)\n if relate:\n method = introspector.relate\n else:\n method = introspector.unrelate\n method(\n (self.category_name, self.discriminator),\n (category_name, discriminator)\n )\n\nclass Deferred(object):\n \"\"\" Can be used by a third-party configuration extender to wrap a\n :term:`discriminator` during configuration if an immediately hashable\n discriminator cannot be computed because it relies on unresolved values.\n The function should accept no arguments and should return a hashable\n discriminator.\"\"\"\n def __init__(self, func):\n self.func = func\n\n @reify\n def value(self):\n return self.func()\n\n def resolve(self):\n return self.value\n\ndef undefer(v):\n \"\"\" Function which accepts an object and returns it unless it is a\n 
:class:`pyramid.registry.Deferred` instance. If it is an instance of\n that class, its ``resolve`` method is called, and the result of the\n method is returned.\"\"\"\n if isinstance(v, Deferred):\n v = v.resolve()\n return v\n\nclass predvalseq(tuple):\n \"\"\" A subtype of tuple used to represent a sequence of predicate values \"\"\"\n pass\n\nglobal_registry = Registry('global')\n", "path": "pyramid/registry.py"}]}
| 3,658 | 200 |
gh_patches_debug_40657
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-core-439
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Torch GPU test failures
Install PyTorch via -
```
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
```
We still have to install TF (for tf.data) and JAX (for import - though this could be avoided) in the environment via -
```
pip install tensorflow_cpu
pip install jax[cpu]
```
Test via -
```
KERAS_BACKEND=torch pytest keras_core --ignore keras_core/applications
```
Result summary:
```
FAILED keras_core/layers/convolutional/conv_test.py::ConvCorrectnessTest::test_conv3d0 - AssertionError:
FAILED keras_core/layers/convolutional/conv_test.py::ConvCorrectnessTest::test_conv3d1 - AssertionError:
FAILED keras_core/layers/convolutional/conv_transpose_test.py::ConvTransposeCorrectnessTest::test_conv3d_transpose2 - AssertionError:
FAILED keras_core/layers/preprocessing/random_brightness_test.py::RandomBrightnessTest::test_tf_data_compatibility - TypeError: in user code:
FAILED keras_core/layers/preprocessing/random_contrast_test.py::RandomContrastTest::test_tf_data_compatibility - TypeError: in user code:
```
ConvCorrectnessTest and ConvTransposeCorrectnessTest failed due to a mismatch in test array values.
The preprocessing RandomBrightnessTest and RandomContrastTest failed because `numpy()` was called on a GPU tensor. It needs to be `cpu().numpy()` for torch.
</issue>
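A minimal, backend-agnostic sketch of the conversion the two failing preprocessing tests need; the `to_numpy` helper name is an assumption for illustration and is not part of keras-core.

```python
import numpy as np

def to_numpy(x):
    # torch tensors living on a GPU must be detached and copied to host
    # memory before NumPy can read them; other array types convert directly.
    if hasattr(x, "detach") and hasattr(x, "cpu"):
        return x.detach().cpu().numpy()
    return np.asarray(x)
```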
<code>
[start of keras_core/layers/preprocessing/tf_data_layer.py]
1 from tensorflow import nest
2
3 from keras_core import backend
4 from keras_core.layers.layer import Layer
5 from keras_core.random.seed_generator import SeedGenerator
6 from keras_core.utils import backend_utils
7 from keras_core.utils import tracking
8
9
10 class TFDataLayer(Layer):
11 """Layer that can safely used in a tf.data pipeline.
12
13 The `call()` method must solely rely on `self.backend` ops.
14
15 Only supports a single input tensor argument.
16 """
17
18 def __init__(self, **kwargs):
19 super().__init__(**kwargs)
20 self.backend = backend_utils.DynamicBackend()
21 self._allow_non_tensor_positional_args = True
22
23 def __call__(self, inputs, **kwargs):
24 if backend_utils.in_tf_graph() and not isinstance(
25 inputs, backend.KerasTensor
26 ):
27 # We're in a TF graph, e.g. a tf.data pipeline.
28 self.backend.set_backend("tensorflow")
29 inputs = nest.map_structure(
30 lambda x: self.backend.convert_to_tensor(
31 x, dtype=self.compute_dtype
32 ),
33 inputs,
34 )
35 switch_convert_input_args = False
36 if self._convert_input_args:
37 self._convert_input_args = False
38 switch_convert_input_args = True
39 try:
40 outputs = super().__call__(inputs, **kwargs)
41 finally:
42 self.backend.reset()
43 if switch_convert_input_args:
44 self._convert_input_args = True
45 return outputs
46 return super().__call__(inputs, **kwargs)
47
48 @tracking.no_automatic_dependency_tracking
49 def _get_seed_generator(self, backend=None):
50 if backend is None or backend == self.backend._backend:
51 return self.generator
52 if not hasattr(self, "_backend_generators"):
53 self._backend_generators = {}
54 if backend in self._backend_generators:
55 return self._backend_generators[backend]
56 seed_generator = SeedGenerator(self.seed, backend=self.backend)
57 self._backend_generators[backend] = seed_generator
58 return seed_generator
59
[end of keras_core/layers/preprocessing/tf_data_layer.py]
[start of keras_core/random/seed_generator.py]
1 import random as python_random
2
3 import numpy as np
4
5 from keras_core.api_export import keras_core_export
6
7
8 @keras_core_export("keras_core.random.SeedGenerator")
9 class SeedGenerator:
10 """Generates variable seeds upon each call to a RNG-using function.
11
12 In Keras, all RNG-using methods (such as `keras_core.random.normal()`)
13 are stateless, meaning that if you pass an integer seed to them
14 (such as `seed=42`), they will return the same values at each call.
15 In order to get different values at each call, you must use a
16 `SeedGenerator` instead as the seed argument. The `SeedGenerator`
17 instead is stateful.
18
19 Example:
20
21 ```python
22 seed_gen = keras_core.random.SeedGenerator(seed=42)
23 values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)
24 new_values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)
25 ```
26 """
27
28 def __init__(self, seed, **kwargs):
29 custom_backend = kwargs.pop("backend", None)
30 if kwargs:
31 raise ValueError(f"Unrecognized keyword arguments: {kwargs}")
32 if custom_backend is not None:
33 backend = custom_backend
34 else:
35 from keras_core import backend
36
37 if seed is None:
38 seed = make_default_seed()
39 if not isinstance(seed, int):
40 raise ValueError(
41 "Argument `seed` must be an integer. " f"Received: seed={seed}"
42 )
43
44 def seed_initializer(*args, **kwargs):
45 dtype = kwargs.get("dtype", None)
46 return backend.convert_to_tensor([seed, 0], dtype=dtype)
47
48 self.state = backend.Variable(
49 seed_initializer,
50 shape=(2,),
51 dtype="uint32",
52 trainable=False,
53 name="seed_generator_state",
54 )
55
56
57 def make_default_seed():
58 return python_random.randint(1, int(1e9))
59
60
61 def draw_seed(seed):
62 from keras_core.backend import convert_to_tensor
63
64 if isinstance(seed, SeedGenerator):
65 seed_state = seed.state
66 # Use * 1 to create a copy
67 new_seed_value = seed_state.value * 1
68 increment = convert_to_tensor(np.array([0, 1]), dtype="uint32")
69 seed.state.assign(seed_state + increment)
70 return new_seed_value
71 elif isinstance(seed, int):
72 return convert_to_tensor([seed, 0], dtype="uint32")
73 elif seed is None:
74 return convert_to_tensor([make_default_seed(), 0], dtype="uint32")
75 raise ValueError(
76 "Argument `seed` must be either an integer "
77 "or an instance of `SeedGenerator`. "
78 f"Received: seed={seed} (of type {type(seed)})"
79 )
80
[end of keras_core/random/seed_generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/keras_core/layers/preprocessing/tf_data_layer.py b/keras_core/layers/preprocessing/tf_data_layer.py
--- a/keras_core/layers/preprocessing/tf_data_layer.py
+++ b/keras_core/layers/preprocessing/tf_data_layer.py
@@ -1,6 +1,6 @@
from tensorflow import nest
-from keras_core import backend
+import keras_core.backend
from keras_core.layers.layer import Layer
from keras_core.random.seed_generator import SeedGenerator
from keras_core.utils import backend_utils
@@ -22,7 +22,7 @@
def __call__(self, inputs, **kwargs):
if backend_utils.in_tf_graph() and not isinstance(
- inputs, backend.KerasTensor
+ inputs, keras_core.backend.KerasTensor
):
# We're in a TF graph, e.g. a tf.data pipeline.
self.backend.set_backend("tensorflow")
@@ -47,7 +47,7 @@
@tracking.no_automatic_dependency_tracking
def _get_seed_generator(self, backend=None):
- if backend is None or backend == self.backend._backend:
+ if backend is None or backend == keras_core.backend.backend():
return self.generator
if not hasattr(self, "_backend_generators"):
self._backend_generators = {}
diff --git a/keras_core/random/seed_generator.py b/keras_core/random/seed_generator.py
--- a/keras_core/random/seed_generator.py
+++ b/keras_core/random/seed_generator.py
@@ -2,6 +2,7 @@
import numpy as np
+import keras_core.backend
from keras_core.api_export import keras_core_export
@@ -30,9 +31,9 @@
if kwargs:
raise ValueError(f"Unrecognized keyword arguments: {kwargs}")
if custom_backend is not None:
- backend = custom_backend
+ self.backend = custom_backend
else:
- from keras_core import backend
+ self.backend = keras_core.backend
if seed is None:
seed = make_default_seed()
@@ -43,9 +44,9 @@
def seed_initializer(*args, **kwargs):
dtype = kwargs.get("dtype", None)
- return backend.convert_to_tensor([seed, 0], dtype=dtype)
+ return self.backend.convert_to_tensor([seed, 0], dtype=dtype)
- self.state = backend.Variable(
+ self.state = self.backend.Variable(
seed_initializer,
shape=(2,),
dtype="uint32",
@@ -65,7 +66,9 @@
seed_state = seed.state
# Use * 1 to create a copy
new_seed_value = seed_state.value * 1
- increment = convert_to_tensor(np.array([0, 1]), dtype="uint32")
+ increment = seed.backend.convert_to_tensor(
+ np.array([0, 1]), dtype="uint32"
+ )
seed.state.assign(seed_state + increment)
return new_seed_value
elif isinstance(seed, int):
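A small usage sketch of the stateful seed generator touched by this patch, based only on the code shown above; the commented values are what that logic would produce.

```python
from keras_core.random.seed_generator import SeedGenerator, draw_seed

gen = SeedGenerator(seed=42)
first = draw_seed(gen)    # tensor holding [42, 0]; the state advances to [42, 1]
second = draw_seed(gen)   # tensor holding [42, 1]
```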
|
{"golden_diff": "diff --git a/keras_core/layers/preprocessing/tf_data_layer.py b/keras_core/layers/preprocessing/tf_data_layer.py\n--- a/keras_core/layers/preprocessing/tf_data_layer.py\n+++ b/keras_core/layers/preprocessing/tf_data_layer.py\n@@ -1,6 +1,6 @@\n from tensorflow import nest\n \n-from keras_core import backend\n+import keras_core.backend\n from keras_core.layers.layer import Layer\n from keras_core.random.seed_generator import SeedGenerator\n from keras_core.utils import backend_utils\n@@ -22,7 +22,7 @@\n \n def __call__(self, inputs, **kwargs):\n if backend_utils.in_tf_graph() and not isinstance(\n- inputs, backend.KerasTensor\n+ inputs, keras_core.backend.KerasTensor\n ):\n # We're in a TF graph, e.g. a tf.data pipeline.\n self.backend.set_backend(\"tensorflow\")\n@@ -47,7 +47,7 @@\n \n @tracking.no_automatic_dependency_tracking\n def _get_seed_generator(self, backend=None):\n- if backend is None or backend == self.backend._backend:\n+ if backend is None or backend == keras_core.backend.backend():\n return self.generator\n if not hasattr(self, \"_backend_generators\"):\n self._backend_generators = {}\ndiff --git a/keras_core/random/seed_generator.py b/keras_core/random/seed_generator.py\n--- a/keras_core/random/seed_generator.py\n+++ b/keras_core/random/seed_generator.py\n@@ -2,6 +2,7 @@\n \n import numpy as np\n \n+import keras_core.backend\n from keras_core.api_export import keras_core_export\n \n \n@@ -30,9 +31,9 @@\n if kwargs:\n raise ValueError(f\"Unrecognized keyword arguments: {kwargs}\")\n if custom_backend is not None:\n- backend = custom_backend\n+ self.backend = custom_backend\n else:\n- from keras_core import backend\n+ self.backend = keras_core.backend\n \n if seed is None:\n seed = make_default_seed()\n@@ -43,9 +44,9 @@\n \n def seed_initializer(*args, **kwargs):\n dtype = kwargs.get(\"dtype\", None)\n- return backend.convert_to_tensor([seed, 0], dtype=dtype)\n+ return self.backend.convert_to_tensor([seed, 0], dtype=dtype)\n \n- self.state = backend.Variable(\n+ self.state = self.backend.Variable(\n seed_initializer,\n shape=(2,),\n dtype=\"uint32\",\n@@ -65,7 +66,9 @@\n seed_state = seed.state\n # Use * 1 to create a copy\n new_seed_value = seed_state.value * 1\n- increment = convert_to_tensor(np.array([0, 1]), dtype=\"uint32\")\n+ increment = seed.backend.convert_to_tensor(\n+ np.array([0, 1]), dtype=\"uint32\"\n+ )\n seed.state.assign(seed_state + increment)\n return new_seed_value\n elif isinstance(seed, int):\n", "issue": "Torch GPU test failures\nInstall PyTorch via - \r\n```\r\nconda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia\r\n```\r\n\r\nWe still have to install TF (for tf.data) and JAX (for import - though this could be avoided) in the environment via - \r\n```\r\npip install tensorflow_cpu\r\npip install jax[cpu]\r\n```\r\n\r\nTest via -\r\n```\r\n KERAS_BACKEND=torch pytest keras_core --ignore keras_core/applications\r\n```\r\nResult summary:\r\n```\r\nFAILED keras_core/layers/convolutional/conv_test.py::ConvCorrectnessTest::test_conv3d0 - AssertionError:\r\nFAILED keras_core/layers/convolutional/conv_test.py::ConvCorrectnessTest::test_conv3d1 - AssertionError:\r\nFAILED keras_core/layers/convolutional/conv_transpose_test.py::ConvTransposeCorrectnessTest::test_conv3d_transpose2 - AssertionError:\r\nFAILED keras_core/layers/preprocessing/random_brightness_test.py::RandomBrightnessTest::test_tf_data_compatibility - TypeError: in user code:\r\nFAILED 
keras_core/layers/preprocessing/random_contrast_test.py::RandomContrastTest::test_tf_data_compatibility - TypeError: in user code:\r\n```\r\n\r\nConvCorrectnessTest, ConvTransposeCorrectnessTest - Failed due to mismatch in test array values.\r\nPreprpcoessing RandomBrightnessTest, RandomContrastTest failed on `numpy()` being calld on GPU tensor. It needs to be `cpu().numpy()` for torch.\n", "before_files": [{"content": "from tensorflow import nest\n\nfrom keras_core import backend\nfrom keras_core.layers.layer import Layer\nfrom keras_core.random.seed_generator import SeedGenerator\nfrom keras_core.utils import backend_utils\nfrom keras_core.utils import tracking\n\n\nclass TFDataLayer(Layer):\n \"\"\"Layer that can safely used in a tf.data pipeline.\n\n The `call()` method must solely rely on `self.backend` ops.\n\n Only supports a single input tensor argument.\n \"\"\"\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.backend = backend_utils.DynamicBackend()\n self._allow_non_tensor_positional_args = True\n\n def __call__(self, inputs, **kwargs):\n if backend_utils.in_tf_graph() and not isinstance(\n inputs, backend.KerasTensor\n ):\n # We're in a TF graph, e.g. a tf.data pipeline.\n self.backend.set_backend(\"tensorflow\")\n inputs = nest.map_structure(\n lambda x: self.backend.convert_to_tensor(\n x, dtype=self.compute_dtype\n ),\n inputs,\n )\n switch_convert_input_args = False\n if self._convert_input_args:\n self._convert_input_args = False\n switch_convert_input_args = True\n try:\n outputs = super().__call__(inputs, **kwargs)\n finally:\n self.backend.reset()\n if switch_convert_input_args:\n self._convert_input_args = True\n return outputs\n return super().__call__(inputs, **kwargs)\n\n @tracking.no_automatic_dependency_tracking\n def _get_seed_generator(self, backend=None):\n if backend is None or backend == self.backend._backend:\n return self.generator\n if not hasattr(self, \"_backend_generators\"):\n self._backend_generators = {}\n if backend in self._backend_generators:\n return self._backend_generators[backend]\n seed_generator = SeedGenerator(self.seed, backend=self.backend)\n self._backend_generators[backend] = seed_generator\n return seed_generator\n", "path": "keras_core/layers/preprocessing/tf_data_layer.py"}, {"content": "import random as python_random\n\nimport numpy as np\n\nfrom keras_core.api_export import keras_core_export\n\n\n@keras_core_export(\"keras_core.random.SeedGenerator\")\nclass SeedGenerator:\n \"\"\"Generates variable seeds upon each call to a RNG-using function.\n\n In Keras, all RNG-using methods (such as `keras_core.random.normal()`)\n are stateless, meaning that if you pass an integer seed to them\n (such as `seed=42`), they will return the same values at each call.\n In order to get different values at each call, you must use a\n `SeedGenerator` instead as the seed argument. 
The `SeedGenerator`\n instead is stateful.\n\n Example:\n\n ```python\n seed_gen = keras_core.random.SeedGenerator(seed=42)\n values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)\n new_values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)\n ```\n \"\"\"\n\n def __init__(self, seed, **kwargs):\n custom_backend = kwargs.pop(\"backend\", None)\n if kwargs:\n raise ValueError(f\"Unrecognized keyword arguments: {kwargs}\")\n if custom_backend is not None:\n backend = custom_backend\n else:\n from keras_core import backend\n\n if seed is None:\n seed = make_default_seed()\n if not isinstance(seed, int):\n raise ValueError(\n \"Argument `seed` must be an integer. \" f\"Received: seed={seed}\"\n )\n\n def seed_initializer(*args, **kwargs):\n dtype = kwargs.get(\"dtype\", None)\n return backend.convert_to_tensor([seed, 0], dtype=dtype)\n\n self.state = backend.Variable(\n seed_initializer,\n shape=(2,),\n dtype=\"uint32\",\n trainable=False,\n name=\"seed_generator_state\",\n )\n\n\ndef make_default_seed():\n return python_random.randint(1, int(1e9))\n\n\ndef draw_seed(seed):\n from keras_core.backend import convert_to_tensor\n\n if isinstance(seed, SeedGenerator):\n seed_state = seed.state\n # Use * 1 to create a copy\n new_seed_value = seed_state.value * 1\n increment = convert_to_tensor(np.array([0, 1]), dtype=\"uint32\")\n seed.state.assign(seed_state + increment)\n return new_seed_value\n elif isinstance(seed, int):\n return convert_to_tensor([seed, 0], dtype=\"uint32\")\n elif seed is None:\n return convert_to_tensor([make_default_seed(), 0], dtype=\"uint32\")\n raise ValueError(\n \"Argument `seed` must be either an integer \"\n \"or an instance of `SeedGenerator`. \"\n f\"Received: seed={seed} (of type {type(seed)})\"\n )\n", "path": "keras_core/random/seed_generator.py"}]}
| 2,186 | 668 |
gh_patches_debug_42987
|
rasdani/github-patches
|
git_diff
|
biopython__biopython-4616
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bio.exPASy.cellosaurus.parse misses some keys
### Setup
I am reporting a problem with Biopython version, Python version, and operating
system as follows:
```python
Python 3.9.16 (main, Mar 8 2023, 04:29:44)
[Clang 14.0.6 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys; print(sys.version)
3.9.16 (main, Mar 8 2023, 04:29:44)
[Clang 14.0.6 ]
>>> import platform; print(platform.python_implementation()); print(platform.platform())
CPython
macOS-10.16-x86_64-i386-64bit
>>> import Bio; print(Bio.__version__)
1.81
```
(*Please copy and run the above in your Python, and copy-and-paste the output*)
### Expected behaviour
Bio.exPASy.cellosaurus.parse returns a dict containing all the key/value pairs from the [cellosaurus.txt](https://ftp.expasy.org/databases/cellosaurus/cellosaurus.txt) file. The text file lists the keys at the beginning of the file as follows.
--------- --------------------------- ----------------------
Line code Content Occurrence in an entry
--------- --------------------------- ----------------------
ID Identifier (cell line name) Once; starts an entry
AC Accession (CVCL_xxxx) Once
AS Secondary accession number(s) Optional; once
SY Synonyms Optional; once
DR Cross-references Optional; once or more
RX References identifiers Optional: once or more
WW Web pages Optional; once or more
CC Comments Optional; once or more
ST STR profile data Optional; twice or more
DI Diseases Optional; once or more
OX Species of origin Once or more
HI Hierarchy Optional; once or more
OI Originate from same individual Optional; once or more
SX Sex of cell Optional; once
AG Age of donor at sampling Optional; once
CA Category Once
DT Date (entry history) Once
// Terminator Once; ends an entry
### Actual behaviour
But the actual implementation fails to parse out AG and DT.
def __init__(self):
"""Initialize the class."""
dict.__init__(self)
self["ID"] = ""
self["AC"] = ""
self["AS"] = ""
self["SY"] = ""
self["DR"] = []
self["RX"] = []
self["WW"] = []
self["CC"] = []
self["ST"] = []
self["DI"] = []
self["OX"] = []
self["HI"] = []
self["OI"] = []
self["SX"] = ""
self["CA"] = ""
### Steps to reproduce
Well, the missing keys are pretty obvious, and Age (AG) and Date (DT) weren't returned from parse.
</issue>
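A minimal sketch of the fix the report implies — treating AG and DT as once-per-entry string fields like SX and CA. The helper names are assumptions for illustration; this is not the upstream patch.

```python
STRING_KEYS = ("ID", "AC", "AS", "SY", "SX", "AG", "CA", "DT")
LIST_KEYS = ("DR", "RX", "WW", "CC", "ST", "DI", "OX", "HI", "OI")

def make_record():
    record = {key: "" for key in STRING_KEYS}
    record.update({key: [] for key in LIST_KEYS})
    return record

def handle_line(record, line):
    key, value = line[:2], line[5:].rstrip()
    if key in LIST_KEYS:
        record[key].append(value)
    elif key in STRING_KEYS:
        record[key] += value
```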
<code>
[start of Bio/ExPASy/cellosaurus.py]
1 # Copyright 2016 by Stephen Marshall. All rights reserved.
2 # This code is part of the Biopython distribution and governed by its
3 # license. Please see the LICENSE file that should have been included
4 # as part of this package.
5
6 """Parser for the cellosaurus.txt file from ExPASy.
7
8 See https://web.expasy.org/cellosaurus/
9
10 Tested with the release of Version 18 (July 2016).
11
12 Functions:
13 - read Reads a file containing one cell line entry
14 - parse Reads a file containing multiple cell line entries
15
16 Classes:
17 - Record Holds cell line data.
18
19 Examples
20 --------
21 This example downloads the Cellosaurus database and parses it. Note that
22 urlopen returns a stream of bytes, while the parser expects a stream of plain
23 string, so we use TextIOWrapper to convert bytes to string using the UTF-8
24 encoding. This is not needed if you download the cellosaurus.txt file in
25 advance and open it (see the comment below).
26
27 >>> from urllib.request import urlopen
28 >>> from io import TextIOWrapper
29 >>> from Bio.ExPASy import cellosaurus
30 >>> url = "ftp://ftp.expasy.org/databases/cellosaurus/cellosaurus.txt"
31 >>> bytestream = urlopen(url)
32 >>> textstream = TextIOWrapper(bytestream, "UTF-8")
33 >>> # alternatively, use
34 >>> # textstream = open("cellosaurus.txt")
35 >>> # if you downloaded the cellosaurus.txt file in advance.
36 >>> records = cellosaurus.parse(textstream)
37 >>> for record in records:
38 ... if 'Homo sapiens' in record['OX'][0]:
39 ... print(record['ID']) # doctest:+ELLIPSIS
40 ...
41 #15310-LN
42 #W7079
43 (L)PC6
44 0.5alpha
45 ...
46
47 """
48
49
50 def parse(handle):
51 """Parse cell line records.
52
53 This function is for parsing cell line files containing multiple
54 records.
55
56 Arguments:
57 - handle - handle to the file.
58
59 """
60 while True:
61 record = __read(handle)
62 if not record:
63 break
64 yield record
65
66
67 def read(handle):
68 """Read one cell line record.
69
70 This function is for parsing cell line files containing
71 exactly one record.
72
73 Arguments:
74 - handle - handle to the file.
75
76 """
77 record = __read(handle)
78 # We should have reached the end of the record by now
79 remainder = handle.read()
80 if remainder:
81 raise ValueError("More than one cell line record found")
82 return record
83
84
85 class Record(dict):
86 """Holds information from an ExPASy Cellosaurus record as a Python dictionary.
87
88 Each record contains the following keys:
89
90 --------- --------------------------- ----------------------
91 Line code Content Occurrence in an entry
92 --------- --------------------------- ----------------------
93 ID Identifier (cell line name) Once; starts an entry
94 AC Accession (CVCL_xxxx) Once
95 AS Secondary accession number(s) Optional; once
96 SY Synonyms Optional; once
97 DR Cross-references Optional; once or more
98 RX References identifiers Optional: once or more
99 WW Web pages Optional; once or more
100 CC Comments Optional; once or more
101 ST STR profile data Optional; once or more
102 DI Diseases Optional; once or more
103 OX Species of origin Once or more
104 HI Hierarchy Optional; once or more
105 OI Originate from same individual Optional; once or more
106 SX Sex (gender) of cell Optional; once
107 CA Category Once
108 // Terminator Once; ends an entry
109
110 """
111
112 def __init__(self):
113 """Initialize the class."""
114 dict.__init__(self)
115 self["ID"] = ""
116 self["AC"] = ""
117 self["AS"] = ""
118 self["SY"] = ""
119 self["DR"] = []
120 self["RX"] = []
121 self["WW"] = []
122 self["CC"] = []
123 self["ST"] = []
124 self["DI"] = []
125 self["OX"] = []
126 self["HI"] = []
127 self["OI"] = []
128 self["SX"] = ""
129 self["CA"] = ""
130
131 def __repr__(self):
132 """Return the canonical string representation of the Record object."""
133 if self["ID"]:
134 if self["AC"]:
135 return f"{self.__class__.__name__} ({self['ID']}, {self['AC']})"
136 else:
137 return f"{self.__class__.__name__} ({self['ID']})"
138 else:
139 return f"{self.__class__.__name__} ( )"
140
141 def __str__(self):
142 """Return a readable string representation of the Record object."""
143 output = "ID: " + self["ID"]
144 output += " AC: " + self["AC"]
145 output += " AS: " + self["AS"]
146 output += " SY: " + self["SY"]
147 output += " DR: " + repr(self["DR"])
148 output += " RX: " + repr(self["RX"])
149 output += " WW: " + repr(self["WW"])
150 output += " CC: " + repr(self["CC"])
151 output += " ST: " + repr(self["ST"])
152 output += " DI: " + repr(self["DI"])
153 output += " OX: " + repr(self["OX"])
154 output += " HI: " + repr(self["HI"])
155 output += " OI: " + repr(self["OI"])
156 output += " SX: " + self["SX"]
157 output += " CA: " + self["CA"]
158 return output
159
160
161 # Everything below is private
162
163
164 def __read(handle):
165 record = None
166
167 for line in handle:
168 key, value = line[:2], line[5:].rstrip()
169 if key == "ID":
170 record = Record()
171 record["ID"] = value
172 elif key in ["AC", "AS", "SY", "SX", "CA"]:
173 record[key] += value
174 elif key in [
175 "AC",
176 "AS",
177 "SY",
178 "RX",
179 "WW",
180 "CC",
181 "ST",
182 "DI",
183 "OX",
184 "HI",
185 "OI",
186 "SX",
187 "CA",
188 ]:
189 record[key].append(value)
190 elif key == "DR":
191 k, v = value.split(";")
192 record["DR"].append((k.strip(), v.strip()))
193 elif key == "//":
194 if record:
195 return record
196 else:
197 continue
198 if record:
199 raise ValueError("Unexpected end of stream")
200
201
202 if __name__ == "__main__":
203 from Bio._utils import run_doctest
204
205 run_doctest()
206
[end of Bio/ExPASy/cellosaurus.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/Bio/ExPASy/cellosaurus.py b/Bio/ExPASy/cellosaurus.py
--- a/Bio/ExPASy/cellosaurus.py
+++ b/Bio/ExPASy/cellosaurus.py
@@ -87,25 +87,28 @@
Each record contains the following keys:
- --------- --------------------------- ----------------------
- Line code Content Occurrence in an entry
- --------- --------------------------- ----------------------
- ID Identifier (cell line name) Once; starts an entry
- AC Accession (CVCL_xxxx) Once
- AS Secondary accession number(s) Optional; once
- SY Synonyms Optional; once
- DR Cross-references Optional; once or more
- RX References identifiers Optional: once or more
- WW Web pages Optional; once or more
- CC Comments Optional; once or more
- ST STR profile data Optional; once or more
- DI Diseases Optional; once or more
- OX Species of origin Once or more
- HI Hierarchy Optional; once or more
- OI Originate from same individual Optional; once or more
- SX Sex (gender) of cell Optional; once
- CA Category Once
- // Terminator Once; ends an entry
+ ========= ============================== =======================
+ Line code Content Occurrence in an entry
+ ========= ============================== =======================
+ ID Identifier (cell line name) Once; starts an entry
+ AC Accession (CVCL_xxxx) Once
+ AS Secondary accession number(s) Optional; once
+ SY Synonyms Optional; once
+ DR Cross-references Optional; once or more
+ RX References identifiers Optional: once or more
+ WW Web pages Optional; once or more
+ CC Comments Optional; once or more
+ ST STR profile data Optional; twice or more
+ DI Diseases Optional; once or more
+ OX Species of origin Once or more
+ HI Hierarchy Optional; once or more
+ OI Originate from same individual Optional; once or more
+ SX Sex of cell Optional; once
+ AG Age of donor at sampling Optional; once
+ CA Category Once
+ DT Date (entry history) Once
+ // Terminator Once; ends an entry
+ ========= ============================== =======================
"""
@@ -126,7 +129,9 @@
self["HI"] = []
self["OI"] = []
self["SX"] = ""
+ self["AG"] = ""
self["CA"] = ""
+ self["DT"] = ""
def __repr__(self):
"""Return the canonical string representation of the Record object."""
@@ -154,7 +159,9 @@
output += " HI: " + repr(self["HI"])
output += " OI: " + repr(self["OI"])
output += " SX: " + self["SX"]
+ output += " AG: " + self["AG"]
output += " CA: " + self["CA"]
+ output += " DT: " + self["DT"]
return output
@@ -169,12 +176,10 @@
if key == "ID":
record = Record()
record["ID"] = value
- elif key in ["AC", "AS", "SY", "SX", "CA"]:
+ elif key in ["AC", "AS", "SY", "SX", "AG", "CA", "DT"]:
record[key] += value
elif key in [
- "AC",
- "AS",
- "SY",
+ # just append to the fields defined as lists, not to strings
"RX",
"WW",
"CC",
@@ -183,8 +188,6 @@
"OX",
"HI",
"OI",
- "SX",
- "CA",
]:
record[key].append(value)
elif key == "DR":
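A quick way to confirm the new fields after applying the patch; the local file path is an assumption.

```python
from Bio.ExPASy import cellosaurus

with open("cellosaurus.txt") as handle:  # downloaded beforehand (assumed path)
    record = next(cellosaurus.parse(handle))
print(record["AG"], record["DT"])
```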
|
{"golden_diff": "diff --git a/Bio/ExPASy/cellosaurus.py b/Bio/ExPASy/cellosaurus.py\n--- a/Bio/ExPASy/cellosaurus.py\n+++ b/Bio/ExPASy/cellosaurus.py\n@@ -87,25 +87,28 @@\n \n Each record contains the following keys:\n \n- --------- --------------------------- ----------------------\n- Line code Content Occurrence in an entry\n- --------- --------------------------- ----------------------\n- ID Identifier (cell line name) Once; starts an entry\n- AC Accession (CVCL_xxxx) Once\n- AS Secondary accession number(s) Optional; once\n- SY Synonyms Optional; once\n- DR Cross-references Optional; once or more\n- RX References identifiers Optional: once or more\n- WW Web pages Optional; once or more\n- CC Comments Optional; once or more\n- ST STR profile data Optional; once or more\n- DI Diseases Optional; once or more\n- OX Species of origin Once or more\n- HI Hierarchy Optional; once or more\n- OI Originate from same individual Optional; once or more\n- SX Sex (gender) of cell Optional; once\n- CA Category Once\n- // Terminator Once; ends an entry\n+ ========= ============================== =======================\n+ Line code Content Occurrence in an entry\n+ ========= ============================== =======================\n+ ID Identifier (cell line name) Once; starts an entry\n+ AC Accession (CVCL_xxxx) Once\n+ AS Secondary accession number(s) Optional; once\n+ SY Synonyms Optional; once\n+ DR Cross-references Optional; once or more\n+ RX References identifiers Optional: once or more\n+ WW Web pages Optional; once or more\n+ CC Comments Optional; once or more\n+ ST STR profile data Optional; twice or more\n+ DI Diseases Optional; once or more\n+ OX Species of origin Once or more\n+ HI Hierarchy Optional; once or more\n+ OI Originate from same individual Optional; once or more\n+ SX Sex of cell Optional; once\n+ AG Age of donor at sampling Optional; once\n+ CA Category Once\n+ DT Date (entry history) Once\n+ // Terminator Once; ends an entry\n+ ========= ============================== =======================\n \n \"\"\"\n \n@@ -126,7 +129,9 @@\n self[\"HI\"] = []\n self[\"OI\"] = []\n self[\"SX\"] = \"\"\n+ self[\"AG\"] = \"\"\n self[\"CA\"] = \"\"\n+ self[\"DT\"] = \"\"\n \n def __repr__(self):\n \"\"\"Return the canonical string representation of the Record object.\"\"\"\n@@ -154,7 +159,9 @@\n output += \" HI: \" + repr(self[\"HI\"])\n output += \" OI: \" + repr(self[\"OI\"])\n output += \" SX: \" + self[\"SX\"]\n+ output += \" AG: \" + self[\"AG\"]\n output += \" CA: \" + self[\"CA\"]\n+ output += \" DT: \" + self[\"DT\"]\n return output\n \n \n@@ -169,12 +176,10 @@\n if key == \"ID\":\n record = Record()\n record[\"ID\"] = value\n- elif key in [\"AC\", \"AS\", \"SY\", \"SX\", \"CA\"]:\n+ elif key in [\"AC\", \"AS\", \"SY\", \"SX\", \"AG\", \"CA\", \"DT\"]:\n record[key] += value\n elif key in [\n- \"AC\",\n- \"AS\",\n- \"SY\",\n+ # just append to the fields defined as lists, not to strings\n \"RX\",\n \"WW\",\n \"CC\",\n@@ -183,8 +188,6 @@\n \"OX\",\n \"HI\",\n \"OI\",\n- \"SX\",\n- \"CA\",\n ]:\n record[key].append(value)\n elif key == \"DR\":\n", "issue": "Bio.exPASy.cellosaurus.parse misses some keys\n### Setup\r\n\r\nI am reporting a problem with Biopython version, Python version, and operating\r\nsystem as follows:\r\n\r\n```python\r\nPython 3.9.16 (main, Mar 8 2023, 04:29:44) \r\n[Clang 14.0.6 ] :: Anaconda, Inc. 
on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import sys; print(sys.version)\r\n3.9.16 (main, Mar 8 2023, 04:29:44) \r\n[Clang 14.0.6 ]\r\n>>> import platform; print(platform.python_implementation()); print(platform.platform())\r\nCPython\r\nmacOS-10.16-x86_64-i386-64bit\r\n>>> import Bio; print(Bio.__version__)\r\n1.81\r\n```\r\n\r\n(*Please copy and run the above in your Python, and copy-and-paste the output*)\r\n\r\n### Expected behaviour\r\nBio.exPASy.cellosaurus.parse returns a dict containing all the key/value pairs from [cellosaurus.txt](https://ftp.expasy.org/databases/cellosaurus/cellosaurus.txt) file. And, the text file lists the key at the beginning of the file as follow.\r\n--------- --------------------------- ----------------------\r\n Line code Content Occurrence in an entry\r\n --------- --------------------------- ----------------------\r\n ID Identifier (cell line name) Once; starts an entry\r\n AC Accession (CVCL_xxxx) Once\r\n AS Secondary accession number(s) Optional; once\r\n SY Synonyms Optional; once\r\n DR Cross-references Optional; once or more\r\n RX References identifiers Optional: once or more\r\n WW Web pages Optional; once or more\r\n CC Comments Optional; once or more\r\n ST STR profile data Optional; twice or more\r\n DI Diseases Optional; once or more\r\n OX Species of origin Once or more\r\n HI Hierarchy Optional; once or more\r\n OI Originate from same individual Optional; once or more\r\n SX Sex of cell Optional; once\r\n AG Age of donor at sampling Optional; once\r\n CA Category Once\r\n DT Date (entry history) Once\r\n // Terminator Once; ends an entry\r\n\r\n### Actual behaviour\r\n\r\nBut, actual implementation misses to parse out AG and DT.\r\n def __init__(self):\r\n \"\"\"Initialize the class.\"\"\"\r\n dict.__init__(self)\r\n self[\"ID\"] = \"\"\r\n self[\"AC\"] = \"\"\r\n self[\"AS\"] = \"\"\r\n self[\"SY\"] = \"\"\r\n self[\"DR\"] = []\r\n self[\"RX\"] = []\r\n self[\"WW\"] = []\r\n self[\"CC\"] = []\r\n self[\"ST\"] = []\r\n self[\"DI\"] = []\r\n self[\"OX\"] = []\r\n self[\"HI\"] = []\r\n self[\"OI\"] = []\r\n self[\"SX\"] = \"\"\r\n self[\"CA\"] = \"\"\r\n\r\n### Steps to reproduce\r\n\r\nWell, missing keys are pretty obvious. And, Age and Date wasn't returned from parse.\n", "before_files": [{"content": "# Copyright 2016 by Stephen Marshall. All rights reserved.\n# This code is part of the Biopython distribution and governed by its\n# license. Please see the LICENSE file that should have been included\n# as part of this package.\n\n\"\"\"Parser for the cellosaurus.txt file from ExPASy.\n\nSee https://web.expasy.org/cellosaurus/\n\nTested with the release of Version 18 (July 2016).\n\nFunctions:\n - read Reads a file containing one cell line entry\n - parse Reads a file containing multiple cell line entries\n\nClasses:\n - Record Holds cell line data.\n\nExamples\n--------\nThis example downloads the Cellosaurus database and parses it. Note that\nurlopen returns a stream of bytes, while the parser expects a stream of plain\nstring, so we use TextIOWrapper to convert bytes to string using the UTF-8\nencoding. 
This is not needed if you download the cellosaurus.txt file in\nadvance and open it (see the comment below).\n\n >>> from urllib.request import urlopen\n >>> from io import TextIOWrapper\n >>> from Bio.ExPASy import cellosaurus\n >>> url = \"ftp://ftp.expasy.org/databases/cellosaurus/cellosaurus.txt\"\n >>> bytestream = urlopen(url)\n >>> textstream = TextIOWrapper(bytestream, \"UTF-8\")\n >>> # alternatively, use\n >>> # textstream = open(\"cellosaurus.txt\")\n >>> # if you downloaded the cellosaurus.txt file in advance.\n >>> records = cellosaurus.parse(textstream)\n >>> for record in records:\n ... if 'Homo sapiens' in record['OX'][0]:\n ... print(record['ID']) # doctest:+ELLIPSIS\n ...\n #15310-LN\n #W7079\n (L)PC6\n 0.5alpha\n ...\n\n\"\"\"\n\n\ndef parse(handle):\n \"\"\"Parse cell line records.\n\n This function is for parsing cell line files containing multiple\n records.\n\n Arguments:\n - handle - handle to the file.\n\n \"\"\"\n while True:\n record = __read(handle)\n if not record:\n break\n yield record\n\n\ndef read(handle):\n \"\"\"Read one cell line record.\n\n This function is for parsing cell line files containing\n exactly one record.\n\n Arguments:\n - handle - handle to the file.\n\n \"\"\"\n record = __read(handle)\n # We should have reached the end of the record by now\n remainder = handle.read()\n if remainder:\n raise ValueError(\"More than one cell line record found\")\n return record\n\n\nclass Record(dict):\n \"\"\"Holds information from an ExPASy Cellosaurus record as a Python dictionary.\n\n Each record contains the following keys:\n\n --------- --------------------------- ----------------------\n Line code Content Occurrence in an entry\n --------- --------------------------- ----------------------\n ID Identifier (cell line name) Once; starts an entry\n AC Accession (CVCL_xxxx) Once\n AS Secondary accession number(s) Optional; once\n SY Synonyms Optional; once\n DR Cross-references Optional; once or more\n RX References identifiers Optional: once or more\n WW Web pages Optional; once or more\n CC Comments Optional; once or more\n ST STR profile data Optional; once or more\n DI Diseases Optional; once or more\n OX Species of origin Once or more\n HI Hierarchy Optional; once or more\n OI Originate from same individual Optional; once or more\n SX Sex (gender) of cell Optional; once\n CA Category Once\n // Terminator Once; ends an entry\n\n \"\"\"\n\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n dict.__init__(self)\n self[\"ID\"] = \"\"\n self[\"AC\"] = \"\"\n self[\"AS\"] = \"\"\n self[\"SY\"] = \"\"\n self[\"DR\"] = []\n self[\"RX\"] = []\n self[\"WW\"] = []\n self[\"CC\"] = []\n self[\"ST\"] = []\n self[\"DI\"] = []\n self[\"OX\"] = []\n self[\"HI\"] = []\n self[\"OI\"] = []\n self[\"SX\"] = \"\"\n self[\"CA\"] = \"\"\n\n def __repr__(self):\n \"\"\"Return the canonical string representation of the Record object.\"\"\"\n if self[\"ID\"]:\n if self[\"AC\"]:\n return f\"{self.__class__.__name__} ({self['ID']}, {self['AC']})\"\n else:\n return f\"{self.__class__.__name__} ({self['ID']})\"\n else:\n return f\"{self.__class__.__name__} ( )\"\n\n def __str__(self):\n \"\"\"Return a readable string representation of the Record object.\"\"\"\n output = \"ID: \" + self[\"ID\"]\n output += \" AC: \" + self[\"AC\"]\n output += \" AS: \" + self[\"AS\"]\n output += \" SY: \" + self[\"SY\"]\n output += \" DR: \" + repr(self[\"DR\"])\n output += \" RX: \" + repr(self[\"RX\"])\n output += \" WW: \" + repr(self[\"WW\"])\n output += \" CC: \" + 
repr(self[\"CC\"])\n output += \" ST: \" + repr(self[\"ST\"])\n output += \" DI: \" + repr(self[\"DI\"])\n output += \" OX: \" + repr(self[\"OX\"])\n output += \" HI: \" + repr(self[\"HI\"])\n output += \" OI: \" + repr(self[\"OI\"])\n output += \" SX: \" + self[\"SX\"]\n output += \" CA: \" + self[\"CA\"]\n return output\n\n\n# Everything below is private\n\n\ndef __read(handle):\n record = None\n\n for line in handle:\n key, value = line[:2], line[5:].rstrip()\n if key == \"ID\":\n record = Record()\n record[\"ID\"] = value\n elif key in [\"AC\", \"AS\", \"SY\", \"SX\", \"CA\"]:\n record[key] += value\n elif key in [\n \"AC\",\n \"AS\",\n \"SY\",\n \"RX\",\n \"WW\",\n \"CC\",\n \"ST\",\n \"DI\",\n \"OX\",\n \"HI\",\n \"OI\",\n \"SX\",\n \"CA\",\n ]:\n record[key].append(value)\n elif key == \"DR\":\n k, v = value.split(\";\")\n record[\"DR\"].append((k.strip(), v.strip()))\n elif key == \"//\":\n if record:\n return record\n else:\n continue\n if record:\n raise ValueError(\"Unexpected end of stream\")\n\n\nif __name__ == \"__main__\":\n from Bio._utils import run_doctest\n\n run_doctest()\n", "path": "Bio/ExPASy/cellosaurus.py"}]}
| 3,295 | 975 |
gh_patches_debug_27608
|
rasdani/github-patches
|
git_diff
|
pypa__virtualenv-2324
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Virtualenv 20.14.0 bundles broken setuptools
**Issue**
Virtualenv 20.14.0 has a broken version of setuptools (61.0.0) bundled in. This causes broken package installations, most notably missing package data files.
**Environment**
Various
</issue>
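For orientation before the module itself, the snippet below restates the lookup that `get_embed_wheel()` (reproduced in the next section) performs: the wheel filename pinned in `BUNDLE_SUPPORT` is exactly what every newly seeded environment receives, and unknown interpreter versions fall back to the `MAX` entry. This is a self-contained illustration, not virtualenv code.

```python
# Self-contained sketch of the BUNDLE_SUPPORT lookup shown in the module below.
# The table here is trimmed to a single entry purely for illustration.
BUNDLE_SUPPORT = {
    "3.10": {"setuptools": "setuptools-61.0.0-py3-none-any.whl"},
}
MAX = "3.10"


def embed_wheel_name(distribution: str, for_py_version: str) -> str:
    # Interpreter versions missing from the table fall back to the MAX entry.
    return (BUNDLE_SUPPORT.get(for_py_version) or BUNDLE_SUPPORT[MAX])[distribution]


print(embed_wheel_name("setuptools", "3.10"))  # setuptools-61.0.0-py3-none-any.whl
print(embed_wheel_name("setuptools", "3.12"))  # same wheel, via the MAX fallback
```

Because the broken wheel is pinned by filename here, bumping those filenames (as the accepted fix later in this record does) is enough to change what new environments receive.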
<code>
[start of src/virtualenv/seed/wheels/embed/__init__.py]
1 from __future__ import absolute_import, unicode_literals
2
3 from virtualenv.seed.wheels.util import Wheel
4 from virtualenv.util.path import Path
5
6 BUNDLE_FOLDER = Path(__file__).absolute().parent
7 BUNDLE_SUPPORT = {
8 "3.11": {
9 "pip": "pip-22.0.4-py3-none-any.whl",
10 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
11 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
12 },
13 "3.10": {
14 "pip": "pip-22.0.4-py3-none-any.whl",
15 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
16 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
17 },
18 "3.9": {
19 "pip": "pip-22.0.4-py3-none-any.whl",
20 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
21 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
22 },
23 "3.8": {
24 "pip": "pip-22.0.4-py3-none-any.whl",
25 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
26 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
27 },
28 "3.7": {
29 "pip": "pip-22.0.4-py3-none-any.whl",
30 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
31 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
32 },
33 "3.6": {
34 "pip": "pip-21.3.1-py3-none-any.whl",
35 "setuptools": "setuptools-59.6.0-py3-none-any.whl",
36 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
37 },
38 "3.5": {
39 "pip": "pip-20.3.4-py2.py3-none-any.whl",
40 "setuptools": "setuptools-50.3.2-py3-none-any.whl",
41 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
42 },
43 "2.7": {
44 "pip": "pip-20.3.4-py2.py3-none-any.whl",
45 "setuptools": "setuptools-44.1.1-py2.py3-none-any.whl",
46 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
47 },
48 }
49 MAX = "3.11"
50
51
52 def get_embed_wheel(distribution, for_py_version):
53 path = BUNDLE_FOLDER / (BUNDLE_SUPPORT.get(for_py_version, {}) or BUNDLE_SUPPORT[MAX]).get(distribution)
54 return Wheel.from_path(path)
55
56
57 __all__ = (
58 "get_embed_wheel",
59 "BUNDLE_SUPPORT",
60 "MAX",
61 "BUNDLE_FOLDER",
62 )
63
[end of src/virtualenv/seed/wheels/embed/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/virtualenv/seed/wheels/embed/__init__.py b/src/virtualenv/seed/wheels/embed/__init__.py
--- a/src/virtualenv/seed/wheels/embed/__init__.py
+++ b/src/virtualenv/seed/wheels/embed/__init__.py
@@ -7,27 +7,27 @@
BUNDLE_SUPPORT = {
"3.11": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.10": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.9": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.8": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.7": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.6": {
|
{"golden_diff": "diff --git a/src/virtualenv/seed/wheels/embed/__init__.py b/src/virtualenv/seed/wheels/embed/__init__.py\n--- a/src/virtualenv/seed/wheels/embed/__init__.py\n+++ b/src/virtualenv/seed/wheels/embed/__init__.py\n@@ -7,27 +7,27 @@\n BUNDLE_SUPPORT = {\n \"3.11\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.10\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.9\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.8\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.7\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.6\": {\n", "issue": "Virtualenv 20.14.0 bundles broken setuptools\n**Issue**\r\n\r\nVirtualenv 20.14.0 has broken version of setuptools 61.0.0 bundled in. This causes broken installations of packages, especially missing package data files.\r\n\r\n**Environment**\r\n\r\nVarious\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom virtualenv.seed.wheels.util import Wheel\nfrom virtualenv.util.path import Path\n\nBUNDLE_FOLDER = Path(__file__).absolute().parent\nBUNDLE_SUPPORT = {\n \"3.11\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.10\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.9\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.8\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.7\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.6\": {\n \"pip\": \"pip-21.3.1-py3-none-any.whl\",\n \"setuptools\": \"setuptools-59.6.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.5\": {\n \"pip\": \"pip-20.3.4-py2.py3-none-any.whl\",\n \"setuptools\": \"setuptools-50.3.2-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"2.7\": {\n \"pip\": \"pip-20.3.4-py2.py3-none-any.whl\",\n \"setuptools\": \"setuptools-44.1.1-py2.py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n}\nMAX = \"3.11\"\n\n\ndef get_embed_wheel(distribution, for_py_version):\n path = BUNDLE_FOLDER / (BUNDLE_SUPPORT.get(for_py_version, {}) or BUNDLE_SUPPORT[MAX]).get(distribution)\n return 
Wheel.from_path(path)\n\n\n__all__ = (\n \"get_embed_wheel\",\n \"BUNDLE_SUPPORT\",\n \"MAX\",\n \"BUNDLE_FOLDER\",\n)\n", "path": "src/virtualenv/seed/wheels/embed/__init__.py"}]}
| 1,480 | 595 |
gh_patches_debug_26898
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-3042
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bitcoin legacy xpub with too many used addresses may fail to query balances after restart
## Problem Definition
If a user adds a legacy xpub that generates too many used addresses, rotki will properly add the xpub and all of its addresses. But then, after a restart, it will fail to query all balances again.
Error is:
```
Error at querying blockchain balances: bitcoin external API request for balances failed due to https://blockchain.info/multiaddr?active=XXXXXX
```
The failure is due to attempting to query all addresses from blockchain.info using the multiaddr endpoint, which seems to have an undocumented limit.
blockchain.info errors with a 502 if too many bitcoin addresses are sent.
## Task
Fix it by experimenting and figuring out a safe limit.
Then apply that limit where we query for multiple legacy bitcoin addresses.
</issue>
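A sketch of the batching the task calls for is below. The batch size of 80 matches the limit the eventual fix in this record settles on, but blockchain.info does not document it, so treat it as an assumption; the helper is illustrative and not part of the rotki codebase.

```python
from typing import Iterator, List


def chunked(addresses: List[str], size: int = 80) -> Iterator[List[str]]:
    """Yield fixed-size batches of addresses; 80 is an assumed safe limit."""
    for start in range(0, len(addresses), size):
        yield addresses[start:start + size]


accounts = [f"1PlaceholderAddress{i}" for i in range(200)]  # made-up addresses
for batch in chunked(accounts):
    params = "|".join(batch)
    # Each batch would be sent to f"https://blockchain.info/multiaddr?active={params}"
    # and the per-batch balances merged into a single mapping.
    print(len(batch))  # 80, 80, 40
```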
<code>
[start of rotkehlchen/chain/bitcoin/__init__.py]
1 from typing import Dict, List, Tuple
2
3 import requests
4
5 from rotkehlchen.errors import RemoteError, UnableToDecryptRemoteData
6 from rotkehlchen.fval import FVal
7 from rotkehlchen.typing import BTCAddress
8 from rotkehlchen.utils.misc import satoshis_to_btc
9 from rotkehlchen.utils.network import request_get_dict
10
11
12 def _have_bc1_accounts(accounts: List[BTCAddress]) -> bool:
13 return any(account.lower()[0:3] == 'bc1' for account in accounts)
14
15
16 def get_bitcoin_addresses_balances(accounts: List[BTCAddress]) -> Dict[BTCAddress, FVal]:
17 """Queries blockchain.info or blockstream for the balances of accounts
18
19 May raise:
20 - RemotError if there is a problem querying blockchain.info or blockstream
21 """
22 source = 'blockchain.info'
23 balances: Dict[BTCAddress, FVal] = {}
24 try:
25 if _have_bc1_accounts(accounts):
26 # if 1 account is bech32 we have to query blockstream. blockchaininfo won't work
27 source = 'blockstream'
28 balances = {}
29 for account in accounts:
30 url = f'https://blockstream.info/api/address/{account}'
31 response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)
32 stats = response_data['chain_stats']
33 balance = int(stats['funded_txo_sum']) - int(stats['spent_txo_sum'])
34 balances[account] = satoshis_to_btc(balance)
35 else:
36 params = '|'.join(accounts)
37 btc_resp = request_get_dict(
38 url=f'https://blockchain.info/multiaddr?active={params}',
39 handle_429=True,
40 # If we get a 429 then their docs suggest 10 seconds
41 # https://blockchain.info/q
42 backoff_in_seconds=10,
43 )
44 for entry in btc_resp['addresses']:
45 balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))
46 except (
47 requests.exceptions.RequestException,
48 UnableToDecryptRemoteData,
49 requests.exceptions.Timeout,
50 ) as e:
51 raise RemoteError(f'bitcoin external API request for balances failed due to {str(e)}') from e # noqa: E501
52 except KeyError as e:
53 raise RemoteError(
54 f'Malformed response when querying bitcoin blockchain via {source}.'
55 f'Did not find key {e}',
56 ) from e
57
58 return balances
59
60
61 def _check_blockstream_for_transactions(
62 accounts: List[BTCAddress],
63 ) -> Dict[BTCAddress, Tuple[bool, FVal]]:
64 """May raise connection errors or KeyError"""
65 have_transactions = {}
66 for account in accounts:
67 url = f'https://blockstream.info/api/address/{account}'
68 response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)
69 stats = response_data['chain_stats']
70 balance = satoshis_to_btc(int(stats['funded_txo_sum']) - int(stats['spent_txo_sum']))
71 have_txs = stats['tx_count'] != 0
72 have_transactions[account] = (have_txs, balance)
73
74 return have_transactions
75
76
77 def _check_blockchaininfo_for_transactions(
78 accounts: List[BTCAddress],
79 ) -> Dict[BTCAddress, Tuple[bool, FVal]]:
80 """May raise RemotError or KeyError"""
81 have_transactions = {}
82 params = '|'.join(accounts)
83 btc_resp = request_get_dict(
84 url=f'https://blockchain.info/multiaddr?active={params}',
85 handle_429=True,
86 # If we get a 429 then their docs suggest 10 seconds
87 # https://blockchain.infoq/
88 backoff_in_seconds=15,
89 )
90 for entry in btc_resp['addresses']:
91 balance = satoshis_to_btc(entry['final_balance'])
92 have_transactions[entry['address']] = (entry['n_tx'] != 0, balance)
93
94 return have_transactions
95
96
97 def have_bitcoin_transactions(accounts: List[BTCAddress]) -> Dict[BTCAddress, Tuple[bool, FVal]]:
98 """
99 Takes a list of addresses and returns a mapping of which addresses have had transactions
100 and also their current balance
101
102 May raise:
103 - RemoteError if any of the queried websites fail to be queried
104 """
105 try:
106 if _have_bc1_accounts(accounts):
107 source = 'blockstream'
108 have_transactions = _check_blockstream_for_transactions(accounts)
109 else:
110 source = 'blockchain.info'
111 have_transactions = _check_blockchaininfo_for_transactions(accounts)
112 except (
113 requests.exceptions.RequestException,
114 UnableToDecryptRemoteData,
115 requests.exceptions.Timeout,
116 ) as e:
117 raise RemoteError(f'bitcoin external API request for transactions failed due to {str(e)}') from e # noqa: E501
118 except KeyError as e:
119 raise RemoteError(
120 f'Malformed response when querying bitcoin blockchain via {source}.'
121 f'Did not find key {str(e)}',
122 ) from e
123
124 return have_transactions
125
[end of rotkehlchen/chain/bitcoin/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rotkehlchen/chain/bitcoin/__init__.py b/rotkehlchen/chain/bitcoin/__init__.py
--- a/rotkehlchen/chain/bitcoin/__init__.py
+++ b/rotkehlchen/chain/bitcoin/__init__.py
@@ -33,16 +33,20 @@
balance = int(stats['funded_txo_sum']) - int(stats['spent_txo_sum'])
balances[account] = satoshis_to_btc(balance)
else:
- params = '|'.join(accounts)
- btc_resp = request_get_dict(
- url=f'https://blockchain.info/multiaddr?active={params}',
- handle_429=True,
- # If we get a 429 then their docs suggest 10 seconds
- # https://blockchain.info/q
- backoff_in_seconds=10,
- )
- for entry in btc_resp['addresses']:
- balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))
+ # split the list of accounts into sublists of 80 addresses per list to overcome:
+ # https://github.com/rotki/rotki/issues/3037
+ accounts_chunks = [accounts[x:x + 80] for x in range(0, len(accounts), 80)]
+ for accounts_chunk in accounts_chunks:
+ params = '|'.join(accounts_chunk)
+ btc_resp = request_get_dict(
+ url=f'https://blockchain.info/multiaddr?active={params}',
+ handle_429=True,
+ # If we get a 429 then their docs suggest 10 seconds
+ # https://blockchain.info/q
+ backoff_in_seconds=10,
+ )
+ for entry in btc_resp['addresses']:
+ balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))
except (
requests.exceptions.RequestException,
UnableToDecryptRemoteData,
|
{"golden_diff": "diff --git a/rotkehlchen/chain/bitcoin/__init__.py b/rotkehlchen/chain/bitcoin/__init__.py\n--- a/rotkehlchen/chain/bitcoin/__init__.py\n+++ b/rotkehlchen/chain/bitcoin/__init__.py\n@@ -33,16 +33,20 @@\n balance = int(stats['funded_txo_sum']) - int(stats['spent_txo_sum'])\n balances[account] = satoshis_to_btc(balance)\n else:\n- params = '|'.join(accounts)\n- btc_resp = request_get_dict(\n- url=f'https://blockchain.info/multiaddr?active={params}',\n- handle_429=True,\n- # If we get a 429 then their docs suggest 10 seconds\n- # https://blockchain.info/q\n- backoff_in_seconds=10,\n- )\n- for entry in btc_resp['addresses']:\n- balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))\n+ # split the list of accounts into sublists of 80 addresses per list to overcome:\n+ # https://github.com/rotki/rotki/issues/3037\n+ accounts_chunks = [accounts[x:x + 80] for x in range(0, len(accounts), 80)]\n+ for accounts_chunk in accounts_chunks:\n+ params = '|'.join(accounts_chunk)\n+ btc_resp = request_get_dict(\n+ url=f'https://blockchain.info/multiaddr?active={params}',\n+ handle_429=True,\n+ # If we get a 429 then their docs suggest 10 seconds\n+ # https://blockchain.info/q\n+ backoff_in_seconds=10,\n+ )\n+ for entry in btc_resp['addresses']:\n+ balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))\n except (\n requests.exceptions.RequestException,\n UnableToDecryptRemoteData,\n", "issue": "Bitcoin legacy xpub with too many used address may fail to query balances after restart\n## Problem Definition\r\n\r\nIf a user adds a legacy xpub that generates too many used addresses rotki will properly add the xpub and all its addresses. But then after a restart it will fail to query all balances again.\r\n\r\nError is:\r\n\r\n```\r\nError at querying blockchain balances: bitcoin external API request for balances failed due to https://blockchain.info/multiaddr?active=XXXXXX\r\n```\r\n\r\nThe failure is due to attempting to query all addresses from blockchain.info using the multiaddr endpoint which seems to have a non-documented limit.\r\n\r\nblockchain.info errors with a 502 if too many bitcoin addresses are sent.\r\n\r\n## Task\r\n\r\nFix it by experimenting and figuring out a safe limit.\r\nThen apply that limit where we query for multiple legacy bitcoin addresses.\n", "before_files": [{"content": "from typing import Dict, List, Tuple\n\nimport requests\n\nfrom rotkehlchen.errors import RemoteError, UnableToDecryptRemoteData\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.typing import BTCAddress\nfrom rotkehlchen.utils.misc import satoshis_to_btc\nfrom rotkehlchen.utils.network import request_get_dict\n\n\ndef _have_bc1_accounts(accounts: List[BTCAddress]) -> bool:\n return any(account.lower()[0:3] == 'bc1' for account in accounts)\n\n\ndef get_bitcoin_addresses_balances(accounts: List[BTCAddress]) -> Dict[BTCAddress, FVal]:\n \"\"\"Queries blockchain.info or blockstream for the balances of accounts\n\n May raise:\n - RemotError if there is a problem querying blockchain.info or blockstream\n \"\"\"\n source = 'blockchain.info'\n balances: Dict[BTCAddress, FVal] = {}\n try:\n if _have_bc1_accounts(accounts):\n # if 1 account is bech32 we have to query blockstream. 
blockchaininfo won't work\n source = 'blockstream'\n balances = {}\n for account in accounts:\n url = f'https://blockstream.info/api/address/{account}'\n response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)\n stats = response_data['chain_stats']\n balance = int(stats['funded_txo_sum']) - int(stats['spent_txo_sum'])\n balances[account] = satoshis_to_btc(balance)\n else:\n params = '|'.join(accounts)\n btc_resp = request_get_dict(\n url=f'https://blockchain.info/multiaddr?active={params}',\n handle_429=True,\n # If we get a 429 then their docs suggest 10 seconds\n # https://blockchain.info/q\n backoff_in_seconds=10,\n )\n for entry in btc_resp['addresses']:\n balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))\n except (\n requests.exceptions.RequestException,\n UnableToDecryptRemoteData,\n requests.exceptions.Timeout,\n ) as e:\n raise RemoteError(f'bitcoin external API request for balances failed due to {str(e)}') from e # noqa: E501\n except KeyError as e:\n raise RemoteError(\n f'Malformed response when querying bitcoin blockchain via {source}.'\n f'Did not find key {e}',\n ) from e\n\n return balances\n\n\ndef _check_blockstream_for_transactions(\n accounts: List[BTCAddress],\n) -> Dict[BTCAddress, Tuple[bool, FVal]]:\n \"\"\"May raise connection errors or KeyError\"\"\"\n have_transactions = {}\n for account in accounts:\n url = f'https://blockstream.info/api/address/{account}'\n response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)\n stats = response_data['chain_stats']\n balance = satoshis_to_btc(int(stats['funded_txo_sum']) - int(stats['spent_txo_sum']))\n have_txs = stats['tx_count'] != 0\n have_transactions[account] = (have_txs, balance)\n\n return have_transactions\n\n\ndef _check_blockchaininfo_for_transactions(\n accounts: List[BTCAddress],\n) -> Dict[BTCAddress, Tuple[bool, FVal]]:\n \"\"\"May raise RemotError or KeyError\"\"\"\n have_transactions = {}\n params = '|'.join(accounts)\n btc_resp = request_get_dict(\n url=f'https://blockchain.info/multiaddr?active={params}',\n handle_429=True,\n # If we get a 429 then their docs suggest 10 seconds\n # https://blockchain.infoq/\n backoff_in_seconds=15,\n )\n for entry in btc_resp['addresses']:\n balance = satoshis_to_btc(entry['final_balance'])\n have_transactions[entry['address']] = (entry['n_tx'] != 0, balance)\n\n return have_transactions\n\n\ndef have_bitcoin_transactions(accounts: List[BTCAddress]) -> Dict[BTCAddress, Tuple[bool, FVal]]:\n \"\"\"\n Takes a list of addresses and returns a mapping of which addresses have had transactions\n and also their current balance\n\n May raise:\n - RemoteError if any of the queried websites fail to be queried\n \"\"\"\n try:\n if _have_bc1_accounts(accounts):\n source = 'blockstream'\n have_transactions = _check_blockstream_for_transactions(accounts)\n else:\n source = 'blockchain.info'\n have_transactions = _check_blockchaininfo_for_transactions(accounts)\n except (\n requests.exceptions.RequestException,\n UnableToDecryptRemoteData,\n requests.exceptions.Timeout,\n ) as e:\n raise RemoteError(f'bitcoin external API request for transactions failed due to {str(e)}') from e # noqa: E501\n except KeyError as e:\n raise RemoteError(\n f'Malformed response when querying bitcoin blockchain via {source}.'\n f'Did not find key {str(e)}',\n ) from e\n\n return have_transactions\n", "path": "rotkehlchen/chain/bitcoin/__init__.py"}]}
| 2,111 | 449 |
gh_patches_debug_11893
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-4060
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
--output text results in "None" while --output json gives correct result on cloudformation changesets
Hi,
the following gives "CREATE_COMPLETE" which is correct
`aws cloudformation describe-change-set --change-set-name website-production-alb-a-1-0-7 --stack-name website-production-alb-a`
while this gives "None" which is incorrect
`aws cloudformation describe-change-set --change-set-name website-production-alb-a-1-0-7 --stack-name website-production-alb-a --output text`
`aws --version`
`aws-cli/1.16.110 Python/3.7.2 Darwin/18.2.0 botocore/1.12.100`
A colleague has aws-cli/1.16.1, which also gives the correct result with text output.
</issue>
<code>
[start of awscli/formatter.py]
1 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6
7 # http://aws.amazon.com/apache2.0/
8
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import logging
14
15 from botocore.compat import json
16
17 from botocore.utils import set_value_from_jmespath
18 from botocore.paginate import PageIterator
19
20 from awscli.table import MultiTable, Styler, ColorizedStyler
21 from awscli import text
22 from awscli import compat
23 from awscli.utils import json_encoder
24
25
26 LOG = logging.getLogger(__name__)
27
28
29 def is_response_paginated(response):
30 return isinstance(response, PageIterator)
31
32
33 class Formatter(object):
34 def __init__(self, args):
35 self._args = args
36
37 def _remove_request_id(self, response_data):
38 # We only want to display the ResponseMetadata (which includes
39 # the request id) if there is an error in the response.
40 # Since all errors have been unified under the Errors key,
41 # this should be a reasonable way to filter.
42 if 'Errors' not in response_data:
43 if 'ResponseMetadata' in response_data:
44 if 'RequestId' in response_data['ResponseMetadata']:
45 request_id = response_data['ResponseMetadata']['RequestId']
46 LOG.debug('RequestId: %s', request_id)
47 del response_data['ResponseMetadata']
48
49 def _get_default_stream(self):
50 return compat.get_stdout_text_writer()
51
52 def _flush_stream(self, stream):
53 try:
54 stream.flush()
55 except IOError:
56 pass
57
58
59 class FullyBufferedFormatter(Formatter):
60 def __call__(self, command_name, response, stream=None):
61 if stream is None:
62 # Retrieve stdout on invocation instead of at import time
63 # so that if anything wraps stdout we'll pick up those changes
64 # (specifically colorama on windows wraps stdout).
65 stream = self._get_default_stream()
66 # I think the interfaces between non-paginated
67 # and paginated responses can still be cleaned up.
68 if is_response_paginated(response):
69 response_data = response.build_full_result()
70 else:
71 response_data = response
72 self._remove_request_id(response_data)
73 if self._args.query is not None:
74 response_data = self._args.query.search(response_data)
75 try:
76 self._format_response(command_name, response_data, stream)
77 except IOError as e:
78 # If the reading end of our stdout stream has closed the file
79 # we can just exit.
80 pass
81 finally:
82 # flush is needed to avoid the "close failed in file object
83 # destructor" in python2.x (see http://bugs.python.org/issue11380).
84 self._flush_stream(stream)
85
86
87 class JSONFormatter(FullyBufferedFormatter):
88
89 def _format_response(self, command_name, response, stream):
90 # For operations that have no response body (e.g. s3 put-object)
91 # the response will be an empty string. We don't want to print
92 # that out to the user but other "falsey" values like an empty
93 # dictionary should be printed.
94 if response != {}:
95 json.dump(response, stream, indent=4, default=json_encoder,
96 ensure_ascii=False)
97 stream.write('\n')
98
99
100 class TableFormatter(FullyBufferedFormatter):
101 """Pretty print a table from a given response.
102
103 The table formatter is able to take any generic response
104 and generate a pretty printed table. It does this without
105 using the output definition from the model.
106
107 """
108 def __init__(self, args, table=None):
109 super(TableFormatter, self).__init__(args)
110 if args.color == 'auto':
111 self.table = MultiTable(initial_section=False,
112 column_separator='|')
113 elif args.color == 'off':
114 styler = Styler()
115 self.table = MultiTable(initial_section=False,
116 column_separator='|', styler=styler)
117 elif args.color == 'on':
118 styler = ColorizedStyler()
119 self.table = MultiTable(initial_section=False,
120 column_separator='|', styler=styler)
121 else:
122 raise ValueError("Unknown color option: %s" % args.color)
123
124 def _format_response(self, command_name, response, stream):
125 if self._build_table(command_name, response):
126 try:
127 self.table.render(stream)
128 except IOError:
129 # If they're piping stdout to another process which exits before
130 # we're done writing all of our output, we'll get an error about a
131 # closed pipe which we can safely ignore.
132 pass
133
134 def _build_table(self, title, current, indent_level=0):
135 if not current:
136 return False
137 if title is not None:
138 self.table.new_section(title, indent_level=indent_level)
139 if isinstance(current, list):
140 if isinstance(current[0], dict):
141 self._build_sub_table_from_list(current, indent_level, title)
142 else:
143 for item in current:
144 if self._scalar_type(item):
145 self.table.add_row([item])
146 elif all(self._scalar_type(el) for el in item):
147 self.table.add_row(item)
148 else:
149 self._build_table(title=None, current=item)
150 if isinstance(current, dict):
151 # Render a single row section with keys as header
152 # and the row as the values, unless the value
153 # is a list.
154 self._build_sub_table_from_dict(current, indent_level)
155 return True
156
157 def _build_sub_table_from_dict(self, current, indent_level):
158 # Render a single row section with keys as header
159 # and the row as the values, unless the value
160 # is a list.
161 headers, more = self._group_scalar_keys(current)
162 if len(headers) == 1:
163 # Special casing if a dict has a single scalar key/value pair.
164 self.table.add_row([headers[0], current[headers[0]]])
165 elif headers:
166 self.table.add_row_header(headers)
167 self.table.add_row([current[k] for k in headers])
168 for remaining in more:
169 self._build_table(remaining, current[remaining],
170 indent_level=indent_level + 1)
171
172 def _build_sub_table_from_list(self, current, indent_level, title):
173 headers, more = self._group_scalar_keys_from_list(current)
174 self.table.add_row_header(headers)
175 first = True
176 for element in current:
177 if not first and more:
178 self.table.new_section(title,
179 indent_level=indent_level)
180 self.table.add_row_header(headers)
181 first = False
182 # Use .get() to account for the fact that sometimes an element
183 # may not have all the keys from the header.
184 self.table.add_row([element.get(header, '') for header in headers])
185 for remaining in more:
186 # Some of the non scalar attributes may not necessarily
187 # be in every single element of the list, so we need to
188 # check this condition before recursing.
189 if remaining in element:
190 self._build_table(remaining, element[remaining],
191 indent_level=indent_level + 1)
192
193 def _scalar_type(self, element):
194 return not isinstance(element, (list, dict))
195
196 def _group_scalar_keys_from_list(self, list_of_dicts):
197 # We want to make sure we catch all the keys in the list of dicts.
198 # Most of the time each list element has the same keys, but sometimes
199 # a list element will have keys not defined in other elements.
200 headers = set()
201 more = set()
202 for item in list_of_dicts:
203 current_headers, current_more = self._group_scalar_keys(item)
204 headers.update(current_headers)
205 more.update(current_more)
206 headers = list(sorted(headers))
207 more = list(sorted(more))
208 return headers, more
209
210 def _group_scalar_keys(self, current):
211 # Given a dict, separate the keys into those whose values are
212 # scalar, and those whose values aren't. Return two lists,
213 # one is the scalar value keys, the second is the remaining keys.
214 more = []
215 headers = []
216 for element in current:
217 if self._scalar_type(current[element]):
218 headers.append(element)
219 else:
220 more.append(element)
221 headers.sort()
222 more.sort()
223 return headers, more
224
225
226 class TextFormatter(Formatter):
227
228 def __call__(self, command_name, response, stream=None):
229 if stream is None:
230 stream = self._get_default_stream()
231 try:
232 if is_response_paginated(response):
233 result_keys = response.result_keys
234 for page in response:
235 current = {}
236 for result_key in result_keys:
237 data = result_key.search(page)
238 set_value_from_jmespath(
239 current,
240 result_key.expression,
241 data
242 )
243 self._format_response(current, stream)
244 if response.resume_token:
245 # Tell the user about the next token so they can continue
246 # if they want.
247 self._format_response(
248 {'NextToken': {'NextToken': response.resume_token}},
249 stream)
250 else:
251 self._remove_request_id(response)
252 self._format_response(response, stream)
253 finally:
254 # flush is needed to avoid the "close failed in file object
255 # destructor" in python2.x (see http://bugs.python.org/issue11380).
256 self._flush_stream(stream)
257
258 def _format_response(self, response, stream):
259 if self._args.query is not None:
260 expression = self._args.query
261 response = expression.search(response)
262 text.format_text(response, stream)
263
264
265 def get_formatter(format_type, args):
266 if format_type == 'json':
267 return JSONFormatter(args)
268 elif format_type == 'text':
269 return TextFormatter(args)
270 elif format_type == 'table':
271 return TableFormatter(args)
272 raise ValueError("Unknown output type: %s" % format_type)
273
[end of awscli/formatter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/awscli/formatter.py b/awscli/formatter.py
--- a/awscli/formatter.py
+++ b/awscli/formatter.py
@@ -231,8 +231,12 @@
try:
if is_response_paginated(response):
result_keys = response.result_keys
- for page in response:
- current = {}
+ for i, page in enumerate(response):
+ if i > 0:
+ current = {}
+ else:
+ current = response.non_aggregate_part
+
for result_key in result_keys:
data = result_key.search(page)
set_value_from_jmespath(
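The hunk above is easier to follow with the paginator behaviour spelled out: botocore's page iterator keeps keys that are not aggregated across pages (such as the `Status` value the report above is about) in `non_aggregate_part`, so the text formatter has to merge that part into the first page only. The sketch below illustrates that control flow with a fake paginator; it is not the actual awscli source.

```python
# Illustration only: merge non-aggregated keys into page 0, as the fix above does.
class FakePaginated:
    non_aggregate_part = {"Status": "CREATE_COMPLETE"}
    pages = [{"Changes": ["a"]}, {"Changes": ["b"]}]


def build_pages(paginated):
    for i, page in enumerate(paginated.pages):
        current = dict(paginated.non_aggregate_part) if i == 0 else {}
        current.update(page)  # stand-in for the per-result-key jmespath merge
        yield current


print(list(build_pages(FakePaginated())))
# [{'Status': 'CREATE_COMPLETE', 'Changes': ['a']}, {'Changes': ['b']}]
```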
|
{"golden_diff": "diff --git a/awscli/formatter.py b/awscli/formatter.py\n--- a/awscli/formatter.py\n+++ b/awscli/formatter.py\n@@ -231,8 +231,12 @@\n try:\n if is_response_paginated(response):\n result_keys = response.result_keys\n- for page in response:\n- current = {}\n+ for i, page in enumerate(response):\n+ if i > 0:\n+ current = {}\n+ else:\n+ current = response.non_aggregate_part\n+\n for result_key in result_keys:\n data = result_key.search(page)\n set_value_from_jmespath(\n", "issue": "--output text results in \"None\" while --output json gives correct result on cloudformation changesets\nHi,\r\n\r\nthe following gives \"CREATE_COMPLETE\" which is correct\r\n\r\n`aws cloudformation describe-change-set --change-set-name website-production-alb-a-1-0-7 --stack-name website-production-alb-a`\r\n\r\nwhile this gives \"None\" which is incorrect\r\n\r\n`aws cloudformation describe-change-set --change-set-name website-production-alb-a-1-0-7 --stack-name website-production-alb-a --output text`\r\n\r\n`aws --version`\r\n`aws-cli/1.16.110 Python/3.7.2 Darwin/18.2.0 botocore/1.12.100`\r\n\r\ncolleague has aws-cli/1.16.1 which also gives correct result on text.\r\n\n", "before_files": [{"content": "# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n\n# http://aws.amazon.com/apache2.0/\n\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport logging\n\nfrom botocore.compat import json\n\nfrom botocore.utils import set_value_from_jmespath\nfrom botocore.paginate import PageIterator\n\nfrom awscli.table import MultiTable, Styler, ColorizedStyler\nfrom awscli import text\nfrom awscli import compat\nfrom awscli.utils import json_encoder\n\n\nLOG = logging.getLogger(__name__)\n\n\ndef is_response_paginated(response):\n return isinstance(response, PageIterator)\n\n\nclass Formatter(object):\n def __init__(self, args):\n self._args = args\n\n def _remove_request_id(self, response_data):\n # We only want to display the ResponseMetadata (which includes\n # the request id) if there is an error in the response.\n # Since all errors have been unified under the Errors key,\n # this should be a reasonable way to filter.\n if 'Errors' not in response_data:\n if 'ResponseMetadata' in response_data:\n if 'RequestId' in response_data['ResponseMetadata']:\n request_id = response_data['ResponseMetadata']['RequestId']\n LOG.debug('RequestId: %s', request_id)\n del response_data['ResponseMetadata']\n\n def _get_default_stream(self):\n return compat.get_stdout_text_writer()\n\n def _flush_stream(self, stream):\n try:\n stream.flush()\n except IOError:\n pass\n\n\nclass FullyBufferedFormatter(Formatter):\n def __call__(self, command_name, response, stream=None):\n if stream is None:\n # Retrieve stdout on invocation instead of at import time\n # so that if anything wraps stdout we'll pick up those changes\n # (specifically colorama on windows wraps stdout).\n stream = self._get_default_stream()\n # I think the interfaces between non-paginated\n # and paginated responses can still be cleaned up.\n if is_response_paginated(response):\n response_data = response.build_full_result()\n 
else:\n response_data = response\n self._remove_request_id(response_data)\n if self._args.query is not None:\n response_data = self._args.query.search(response_data)\n try:\n self._format_response(command_name, response_data, stream)\n except IOError as e:\n # If the reading end of our stdout stream has closed the file\n # we can just exit.\n pass\n finally:\n # flush is needed to avoid the \"close failed in file object\n # destructor\" in python2.x (see http://bugs.python.org/issue11380).\n self._flush_stream(stream)\n\n\nclass JSONFormatter(FullyBufferedFormatter):\n\n def _format_response(self, command_name, response, stream):\n # For operations that have no response body (e.g. s3 put-object)\n # the response will be an empty string. We don't want to print\n # that out to the user but other \"falsey\" values like an empty\n # dictionary should be printed.\n if response != {}:\n json.dump(response, stream, indent=4, default=json_encoder,\n ensure_ascii=False)\n stream.write('\\n')\n\n\nclass TableFormatter(FullyBufferedFormatter):\n \"\"\"Pretty print a table from a given response.\n\n The table formatter is able to take any generic response\n and generate a pretty printed table. It does this without\n using the output definition from the model.\n\n \"\"\"\n def __init__(self, args, table=None):\n super(TableFormatter, self).__init__(args)\n if args.color == 'auto':\n self.table = MultiTable(initial_section=False,\n column_separator='|')\n elif args.color == 'off':\n styler = Styler()\n self.table = MultiTable(initial_section=False,\n column_separator='|', styler=styler)\n elif args.color == 'on':\n styler = ColorizedStyler()\n self.table = MultiTable(initial_section=False,\n column_separator='|', styler=styler)\n else:\n raise ValueError(\"Unknown color option: %s\" % args.color)\n\n def _format_response(self, command_name, response, stream):\n if self._build_table(command_name, response):\n try:\n self.table.render(stream)\n except IOError:\n # If they're piping stdout to another process which exits before\n # we're done writing all of our output, we'll get an error about a\n # closed pipe which we can safely ignore.\n pass\n\n def _build_table(self, title, current, indent_level=0):\n if not current:\n return False\n if title is not None:\n self.table.new_section(title, indent_level=indent_level)\n if isinstance(current, list):\n if isinstance(current[0], dict):\n self._build_sub_table_from_list(current, indent_level, title)\n else:\n for item in current:\n if self._scalar_type(item):\n self.table.add_row([item])\n elif all(self._scalar_type(el) for el in item):\n self.table.add_row(item)\n else:\n self._build_table(title=None, current=item)\n if isinstance(current, dict):\n # Render a single row section with keys as header\n # and the row as the values, unless the value\n # is a list.\n self._build_sub_table_from_dict(current, indent_level)\n return True\n\n def _build_sub_table_from_dict(self, current, indent_level):\n # Render a single row section with keys as header\n # and the row as the values, unless the value\n # is a list.\n headers, more = self._group_scalar_keys(current)\n if len(headers) == 1:\n # Special casing if a dict has a single scalar key/value pair.\n self.table.add_row([headers[0], current[headers[0]]])\n elif headers:\n self.table.add_row_header(headers)\n self.table.add_row([current[k] for k in headers])\n for remaining in more:\n self._build_table(remaining, current[remaining],\n indent_level=indent_level + 1)\n\n def _build_sub_table_from_list(self, current, 
indent_level, title):\n headers, more = self._group_scalar_keys_from_list(current)\n self.table.add_row_header(headers)\n first = True\n for element in current:\n if not first and more:\n self.table.new_section(title,\n indent_level=indent_level)\n self.table.add_row_header(headers)\n first = False\n # Use .get() to account for the fact that sometimes an element\n # may not have all the keys from the header.\n self.table.add_row([element.get(header, '') for header in headers])\n for remaining in more:\n # Some of the non scalar attributes may not necessarily\n # be in every single element of the list, so we need to\n # check this condition before recursing.\n if remaining in element:\n self._build_table(remaining, element[remaining],\n indent_level=indent_level + 1)\n\n def _scalar_type(self, element):\n return not isinstance(element, (list, dict))\n\n def _group_scalar_keys_from_list(self, list_of_dicts):\n # We want to make sure we catch all the keys in the list of dicts.\n # Most of the time each list element has the same keys, but sometimes\n # a list element will have keys not defined in other elements.\n headers = set()\n more = set()\n for item in list_of_dicts:\n current_headers, current_more = self._group_scalar_keys(item)\n headers.update(current_headers)\n more.update(current_more)\n headers = list(sorted(headers))\n more = list(sorted(more))\n return headers, more\n\n def _group_scalar_keys(self, current):\n # Given a dict, separate the keys into those whose values are\n # scalar, and those whose values aren't. Return two lists,\n # one is the scalar value keys, the second is the remaining keys.\n more = []\n headers = []\n for element in current:\n if self._scalar_type(current[element]):\n headers.append(element)\n else:\n more.append(element)\n headers.sort()\n more.sort()\n return headers, more\n\n\nclass TextFormatter(Formatter):\n\n def __call__(self, command_name, response, stream=None):\n if stream is None:\n stream = self._get_default_stream()\n try:\n if is_response_paginated(response):\n result_keys = response.result_keys\n for page in response:\n current = {}\n for result_key in result_keys:\n data = result_key.search(page)\n set_value_from_jmespath(\n current,\n result_key.expression,\n data\n )\n self._format_response(current, stream)\n if response.resume_token:\n # Tell the user about the next token so they can continue\n # if they want.\n self._format_response(\n {'NextToken': {'NextToken': response.resume_token}},\n stream)\n else:\n self._remove_request_id(response)\n self._format_response(response, stream)\n finally:\n # flush is needed to avoid the \"close failed in file object\n # destructor\" in python2.x (see http://bugs.python.org/issue11380).\n self._flush_stream(stream)\n\n def _format_response(self, response, stream):\n if self._args.query is not None:\n expression = self._args.query\n response = expression.search(response)\n text.format_text(response, stream)\n\n\ndef get_formatter(format_type, args):\n if format_type == 'json':\n return JSONFormatter(args)\n elif format_type == 'text':\n return TextFormatter(args)\n elif format_type == 'table':\n return TableFormatter(args)\n raise ValueError(\"Unknown output type: %s\" % format_type)\n", "path": "awscli/formatter.py"}]}
| 3,666 | 140 |
gh_patches_debug_21141
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-7505
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
matplotlib import issue: ImportError DLL load failed while importing _path...
Hey,
From searching around, matplotlib import errors come up here and there; the usual fix is to downgrade the matplotlib version or to change the hook-matplotlib.py file. Neither resolved the issue.
The matplotlib import is to blame, because running the same `pyinstaller --onefile Script.py` command with the matplotlib import commented out works as expected.
Otherwise, opening the newly created exe file prints these few lines before exiting (excuse me for not copying everything from the screenshot I took moments before it closed):

The error:
```
ImportError: DLL load failed while importing _path: The specific module could not be found.
[5780] Failed to execute script 'Script' due to unhandled exeption!
```
During the pyinstaller run, the WARNINGS regarding matplotlib packages:
```
2149 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\backends\_backend_agg.cp39-win_amd64.pyd
2189 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\_path.cp39-win_amd64.pyd
2209 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\backends\_tkagg.cp39-win_amd64.pyd
2239 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\_tri.cp39-win_amd64.pyd
2249 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\_qhull.cp39-win_amd64.pyd
2259 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\ft2font.cp39-win_amd64.pyd
```
A thing to note is that my Python is the Anaconda distribution, so I also get this warning:
`WARNING: Assuming this is not an Anaconda environment or an additional venv/pipenv/... environment manager is being used on top, because the conda-meta folder C:\Users\...\venv\conda-meta does not exist.`
But the warning also comes up with the matplotlib import commented out, and everything then works as expected, so I doubt it has anything to do with the issue.
I tried:
- Reinstalling matplotlib and pyinstaller
- I'm using the Anaconda Prompt; otherwise it doesn't even get to creating the Script.exe file.
- Installing the Microsoft Visual C++ 2015-2022 Redistributable.
- Changing hook-matplotlib.py in the hooks folder (it is already set correctly; the suggestion is old).
Info:
- Windows 10 Enterprise 64-bit
- python --version: 'Python 3.9.7'
- pyinstaller --version: '5.9.0'
- matplotlib.__version__: '3.7.1'
</issue>
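One quick diagnostic for the environment being frozen: the mangled `msvcp140-<hash>.dll` names in the warnings above come from a `matplotlib.libs` directory that delvewheel-built matplotlib wheels place next to the package in site-packages. The snippet below only checks whether that directory is present; the layout assumption is the same one the accepted hook change later in this record relies on.

```python
# Diagnostic sketch: list the delvewheel-bundled DLLs shipped with matplotlib.
import pathlib

import matplotlib

site_packages = pathlib.Path(matplotlib.__file__).resolve().parent.parent
libs_dir = site_packages / "matplotlib.libs"
if libs_dir.is_dir():
    print(sorted(p.name for p in libs_dir.glob("*.dll")))
else:
    print("no matplotlib.libs directory found at", libs_dir)
```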
<code>
[start of PyInstaller/hooks/hook-matplotlib.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2013-2023, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 from PyInstaller import isolated
13
14
15 @isolated.decorate
16 def mpl_data_dir():
17 import matplotlib
18 return matplotlib.get_data_path()
19
20
21 datas = [
22 (mpl_data_dir(), "matplotlib/mpl-data"),
23 ]
24
[end of PyInstaller/hooks/hook-matplotlib.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/PyInstaller/hooks/hook-matplotlib.py b/PyInstaller/hooks/hook-matplotlib.py
--- a/PyInstaller/hooks/hook-matplotlib.py
+++ b/PyInstaller/hooks/hook-matplotlib.py
@@ -10,6 +10,8 @@
#-----------------------------------------------------------------------------
from PyInstaller import isolated
+from PyInstaller import compat
+from PyInstaller.utils import hooks as hookutils
@isolated.decorate
@@ -21,3 +23,16 @@
datas = [
(mpl_data_dir(), "matplotlib/mpl-data"),
]
+
+binaries = []
+
+# Windows PyPI wheels for `matplotlib` >= 3.7.0 use `delvewheel`.
+# In addition to DLLs from `matplotlib.libs` directory, which should be picked up automatically by dependency analysis
+# in contemporary PyInstaller versions, we also need to collect the load-order file. This used to be required for
+# python <= 3.7 (that lacked `os.add_dll_directory`), but is also needed for Anaconda python 3.8 and 3.9, where
+# `delvewheel` falls back to load-order file codepath due to Anaconda breaking `os.add_dll_directory` implementation.
+if compat.is_win and hookutils.is_module_satisfies('matplotlib >= 3.7.0'):
+ delvewheel_datas, delvewheel_binaries = hookutils.collect_delvewheel_libs_directory('matplotlib')
+
+ datas += delvewheel_datas
+ binaries += delvewheel_binaries
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-matplotlib.py b/PyInstaller/hooks/hook-matplotlib.py\n--- a/PyInstaller/hooks/hook-matplotlib.py\n+++ b/PyInstaller/hooks/hook-matplotlib.py\n@@ -10,6 +10,8 @@\n #-----------------------------------------------------------------------------\n \n from PyInstaller import isolated\n+from PyInstaller import compat\n+from PyInstaller.utils import hooks as hookutils\n \n \n @isolated.decorate\n@@ -21,3 +23,16 @@\n datas = [\n (mpl_data_dir(), \"matplotlib/mpl-data\"),\n ]\n+\n+binaries = []\n+\n+# Windows PyPI wheels for `matplotlib` >= 3.7.0 use `delvewheel`.\n+# In addition to DLLs from `matplotlib.libs` directory, which should be picked up automatically by dependency analysis\n+# in contemporary PyInstaller versions, we also need to collect the load-order file. This used to be required for\n+# python <= 3.7 (that lacked `os.add_dll_directory`), but is also needed for Anaconda python 3.8 and 3.9, where\n+# `delvewheel` falls back to load-order file codepath due to Anaconda breaking `os.add_dll_directory` implementation.\n+if compat.is_win and hookutils.is_module_satisfies('matplotlib >= 3.7.0'):\n+ delvewheel_datas, delvewheel_binaries = hookutils.collect_delvewheel_libs_directory('matplotlib')\n+\n+ datas += delvewheel_datas\n+ binaries += delvewheel_binaries\n", "issue": "matplotlib import issue: ImportError DLL load failed while importing _path...\nHey,\r\nFrom searching around, matplotlib import errors occur here and there, usually the fix is to downgrade the matplotlib version, or to change the hook-matplotlib.py file. This did not resolve the issue.\r\n\r\nmatplotlib import is to blame because running the same `pyinstaller --onefile Script.py` command where matplotlib import is commented out works as expected.\r\nOtherwise opening the newly created exe file writes these few lines before exiting (excuse me for not copying everything from the screenshot I took moments before it closed):\r\n\r\nThe error:\r\n```\r\nImportError: DLL load failed while importing _path: The specific module could not be found.\r\n[5780] Failed to execute script 'Script' due to unhandled exeption!\r\n\r\n```\r\n\r\nDuring the pyinstaller run, the WARNINGS regarding matplotlib packages:\r\n```\r\n2149 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\backends\\_backend_agg.cp39-win_amd64.pyd\r\n2189 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\_path.cp39-win_amd64.pyd\r\n2209 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\backends\\_tkagg.cp39-win_amd64.pyd\r\n2239 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\_tri.cp39-win_amd64.pyd\r\n2249 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\_qhull.cp39-win_amd64.pyd\r\n2259 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\ft2font.cp39-win_amd64.pyd\r\n```\r\n\r\nA thing to note is that my python is Anaconda distribution, so I also get this warning:\r\n`WARNING: Assuming this is not an Anaconda environment or an additional venv/pipenv/... 
environment manager is being used on top, because the conda-meta folder C:\\Users\\...\\venv\\conda-meta does not exist.`\r\nBut the warning comes up with matplotlib import commented out and everything works as expected, so I doubt this has something to do with the issue.\r\n\r\nI tried:\r\n- Reinstalling matplotlib and pyinstaller\r\n- Im using the Anaconda Prompt, otherwise it doesn't even get to creating the Script.exe file.\r\n- Installing Microsoft Visual C++ 2015-2022 Redistribution.\r\n- Changing the hook-matplotlib.py in hooks folder (it is already set right - the suggestion is old).\r\n\r\nInfo:\r\n- Windows 10 Enterprise 64-bit\r\n- python --version\r\n'Python 3.9.7'\r\n- pyinstaller --version\r\n'5.9.0'\r\n- matplotlib.__version__\r\n'3.7.1'\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2013-2023, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller import isolated\n\n\[email protected]\ndef mpl_data_dir():\n import matplotlib\n return matplotlib.get_data_path()\n\n\ndatas = [\n (mpl_data_dir(), \"matplotlib/mpl-data\"),\n]\n", "path": "PyInstaller/hooks/hook-matplotlib.py"}]}
| 1,629 | 342 |
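For reference, a minimal sketch of the hook pattern applied in the `hook-matplotlib.py` patch above: on Windows it collects the `delvewheel`-repaired `matplotlib.libs` directory (DLLs plus the load-order file) next to the usual `mpl-data` files. Every call used here comes from the patch itself; it assumes a PyInstaller release that ships `collect_delvewheel_libs_directory`.

```python
# hook-matplotlib.py (sketch) — mirrors the patched hook above.
from PyInstaller import compat, isolated
from PyInstaller.utils import hooks as hookutils


@isolated.decorate
def mpl_data_dir():
    import matplotlib
    return matplotlib.get_data_path()


datas = [(mpl_data_dir(), "matplotlib/mpl-data")]
binaries = []

# Windows wheels for matplotlib >= 3.7.0 are repaired with delvewheel; collect
# its .libs directory so the load-order file is available at runtime.
if compat.is_win and hookutils.is_module_satisfies("matplotlib >= 3.7.0"):
    delvewheel_datas, delvewheel_binaries = hookutils.collect_delvewheel_libs_directory("matplotlib")
    datas += delvewheel_datas
    binaries += delvewheel_binaries
```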
gh_patches_debug_21548
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-3329
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature request: optionally suppress "Manim Comunity v{version}"
## Description of proposed feature
Hello! As of recent versions of Manim, it is not possible to suppress the `Manim Community v{version}` message that is printed out when importing `manim`.
Actually, this is because importing `manim` imports `manim.__init__`, which in turn imports `manim.__main__` with this: https://github.com/ManimCommunity/manim/blob/88836df8ab1ea153ed57848a74a694c517962163/manim/__init__.py#L104-L113
Before, I could suppress the message using some redirect, but now if fails
```python
>>> import os
>>> from contextlib import redirect_stdout
>>>
>>> with redirect_stdout(open(os.devnull, "w")):
>>> print("HELLO")
>>> import manim
Manim Community v0.17.2
```
because Rich will still print to `stdout` (since the console was created before `stdout` was redirected).
Rich's console has an optional `file` argument, but there currently seem to be a bug with it, reported in https://github.com/Textualize/rich/issues/3083, that makes Rich's consoles only print to stdout.
A **nice feature** would be to make the behavior optional.
## How can the new feature be used?
There are a few possible solutions:
- Not printing the version at all;
- Moving the print message inside the `__main__`:
https://github.com/ManimCommunity/manim/blob/88836df8ab1ea153ed57848a74a694c517962163/manim/__main__.py#L59-L60
- Optionally print (or disable print) with some environ variable
- ...
## Additional comments
I develop Manim Slides, a Manim plugin, and I'd like to avoid printing the Manim Community version every time I import this package, because the user does not really need to know that.
</issue>
<code>
[start of manim/__main__.py]
1 from __future__ import annotations
2
3 import sys
4
5 import cloup
6
7 from . import __version__, cli_ctx_settings, console
8 from .cli.cfg.group import cfg
9 from .cli.checkhealth.commands import checkhealth
10 from .cli.default_group import DefaultGroup
11 from .cli.init.commands import init
12 from .cli.plugins.commands import plugins
13 from .cli.render.commands import render
14 from .constants import EPILOG
15
16
17 def exit_early(ctx, param, value):
18 if value:
19 sys.exit()
20
21
22 console.print(f"Manim Community [green]v{__version__}[/green]\n")
23
24
25 @cloup.group(
26 context_settings=cli_ctx_settings,
27 cls=DefaultGroup,
28 default="render",
29 no_args_is_help=True,
30 help="Animation engine for explanatory math videos.",
31 epilog="See 'manim <command>' to read about a specific subcommand.\n\n"
32 "Note: the subcommand 'manim render' is called if no other subcommand "
33 "is specified. Run 'manim render --help' if you would like to know what the "
34 f"'-ql' or '-p' flags do, for example.\n\n{EPILOG}",
35 )
36 @cloup.option(
37 "--version",
38 is_flag=True,
39 help="Show version and exit.",
40 callback=exit_early,
41 is_eager=True,
42 expose_value=False,
43 )
44 @cloup.pass_context
45 def main(ctx):
46 """The entry point for manim."""
47 pass
48
49
50 main.add_command(checkhealth)
51 main.add_command(cfg)
52 main.add_command(plugins)
53 main.add_command(init)
54 main.add_command(render)
55
56 if __name__ == "__main__":
57 main()
58
[end of manim/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/manim/__main__.py b/manim/__main__.py
--- a/manim/__main__.py
+++ b/manim/__main__.py
@@ -2,6 +2,7 @@
import sys
+import click
import cloup
from . import __version__, cli_ctx_settings, console
@@ -14,12 +15,15 @@
from .constants import EPILOG
-def exit_early(ctx, param, value):
+def show_splash(ctx, param, value):
if value:
- sys.exit()
+ console.print(f"Manim Community [green]v{__version__}[/green]\n")
-console.print(f"Manim Community [green]v{__version__}[/green]\n")
+def print_version_and_exit(ctx, param, value):
+ show_splash(ctx, param, value)
+ if value:
+ ctx.exit()
@cloup.group(
@@ -37,7 +41,16 @@
"--version",
is_flag=True,
help="Show version and exit.",
- callback=exit_early,
+ callback=print_version_and_exit,
+ is_eager=True,
+ expose_value=False,
+)
[email protected](
+ "--show-splash/--hide-splash",
+ is_flag=True,
+ default=True,
+ help="Print splash message with version information.",
+ callback=show_splash,
is_eager=True,
expose_value=False,
)
|
{"golden_diff": "diff --git a/manim/__main__.py b/manim/__main__.py\n--- a/manim/__main__.py\n+++ b/manim/__main__.py\n@@ -2,6 +2,7 @@\n \n import sys\n \n+import click\n import cloup\n \n from . import __version__, cli_ctx_settings, console\n@@ -14,12 +15,15 @@\n from .constants import EPILOG\n \n \n-def exit_early(ctx, param, value):\n+def show_splash(ctx, param, value):\n if value:\n- sys.exit()\n+ console.print(f\"Manim Community [green]v{__version__}[/green]\\n\")\n \n \n-console.print(f\"Manim Community [green]v{__version__}[/green]\\n\")\n+def print_version_and_exit(ctx, param, value):\n+ show_splash(ctx, param, value)\n+ if value:\n+ ctx.exit()\n \n \n @cloup.group(\n@@ -37,7 +41,16 @@\n \"--version\",\n is_flag=True,\n help=\"Show version and exit.\",\n- callback=exit_early,\n+ callback=print_version_and_exit,\n+ is_eager=True,\n+ expose_value=False,\n+)\[email protected](\n+ \"--show-splash/--hide-splash\",\n+ is_flag=True,\n+ default=True,\n+ help=\"Print splash message with version information.\",\n+ callback=show_splash,\n is_eager=True,\n expose_value=False,\n )\n", "issue": "Feature request: optionally suppress \"Manim Comunity v{version}\"\n## Description of proposed feature\r\n\r\nHello! As of recent versions of Manim, it is not possible to suppress the `Manim Community v{version}` message that is printed out when importing `manim`.\r\n\r\nActually, this is because importing `manim` imports `manim.__init__`, which in turn imports `manim.__main__` with this: https://github.com/ManimCommunity/manim/blob/88836df8ab1ea153ed57848a74a694c517962163/manim/__init__.py#L104-L113\r\n\r\nBefore, I could suppress the message using some redirect, but now if fails\r\n\r\n```python\r\n>>> import os\r\n>>> from contextlib import redirect_stdout\r\n>>> \r\n>>> with redirect_stdout(open(os.devnull, \"w\")):\r\n>>> print(\"HELLO\")\r\n>>> import manim\r\nManim Community v0.17.2\r\n```\r\nbecause Rich will still print to `stdout` (since the console was created before `stdout` was redirected). \r\n\r\nRich's console has an optional `file` argument, but there currently seem to be a bug with it, reported in https://github.com/Textualize/rich/issues/3083, that makes Rich's consoles only print to stdout.\r\n\r\nA **nice feature** would be to make the behavior optional.\r\n\r\n## How can the new feature be used?\r\n\r\nThere are a few possible solutions:\r\n\r\n- Not printing the version at all;\r\n- Moving the print message inside the `__main__`:\r\nhttps://github.com/ManimCommunity/manim/blob/88836df8ab1ea153ed57848a74a694c517962163/manim/__main__.py#L59-L60\r\n- Optionally print (or disable print) with some environ variable\r\n- ...\r\n\r\n## Additional comments\r\n\r\nI develop Manim Slides, a Manim plugin, and I'd like to avoid printing the Manim Community version every time I import this package, because the user does not really need to know that.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport sys\n\nimport cloup\n\nfrom . 
import __version__, cli_ctx_settings, console\nfrom .cli.cfg.group import cfg\nfrom .cli.checkhealth.commands import checkhealth\nfrom .cli.default_group import DefaultGroup\nfrom .cli.init.commands import init\nfrom .cli.plugins.commands import plugins\nfrom .cli.render.commands import render\nfrom .constants import EPILOG\n\n\ndef exit_early(ctx, param, value):\n if value:\n sys.exit()\n\n\nconsole.print(f\"Manim Community [green]v{__version__}[/green]\\n\")\n\n\[email protected](\n context_settings=cli_ctx_settings,\n cls=DefaultGroup,\n default=\"render\",\n no_args_is_help=True,\n help=\"Animation engine for explanatory math videos.\",\n epilog=\"See 'manim <command>' to read about a specific subcommand.\\n\\n\"\n \"Note: the subcommand 'manim render' is called if no other subcommand \"\n \"is specified. Run 'manim render --help' if you would like to know what the \"\n f\"'-ql' or '-p' flags do, for example.\\n\\n{EPILOG}\",\n)\[email protected](\n \"--version\",\n is_flag=True,\n help=\"Show version and exit.\",\n callback=exit_early,\n is_eager=True,\n expose_value=False,\n)\[email protected]_context\ndef main(ctx):\n \"\"\"The entry point for manim.\"\"\"\n pass\n\n\nmain.add_command(checkhealth)\nmain.add_command(cfg)\nmain.add_command(plugins)\nmain.add_command(init)\nmain.add_command(render)\n\nif __name__ == \"__main__\":\n main()\n", "path": "manim/__main__.py"}]}
| 1,486 | 327 |
gh_patches_debug_8547
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-1943
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove st.deck_gl_chart
### Problem
Remove st.deck_gl_chart now that the deprecation date is past
</issue>
<code>
[start of lib/streamlit/elements/deck_gl.py]
1 # Copyright 2018-2020 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """A Python wrapper around DeckGl."""
16
17 import json
18 from typing import Any, List
19
20 from streamlit import case_converters
21 from streamlit import config
22 from streamlit.proto.DeckGlChart_pb2 import DeckGlChart as DeckGlChartProto
23 import streamlit.elements.lib.dicttools as dicttools
24 import streamlit.elements.data_frame_proto as data_frame_proto
25
26 from streamlit.logger import get_logger
27
28 LOGGER = get_logger(__name__)
29
30
31 class DeckGlMixin:
32 def deck_gl_chart(dg, spec=None, use_container_width=False, **kwargs):
33 """Draw a map chart using the Deck.GL library.
34
35 This API closely follows Deck.GL's JavaScript API
36 (https://deck.gl/#/documentation), with a few small adaptations and
37 some syntax sugar.
38
39 When using this command, we advise all users to use a personal Mapbox
40 token. This ensures the map tiles used in this chart are more
41 robust. You can do this with the mapbox.token config option.
42
43 To get a token for yourself, create an account at
44 https://mapbox.com. It's free! (for moderate usage levels) See
45 https://docs.streamlit.io/en/latest/cli.html#view-all-config-options for more
46 info on how to set config options.
47
48 Parameters
49 ----------
50
51 spec : dict
52 Keys in this dict can be:
53
54 - Anything accepted by Deck.GL's top level element, such as
55 "viewport", "height", "width".
56
57 - "layers": a list of dicts containing information to build a new
58 Deck.GL layer in the map. Each layer accepts the following keys:
59
60 - "data" : DataFrame
61 The data for the current layer.
62
63 - "type" : str
64 One of the Deck.GL layer types that are currently supported
65 by Streamlit: ArcLayer, GridLayer, HexagonLayer, LineLayer,
66 PointCloudLayer, ScatterplotLayer, ScreenGridLayer,
67 TextLayer.
68
69 - Plus anything accepted by that layer type. The exact keys that
70 are accepted depend on the "type" field, above. For example, for
71 ScatterplotLayer you can set fields like "opacity", "filled",
72 "stroked", and so on.
73
74 In addition, Deck.GL"s documentation for ScatterplotLayer
75 shows you can use a "getRadius" field to individually set
76 the radius of each circle in the plot. So here you would
77 set "getRadius": "my_column" where "my_column" is the name
78 of the column containing the radius data.
79
80 For things like "getPosition", which expect an array rather
81 than a scalar value, we provide alternates that make the
82 API simpler to use with dataframes:
83
84 - Instead of "getPosition" : use "getLatitude" and
85 "getLongitude".
86 - Instead of "getSourcePosition" : use "getLatitude" and
87 "getLongitude".
88 - Instead of "getTargetPosition" : use "getTargetLatitude"
89 and "getTargetLongitude".
90 - Instead of "getColor" : use "getColorR", "getColorG",
91 "getColorB", and (optionally) "getColorA", for red,
92 green, blue and alpha.
93 - Instead of "getSourceColor" : use the same as above.
94 - Instead of "getTargetColor" : use "getTargetColorR", etc.
95
96 use_container_width : bool
97 If True, set the chart width to the column width. This takes
98 precedence over the figure's native `width` value.
99
100 **kwargs : any
101 Same as spec, but as keywords. Keys are "unflattened" at the
102 underscore characters. For example, foo_bar_baz=123 becomes
103 foo={'bar': {'bar': 123}}.
104
105 Example
106 -------
107 >>> st.deck_gl_chart(
108 ... viewport={
109 ... 'latitude': 37.76,
110 ... 'longitude': -122.4,
111 ... 'zoom': 11,
112 ... 'pitch': 50,
113 ... },
114 ... layers=[{
115 ... 'type': 'HexagonLayer',
116 ... 'data': df,
117 ... 'radius': 200,
118 ... 'elevationScale': 4,
119 ... 'elevationRange': [0, 1000],
120 ... 'pickable': True,
121 ... 'extruded': True,
122 ... }, {
123 ... 'type': 'ScatterplotLayer',
124 ... 'data': df,
125 ... }])
126 ...
127
128 .. output::
129 https://share.streamlit.io/0.50.0-td2L/index.html?id=3GfRygWqxuqB5UitZLjz9i
130 height: 530px
131
132 """
133
134 suppress_deprecation_warning = config.get_option(
135 "global.suppressDeprecationWarnings"
136 )
137 if not suppress_deprecation_warning:
138 import streamlit as st
139
140 st.warning(
141 """
142 The `deck_gl_chart` widget is deprecated and will be removed on
143 2020-05-01. To render a map, you should use `st.pydeck_chart` widget.
144 """
145 )
146
147 deck_gl_proto = DeckGlChartProto()
148 marshall(deck_gl_proto, spec, use_container_width, **kwargs)
149 return dg._enqueue("deck_gl_chart", deck_gl_proto) # type: ignore
150
151
152 def marshall(proto, spec=None, use_container_width=False, **kwargs):
153 """Marshall a proto with DeckGL chart info.
154
155 See DeltaGenerator.deck_gl_chart for docs.
156 """
157 data = [] # type: List[Any]
158
159 if spec is None:
160 spec = dict()
161
162 # Merge spec with unflattened kwargs, where kwargs take precedence.
163 # This only works for string keys, but kwarg keys are strings anyways.
164 spec = dict(spec, **dicttools.unflatten(kwargs))
165
166 if "layers" not in spec:
167 spec["layers"] = []
168
169 # Syntax sugar: if no layers defined and data is passed at the top
170 # level, create a scatterplot layer with the top-level data by default.
171 if data is not None:
172 spec["layers"].append({"data": data, "type": "ScatterplotLayer"})
173
174 for layer in spec["layers"]:
175 # Don't add layers that have no data.
176 if "data" not in layer:
177 continue
178
179 # Remove DataFrame because it's not JSON-serializable
180 data = layer.pop("data")
181
182 layer_proto = proto.layers.add()
183 fixed_layer = case_converters.convert_dict_keys(
184 case_converters.to_lower_camel_case, layer
185 )
186 layer_proto.spec = json.dumps(fixed_layer)
187 # TODO: If several layers use the same data frame, the data gets resent
188 # for each layer. Need to improve this.
189 data_frame_proto.marshall_data_frame(data, layer_proto.data)
190
191 del spec["layers"]
192
193 # Dump JSON after removing DataFrames (see loop above), because DataFrames
194 # are not JSON-serializable.
195 proto.spec = json.dumps(spec)
196 proto.use_container_width = use_container_width
197
[end of lib/streamlit/elements/deck_gl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/streamlit/elements/deck_gl.py b/lib/streamlit/elements/deck_gl.py
--- a/lib/streamlit/elements/deck_gl.py
+++ b/lib/streamlit/elements/deck_gl.py
@@ -30,7 +30,11 @@
class DeckGlMixin:
def deck_gl_chart(dg, spec=None, use_container_width=False, **kwargs):
- """Draw a map chart using the Deck.GL library.
+ """
+ .. deprecated:: 0.53.0
+ Use :func:`streamlit.pydeck_chart` instead.
+
+ Draw a map chart using the Deck.GL library.
This API closely follows Deck.GL's JavaScript API
(https://deck.gl/#/documentation), with a few small adaptations and
|
{"golden_diff": "diff --git a/lib/streamlit/elements/deck_gl.py b/lib/streamlit/elements/deck_gl.py\n--- a/lib/streamlit/elements/deck_gl.py\n+++ b/lib/streamlit/elements/deck_gl.py\n@@ -30,7 +30,11 @@\n \n class DeckGlMixin:\n def deck_gl_chart(dg, spec=None, use_container_width=False, **kwargs):\n- \"\"\"Draw a map chart using the Deck.GL library.\n+ \"\"\"\n+ .. deprecated:: 0.53.0\n+ Use :func:`streamlit.pydeck_chart` instead.\n+\n+ Draw a map chart using the Deck.GL library.\n \n This API closely follows Deck.GL's JavaScript API\n (https://deck.gl/#/documentation), with a few small adaptations and\n", "issue": "Remove st.deck_gl_chart\n### Problem\r\n\r\nRemove st.deck_gl_chart now that the deprecation date is past\r\n\n", "before_files": [{"content": "# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"A Python wrapper around DeckGl.\"\"\"\n\nimport json\nfrom typing import Any, List\n\nfrom streamlit import case_converters\nfrom streamlit import config\nfrom streamlit.proto.DeckGlChart_pb2 import DeckGlChart as DeckGlChartProto\nimport streamlit.elements.lib.dicttools as dicttools\nimport streamlit.elements.data_frame_proto as data_frame_proto\n\nfrom streamlit.logger import get_logger\n\nLOGGER = get_logger(__name__)\n\n\nclass DeckGlMixin:\n def deck_gl_chart(dg, spec=None, use_container_width=False, **kwargs):\n \"\"\"Draw a map chart using the Deck.GL library.\n\n This API closely follows Deck.GL's JavaScript API\n (https://deck.gl/#/documentation), with a few small adaptations and\n some syntax sugar.\n\n When using this command, we advise all users to use a personal Mapbox\n token. This ensures the map tiles used in this chart are more\n robust. You can do this with the mapbox.token config option.\n\n To get a token for yourself, create an account at\n https://mapbox.com. It's free! (for moderate usage levels) See\n https://docs.streamlit.io/en/latest/cli.html#view-all-config-options for more\n info on how to set config options.\n\n Parameters\n ----------\n\n spec : dict\n Keys in this dict can be:\n\n - Anything accepted by Deck.GL's top level element, such as\n \"viewport\", \"height\", \"width\".\n\n - \"layers\": a list of dicts containing information to build a new\n Deck.GL layer in the map. Each layer accepts the following keys:\n\n - \"data\" : DataFrame\n The data for the current layer.\n\n - \"type\" : str\n One of the Deck.GL layer types that are currently supported\n by Streamlit: ArcLayer, GridLayer, HexagonLayer, LineLayer,\n PointCloudLayer, ScatterplotLayer, ScreenGridLayer,\n TextLayer.\n\n - Plus anything accepted by that layer type. The exact keys that\n are accepted depend on the \"type\" field, above. For example, for\n ScatterplotLayer you can set fields like \"opacity\", \"filled\",\n \"stroked\", and so on.\n\n In addition, Deck.GL\"s documentation for ScatterplotLayer\n shows you can use a \"getRadius\" field to individually set\n the radius of each circle in the plot. 
So here you would\n set \"getRadius\": \"my_column\" where \"my_column\" is the name\n of the column containing the radius data.\n\n For things like \"getPosition\", which expect an array rather\n than a scalar value, we provide alternates that make the\n API simpler to use with dataframes:\n\n - Instead of \"getPosition\" : use \"getLatitude\" and\n \"getLongitude\".\n - Instead of \"getSourcePosition\" : use \"getLatitude\" and\n \"getLongitude\".\n - Instead of \"getTargetPosition\" : use \"getTargetLatitude\"\n and \"getTargetLongitude\".\n - Instead of \"getColor\" : use \"getColorR\", \"getColorG\",\n \"getColorB\", and (optionally) \"getColorA\", for red,\n green, blue and alpha.\n - Instead of \"getSourceColor\" : use the same as above.\n - Instead of \"getTargetColor\" : use \"getTargetColorR\", etc.\n\n use_container_width : bool\n If True, set the chart width to the column width. This takes\n precedence over the figure's native `width` value.\n\n **kwargs : any\n Same as spec, but as keywords. Keys are \"unflattened\" at the\n underscore characters. For example, foo_bar_baz=123 becomes\n foo={'bar': {'bar': 123}}.\n\n Example\n -------\n >>> st.deck_gl_chart(\n ... viewport={\n ... 'latitude': 37.76,\n ... 'longitude': -122.4,\n ... 'zoom': 11,\n ... 'pitch': 50,\n ... },\n ... layers=[{\n ... 'type': 'HexagonLayer',\n ... 'data': df,\n ... 'radius': 200,\n ... 'elevationScale': 4,\n ... 'elevationRange': [0, 1000],\n ... 'pickable': True,\n ... 'extruded': True,\n ... }, {\n ... 'type': 'ScatterplotLayer',\n ... 'data': df,\n ... }])\n ...\n\n .. output::\n https://share.streamlit.io/0.50.0-td2L/index.html?id=3GfRygWqxuqB5UitZLjz9i\n height: 530px\n\n \"\"\"\n\n suppress_deprecation_warning = config.get_option(\n \"global.suppressDeprecationWarnings\"\n )\n if not suppress_deprecation_warning:\n import streamlit as st\n\n st.warning(\n \"\"\"\n The `deck_gl_chart` widget is deprecated and will be removed on\n 2020-05-01. To render a map, you should use `st.pydeck_chart` widget.\n \"\"\"\n )\n\n deck_gl_proto = DeckGlChartProto()\n marshall(deck_gl_proto, spec, use_container_width, **kwargs)\n return dg._enqueue(\"deck_gl_chart\", deck_gl_proto) # type: ignore\n\n\ndef marshall(proto, spec=None, use_container_width=False, **kwargs):\n \"\"\"Marshall a proto with DeckGL chart info.\n\n See DeltaGenerator.deck_gl_chart for docs.\n \"\"\"\n data = [] # type: List[Any]\n\n if spec is None:\n spec = dict()\n\n # Merge spec with unflattened kwargs, where kwargs take precedence.\n # This only works for string keys, but kwarg keys are strings anyways.\n spec = dict(spec, **dicttools.unflatten(kwargs))\n\n if \"layers\" not in spec:\n spec[\"layers\"] = []\n\n # Syntax sugar: if no layers defined and data is passed at the top\n # level, create a scatterplot layer with the top-level data by default.\n if data is not None:\n spec[\"layers\"].append({\"data\": data, \"type\": \"ScatterplotLayer\"})\n\n for layer in spec[\"layers\"]:\n # Don't add layers that have no data.\n if \"data\" not in layer:\n continue\n\n # Remove DataFrame because it's not JSON-serializable\n data = layer.pop(\"data\")\n\n layer_proto = proto.layers.add()\n fixed_layer = case_converters.convert_dict_keys(\n case_converters.to_lower_camel_case, layer\n )\n layer_proto.spec = json.dumps(fixed_layer)\n # TODO: If several layers use the same data frame, the data gets resent\n # for each layer. 
Need to improve this.\n data_frame_proto.marshall_data_frame(data, layer_proto.data)\n\n del spec[\"layers\"]\n\n # Dump JSON after removing DataFrames (see loop above), because DataFrames\n # are not JSON-serializable.\n proto.spec = json.dumps(spec)\n proto.use_container_width = use_container_width\n", "path": "lib/streamlit/elements/deck_gl.py"}]}
| 2,782 | 171 |
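The removed `st.deck_gl_chart` docstring points users at `st.pydeck_chart` as the replacement. A rough pydeck equivalent of the docstring's hexagon-plus-scatterplot example is sketched below; the `lon`/`lat` column names and toy dataframe are assumptions, so adjust them to the actual data.

```python
import pandas as pd
import pydeck as pdk
import streamlit as st

df = pd.DataFrame({"lon": [-122.40, -122.41], "lat": [37.76, 37.77]})  # toy data

st.pydeck_chart(
    pdk.Deck(
        initial_view_state=pdk.ViewState(latitude=37.76, longitude=-122.4, zoom=11, pitch=50),
        layers=[
            pdk.Layer(
                "HexagonLayer",
                data=df,
                get_position="[lon, lat]",
                radius=200,
                elevation_scale=4,
                elevation_range=[0, 1000],
                pickable=True,
                extruded=True,
            ),
            pdk.Layer("ScatterplotLayer", data=df, get_position="[lon, lat]", get_radius=100),
        ],
    )
)
```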
gh_patches_debug_708
|
rasdani/github-patches
|
git_diff
|
RedHatInsights__insights-core-2879
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'TypeError' object has no attribute 'tb_frame'
While fetching object details from insight inspect, getting kicked out from the ipython console with the following error.
'TypeError' object has no attribute 'tb_frame'
(gss-rules) ⌊gss-rules⌋»$ insights inspect insights.parsers.installed_rpms.InstalledRpms ~/scripts/rhel7_sosreport/
IPython Console Usage Info:
Enter 'InstalledRpms.' and tab to get a list of properties
Example:
In [1]: InstalledRpms.<property_name>
Out[1]: <property value>
To exit ipython enter 'exit' and hit enter or use 'CTL D'
Starting IPython Interpreter Now
In [1]: InstalledRpms
'TypeError' object has no attribute 'tb_frame'
</issue>
<code>
[start of setup.py]
1 import os
2 import sys
3 from setuptools import setup, find_packages
4
5 __here__ = os.path.dirname(os.path.abspath(__file__))
6
7 package_info = dict.fromkeys(["RELEASE", "COMMIT", "VERSION", "NAME"])
8
9 for name in package_info:
10 with open(os.path.join(__here__, "insights", name)) as f:
11 package_info[name] = f.read().strip()
12
13 entry_points = {
14 'console_scripts': [
15 'insights-collect = insights.collect:main',
16 'insights-run = insights:main',
17 'insights = insights.command_parser:main',
18 'insights-cat = insights.tools.cat:main',
19 'insights-dupkeycheck = insights.tools.dupkeycheck:main',
20 'insights-inspect = insights.tools.insights_inspect:main',
21 'insights-info = insights.tools.query:main',
22 'insights-ocpshell= insights.ocpshell:main',
23 'client = insights.client:run',
24 'mangle = insights.util.mangle:main'
25 ]
26 }
27
28 runtime = set([
29 'six',
30 'requests',
31 'redis',
32 'cachecontrol',
33 'cachecontrol[redis]',
34 'cachecontrol[filecache]',
35 'defusedxml',
36 'lockfile',
37 'jinja2',
38 ])
39
40 if (sys.version_info < (2, 7)):
41 runtime.add('pyyaml>=3.10,<=3.13')
42 else:
43 runtime.add('pyyaml')
44
45
46 def maybe_require(pkg):
47 try:
48 __import__(pkg)
49 except ImportError:
50 runtime.add(pkg)
51
52
53 maybe_require("importlib")
54 maybe_require("argparse")
55
56
57 client = set([
58 'requests'
59 ])
60
61 develop = set([
62 'futures==3.0.5',
63 'wheel',
64 ])
65
66 docs = set([
67 'Sphinx<=3.0.2',
68 'nbsphinx',
69 'sphinx_rtd_theme',
70 'ipython',
71 'colorama',
72 'jinja2',
73 'Pygments'
74 ])
75
76 testing = set([
77 'coverage==4.3.4',
78 'pytest==3.0.6',
79 'pytest-cov==2.4.0',
80 'mock==2.0.0',
81 ])
82
83 cluster = set([
84 'ansible',
85 'pandas',
86 'colorama',
87 ])
88
89 openshift = set([
90 'openshift'
91 ])
92
93 linting = set([
94 'flake8==2.6.2',
95 ])
96
97 optional = set([
98 'python-cjson',
99 'python-logstash',
100 'python-statsd',
101 'watchdog',
102 ])
103
104 if __name__ == "__main__":
105 # allows for runtime modification of rpm name
106 name = os.environ.get("INSIGHTS_CORE_NAME", package_info["NAME"])
107
108 setup(
109 name=name,
110 version=package_info["VERSION"],
111 description="Insights Core is a data collection and analysis framework",
112 long_description=open("README.rst").read(),
113 url="https://github.com/redhatinsights/insights-core",
114 author="Red Hat, Inc.",
115 author_email="[email protected]",
116 packages=find_packages(),
117 install_requires=list(runtime),
118 package_data={'': ['LICENSE']},
119 license='Apache 2.0',
120 extras_require={
121 'develop': list(runtime | develop | client | docs | linting | testing | cluster),
122 'develop26': list(runtime | develop | client | linting | testing | cluster),
123 'client': list(runtime | client),
124 'client-develop': list(runtime | develop | client | linting | testing),
125 'cluster': list(runtime | cluster),
126 'openshift': list(runtime | openshift),
127 'optional': list(optional),
128 'docs': list(docs),
129 'linting': list(linting | client),
130 'testing': list(testing | client)
131 },
132 classifiers=[
133 'Development Status :: 5 - Production/Stable',
134 'Intended Audience :: Developers',
135 'Natural Language :: English',
136 'License :: OSI Approved :: Apache Software License',
137 'Programming Language :: Python',
138 'Programming Language :: Python :: 2.6',
139 'Programming Language :: Python :: 2.7',
140 'Programming Language :: Python :: 3.3',
141 'Programming Language :: Python :: 3.4',
142 'Programming Language :: Python :: 3.5',
143 'Programming Language :: Python :: 3.6'
144 ],
145 entry_points=entry_points,
146 include_package_data=True
147 )
148
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -70,7 +70,9 @@
'ipython',
'colorama',
'jinja2',
- 'Pygments'
+ 'Pygments',
+ 'jedi<0.18.0' # Open issue with jedi 0.18.0 and iPython <= 7.19
+ # https://github.com/davidhalter/jedi/issues/1714
])
testing = set([
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -70,7 +70,9 @@\n 'ipython',\n 'colorama',\n 'jinja2',\n- 'Pygments'\n+ 'Pygments',\n+ 'jedi<0.18.0' # Open issue with jedi 0.18.0 and iPython <= 7.19\n+ # https://github.com/davidhalter/jedi/issues/1714\n ])\n \n testing = set([\n", "issue": "'TypeError' object has no attribute 'tb_frame'\nWhile fetching object details from insight inspect, getting kicked out from the ipython console with the following error. \r\n 'TypeError' object has no attribute 'tb_frame'\r\n\r\n(gss-rules) \u230agss-rules\u230b\u00bb$ insights inspect insights.parsers.installed_rpms.InstalledRpms ~/scripts/rhel7_sosreport/\r\n\r\nIPython Console Usage Info:\r\n\r\nEnter 'InstalledRpms.' and tab to get a list of properties \r\nExample:\r\nIn [1]: InstalledRpms.<property_name>\r\nOut[1]: <property value>\r\n\r\nTo exit ipython enter 'exit' and hit enter or use 'CTL D'\r\n\r\nStarting IPython Interpreter Now \r\n\r\nIn [1]: InstalledRpms\r\n'TypeError' object has no attribute 'tb_frame'\r\n\n", "before_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\n\n__here__ = os.path.dirname(os.path.abspath(__file__))\n\npackage_info = dict.fromkeys([\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\nfor name in package_info:\n with open(os.path.join(__here__, \"insights\", name)) as f:\n package_info[name] = f.read().strip()\n\nentry_points = {\n 'console_scripts': [\n 'insights-collect = insights.collect:main',\n 'insights-run = insights:main',\n 'insights = insights.command_parser:main',\n 'insights-cat = insights.tools.cat:main',\n 'insights-dupkeycheck = insights.tools.dupkeycheck:main',\n 'insights-inspect = insights.tools.insights_inspect:main',\n 'insights-info = insights.tools.query:main',\n 'insights-ocpshell= insights.ocpshell:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n}\n\nruntime = set([\n 'six',\n 'requests',\n 'redis',\n 'cachecontrol',\n 'cachecontrol[redis]',\n 'cachecontrol[filecache]',\n 'defusedxml',\n 'lockfile',\n 'jinja2',\n])\n\nif (sys.version_info < (2, 7)):\n runtime.add('pyyaml>=3.10,<=3.13')\nelse:\n runtime.add('pyyaml')\n\n\ndef maybe_require(pkg):\n try:\n __import__(pkg)\n except ImportError:\n runtime.add(pkg)\n\n\nmaybe_require(\"importlib\")\nmaybe_require(\"argparse\")\n\n\nclient = set([\n 'requests'\n])\n\ndevelop = set([\n 'futures==3.0.5',\n 'wheel',\n])\n\ndocs = set([\n 'Sphinx<=3.0.2',\n 'nbsphinx',\n 'sphinx_rtd_theme',\n 'ipython',\n 'colorama',\n 'jinja2',\n 'Pygments'\n])\n\ntesting = set([\n 'coverage==4.3.4',\n 'pytest==3.0.6',\n 'pytest-cov==2.4.0',\n 'mock==2.0.0',\n])\n\ncluster = set([\n 'ansible',\n 'pandas',\n 'colorama',\n])\n\nopenshift = set([\n 'openshift'\n])\n\nlinting = set([\n 'flake8==2.6.2',\n])\n\noptional = set([\n 'python-cjson',\n 'python-logstash',\n 'python-statsd',\n 'watchdog',\n])\n\nif __name__ == \"__main__\":\n # allows for runtime modification of rpm name\n name = os.environ.get(\"INSIGHTS_CORE_NAME\", package_info[\"NAME\"])\n\n setup(\n name=name,\n version=package_info[\"VERSION\"],\n description=\"Insights Core is a data collection and analysis framework\",\n long_description=open(\"README.rst\").read(),\n url=\"https://github.com/redhatinsights/insights-core\",\n author=\"Red Hat, Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n install_requires=list(runtime),\n package_data={'': ['LICENSE']},\n license='Apache 2.0',\n extras_require={\n 'develop': 
list(runtime | develop | client | docs | linting | testing | cluster),\n 'develop26': list(runtime | develop | client | linting | testing | cluster),\n 'client': list(runtime | client),\n 'client-develop': list(runtime | develop | client | linting | testing),\n 'cluster': list(runtime | cluster),\n 'openshift': list(runtime | openshift),\n 'optional': list(optional),\n 'docs': list(docs),\n 'linting': list(linting | client),\n 'testing': list(testing | client)\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6'\n ],\n entry_points=entry_points,\n include_package_data=True\n )\n", "path": "setup.py"}]}
| 2,009 | 123 |
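For context, the resulting `docs` extra after the setup.py patch above — the only change is the `jedi` pin, whose version bounds are taken from the patch comment rather than verified independently:

```python
docs = set([
    'Sphinx<=3.0.2',
    'nbsphinx',
    'sphinx_rtd_theme',
    'ipython',
    'colorama',
    'jinja2',
    'Pygments',
    'jedi<0.18.0',  # jedi 0.18.0 breaks completion in IPython <= 7.19
])
```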
gh_patches_debug_2433
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-1723
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Saving game settings causes a traceback
```
Traceback (most recent call last):
File "/mnt/extrastorage/lutris/lutris/gui/lutriswindow.py", line 666, in on_game_updated
self.view.set_selected_game(game.id)
File "/mnt/extrastorage/lutris/lutris/gui/views/list.py", line 123, in set_selected_game
row = self.get_row_by_id(game_id, filtered=True)
AttributeError: 'GameListView' object has no attribute 'get_row_by_id'
```
`GameListView` does not seem to provide that method.
After the traceback occurs, Lutris will try to update the default wine prefix (`~/.wine`) rather than the correct one and must be restarted.
</issue>
<code>
[start of lutris/gui/views/list.py]
1 # pylint: disable=no-member
2 from gi.repository import Gtk, Pango
3 from lutris import settings
4 from lutris.gui.views.base import GameView
5 from lutris.gui.views import (
6 COL_NAME,
7 COL_ICON,
8 COL_YEAR,
9 COL_RUNNER_HUMAN_NAME,
10 COL_PLATFORM,
11 COL_LASTPLAYED,
12 COL_LASTPLAYED_TEXT,
13 COL_INSTALLED_AT,
14 COL_INSTALLED_AT_TEXT,
15 COL_PLAYTIME_TEXT,
16 COLUMN_NAMES
17 )
18
19
20 class GameListView(Gtk.TreeView, GameView):
21 """Show the main list of games."""
22
23 __gsignals__ = GameView.__gsignals__
24
25 def __init__(self, store):
26 self.game_store = store
27 self.model = self.game_store.modelsort
28 super().__init__(self.model)
29 self.set_rules_hint(True)
30
31 # Icon column
32 image_cell = Gtk.CellRendererPixbuf()
33 column = Gtk.TreeViewColumn("", image_cell, pixbuf=COL_ICON)
34 column.set_reorderable(True)
35 column.set_sort_indicator(False)
36 self.append_column(column)
37
38 # Text columns
39 default_text_cell = self.set_text_cell()
40 name_cell = self.set_text_cell()
41 name_cell.set_padding(5, 0)
42
43 self.set_column(name_cell, "Name", COL_NAME, 200)
44 self.set_column(default_text_cell, "Year", COL_YEAR, 60)
45 self.set_column(default_text_cell, "Runner", COL_RUNNER_HUMAN_NAME, 120)
46 self.set_column(default_text_cell, "Platform", COL_PLATFORM, 120)
47 self.set_column(default_text_cell, "Last Played", COL_LASTPLAYED_TEXT, 120)
48 self.set_sort_with_column(COL_LASTPLAYED_TEXT, COL_LASTPLAYED)
49 self.set_column(default_text_cell, "Installed At", COL_INSTALLED_AT_TEXT, 120)
50 self.set_sort_with_column(COL_INSTALLED_AT_TEXT, COL_INSTALLED_AT)
51 self.set_column(default_text_cell, "Play Time", COL_PLAYTIME_TEXT, 100)
52
53 self.get_selection().set_mode(Gtk.SelectionMode.SINGLE)
54
55 self.connect_signals()
56 self.connect("row-activated", self.on_row_activated)
57 self.get_selection().connect("changed", self.on_cursor_changed)
58
59 @staticmethod
60 def set_text_cell():
61 text_cell = Gtk.CellRendererText()
62 text_cell.set_padding(10, 0)
63 text_cell.set_property("ellipsize", Pango.EllipsizeMode.END)
64 return text_cell
65
66 def set_column(self, cell, header, column_id, default_width, sort_id=None):
67 column = Gtk.TreeViewColumn(header, cell, markup=column_id)
68 column.set_sort_indicator(True)
69 column.set_sort_column_id(column_id if sort_id is None else sort_id)
70 self.set_column_sort(column_id if sort_id is None else sort_id)
71 column.set_resizable(True)
72 column.set_reorderable(True)
73 width = settings.read_setting(
74 "%s_column_width" % COLUMN_NAMES[column_id], "list view"
75 )
76 column.set_fixed_width(int(width) if width else default_width)
77 self.append_column(column)
78 column.connect("notify::width", self.on_column_width_changed)
79 return column
80
81 def set_column_sort(self, col):
82 """Sort a column and fallback to sorting by name and runner."""
83
84 def sort_func(model, row1, row2, user_data):
85 v1 = model.get_value(row1, col)
86 v2 = model.get_value(row2, col)
87 diff = -1 if v1 < v2 else 0 if v1 == v2 else 1
88 if diff is 0:
89 v1 = model.get_value(row1, COL_NAME)
90 v2 = model.get_value(row2, COL_NAME)
91 diff = -1 if v1 < v2 else 0 if v1 == v2 else 1
92 if diff is 0:
93 v1 = model.get_value(row1, COL_RUNNER_HUMAN_NAME)
94 v2 = model.get_value(row2, COL_RUNNER_HUMAN_NAME)
95 diff = -1 if v1 < v2 else 0 if v1 == v2 else 1
96 return diff
97
98 self.model.set_sort_func(col, sort_func)
99
100 def set_sort_with_column(self, col, sort_col):
101 """Set to sort a column by using another column"""
102
103 def sort_func(model, row1, row2, _user_data):
104 value1 = model.get_value(row1, sort_col)
105 value2 = model.get_value(row2, sort_col)
106 return -1 if value1 < value2 else 0 if value1 == value2 else 1
107
108 self.model.set_sort_func(col, sort_func)
109
110 def get_selected_item(self):
111 """Return the currently selected game's id."""
112 selection = self.get_selection()
113 if not selection:
114 return None
115 model, select_iter = selection.get_selected()
116 if select_iter:
117 return select_iter
118
119 def select(self):
120 self.set_cursor(self.current_path[0])
121
122 def set_selected_game(self, game_id):
123 row = self.get_row_by_id(game_id, filtered=True)
124 if row:
125 self.set_cursor(row.path)
126
127 def on_row_activated(self, widget, line=None, column=None):
128 """Handles double clicks"""
129 selected_item = self.get_selected_item()
130 if selected_item:
131 selected_game = self.get_selected_game(selected_item)
132 else:
133 selected_game = None
134 self.emit("game-activated", selected_game)
135
136 def on_cursor_changed(self, widget, line=None, column=None):
137 selected_item = self.get_selected_item()
138 if selected_item:
139 self.selected_game = self.get_selected_game(selected_item)
140 else:
141 self.selected_game = None
142 self.emit("game-selected", self.selected_game)
143
144 @staticmethod
145 def on_column_width_changed(col, *args):
146 col_name = col.get_title()
147 if col_name:
148 settings.write_setting(
149 col_name.replace(" ", "") + "_column_width",
150 col.get_fixed_width(),
151 "list view",
152 )
153
[end of lutris/gui/views/list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lutris/gui/views/list.py b/lutris/gui/views/list.py
--- a/lutris/gui/views/list.py
+++ b/lutris/gui/views/list.py
@@ -120,7 +120,7 @@
self.set_cursor(self.current_path[0])
def set_selected_game(self, game_id):
- row = self.get_row_by_id(game_id, filtered=True)
+ row = self.game_store.get_row_by_id(game_id, filtered=True)
if row:
self.set_cursor(row.path)
|
{"golden_diff": "diff --git a/lutris/gui/views/list.py b/lutris/gui/views/list.py\n--- a/lutris/gui/views/list.py\n+++ b/lutris/gui/views/list.py\n@@ -120,7 +120,7 @@\n self.set_cursor(self.current_path[0])\n \n def set_selected_game(self, game_id):\n- row = self.get_row_by_id(game_id, filtered=True)\n+ row = self.game_store.get_row_by_id(game_id, filtered=True)\n if row:\n self.set_cursor(row.path)\n", "issue": "Saving game settings causes a traceback\n```\r\nTraceback (most recent call last):\r\n File \"/mnt/extrastorage/lutris/lutris/gui/lutriswindow.py\", line 666, in on_game_updated\r\n self.view.set_selected_game(game.id)\r\n File \"/mnt/extrastorage/lutris/lutris/gui/views/list.py\", line 123, in set_selected_game\r\n row = self.get_row_by_id(game_id, filtered=True)\r\nAttributeError: 'GameListView' object has no attribute 'get_row_by_id'\r\n```\r\n\r\n`GameListView` does not seem to provide that method. \r\n\r\nAfter the traceback occurs, Lutris will try to update the default wine prefix (`~/.wine`) rather than the correct one and must be restarted.\n", "before_files": [{"content": "# pylint: disable=no-member\nfrom gi.repository import Gtk, Pango\nfrom lutris import settings\nfrom lutris.gui.views.base import GameView\nfrom lutris.gui.views import (\n COL_NAME,\n COL_ICON,\n COL_YEAR,\n COL_RUNNER_HUMAN_NAME,\n COL_PLATFORM,\n COL_LASTPLAYED,\n COL_LASTPLAYED_TEXT,\n COL_INSTALLED_AT,\n COL_INSTALLED_AT_TEXT,\n COL_PLAYTIME_TEXT,\n COLUMN_NAMES\n)\n\n\nclass GameListView(Gtk.TreeView, GameView):\n \"\"\"Show the main list of games.\"\"\"\n\n __gsignals__ = GameView.__gsignals__\n\n def __init__(self, store):\n self.game_store = store\n self.model = self.game_store.modelsort\n super().__init__(self.model)\n self.set_rules_hint(True)\n\n # Icon column\n image_cell = Gtk.CellRendererPixbuf()\n column = Gtk.TreeViewColumn(\"\", image_cell, pixbuf=COL_ICON)\n column.set_reorderable(True)\n column.set_sort_indicator(False)\n self.append_column(column)\n\n # Text columns\n default_text_cell = self.set_text_cell()\n name_cell = self.set_text_cell()\n name_cell.set_padding(5, 0)\n\n self.set_column(name_cell, \"Name\", COL_NAME, 200)\n self.set_column(default_text_cell, \"Year\", COL_YEAR, 60)\n self.set_column(default_text_cell, \"Runner\", COL_RUNNER_HUMAN_NAME, 120)\n self.set_column(default_text_cell, \"Platform\", COL_PLATFORM, 120)\n self.set_column(default_text_cell, \"Last Played\", COL_LASTPLAYED_TEXT, 120)\n self.set_sort_with_column(COL_LASTPLAYED_TEXT, COL_LASTPLAYED)\n self.set_column(default_text_cell, \"Installed At\", COL_INSTALLED_AT_TEXT, 120)\n self.set_sort_with_column(COL_INSTALLED_AT_TEXT, COL_INSTALLED_AT)\n self.set_column(default_text_cell, \"Play Time\", COL_PLAYTIME_TEXT, 100)\n\n self.get_selection().set_mode(Gtk.SelectionMode.SINGLE)\n\n self.connect_signals()\n self.connect(\"row-activated\", self.on_row_activated)\n self.get_selection().connect(\"changed\", self.on_cursor_changed)\n\n @staticmethod\n def set_text_cell():\n text_cell = Gtk.CellRendererText()\n text_cell.set_padding(10, 0)\n text_cell.set_property(\"ellipsize\", Pango.EllipsizeMode.END)\n return text_cell\n\n def set_column(self, cell, header, column_id, default_width, sort_id=None):\n column = Gtk.TreeViewColumn(header, cell, markup=column_id)\n column.set_sort_indicator(True)\n column.set_sort_column_id(column_id if sort_id is None else sort_id)\n self.set_column_sort(column_id if sort_id is None else sort_id)\n column.set_resizable(True)\n column.set_reorderable(True)\n width = 
settings.read_setting(\n \"%s_column_width\" % COLUMN_NAMES[column_id], \"list view\"\n )\n column.set_fixed_width(int(width) if width else default_width)\n self.append_column(column)\n column.connect(\"notify::width\", self.on_column_width_changed)\n return column\n\n def set_column_sort(self, col):\n \"\"\"Sort a column and fallback to sorting by name and runner.\"\"\"\n\n def sort_func(model, row1, row2, user_data):\n v1 = model.get_value(row1, col)\n v2 = model.get_value(row2, col)\n diff = -1 if v1 < v2 else 0 if v1 == v2 else 1\n if diff is 0:\n v1 = model.get_value(row1, COL_NAME)\n v2 = model.get_value(row2, COL_NAME)\n diff = -1 if v1 < v2 else 0 if v1 == v2 else 1\n if diff is 0:\n v1 = model.get_value(row1, COL_RUNNER_HUMAN_NAME)\n v2 = model.get_value(row2, COL_RUNNER_HUMAN_NAME)\n diff = -1 if v1 < v2 else 0 if v1 == v2 else 1\n return diff\n\n self.model.set_sort_func(col, sort_func)\n\n def set_sort_with_column(self, col, sort_col):\n \"\"\"Set to sort a column by using another column\"\"\"\n\n def sort_func(model, row1, row2, _user_data):\n value1 = model.get_value(row1, sort_col)\n value2 = model.get_value(row2, sort_col)\n return -1 if value1 < value2 else 0 if value1 == value2 else 1\n\n self.model.set_sort_func(col, sort_func)\n\n def get_selected_item(self):\n \"\"\"Return the currently selected game's id.\"\"\"\n selection = self.get_selection()\n if not selection:\n return None\n model, select_iter = selection.get_selected()\n if select_iter:\n return select_iter\n\n def select(self):\n self.set_cursor(self.current_path[0])\n\n def set_selected_game(self, game_id):\n row = self.get_row_by_id(game_id, filtered=True)\n if row:\n self.set_cursor(row.path)\n\n def on_row_activated(self, widget, line=None, column=None):\n \"\"\"Handles double clicks\"\"\"\n selected_item = self.get_selected_item()\n if selected_item:\n selected_game = self.get_selected_game(selected_item)\n else:\n selected_game = None\n self.emit(\"game-activated\", selected_game)\n\n def on_cursor_changed(self, widget, line=None, column=None):\n selected_item = self.get_selected_item()\n if selected_item:\n self.selected_game = self.get_selected_game(selected_item)\n else:\n self.selected_game = None\n self.emit(\"game-selected\", self.selected_game)\n\n @staticmethod\n def on_column_width_changed(col, *args):\n col_name = col.get_title()\n if col_name:\n settings.write_setting(\n col_name.replace(\" \", \"\") + \"_column_width\",\n col.get_fixed_width(),\n \"list view\",\n )\n", "path": "lutris/gui/views/list.py"}]}
| 2,389 | 118 |
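The lutris fix above assumes the game store exposes a `get_row_by_id` helper. For illustration only, a hypothetical standalone version of such a lookup over a GTK model — the column layout and store contents are assumptions, not lutris's actual implementation:

```python
import gi

gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

COL_ID, COL_NAME = 0, 1  # assumed column layout for this sketch


def get_row_by_id(model, game_id):
    """Return the Gtk.TreeModelRow whose id column matches game_id, or None."""
    for row in model:
        if row[COL_ID] == game_id:
            return row
    return None


store = Gtk.ListStore(int, str)
store.append([42, "Some Game"])
row = get_row_by_id(store, 42)
print(row.path if row else None)  # the caller can pass row.path to set_cursor()
```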
gh_patches_debug_7449
|
rasdani/github-patches
|
git_diff
|
conda-forge__conda-smithy-140
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve lint messages on whitespace
The message about the spacing for selectors isn't clear to see the problem in the number of spaces:
`Selectors are suggested to take a " # []" form.`
It would be clearer to have a message like: `<two spaces> # <one space> [selector]`
</issue>
<code>
[start of conda_smithy/lint_recipe.py]
1 import os
2 import re
3
4 import jinja2
5 import ruamel.yaml
6
7
8 EXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',
9 'test', 'app', 'about', 'extra']
10
11 REQUIREMENTS_ORDER = ['build', 'run']
12
13
14 class NullUndefined(jinja2.Undefined):
15 def __unicode__(self):
16 return unicode(self._undefined_name)
17
18
19 def get_section(parent, name, lints):
20 section = parent.get(name, {})
21 if not isinstance(section, dict):
22 lints.append('The "{}" section was expected to be a dictionary, but '
23 'got a {}.'.format(name, type(section).__name__))
24 section = {}
25 return section
26
27
28 def lintify(meta, recipe_dir=None):
29 lints = []
30 major_sections = list(meta.keys())
31
32 # If the recipe_dir exists (no guarantee within this function) , we can
33 # find the meta.yaml within it.
34 meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')
35
36 source_section = get_section(meta, 'source', lints)
37 build_section = get_section(meta, 'build', lints)
38 requirements_section = get_section(meta, 'requirements', lints)
39 about_section = get_section(meta, 'about', lints)
40 extra_section = get_section(meta, 'extra', lints)
41
42 # 1: Top level meta.yaml keys should have a specific order.
43 section_order_sorted = sorted(major_sections,
44 key=EXPECTED_SECTION_ORDER.index)
45 if major_sections != section_order_sorted:
46 lints.append('The top level meta keys are in an unexpected order. '
47 'Expecting {}.'.format(section_order_sorted))
48
49 # 2: The about section should have a home, license and summary.
50 for about_item in ['home', 'license', 'summary']:
51 # if the section doesn't exist, or is just empty, lint it.
52 if not about_section.get(about_item, ''):
53 lints.append('The {} item is expected in the about section.'
54 ''.format(about_item))
55
56 # 3: The recipe should have some maintainers.
57 if not extra_section.get('recipe-maintainers', []):
58 lints.append('The recipe could do with some maintainers listed in '
59 'the "extra/recipe-maintainers" section.')
60
61 # 4: The recipe should have some tests.
62 if 'test' not in major_sections:
63 test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',
64 'run_test.pl']
65 a_test_file_exists = (recipe_dir is not None and
66 any(os.path.exists(os.path.join(recipe_dir,
67 test_file))
68 for test_file in test_files))
69 if not a_test_file_exists:
70 lints.append('The recipe must have some tests.')
71
72 # 5: License cannot be 'unknown.'
73 license = about_section.get('license', '').lower()
74 if 'unknown' == license.strip():
75 lints.append('The recipe license cannot be unknown.')
76
77 # 6: Selectors should be in a tidy form.
78 if recipe_dir is not None and os.path.exists(meta_fname):
79 bad_selectors = []
80 # Good selectors look like ".*\s\s#\s[...]"
81 good_selectors_pat = re.compile(r'(.+?)\s{2,}#\s\[(.+)\](?(2).*)$')
82 with open(meta_fname, 'r') as fh:
83 for selector_line in selector_lines(fh):
84 if not good_selectors_pat.match(selector_line):
85 bad_selectors.append(selector_line)
86 if bad_selectors:
87 lints.append('Selectors are suggested to take a '
88 '" # [<selector>]" form.')
89
90 # 7: The build section should have a build number.
91 if build_section.get('number', None) is None:
92 lints.append('The recipe must have a `build/number` section.')
93
94 # 8: The build section should be before the run section in requirements.
95 requirements_order_sorted = sorted(requirements_section,
96 key=REQUIREMENTS_ORDER.index)
97 if requirements_section.keys() != requirements_order_sorted:
98 lints.append('The `requirements/build` section should be defined '
99 'before the `requirements/run` section.')
100
101 # 9: Files downloaded should have a hash.
102 if ('url' in source_section and
103 not ({'sha1', 'sha256', 'md5'} & set(source_section.keys()))):
104 lints.append('When defining a source/url please add a sha256, sha1 '
105 'or md5 checksum (sha256 preferably).')
106
107 return lints
108
109
110 def selector_lines(lines):
111 # Using the same pattern defined in conda-build (metadata.py),
112 # we identify selectors.
113 sel_pat = re.compile(r'(.+?)\s*(#.*)?\[(.+)\](?(2).*)$')
114
115 for line in lines:
116 line = line.rstrip()
117 if line.lstrip().startswith('#'):
118 # Don't bother with comment only lines
119 continue
120 m = sel_pat.match(line)
121 if m:
122 m.group(3)
123 yield line
124
125
126 def main(recipe_dir):
127 recipe_dir = os.path.abspath(recipe_dir)
128 recipe_meta = os.path.join(recipe_dir, 'meta.yaml')
129 if not os.path.exists(recipe_dir):
130 raise IOError('Feedstock has no recipe/meta.yaml.')
131
132 env = jinja2.Environment(undefined=NullUndefined)
133
134 with open(recipe_meta, 'r') as fh:
135 content = env.from_string(''.join(fh)).render()
136 meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)
137 results = lintify(meta, recipe_dir)
138 return results
139
[end of conda_smithy/lint_recipe.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conda_smithy/lint_recipe.py b/conda_smithy/lint_recipe.py
--- a/conda_smithy/lint_recipe.py
+++ b/conda_smithy/lint_recipe.py
@@ -85,7 +85,7 @@
bad_selectors.append(selector_line)
if bad_selectors:
lints.append('Selectors are suggested to take a '
- '" # [<selector>]" form.')
+ '``<two spaces>#<one space>[<expression>]`` form.')
# 7: The build section should have a build number.
if build_section.get('number', None) is None:
|
{"golden_diff": "diff --git a/conda_smithy/lint_recipe.py b/conda_smithy/lint_recipe.py\n--- a/conda_smithy/lint_recipe.py\n+++ b/conda_smithy/lint_recipe.py\n@@ -85,7 +85,7 @@\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n- '\" # [<selector>]\" form.')\n+ '``<two spaces>#<one space>[<expression>]`` form.')\n \n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n", "issue": "Improve lint messages on whitespace\nThe message about the spacing for selectors isn't clear to see the problem in the number of spaces:\n\n`Selectors are suggested to take a \" # []\" form.`\n\nIt would be clearer to have a message like: `<two spaces> # <one space> [selector]`\n\n", "before_files": [{"content": "import os\nimport re\n\nimport jinja2\nimport ruamel.yaml\n\n\nEXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',\n 'test', 'app', 'about', 'extra']\n\nREQUIREMENTS_ORDER = ['build', 'run']\n\n\nclass NullUndefined(jinja2.Undefined):\n def __unicode__(self):\n return unicode(self._undefined_name)\n\n\ndef get_section(parent, name, lints):\n section = parent.get(name, {})\n if not isinstance(section, dict):\n lints.append('The \"{}\" section was expected to be a dictionary, but '\n 'got a {}.'.format(name, type(section).__name__))\n section = {}\n return section\n\n\ndef lintify(meta, recipe_dir=None):\n lints = []\n major_sections = list(meta.keys())\n\n # If the recipe_dir exists (no guarantee within this function) , we can\n # find the meta.yaml within it.\n meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')\n\n source_section = get_section(meta, 'source', lints)\n build_section = get_section(meta, 'build', lints)\n requirements_section = get_section(meta, 'requirements', lints)\n about_section = get_section(meta, 'about', lints)\n extra_section = get_section(meta, 'extra', lints)\n\n # 1: Top level meta.yaml keys should have a specific order.\n section_order_sorted = sorted(major_sections,\n key=EXPECTED_SECTION_ORDER.index)\n if major_sections != section_order_sorted:\n lints.append('The top level meta keys are in an unexpected order. 
'\n 'Expecting {}.'.format(section_order_sorted))\n\n # 2: The about section should have a home, license and summary.\n for about_item in ['home', 'license', 'summary']:\n # if the section doesn't exist, or is just empty, lint it.\n if not about_section.get(about_item, ''):\n lints.append('The {} item is expected in the about section.'\n ''.format(about_item))\n\n # 3: The recipe should have some maintainers.\n if not extra_section.get('recipe-maintainers', []):\n lints.append('The recipe could do with some maintainers listed in '\n 'the \"extra/recipe-maintainers\" section.')\n\n # 4: The recipe should have some tests.\n if 'test' not in major_sections:\n test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',\n 'run_test.pl']\n a_test_file_exists = (recipe_dir is not None and\n any(os.path.exists(os.path.join(recipe_dir,\n test_file))\n for test_file in test_files))\n if not a_test_file_exists:\n lints.append('The recipe must have some tests.')\n\n # 5: License cannot be 'unknown.'\n license = about_section.get('license', '').lower()\n if 'unknown' == license.strip():\n lints.append('The recipe license cannot be unknown.')\n\n # 6: Selectors should be in a tidy form.\n if recipe_dir is not None and os.path.exists(meta_fname):\n bad_selectors = []\n # Good selectors look like \".*\\s\\s#\\s[...]\"\n good_selectors_pat = re.compile(r'(.+?)\\s{2,}#\\s\\[(.+)\\](?(2).*)$')\n with open(meta_fname, 'r') as fh:\n for selector_line in selector_lines(fh):\n if not good_selectors_pat.match(selector_line):\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n '\" # [<selector>]\" form.')\n\n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n lints.append('The recipe must have a `build/number` section.')\n\n # 8: The build section should be before the run section in requirements.\n requirements_order_sorted = sorted(requirements_section,\n key=REQUIREMENTS_ORDER.index)\n if requirements_section.keys() != requirements_order_sorted:\n lints.append('The `requirements/build` section should be defined '\n 'before the `requirements/run` section.')\n\n # 9: Files downloaded should have a hash.\n if ('url' in source_section and\n not ({'sha1', 'sha256', 'md5'} & set(source_section.keys()))):\n lints.append('When defining a source/url please add a sha256, sha1 '\n 'or md5 checksum (sha256 preferably).')\n\n return lints\n\n\ndef selector_lines(lines):\n # Using the same pattern defined in conda-build (metadata.py),\n # we identify selectors.\n sel_pat = re.compile(r'(.+?)\\s*(#.*)?\\[(.+)\\](?(2).*)$')\n\n for line in lines:\n line = line.rstrip()\n if line.lstrip().startswith('#'):\n # Don't bother with comment only lines\n continue\n m = sel_pat.match(line)\n if m:\n m.group(3)\n yield line\n\n\ndef main(recipe_dir):\n recipe_dir = os.path.abspath(recipe_dir)\n recipe_meta = os.path.join(recipe_dir, 'meta.yaml')\n if not os.path.exists(recipe_dir):\n raise IOError('Feedstock has no recipe/meta.yaml.')\n\n env = jinja2.Environment(undefined=NullUndefined)\n\n with open(recipe_meta, 'r') as fh:\n content = env.from_string(''.join(fh)).render()\n meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)\n results = lintify(meta, recipe_dir)\n return results\n", "path": "conda_smithy/lint_recipe.py"}]}
| 2,162 | 143 |
gh_patches_debug_27070
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-692
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError: '_NamespacePath' object is not subscriptable
Looks like it was related to this change: https://github.com/scoutapp/scout_apm_python/pull/668/files
I'm not entirely sure what causes this.
We're also using a custom integration, in case that's a factor.
The particular module it is trying to load (when we get the exception) is `repoze.who.middleware`
We've not made changes to our integration; it has been breaking since 2.21, and it is also happening on 2.23.2
Below is a stack trace, starting from when we run `.install()`
Please let me know if there's anything else I can provide; I'm not sure how to debug this further.
```
scout_apm.api.install(config=scout_config)
File "/usr/local/lib/python3.7/site-packages/scout_apm/compat.py", line 142, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/__init__.py", line 45, in install
launched = CoreAgentManager().launch()
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/agent/manager.py", line 49, in launch
self.download()
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/agent/manager.py", line 58, in download
self.downloader.download()
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/agent/manager.py", line 161, in download
downloaded = self.download_package()
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/agent/manager.py", line 209, in download_package
"GET", full_url, preload_content=False, timeout=10.0, retries=3
File "/usr/local/lib/python3.7/site-packages/urllib3/request.py", line 75, in request
method, url, fields=fields, headers=headers, **urlopen_kw
File "/usr/local/lib/python3.7/site-packages/urllib3/request.py", line 96, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "/usr/local/lib/python3.7/site-packages/urllib3/poolmanager.py", line 375, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/usr/local/lib/python3.7/site-packages/scout_apm/instruments/urllib3.py", line 68, in wrapped_urlopen
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.7/contextlib.py", line 119, in __exit__
next(self.gen)
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/tracked_request.py", line 134, in span
self.stop_span()
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/tracked_request.py", line 122, in stop_span
stopping_span.annotate()
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/tracked_request.py", line 258, in annotate
self.capture_backtrace()
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/tracked_request.py", line 293, in capture_backtrace
self.tag("stack", backtrace.capture_backtrace())
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py", line 122, in capture_backtrace
return list(itertools.islice(walker, LIMIT))
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py", line 24, in filter_frames
for frame in frames:
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py", line 84, in backtrace_walker
yield {"file": filepath(frame), "line": lineno, "function": name}
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py", line 63, in filepath
return module_filepath(module, filepath)
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py", line 40, in module_filepath
module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]
TypeError: '_NamespacePath' object is not subscriptable
```
</issue>
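For context, the traceback above fails at `root_module.__path__[0]` because namespace packages (such as `repoze`) expose `__path__` as a `_NamespacePath` object, which is iterable but, on the Python 3.7 build shown, not subscriptable. A minimal sketch of one defensive way to read the first path entry — the helper name is hypothetical and this is only an illustration, not the project's actual change (the repository's real patch appears further down in this entry):

```python
import os
import sys


def first_module_dir(root_module_name):
    # Namespace packages expose __path__ as a _NamespacePath, which is
    # iterable but may not support indexing, so iterate rather than
    # subscript it, and fall back to None on anything unexpected.
    root_module = sys.modules[root_module_name]
    paths = getattr(root_module, "__path__", None)
    if not paths:
        return None
    try:
        first = next(iter(paths))
    except (StopIteration, TypeError):
        return None
    return first.rsplit(os.sep, 1)[0]
```

The real fix below follows the same idea: it guards the whole lookup with a `try`/`except`, only indexes `__path__` when it is a plain `list` or `tuple`, and falls back to returning the full filepath otherwise.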
<code>
[start of src/scout_apm/core/backtrace.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import itertools
5 import os
6 import sys
7 import sysconfig
8 import traceback
9 import warnings
10 from logging import getLogger
11
12 logger = getLogger(__name__)
13
14 # Maximum non-Scout frames to target retrieving
15 LIMIT = 50
16 # How many upper frames from inside Scout to ignore
17 IGNORED = 1
18
19
20 def filter_frames(frames):
21 """Filter the stack trace frames down to non-library code."""
22 paths = sysconfig.get_paths()
23 library_paths = {paths["purelib"], paths["platlib"]}
24 for frame in frames:
25 if not any(frame["file"].startswith(exclusion) for exclusion in library_paths):
26 yield frame
27
28
29 def module_filepath(module, filepath):
30 """Get the filepath relative to the base module."""
31 root_module_name = module.split(".", 1)[0]
32 if root_module_name == module:
33 return os.path.basename(filepath)
34
35 root_module = sys.modules[root_module_name]
36 if root_module.__file__:
37 module_dir = root_module.__file__.rsplit(os.sep, 2)[0]
38 elif root_module.__path__:
39 # Default to using the first path specified for the module.
40 module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]
41 if len(root_module.__path__) > 1:
42 logger.debug(
43 "{} has {} paths. Use the first and ignore the rest.".format(
44 root_module, len(root_module.__path__)
45 )
46 )
47 else:
48 # If the file path don't exist, then return the full path.
49 return filepath
50 return filepath.split(module_dir, 1)[-1].lstrip(os.sep)
51
52
53 def filepath(frame):
54 """Get the filepath for frame."""
55 module = frame.f_globals.get("__name__", None)
56 filepath = frame.f_code.co_filename
57
58 if filepath.endswith(".pyc"):
59 filepath = filepath[:-1]
60
61 if not module:
62 return filepath
63 return module_filepath(module, filepath)
64
65
66 if sys.version_info >= (3, 5):
67
68 def stacktrace_walker(tb):
69 """Iterate over each frame of the stack downards for exceptions."""
70 for frame, lineno in traceback.walk_tb(tb):
71 name = frame.f_code.co_name
72 yield {"file": filepath(frame), "line": lineno, "function": name}
73
74 def backtrace_walker():
75 """Iterate over each frame of the stack upwards.
76
77 Taken from python3/traceback.ExtractSummary.extract to support
78 iterating over the entire stack, but without creating a large
79 data structure.
80 """
81 start_frame = sys._getframe().f_back
82 for frame, lineno in traceback.walk_stack(start_frame):
83 name = frame.f_code.co_name
84 yield {"file": filepath(frame), "line": lineno, "function": name}
85
86
87 else:
88
89 def stacktrace_walker(tb):
90 """Iterate over each frame of the stack downards for exceptions."""
91 while tb is not None:
92 lineno = tb.tb_lineno
93 name = tb.tb_frame.f_code.co_name
94 yield {
95 "file": filepath(tb.tb_frame),
96 "line": lineno,
97 "function": name,
98 }
99 tb = tb.tb_next
100
101 def backtrace_walker():
102 """Iterate over each frame of the stack upwards.
103
104 Taken from python2.7/traceback.extract_stack to support iterating
105 over the entire stack, but without creating a large data structure.
106 """
107 try:
108 raise ZeroDivisionError
109 except ZeroDivisionError:
110 # Get the current frame
111 frame = sys.exc_info()[2].tb_frame.f_back
112
113 while frame is not None:
114 lineno = frame.f_lineno
115 name = frame.f_code.co_name
116 yield {"file": filepath(frame), "line": lineno, "function": name}
117 frame = frame.f_back
118
119
120 def capture_backtrace():
121 walker = filter_frames(backtrace_walker())
122 return list(itertools.islice(walker, LIMIT))
123
124
125 def capture_stacktrace(tb):
126 walker = stacktrace_walker(tb)
127 return list(reversed(list(itertools.islice(walker, LIMIT))))
128
129
130 def capture():
131 warnings.warn(
132 "capture is deprecated, instead use capture_backtrace instead.",
133 DeprecationWarning,
134 2,
135 )
136 return capture_backtrace()
137
[end of src/scout_apm/core/backtrace.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/scout_apm/core/backtrace.py b/src/scout_apm/core/backtrace.py
--- a/src/scout_apm/core/backtrace.py
+++ b/src/scout_apm/core/backtrace.py
@@ -32,22 +32,27 @@
if root_module_name == module:
return os.path.basename(filepath)
+ module_dir = None
root_module = sys.modules[root_module_name]
- if root_module.__file__:
- module_dir = root_module.__file__.rsplit(os.sep, 2)[0]
- elif root_module.__path__:
- # Default to using the first path specified for the module.
- module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]
- if len(root_module.__path__) > 1:
- logger.debug(
- "{} has {} paths. Use the first and ignore the rest.".format(
- root_module, len(root_module.__path__)
+ try:
+ if root_module.__file__:
+ module_dir = root_module.__file__.rsplit(os.sep, 2)[0]
+ elif root_module.__path__ and isinstance(root_module.__path__, (list, tuple)):
+ # Default to using the first path specified for the module.
+ module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]
+ if len(root_module.__path__) > 1:
+ logger.debug(
+ "{} has {} paths. Use the first and ignore the rest.".format(
+ root_module, len(root_module.__path__)
+ )
)
- )
- else:
- # If the file path don't exist, then return the full path.
- return filepath
- return filepath.split(module_dir, 1)[-1].lstrip(os.sep)
+ except Exception as exc:
+ logger.debug(
+ "Error processing module {} and filepath {}".format(root_module, filepath),
+ exc_info=exc,
+ )
+
+ return filepath.split(module_dir, 1)[-1].lstrip(os.sep) if module_dir else filepath
def filepath(frame):
|
{"golden_diff": "diff --git a/src/scout_apm/core/backtrace.py b/src/scout_apm/core/backtrace.py\n--- a/src/scout_apm/core/backtrace.py\n+++ b/src/scout_apm/core/backtrace.py\n@@ -32,22 +32,27 @@\n if root_module_name == module:\n return os.path.basename(filepath)\n \n+ module_dir = None\n root_module = sys.modules[root_module_name]\n- if root_module.__file__:\n- module_dir = root_module.__file__.rsplit(os.sep, 2)[0]\n- elif root_module.__path__:\n- # Default to using the first path specified for the module.\n- module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]\n- if len(root_module.__path__) > 1:\n- logger.debug(\n- \"{} has {} paths. Use the first and ignore the rest.\".format(\n- root_module, len(root_module.__path__)\n+ try:\n+ if root_module.__file__:\n+ module_dir = root_module.__file__.rsplit(os.sep, 2)[0]\n+ elif root_module.__path__ and isinstance(root_module.__path__, (list, tuple)):\n+ # Default to using the first path specified for the module.\n+ module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]\n+ if len(root_module.__path__) > 1:\n+ logger.debug(\n+ \"{} has {} paths. Use the first and ignore the rest.\".format(\n+ root_module, len(root_module.__path__)\n+ )\n )\n- )\n- else:\n- # If the file path don't exist, then return the full path.\n- return filepath\n- return filepath.split(module_dir, 1)[-1].lstrip(os.sep)\n+ except Exception as exc:\n+ logger.debug(\n+ \"Error processing module {} and filepath {}\".format(root_module, filepath),\n+ exc_info=exc,\n+ )\n+\n+ return filepath.split(module_dir, 1)[-1].lstrip(os.sep) if module_dir else filepath\n \n \n def filepath(frame):\n", "issue": "TypeError: '_NamespacePath' object is not subscriptable\nLooks like it was related to this change: https://github.com/scoutapp/scout_apm_python/pull/668/files\r\n\r\nI'm not entirely sure what causes this. 
\r\nWe're also using a custom integration, incase thats a factor.\r\n\r\nThe particular module it is trying to load (when we get the exception) is `repoze.who.middleware`\r\n\r\nWe've not made changes to our integration, and has been breaking since, 2.21, but is also happening on 2.23.2\r\n\r\nBelow is a stack trace, starting from when we run `.install()`\r\n\r\nPlease let me know if theres anything else I can provide, I'm not sure how to debug this further\r\n\r\n```\r\nscout_apm.api.install(config=scout_config)\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/compat.py\", line 142, in wrapper\r\n return func(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/__init__.py\", line 45, in install\r\n launched = CoreAgentManager().launch()\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/agent/manager.py\", line 49, in launch\r\n self.download()\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/agent/manager.py\", line 58, in download\r\n self.downloader.download()\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/agent/manager.py\", line 161, in download\r\n downloaded = self.download_package()\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/agent/manager.py\", line 209, in download_package\r\n \"GET\", full_url, preload_content=False, timeout=10.0, retries=3\r\nFile \"/usr/local/lib/python3.7/site-packages/urllib3/request.py\", line 75, in request\r\n method, url, fields=fields, headers=headers, **urlopen_kw\r\nFile \"/usr/local/lib/python3.7/site-packages/urllib3/request.py\", line 96, in request_encode_url\r\n return self.urlopen(method, url, **extra_kw)\r\nFile \"/usr/local/lib/python3.7/site-packages/urllib3/poolmanager.py\", line 375, in urlopen\r\n response = conn.urlopen(method, u.request_uri, **kw)\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/instruments/urllib3.py\", line 68, in wrapped_urlopen\r\n return wrapped(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.7/contextlib.py\", line 119, in __exit__\r\n next(self.gen)\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/tracked_request.py\", line 134, in span\r\n self.stop_span()\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/tracked_request.py\", line 122, in stop_span\r\n stopping_span.annotate()\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/tracked_request.py\", line 258, in annotate\r\n self.capture_backtrace()\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/tracked_request.py\", line 293, in capture_backtrace\r\n self.tag(\"stack\", backtrace.capture_backtrace())\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py\", line 122, in capture_backtrace\r\n return list(itertools.islice(walker, LIMIT))\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py\", line 24, in filter_frames\r\n for frame in frames:\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py\", line 84, in backtrace_walker\r\n yield {\"file\": filepath(frame), \"line\": lineno, \"function\": name}\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py\", line 63, in filepath\r\n return module_filepath(module, filepath)\r\nFile \"/usr/local/lib/python3.7/site-packages/scout_apm/core/backtrace.py\", line 40, in module_filepath\r\n module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]\r\nTypeError: '_NamespacePath' object is not subscriptable\r\n```\n", "before_files": [{"content": "# 
coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport itertools\nimport os\nimport sys\nimport sysconfig\nimport traceback\nimport warnings\nfrom logging import getLogger\n\nlogger = getLogger(__name__)\n\n# Maximum non-Scout frames to target retrieving\nLIMIT = 50\n# How many upper frames from inside Scout to ignore\nIGNORED = 1\n\n\ndef filter_frames(frames):\n \"\"\"Filter the stack trace frames down to non-library code.\"\"\"\n paths = sysconfig.get_paths()\n library_paths = {paths[\"purelib\"], paths[\"platlib\"]}\n for frame in frames:\n if not any(frame[\"file\"].startswith(exclusion) for exclusion in library_paths):\n yield frame\n\n\ndef module_filepath(module, filepath):\n \"\"\"Get the filepath relative to the base module.\"\"\"\n root_module_name = module.split(\".\", 1)[0]\n if root_module_name == module:\n return os.path.basename(filepath)\n\n root_module = sys.modules[root_module_name]\n if root_module.__file__:\n module_dir = root_module.__file__.rsplit(os.sep, 2)[0]\n elif root_module.__path__:\n # Default to using the first path specified for the module.\n module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]\n if len(root_module.__path__) > 1:\n logger.debug(\n \"{} has {} paths. Use the first and ignore the rest.\".format(\n root_module, len(root_module.__path__)\n )\n )\n else:\n # If the file path don't exist, then return the full path.\n return filepath\n return filepath.split(module_dir, 1)[-1].lstrip(os.sep)\n\n\ndef filepath(frame):\n \"\"\"Get the filepath for frame.\"\"\"\n module = frame.f_globals.get(\"__name__\", None)\n filepath = frame.f_code.co_filename\n\n if filepath.endswith(\".pyc\"):\n filepath = filepath[:-1]\n\n if not module:\n return filepath\n return module_filepath(module, filepath)\n\n\nif sys.version_info >= (3, 5):\n\n def stacktrace_walker(tb):\n \"\"\"Iterate over each frame of the stack downards for exceptions.\"\"\"\n for frame, lineno in traceback.walk_tb(tb):\n name = frame.f_code.co_name\n yield {\"file\": filepath(frame), \"line\": lineno, \"function\": name}\n\n def backtrace_walker():\n \"\"\"Iterate over each frame of the stack upwards.\n\n Taken from python3/traceback.ExtractSummary.extract to support\n iterating over the entire stack, but without creating a large\n data structure.\n \"\"\"\n start_frame = sys._getframe().f_back\n for frame, lineno in traceback.walk_stack(start_frame):\n name = frame.f_code.co_name\n yield {\"file\": filepath(frame), \"line\": lineno, \"function\": name}\n\n\nelse:\n\n def stacktrace_walker(tb):\n \"\"\"Iterate over each frame of the stack downards for exceptions.\"\"\"\n while tb is not None:\n lineno = tb.tb_lineno\n name = tb.tb_frame.f_code.co_name\n yield {\n \"file\": filepath(tb.tb_frame),\n \"line\": lineno,\n \"function\": name,\n }\n tb = tb.tb_next\n\n def backtrace_walker():\n \"\"\"Iterate over each frame of the stack upwards.\n\n Taken from python2.7/traceback.extract_stack to support iterating\n over the entire stack, but without creating a large data structure.\n \"\"\"\n try:\n raise ZeroDivisionError\n except ZeroDivisionError:\n # Get the current frame\n frame = sys.exc_info()[2].tb_frame.f_back\n\n while frame is not None:\n lineno = frame.f_lineno\n name = frame.f_code.co_name\n yield {\"file\": filepath(frame), \"line\": lineno, \"function\": name}\n frame = frame.f_back\n\n\ndef capture_backtrace():\n walker = filter_frames(backtrace_walker())\n return list(itertools.islice(walker, LIMIT))\n\n\ndef 
capture_stacktrace(tb):\n walker = stacktrace_walker(tb)\n return list(reversed(list(itertools.islice(walker, LIMIT))))\n\n\ndef capture():\n warnings.warn(\n \"capture is deprecated, instead use capture_backtrace instead.\",\n DeprecationWarning,\n 2,\n )\n return capture_backtrace()\n", "path": "src/scout_apm/core/backtrace.py"}]}
| 2,761 | 470 |
gh_patches_debug_40598
|
rasdani/github-patches
|
git_diff
|
encode__httpx-2423
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LineDecoder is accidentally quadratic: iter_lines() seems to hang forever
When calling `Response.iter_lines()`, things can seem to hang forever.
The problem is that `LineDecoder` is quadratic in its string copying behaviour. If a 31MB chunk with 18,768 lines is passed into `LineDecoder()` then it takes 1m45s to process it vs 0.1s for a simple `text.splitlines()`. This is readily reproduced by simply using `LineDecoder()` to decode a file, like `/usr/share/dict/words` (do not attempt, you may be waiting until the heat death of the universe).
It may be a bug somewhere else that `iter_text()` is returning chunks that are too big.
But either way, you probably want to reconsider the wisdom of slicing the string to chop the beginning off it vs. just keeping an index.
You can slice the string before returning from the decoder so it's only done once per chunk.
Anyhoo, thanks for the great work!
</issue>
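The suggestion above — keep an index into the chunk and slice at most once when buffering the unterminated tail — can be sketched as below. This is only an illustration of that idea: it handles bare `\n` endings and skips the `\r`/`\r\n` normalisation the real decoder performs. The repository's actual patch, shown further down in this entry, takes the related route of buffering and delegating to `str.splitlines()`.

```python
import typing


class IndexedLineDecoder:
    """Sketch of the "keep an index" idea: walk the chunk with str.find()
    and never re-slice the remaining text after each line, so the copying
    per chunk stays linear instead of quadratic."""

    def __init__(self) -> None:
        self.buffer = ""

    def decode(self, text: str) -> typing.List[str]:
        text = self.buffer + text
        lines = []
        start = 0
        while True:
            idx = text.find("\n", start)
            if idx == -1:
                break
            lines.append(text[start : idx + 1])
            start = idx + 1
        self.buffer = text[start:]  # the single tail slice for this chunk
        return lines

    def flush(self) -> typing.List[str]:
        lines = [self.buffer] if self.buffer else []
        self.buffer = ""
        return lines
```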
<code>
[start of httpx/_decoders.py]
1 """
2 Handlers for Content-Encoding.
3
4 See: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
5 """
6 import codecs
7 import io
8 import typing
9 import zlib
10
11 from ._compat import brotli
12 from ._exceptions import DecodingError
13
14
15 class ContentDecoder:
16 def decode(self, data: bytes) -> bytes:
17 raise NotImplementedError() # pragma: no cover
18
19 def flush(self) -> bytes:
20 raise NotImplementedError() # pragma: no cover
21
22
23 class IdentityDecoder(ContentDecoder):
24 """
25 Handle unencoded data.
26 """
27
28 def decode(self, data: bytes) -> bytes:
29 return data
30
31 def flush(self) -> bytes:
32 return b""
33
34
35 class DeflateDecoder(ContentDecoder):
36 """
37 Handle 'deflate' decoding.
38
39 See: https://stackoverflow.com/questions/1838699
40 """
41
42 def __init__(self) -> None:
43 self.first_attempt = True
44 self.decompressor = zlib.decompressobj()
45
46 def decode(self, data: bytes) -> bytes:
47 was_first_attempt = self.first_attempt
48 self.first_attempt = False
49 try:
50 return self.decompressor.decompress(data)
51 except zlib.error as exc:
52 if was_first_attempt:
53 self.decompressor = zlib.decompressobj(-zlib.MAX_WBITS)
54 return self.decode(data)
55 raise DecodingError(str(exc)) from exc
56
57 def flush(self) -> bytes:
58 try:
59 return self.decompressor.flush()
60 except zlib.error as exc: # pragma: no cover
61 raise DecodingError(str(exc)) from exc
62
63
64 class GZipDecoder(ContentDecoder):
65 """
66 Handle 'gzip' decoding.
67
68 See: https://stackoverflow.com/questions/1838699
69 """
70
71 def __init__(self) -> None:
72 self.decompressor = zlib.decompressobj(zlib.MAX_WBITS | 16)
73
74 def decode(self, data: bytes) -> bytes:
75 try:
76 return self.decompressor.decompress(data)
77 except zlib.error as exc:
78 raise DecodingError(str(exc)) from exc
79
80 def flush(self) -> bytes:
81 try:
82 return self.decompressor.flush()
83 except zlib.error as exc: # pragma: no cover
84 raise DecodingError(str(exc)) from exc
85
86
87 class BrotliDecoder(ContentDecoder):
88 """
89 Handle 'brotli' decoding.
90
91 Requires `pip install brotlipy`. See: https://brotlipy.readthedocs.io/
92 or `pip install brotli`. See https://github.com/google/brotli
93 Supports both 'brotlipy' and 'Brotli' packages since they share an import
94 name. The top branches are for 'brotlipy' and bottom branches for 'Brotli'
95 """
96
97 def __init__(self) -> None:
98 if brotli is None: # pragma: no cover
99 raise ImportError(
100 "Using 'BrotliDecoder', but neither of the 'brotlicffi' or 'brotli' "
101 "packages have been installed. "
102 "Make sure to install httpx using `pip install httpx[brotli]`."
103 ) from None
104
105 self.decompressor = brotli.Decompressor()
106 self.seen_data = False
107 self._decompress: typing.Callable[[bytes], bytes]
108 if hasattr(self.decompressor, "decompress"):
109 # The 'brotlicffi' package.
110 self._decompress = self.decompressor.decompress # pragma: no cover
111 else:
112 # The 'brotli' package.
113 self._decompress = self.decompressor.process # pragma: no cover
114
115 def decode(self, data: bytes) -> bytes:
116 if not data:
117 return b""
118 self.seen_data = True
119 try:
120 return self._decompress(data)
121 except brotli.error as exc:
122 raise DecodingError(str(exc)) from exc
123
124 def flush(self) -> bytes:
125 if not self.seen_data:
126 return b""
127 try:
128 if hasattr(self.decompressor, "finish"):
129 # Only available in the 'brotlicffi' package.
130
131 # As the decompressor decompresses eagerly, this
132 # will never actually emit any data. However, it will potentially throw
133 # errors if a truncated or damaged data stream has been used.
134 self.decompressor.finish() # pragma: no cover
135 return b""
136 except brotli.error as exc: # pragma: no cover
137 raise DecodingError(str(exc)) from exc
138
139
140 class MultiDecoder(ContentDecoder):
141 """
142 Handle the case where multiple encodings have been applied.
143 """
144
145 def __init__(self, children: typing.Sequence[ContentDecoder]) -> None:
146 """
147 'children' should be a sequence of decoders in the order in which
148 each was applied.
149 """
150 # Note that we reverse the order for decoding.
151 self.children = list(reversed(children))
152
153 def decode(self, data: bytes) -> bytes:
154 for child in self.children:
155 data = child.decode(data)
156 return data
157
158 def flush(self) -> bytes:
159 data = b""
160 for child in self.children:
161 data = child.decode(data) + child.flush()
162 return data
163
164
165 class ByteChunker:
166 """
167 Handles returning byte content in fixed-size chunks.
168 """
169
170 def __init__(self, chunk_size: typing.Optional[int] = None) -> None:
171 self._buffer = io.BytesIO()
172 self._chunk_size = chunk_size
173
174 def decode(self, content: bytes) -> typing.List[bytes]:
175 if self._chunk_size is None:
176 return [content] if content else []
177
178 self._buffer.write(content)
179 if self._buffer.tell() >= self._chunk_size:
180 value = self._buffer.getvalue()
181 chunks = [
182 value[i : i + self._chunk_size]
183 for i in range(0, len(value), self._chunk_size)
184 ]
185 if len(chunks[-1]) == self._chunk_size:
186 self._buffer.seek(0)
187 self._buffer.truncate()
188 return chunks
189 else:
190 self._buffer.seek(0)
191 self._buffer.write(chunks[-1])
192 self._buffer.truncate()
193 return chunks[:-1]
194 else:
195 return []
196
197 def flush(self) -> typing.List[bytes]:
198 value = self._buffer.getvalue()
199 self._buffer.seek(0)
200 self._buffer.truncate()
201 return [value] if value else []
202
203
204 class TextChunker:
205 """
206 Handles returning text content in fixed-size chunks.
207 """
208
209 def __init__(self, chunk_size: typing.Optional[int] = None) -> None:
210 self._buffer = io.StringIO()
211 self._chunk_size = chunk_size
212
213 def decode(self, content: str) -> typing.List[str]:
214 if self._chunk_size is None:
215 return [content]
216
217 self._buffer.write(content)
218 if self._buffer.tell() >= self._chunk_size:
219 value = self._buffer.getvalue()
220 chunks = [
221 value[i : i + self._chunk_size]
222 for i in range(0, len(value), self._chunk_size)
223 ]
224 if len(chunks[-1]) == self._chunk_size:
225 self._buffer.seek(0)
226 self._buffer.truncate()
227 return chunks
228 else:
229 self._buffer.seek(0)
230 self._buffer.write(chunks[-1])
231 self._buffer.truncate()
232 return chunks[:-1]
233 else:
234 return []
235
236 def flush(self) -> typing.List[str]:
237 value = self._buffer.getvalue()
238 self._buffer.seek(0)
239 self._buffer.truncate()
240 return [value] if value else []
241
242
243 class TextDecoder:
244 """
245 Handles incrementally decoding bytes into text
246 """
247
248 def __init__(self, encoding: str = "utf-8"):
249 self.decoder = codecs.getincrementaldecoder(encoding)(errors="replace")
250
251 def decode(self, data: bytes) -> str:
252 return self.decoder.decode(data)
253
254 def flush(self) -> str:
255 return self.decoder.decode(b"", True)
256
257
258 class LineDecoder:
259 """
260 Handles incrementally reading lines from text.
261
262 Uses universal line decoding, supporting any of `\n`, `\r`, or `\r\n`
263 as line endings, normalizing to `\n`.
264 """
265
266 def __init__(self) -> None:
267 self.buffer = ""
268
269 def decode(self, text: str) -> typing.List[str]:
270 lines = []
271
272 if text and self.buffer and self.buffer[-1] == "\r":
273 if text.startswith("\n"):
274 # Handle the case where we have an "\r\n" split across
275 # our previous input, and our new chunk.
276 lines.append(self.buffer[:-1] + "\n")
277 self.buffer = ""
278 text = text[1:]
279 else:
280 # Handle the case where we have "\r" at the end of our
281 # previous input.
282 lines.append(self.buffer[:-1] + "\n")
283 self.buffer = ""
284
285 while text:
286 num_chars = len(text)
287 for idx in range(num_chars):
288 char = text[idx]
289 next_char = None if idx + 1 == num_chars else text[idx + 1]
290 if char == "\n":
291 lines.append(self.buffer + text[: idx + 1])
292 self.buffer = ""
293 text = text[idx + 1 :]
294 break
295 elif char == "\r" and next_char == "\n":
296 lines.append(self.buffer + text[:idx] + "\n")
297 self.buffer = ""
298 text = text[idx + 2 :]
299 break
300 elif char == "\r" and next_char is not None:
301 lines.append(self.buffer + text[:idx] + "\n")
302 self.buffer = ""
303 text = text[idx + 1 :]
304 break
305 elif next_char is None:
306 self.buffer += text
307 text = ""
308 break
309
310 return lines
311
312 def flush(self) -> typing.List[str]:
313 if self.buffer.endswith("\r"):
314 # Handle the case where we had a trailing '\r', which could have
315 # been a '\r\n' pair.
316 lines = [self.buffer[:-1] + "\n"]
317 elif self.buffer:
318 lines = [self.buffer]
319 else:
320 lines = []
321 self.buffer = ""
322 return lines
323
324
325 SUPPORTED_DECODERS = {
326 "identity": IdentityDecoder,
327 "gzip": GZipDecoder,
328 "deflate": DeflateDecoder,
329 "br": BrotliDecoder,
330 }
331
332
333 if brotli is None:
334 SUPPORTED_DECODERS.pop("br") # pragma: no cover
335
[end of httpx/_decoders.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/httpx/_decoders.py b/httpx/_decoders.py
--- a/httpx/_decoders.py
+++ b/httpx/_decoders.py
@@ -259,66 +259,56 @@
"""
Handles incrementally reading lines from text.
- Uses universal line decoding, supporting any of `\n`, `\r`, or `\r\n`
- as line endings, normalizing to `\n`.
+ Has the same behaviour as the stdllib splitlines, but handling the input iteratively.
"""
def __init__(self) -> None:
- self.buffer = ""
+ self.buffer: typing.List[str] = []
+ self.trailing_cr: bool = False
def decode(self, text: str) -> typing.List[str]:
- lines = []
-
- if text and self.buffer and self.buffer[-1] == "\r":
- if text.startswith("\n"):
- # Handle the case where we have an "\r\n" split across
- # our previous input, and our new chunk.
- lines.append(self.buffer[:-1] + "\n")
- self.buffer = ""
- text = text[1:]
- else:
- # Handle the case where we have "\r" at the end of our
- # previous input.
- lines.append(self.buffer[:-1] + "\n")
- self.buffer = ""
-
- while text:
- num_chars = len(text)
- for idx in range(num_chars):
- char = text[idx]
- next_char = None if idx + 1 == num_chars else text[idx + 1]
- if char == "\n":
- lines.append(self.buffer + text[: idx + 1])
- self.buffer = ""
- text = text[idx + 1 :]
- break
- elif char == "\r" and next_char == "\n":
- lines.append(self.buffer + text[:idx] + "\n")
- self.buffer = ""
- text = text[idx + 2 :]
- break
- elif char == "\r" and next_char is not None:
- lines.append(self.buffer + text[:idx] + "\n")
- self.buffer = ""
- text = text[idx + 1 :]
- break
- elif next_char is None:
- self.buffer += text
- text = ""
- break
+ # See https://docs.python.org/3/library/stdtypes.html#str.splitlines
+ NEWLINE_CHARS = "\n\r\x0b\x0c\x1c\x1d\x1e\x85\u2028\u2029"
+
+ # We always push a trailing `\r` into the next decode iteration.
+ if self.trailing_cr:
+ text = "\r" + text
+ self.trailing_cr = False
+ if text.endswith("\r"):
+ self.trailing_cr = True
+ text = text[:-1]
+
+ if not text:
+ return []
+
+ trailing_newline = text[-1] in NEWLINE_CHARS
+ lines = text.splitlines()
+
+ if len(lines) == 1 and not trailing_newline:
+ # No new lines, buffer the input and continue.
+ self.buffer.append(lines[0])
+ return []
+
+ if self.buffer:
+ # Include any existing buffer in the first portion of the
+ # splitlines result.
+ lines = ["".join(self.buffer) + lines[0]] + lines[1:]
+ self.buffer = []
+
+ if not trailing_newline:
+ # If the last segment of splitlines is not newline terminated,
+ # then drop it from our output and start a new buffer.
+ self.buffer = [lines.pop()]
return lines
def flush(self) -> typing.List[str]:
- if self.buffer.endswith("\r"):
- # Handle the case where we had a trailing '\r', which could have
- # been a '\r\n' pair.
- lines = [self.buffer[:-1] + "\n"]
- elif self.buffer:
- lines = [self.buffer]
- else:
- lines = []
- self.buffer = ""
+ if not self.buffer and not self.trailing_cr:
+ return []
+
+ lines = ["".join(self.buffer)]
+ self.buffer = []
+ self.trailing_cr = False
return lines
|
{"golden_diff": "diff --git a/httpx/_decoders.py b/httpx/_decoders.py\n--- a/httpx/_decoders.py\n+++ b/httpx/_decoders.py\n@@ -259,66 +259,56 @@\n \"\"\"\n Handles incrementally reading lines from text.\n \n- Uses universal line decoding, supporting any of `\\n`, `\\r`, or `\\r\\n`\n- as line endings, normalizing to `\\n`.\n+ Has the same behaviour as the stdllib splitlines, but handling the input iteratively.\n \"\"\"\n \n def __init__(self) -> None:\n- self.buffer = \"\"\n+ self.buffer: typing.List[str] = []\n+ self.trailing_cr: bool = False\n \n def decode(self, text: str) -> typing.List[str]:\n- lines = []\n-\n- if text and self.buffer and self.buffer[-1] == \"\\r\":\n- if text.startswith(\"\\n\"):\n- # Handle the case where we have an \"\\r\\n\" split across\n- # our previous input, and our new chunk.\n- lines.append(self.buffer[:-1] + \"\\n\")\n- self.buffer = \"\"\n- text = text[1:]\n- else:\n- # Handle the case where we have \"\\r\" at the end of our\n- # previous input.\n- lines.append(self.buffer[:-1] + \"\\n\")\n- self.buffer = \"\"\n-\n- while text:\n- num_chars = len(text)\n- for idx in range(num_chars):\n- char = text[idx]\n- next_char = None if idx + 1 == num_chars else text[idx + 1]\n- if char == \"\\n\":\n- lines.append(self.buffer + text[: idx + 1])\n- self.buffer = \"\"\n- text = text[idx + 1 :]\n- break\n- elif char == \"\\r\" and next_char == \"\\n\":\n- lines.append(self.buffer + text[:idx] + \"\\n\")\n- self.buffer = \"\"\n- text = text[idx + 2 :]\n- break\n- elif char == \"\\r\" and next_char is not None:\n- lines.append(self.buffer + text[:idx] + \"\\n\")\n- self.buffer = \"\"\n- text = text[idx + 1 :]\n- break\n- elif next_char is None:\n- self.buffer += text\n- text = \"\"\n- break\n+ # See https://docs.python.org/3/library/stdtypes.html#str.splitlines\n+ NEWLINE_CHARS = \"\\n\\r\\x0b\\x0c\\x1c\\x1d\\x1e\\x85\\u2028\\u2029\"\n+\n+ # We always push a trailing `\\r` into the next decode iteration.\n+ if self.trailing_cr:\n+ text = \"\\r\" + text\n+ self.trailing_cr = False\n+ if text.endswith(\"\\r\"):\n+ self.trailing_cr = True\n+ text = text[:-1]\n+\n+ if not text:\n+ return []\n+\n+ trailing_newline = text[-1] in NEWLINE_CHARS\n+ lines = text.splitlines()\n+\n+ if len(lines) == 1 and not trailing_newline:\n+ # No new lines, buffer the input and continue.\n+ self.buffer.append(lines[0])\n+ return []\n+\n+ if self.buffer:\n+ # Include any existing buffer in the first portion of the\n+ # splitlines result.\n+ lines = [\"\".join(self.buffer) + lines[0]] + lines[1:]\n+ self.buffer = []\n+\n+ if not trailing_newline:\n+ # If the last segment of splitlines is not newline terminated,\n+ # then drop it from our output and start a new buffer.\n+ self.buffer = [lines.pop()]\n \n return lines\n \n def flush(self) -> typing.List[str]:\n- if self.buffer.endswith(\"\\r\"):\n- # Handle the case where we had a trailing '\\r', which could have\n- # been a '\\r\\n' pair.\n- lines = [self.buffer[:-1] + \"\\n\"]\n- elif self.buffer:\n- lines = [self.buffer]\n- else:\n- lines = []\n- self.buffer = \"\"\n+ if not self.buffer and not self.trailing_cr:\n+ return []\n+\n+ lines = [\"\".join(self.buffer)]\n+ self.buffer = []\n+ self.trailing_cr = False\n return lines\n", "issue": "LineDecoder is accidentally quadratic: iter_lines() seems to hang forever\nWhen calling `Response.iter_lines()`, things can seem to hang forever.\r\n\r\nThe problem is that `LineDecoder` is quadratic in it's string copying behaviour. 
If a 31MB chunk with 18,768 lines is passed in to `LineDecoder()` then it takes 1m45s to process it vs 0.1s for a simple `text.splitlines()`. This is readily reproduced by simply using `LineDecoder()` to decode a file, like `/usr/share/dict/words` (do not attempt, you may be waiting until the heat death of the universe).\r\n\r\nIt may be a bug somewhere else that `iter_text()` is returning chunks that are too big.\r\n\r\nBut either way, you probably want to reconsider the wisdom of slicing the string to chop the beginning off it vs. just keeping an index.\r\n\r\nYou can slice the string before returning from the decoder so it's only done once per chunk.\r\n\r\nAnyhoo, thanks for the great work!\n", "before_files": [{"content": "\"\"\"\nHandlers for Content-Encoding.\n\nSee: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding\n\"\"\"\nimport codecs\nimport io\nimport typing\nimport zlib\n\nfrom ._compat import brotli\nfrom ._exceptions import DecodingError\n\n\nclass ContentDecoder:\n def decode(self, data: bytes) -> bytes:\n raise NotImplementedError() # pragma: no cover\n\n def flush(self) -> bytes:\n raise NotImplementedError() # pragma: no cover\n\n\nclass IdentityDecoder(ContentDecoder):\n \"\"\"\n Handle unencoded data.\n \"\"\"\n\n def decode(self, data: bytes) -> bytes:\n return data\n\n def flush(self) -> bytes:\n return b\"\"\n\n\nclass DeflateDecoder(ContentDecoder):\n \"\"\"\n Handle 'deflate' decoding.\n\n See: https://stackoverflow.com/questions/1838699\n \"\"\"\n\n def __init__(self) -> None:\n self.first_attempt = True\n self.decompressor = zlib.decompressobj()\n\n def decode(self, data: bytes) -> bytes:\n was_first_attempt = self.first_attempt\n self.first_attempt = False\n try:\n return self.decompressor.decompress(data)\n except zlib.error as exc:\n if was_first_attempt:\n self.decompressor = zlib.decompressobj(-zlib.MAX_WBITS)\n return self.decode(data)\n raise DecodingError(str(exc)) from exc\n\n def flush(self) -> bytes:\n try:\n return self.decompressor.flush()\n except zlib.error as exc: # pragma: no cover\n raise DecodingError(str(exc)) from exc\n\n\nclass GZipDecoder(ContentDecoder):\n \"\"\"\n Handle 'gzip' decoding.\n\n See: https://stackoverflow.com/questions/1838699\n \"\"\"\n\n def __init__(self) -> None:\n self.decompressor = zlib.decompressobj(zlib.MAX_WBITS | 16)\n\n def decode(self, data: bytes) -> bytes:\n try:\n return self.decompressor.decompress(data)\n except zlib.error as exc:\n raise DecodingError(str(exc)) from exc\n\n def flush(self) -> bytes:\n try:\n return self.decompressor.flush()\n except zlib.error as exc: # pragma: no cover\n raise DecodingError(str(exc)) from exc\n\n\nclass BrotliDecoder(ContentDecoder):\n \"\"\"\n Handle 'brotli' decoding.\n\n Requires `pip install brotlipy`. See: https://brotlipy.readthedocs.io/\n or `pip install brotli`. See https://github.com/google/brotli\n Supports both 'brotlipy' and 'Brotli' packages since they share an import\n name. The top branches are for 'brotlipy' and bottom branches for 'Brotli'\n \"\"\"\n\n def __init__(self) -> None:\n if brotli is None: # pragma: no cover\n raise ImportError(\n \"Using 'BrotliDecoder', but neither of the 'brotlicffi' or 'brotli' \"\n \"packages have been installed. 
\"\n \"Make sure to install httpx using `pip install httpx[brotli]`.\"\n ) from None\n\n self.decompressor = brotli.Decompressor()\n self.seen_data = False\n self._decompress: typing.Callable[[bytes], bytes]\n if hasattr(self.decompressor, \"decompress\"):\n # The 'brotlicffi' package.\n self._decompress = self.decompressor.decompress # pragma: no cover\n else:\n # The 'brotli' package.\n self._decompress = self.decompressor.process # pragma: no cover\n\n def decode(self, data: bytes) -> bytes:\n if not data:\n return b\"\"\n self.seen_data = True\n try:\n return self._decompress(data)\n except brotli.error as exc:\n raise DecodingError(str(exc)) from exc\n\n def flush(self) -> bytes:\n if not self.seen_data:\n return b\"\"\n try:\n if hasattr(self.decompressor, \"finish\"):\n # Only available in the 'brotlicffi' package.\n\n # As the decompressor decompresses eagerly, this\n # will never actually emit any data. However, it will potentially throw\n # errors if a truncated or damaged data stream has been used.\n self.decompressor.finish() # pragma: no cover\n return b\"\"\n except brotli.error as exc: # pragma: no cover\n raise DecodingError(str(exc)) from exc\n\n\nclass MultiDecoder(ContentDecoder):\n \"\"\"\n Handle the case where multiple encodings have been applied.\n \"\"\"\n\n def __init__(self, children: typing.Sequence[ContentDecoder]) -> None:\n \"\"\"\n 'children' should be a sequence of decoders in the order in which\n each was applied.\n \"\"\"\n # Note that we reverse the order for decoding.\n self.children = list(reversed(children))\n\n def decode(self, data: bytes) -> bytes:\n for child in self.children:\n data = child.decode(data)\n return data\n\n def flush(self) -> bytes:\n data = b\"\"\n for child in self.children:\n data = child.decode(data) + child.flush()\n return data\n\n\nclass ByteChunker:\n \"\"\"\n Handles returning byte content in fixed-size chunks.\n \"\"\"\n\n def __init__(self, chunk_size: typing.Optional[int] = None) -> None:\n self._buffer = io.BytesIO()\n self._chunk_size = chunk_size\n\n def decode(self, content: bytes) -> typing.List[bytes]:\n if self._chunk_size is None:\n return [content] if content else []\n\n self._buffer.write(content)\n if self._buffer.tell() >= self._chunk_size:\n value = self._buffer.getvalue()\n chunks = [\n value[i : i + self._chunk_size]\n for i in range(0, len(value), self._chunk_size)\n ]\n if len(chunks[-1]) == self._chunk_size:\n self._buffer.seek(0)\n self._buffer.truncate()\n return chunks\n else:\n self._buffer.seek(0)\n self._buffer.write(chunks[-1])\n self._buffer.truncate()\n return chunks[:-1]\n else:\n return []\n\n def flush(self) -> typing.List[bytes]:\n value = self._buffer.getvalue()\n self._buffer.seek(0)\n self._buffer.truncate()\n return [value] if value else []\n\n\nclass TextChunker:\n \"\"\"\n Handles returning text content in fixed-size chunks.\n \"\"\"\n\n def __init__(self, chunk_size: typing.Optional[int] = None) -> None:\n self._buffer = io.StringIO()\n self._chunk_size = chunk_size\n\n def decode(self, content: str) -> typing.List[str]:\n if self._chunk_size is None:\n return [content]\n\n self._buffer.write(content)\n if self._buffer.tell() >= self._chunk_size:\n value = self._buffer.getvalue()\n chunks = [\n value[i : i + self._chunk_size]\n for i in range(0, len(value), self._chunk_size)\n ]\n if len(chunks[-1]) == self._chunk_size:\n self._buffer.seek(0)\n self._buffer.truncate()\n return chunks\n else:\n self._buffer.seek(0)\n self._buffer.write(chunks[-1])\n self._buffer.truncate()\n return 
chunks[:-1]\n else:\n return []\n\n def flush(self) -> typing.List[str]:\n value = self._buffer.getvalue()\n self._buffer.seek(0)\n self._buffer.truncate()\n return [value] if value else []\n\n\nclass TextDecoder:\n \"\"\"\n Handles incrementally decoding bytes into text\n \"\"\"\n\n def __init__(self, encoding: str = \"utf-8\"):\n self.decoder = codecs.getincrementaldecoder(encoding)(errors=\"replace\")\n\n def decode(self, data: bytes) -> str:\n return self.decoder.decode(data)\n\n def flush(self) -> str:\n return self.decoder.decode(b\"\", True)\n\n\nclass LineDecoder:\n \"\"\"\n Handles incrementally reading lines from text.\n\n Uses universal line decoding, supporting any of `\\n`, `\\r`, or `\\r\\n`\n as line endings, normalizing to `\\n`.\n \"\"\"\n\n def __init__(self) -> None:\n self.buffer = \"\"\n\n def decode(self, text: str) -> typing.List[str]:\n lines = []\n\n if text and self.buffer and self.buffer[-1] == \"\\r\":\n if text.startswith(\"\\n\"):\n # Handle the case where we have an \"\\r\\n\" split across\n # our previous input, and our new chunk.\n lines.append(self.buffer[:-1] + \"\\n\")\n self.buffer = \"\"\n text = text[1:]\n else:\n # Handle the case where we have \"\\r\" at the end of our\n # previous input.\n lines.append(self.buffer[:-1] + \"\\n\")\n self.buffer = \"\"\n\n while text:\n num_chars = len(text)\n for idx in range(num_chars):\n char = text[idx]\n next_char = None if idx + 1 == num_chars else text[idx + 1]\n if char == \"\\n\":\n lines.append(self.buffer + text[: idx + 1])\n self.buffer = \"\"\n text = text[idx + 1 :]\n break\n elif char == \"\\r\" and next_char == \"\\n\":\n lines.append(self.buffer + text[:idx] + \"\\n\")\n self.buffer = \"\"\n text = text[idx + 2 :]\n break\n elif char == \"\\r\" and next_char is not None:\n lines.append(self.buffer + text[:idx] + \"\\n\")\n self.buffer = \"\"\n text = text[idx + 1 :]\n break\n elif next_char is None:\n self.buffer += text\n text = \"\"\n break\n\n return lines\n\n def flush(self) -> typing.List[str]:\n if self.buffer.endswith(\"\\r\"):\n # Handle the case where we had a trailing '\\r', which could have\n # been a '\\r\\n' pair.\n lines = [self.buffer[:-1] + \"\\n\"]\n elif self.buffer:\n lines = [self.buffer]\n else:\n lines = []\n self.buffer = \"\"\n return lines\n\n\nSUPPORTED_DECODERS = {\n \"identity\": IdentityDecoder,\n \"gzip\": GZipDecoder,\n \"deflate\": DeflateDecoder,\n \"br\": BrotliDecoder,\n}\n\n\nif brotli is None:\n SUPPORTED_DECODERS.pop(\"br\") # pragma: no cover\n", "path": "httpx/_decoders.py"}]}
| 4,014 | 967 |
gh_patches_debug_14032
|
rasdani/github-patches
|
git_diff
|
ultrabug__py3status-775
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
group cycle/click events fail
Since the happy-endings release, we get an AttributeError at line 154 of events.py.
</issue>
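For context, the line numbered 154 in the listing below is `if top_level and not module.error_messages:`, so the reported AttributeError points at `module.error_messages` not existing on the clicked entry's module object (plausibly an i3status-provided entry inside a group — an assumption, not something the issue confirms). A hypothetical defensive variant of that check, purely for illustration:

```python
def module_has_error_messages(module):
    # Not every module object is guaranteed to define `error_messages`
    # (assumption: entries wrapped inside a group may lack it), so read
    # it with a safe default instead of a bare attribute access.
    return bool(getattr(module, "error_messages", None))
```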
<code>
[start of py3status/events.py]
1 import select
2 import sys
3
4 from threading import Thread
5 from subprocess import Popen, PIPE
6 from json import loads
7
8 from py3status.profiling import profile
9
10 try:
11 # Python 3
12 from shlex import quote as shell_quote
13 except ImportError:
14 # Python 2
15 from pipes import quote as shell_quote
16
17
18 class IOPoller:
19 """
20 This class implements a predictive and timing-out I/O reader
21 using select and the poll() mechanism for greater compatibility.
22 """
23
24 def __init__(self, io, eventmask=select.POLLIN):
25 """
26 Our default is to read (POLLIN) the specified 'io' file descriptor.
27 """
28 self.io = io
29 self.poller = select.poll()
30 self.poller.register(io, eventmask)
31
32 def readline(self, timeout=500):
33 """
34 Try to read our I/O for 'timeout' milliseconds, return None otherwise.
35 This makes calling and reading I/O non blocking !
36 """
37 poll_result = self.poller.poll(timeout)
38 if poll_result:
39 line = self.io.readline().strip()
40 if self.io == sys.stdin and line == '[':
41 # skip first event line wrt issue #19
42 line = self.io.readline().strip()
43 try:
44 # python3 compatibility code
45 line = line.decode()
46 except (AttributeError, UnicodeDecodeError):
47 pass
48 return line
49 else:
50 return None
51
52
53 class Events(Thread):
54 """
55 This class is responsible for dispatching event JSONs sent by the i3bar.
56 """
57
58 def __init__(self, py3_wrapper):
59 """
60 We need to poll stdin to receive i3bar messages.
61 """
62 Thread.__init__(self)
63 self.config = py3_wrapper.config
64 self.error = None
65 self.py3_config = py3_wrapper.config['py3_config']
66 self.lock = py3_wrapper.lock
67 self.modules = py3_wrapper.modules
68 self.on_click = self.py3_config['on_click']
69 self.output_modules = py3_wrapper.output_modules
70 self.poller_inp = IOPoller(sys.stdin)
71 self.py3_wrapper = py3_wrapper
72
73 def get_module_text(self, module_name, event):
74 """
75 Get the full text for the module as well as the partial text if the
76 module is a composite. Partial text is the text for just the single
77 section of a composite.
78 """
79 index = event.get('index')
80 module_info = self.py3_wrapper.output_modules.get(module_name)
81 if module_info:
82 output = module_info['module'].get_latest()
83 full_text = u''.join([out['full_text'] for out in output])
84
85 partial = None
86 if index is not None:
87 if isinstance(index, int):
88 partial = output[index]
89 else:
90 for item in output:
91 if item.get('index') == index:
92 partial = item
93 break
94 if partial:
95 partial_text = partial['full_text']
96 else:
97 partial_text = full_text
98 return full_text, partial_text
99
100 def on_click_dispatcher(self, module_name, event, command):
101 """
102 Dispatch on_click config parameters to either:
103 - Our own methods for special py3status commands (listed below)
104 - The i3-msg program which is part of i3wm
105 """
106 if command is None:
107 return
108 elif command == 'refresh_all':
109 self.py3_wrapper.refresh_modules()
110 elif command == 'refresh':
111 self.py3_wrapper.refresh_modules(module_name)
112 else:
113 # In commands we are able to use substitutions for the text output
114 # of a module
115 if '$OUTPUT' in command or '$OUTPUT_PART' in command:
116 full_text, partial_text = self.get_module_text(module_name,
117 event)
118 command = command.replace('$OUTPUT_PART',
119 shell_quote(partial_text))
120 command = command.replace('$OUTPUT', shell_quote(full_text))
121
122 # this is a i3 message
123 self.i3_msg(module_name, command)
124 # to make the bar more responsive to users we ask for a refresh
125 # of the module or of i3status if the module is an i3status one
126 self.py3_wrapper.refresh_modules(module_name)
127
128 def i3_msg(self, module_name, command):
129 """
130 Execute the given i3 message and log its output.
131 """
132 i3_msg_pipe = Popen(['i3-msg', command], stdout=PIPE)
133 self.py3_wrapper.log('i3-msg module="{}" command="{}" stdout={}'.format(
134 module_name, command, i3_msg_pipe.stdout.read()))
135
136 def process_event(self, module_name, event, top_level=True):
137 """
138 Process the event for the named module.
139 Events may have been declared in i3status.conf, modules may have
140 on_click() functions. There is a default middle click event etc.
141 """
142 button = event.get('button', 0)
143 default_event = False
144
145 # get the module that the event is for
146 module_info = self.output_modules.get(module_name)
147 if not module_info:
148 return
149 module = module_info['module']
150
151 # execute any configured i3-msg command
152 # we do not do this for containers
153 # modules that have failed do not execute their config on_click
154 if top_level and not module.error_messages:
155 click_module = event['name']
156 if event['instance']:
157 click_module += ' ' + event['instance']
158 btn = str(button)
159 if self.on_click.get(click_module, {}).get(btn):
160 self.on_click_dispatcher(click_module,
161 event,
162 self.on_click[module_name].get(btn))
163 # otherwise setup default action on button 2 press
164 elif button == 2:
165 default_event = True
166
167 # if module is a py3status one call it.
168 if module_info['type'] == 'py3status':
169 module.click_event(event)
170 if self.config['debug']:
171 self.py3_wrapper.log('dispatching event {}'.format(event))
172
173 # to make the bar more responsive to users we refresh the module
174 # unless the on_click event called py3.prevent_refresh()
175 if not module.prevent_refresh:
176 self.py3_wrapper.refresh_modules(module_name)
177 default_event = False
178
179 if default_event:
180 # default button 2 action is to clear this method's cache
181 if self.config['debug']:
182 self.py3_wrapper.log(
183 'dispatching default event {}'.format(event))
184 self.py3_wrapper.refresh_modules(module_name)
185
186 # find container that holds the module and call its onclick
187 module_groups = self.py3_config['.module_groups']
188 containers = module_groups.get(module_name, [])
189 for container in containers:
190 self.process_event(container, event, top_level=False)
191
192 def dispatch_event(self, event):
193 '''
194 Takes an event dict. Logs the event if needed and cleans up the dict
195 such as setting the index needed for composits.
196 '''
197 if self.config['debug']:
198 self.py3_wrapper.log('received event {}'.format(event))
199 # usage variables
200 instance = event.get('instance', '')
201 name = event.get('name', '')
202
203 # composites have an index which is passed to i3bar with
204 # the instance. We need to separate this out here and
205 # clean up the event. If index
206 # is an integer type then cast it as such.
207 if ' ' in instance:
208 instance, index = instance.split(' ', 1)
209 try:
210 index = int(index)
211 except ValueError:
212 pass
213 event['index'] = index
214 event['instance'] = instance
215
216 if self.config['debug']:
217 self.py3_wrapper.log(
218 'trying to dispatch event to module "{}"'.format(
219 '{} {}'.format(name, instance).strip()))
220
221 # guess the module config name
222 module_name = '{} {}'.format(name, instance).strip()
223 # do the work
224 self.process_event(module_name, event)
225
226 @profile
227 def run(self):
228 """
229 Wait for an i3bar JSON event, then find the right module to dispatch
230 the message to based on the 'name' and 'instance' of the event.
231
232 In case the module does NOT support click_events, the default
233 implementation is to clear the module's cache
234 when the MIDDLE button (2) is pressed on it.
235
236 Example event:
237 {'y': 13, 'x': 1737, 'button': 1, 'name': 'empty', 'instance': 'first'}
238 """
239 while self.lock.is_set():
240 event_str = self.poller_inp.readline()
241 if not event_str:
242 continue
243 try:
244 # remove leading comma if present
245 if event_str[0] == ',':
246 event_str = event_str[1:]
247 event = loads(event_str)
248 self.dispatch_event(event)
249 except Exception:
250 self.py3_wrapper.report_exception('Event failed')
251
[end of py3status/events.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/py3status/events.py b/py3status/events.py
--- a/py3status/events.py
+++ b/py3status/events.py
@@ -148,10 +148,15 @@
return
module = module_info['module']
+ try:
+ errors = module.error_messages
+ except AttributeError:
+ errors = False
+
# execute any configured i3-msg command
# we do not do this for containers
# modules that have failed do not execute their config on_click
- if top_level and not module.error_messages:
+ if top_level and not errors:
click_module = event['name']
if event['instance']:
click_module += ' ' + event['instance']
|
{"golden_diff": "diff --git a/py3status/events.py b/py3status/events.py\n--- a/py3status/events.py\n+++ b/py3status/events.py\n@@ -148,10 +148,15 @@\n return\n module = module_info['module']\n \n+ try:\n+ errors = module.error_messages\n+ except AttributeError:\n+ errors = False\n+\n # execute any configured i3-msg command\n # we do not do this for containers\n # modules that have failed do not execute their config on_click\n- if top_level and not module.error_messages:\n+ if top_level and not errors:\n click_module = event['name']\n if event['instance']:\n click_module += ' ' + event['instance']\n", "issue": "group cycle/click events fail\nSince happy-endings we get an AttributeError L154 of events.py\n", "before_files": [{"content": "import select\nimport sys\n\nfrom threading import Thread\nfrom subprocess import Popen, PIPE\nfrom json import loads\n\nfrom py3status.profiling import profile\n\ntry:\n # Python 3\n from shlex import quote as shell_quote\nexcept ImportError:\n # Python 2\n from pipes import quote as shell_quote\n\n\nclass IOPoller:\n \"\"\"\n This class implements a predictive and timing-out I/O reader\n using select and the poll() mechanism for greater compatibility.\n \"\"\"\n\n def __init__(self, io, eventmask=select.POLLIN):\n \"\"\"\n Our default is to read (POLLIN) the specified 'io' file descriptor.\n \"\"\"\n self.io = io\n self.poller = select.poll()\n self.poller.register(io, eventmask)\n\n def readline(self, timeout=500):\n \"\"\"\n Try to read our I/O for 'timeout' milliseconds, return None otherwise.\n This makes calling and reading I/O non blocking !\n \"\"\"\n poll_result = self.poller.poll(timeout)\n if poll_result:\n line = self.io.readline().strip()\n if self.io == sys.stdin and line == '[':\n # skip first event line wrt issue #19\n line = self.io.readline().strip()\n try:\n # python3 compatibility code\n line = line.decode()\n except (AttributeError, UnicodeDecodeError):\n pass\n return line\n else:\n return None\n\n\nclass Events(Thread):\n \"\"\"\n This class is responsible for dispatching event JSONs sent by the i3bar.\n \"\"\"\n\n def __init__(self, py3_wrapper):\n \"\"\"\n We need to poll stdin to receive i3bar messages.\n \"\"\"\n Thread.__init__(self)\n self.config = py3_wrapper.config\n self.error = None\n self.py3_config = py3_wrapper.config['py3_config']\n self.lock = py3_wrapper.lock\n self.modules = py3_wrapper.modules\n self.on_click = self.py3_config['on_click']\n self.output_modules = py3_wrapper.output_modules\n self.poller_inp = IOPoller(sys.stdin)\n self.py3_wrapper = py3_wrapper\n\n def get_module_text(self, module_name, event):\n \"\"\"\n Get the full text for the module as well as the partial text if the\n module is a composite. 
Partial text is the text for just the single\n section of a composite.\n \"\"\"\n index = event.get('index')\n module_info = self.py3_wrapper.output_modules.get(module_name)\n if module_info:\n output = module_info['module'].get_latest()\n full_text = u''.join([out['full_text'] for out in output])\n\n partial = None\n if index is not None:\n if isinstance(index, int):\n partial = output[index]\n else:\n for item in output:\n if item.get('index') == index:\n partial = item\n break\n if partial:\n partial_text = partial['full_text']\n else:\n partial_text = full_text\n return full_text, partial_text\n\n def on_click_dispatcher(self, module_name, event, command):\n \"\"\"\n Dispatch on_click config parameters to either:\n - Our own methods for special py3status commands (listed below)\n - The i3-msg program which is part of i3wm\n \"\"\"\n if command is None:\n return\n elif command == 'refresh_all':\n self.py3_wrapper.refresh_modules()\n elif command == 'refresh':\n self.py3_wrapper.refresh_modules(module_name)\n else:\n # In commands we are able to use substitutions for the text output\n # of a module\n if '$OUTPUT' in command or '$OUTPUT_PART' in command:\n full_text, partial_text = self.get_module_text(module_name,\n event)\n command = command.replace('$OUTPUT_PART',\n shell_quote(partial_text))\n command = command.replace('$OUTPUT', shell_quote(full_text))\n\n # this is a i3 message\n self.i3_msg(module_name, command)\n # to make the bar more responsive to users we ask for a refresh\n # of the module or of i3status if the module is an i3status one\n self.py3_wrapper.refresh_modules(module_name)\n\n def i3_msg(self, module_name, command):\n \"\"\"\n Execute the given i3 message and log its output.\n \"\"\"\n i3_msg_pipe = Popen(['i3-msg', command], stdout=PIPE)\n self.py3_wrapper.log('i3-msg module=\"{}\" command=\"{}\" stdout={}'.format(\n module_name, command, i3_msg_pipe.stdout.read()))\n\n def process_event(self, module_name, event, top_level=True):\n \"\"\"\n Process the event for the named module.\n Events may have been declared in i3status.conf, modules may have\n on_click() functions. 
There is a default middle click event etc.\n \"\"\"\n button = event.get('button', 0)\n default_event = False\n\n # get the module that the event is for\n module_info = self.output_modules.get(module_name)\n if not module_info:\n return\n module = module_info['module']\n\n # execute any configured i3-msg command\n # we do not do this for containers\n # modules that have failed do not execute their config on_click\n if top_level and not module.error_messages:\n click_module = event['name']\n if event['instance']:\n click_module += ' ' + event['instance']\n btn = str(button)\n if self.on_click.get(click_module, {}).get(btn):\n self.on_click_dispatcher(click_module,\n event,\n self.on_click[module_name].get(btn))\n # otherwise setup default action on button 2 press\n elif button == 2:\n default_event = True\n\n # if module is a py3status one call it.\n if module_info['type'] == 'py3status':\n module.click_event(event)\n if self.config['debug']:\n self.py3_wrapper.log('dispatching event {}'.format(event))\n\n # to make the bar more responsive to users we refresh the module\n # unless the on_click event called py3.prevent_refresh()\n if not module.prevent_refresh:\n self.py3_wrapper.refresh_modules(module_name)\n default_event = False\n\n if default_event:\n # default button 2 action is to clear this method's cache\n if self.config['debug']:\n self.py3_wrapper.log(\n 'dispatching default event {}'.format(event))\n self.py3_wrapper.refresh_modules(module_name)\n\n # find container that holds the module and call its onclick\n module_groups = self.py3_config['.module_groups']\n containers = module_groups.get(module_name, [])\n for container in containers:\n self.process_event(container, event, top_level=False)\n\n def dispatch_event(self, event):\n '''\n Takes an event dict. Logs the event if needed and cleans up the dict\n such as setting the index needed for composits.\n '''\n if self.config['debug']:\n self.py3_wrapper.log('received event {}'.format(event))\n # usage variables\n instance = event.get('instance', '')\n name = event.get('name', '')\n\n # composites have an index which is passed to i3bar with\n # the instance. We need to separate this out here and\n # clean up the event. If index\n # is an integer type then cast it as such.\n if ' ' in instance:\n instance, index = instance.split(' ', 1)\n try:\n index = int(index)\n except ValueError:\n pass\n event['index'] = index\n event['instance'] = instance\n\n if self.config['debug']:\n self.py3_wrapper.log(\n 'trying to dispatch event to module \"{}\"'.format(\n '{} {}'.format(name, instance).strip()))\n\n # guess the module config name\n module_name = '{} {}'.format(name, instance).strip()\n # do the work\n self.process_event(module_name, event)\n\n @profile\n def run(self):\n \"\"\"\n Wait for an i3bar JSON event, then find the right module to dispatch\n the message to based on the 'name' and 'instance' of the event.\n\n In case the module does NOT support click_events, the default\n implementation is to clear the module's cache\n when the MIDDLE button (2) is pressed on it.\n\n Example event:\n {'y': 13, 'x': 1737, 'button': 1, 'name': 'empty', 'instance': 'first'}\n \"\"\"\n while self.lock.is_set():\n event_str = self.poller_inp.readline()\n if not event_str:\n continue\n try:\n # remove leading comma if present\n if event_str[0] == ',':\n event_str = event_str[1:]\n event = loads(event_str)\n self.dispatch_event(event)\n except Exception:\n self.py3_wrapper.report_exception('Event failed')\n", "path": "py3status/events.py"}]}
| 3,156 | 163 |
gh_patches_debug_36431
|
rasdani/github-patches
|
git_diff
|
elastic__apm-agent-python-235
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`check` cmd output differs from documentation.
Running `python manage.py elasticapm check` returns an error if no `SECRET_TOKEN` is configured. The documentation states that this setting is optional.
</issue>
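To make the mismatch concrete, this is the kind of minimal Django settings block the documentation presents as sufficient; the service name and server URL values below are placeholders, not taken from the report:

```python
# settings.py (illustrative): SECRET_TOKEN is documented as optional,
# so a configuration like this should not be flagged as an error by
# "python manage.py elasticapm check".
ELASTIC_APM = {
    "SERVICE_NAME": "my-django-app",   # required
    # "SECRET_TOKEN" intentionally omitted -- optional per the docs
    "SERVER_URL": "http://localhost:8200",
}
```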
<code>
[start of elasticapm/contrib/django/management/commands/elasticapm.py]
1 from __future__ import absolute_import
2
3 import sys
4
5 from django.conf import settings
6 from django.core.management.base import BaseCommand
7 from django.core.management.color import color_style
8 from django.utils import termcolors
9
10 from elasticapm.contrib.django.client import DjangoClient
11 from elasticapm.utils.compat import urlparse
12
13 try:
14 from django.core.management.base import OutputWrapper
15 except ImportError:
16 OutputWrapper = None
17
18
19 blue = termcolors.make_style(opts=('bold',), fg='blue')
20 cyan = termcolors.make_style(opts=('bold',), fg='cyan')
21 green = termcolors.make_style(fg='green')
22 magenta = termcolors.make_style(opts=('bold',), fg='magenta')
23 red = termcolors.make_style(opts=('bold',), fg='red')
24 white = termcolors.make_style(opts=('bold',), fg='white')
25 yellow = termcolors.make_style(opts=('bold',), fg='yellow')
26
27
28 class TestException(Exception):
29 pass
30
31
32 class ColoredLogger(object):
33 def __init__(self, stream):
34 self.stream = stream
35 self.errors = []
36 self.color = color_style()
37
38 def log(self, level, *args, **kwargs):
39 style = kwargs.pop('style', self.color.NOTICE)
40 msg = ' '.join((level.upper(), args[0] % args[1:], '\n'))
41 if OutputWrapper is None:
42 self.stream.write(msg)
43 else:
44 self.stream.write(msg, style_func=style)
45
46 def error(self, *args, **kwargs):
47 kwargs['style'] = red
48 self.log('error', *args, **kwargs)
49 self.errors.append((args,))
50
51 def warning(self, *args, **kwargs):
52 kwargs['style'] = yellow
53 self.log('warning', *args, **kwargs)
54
55 def info(self, *args, **kwargs):
56 kwargs['style'] = green
57 self.log('info', *args, **kwargs)
58
59
60 CONFIG_EXAMPLE = """
61
62 You can set it in your settings file:
63
64 ELASTIC_APM = {
65 'SERVICE_NAME': '<YOUR-SERVICE-NAME>',
66 'SECRET_TOKEN': '<YOUR-SECRET-TOKEN>',
67 }
68
69 or with environment variables:
70
71 $ export ELASTIC_APM_SERVICE_NAME="<YOUR-SERVICE-NAME>"
72 $ export ELASTIC_APM_SECRET_TOKEN="<YOUR-SECRET-TOKEN>"
73 $ python manage.py elasticapm check
74
75 """
76
77
78 class Command(BaseCommand):
79 arguments = (
80 (('-s', '--service-name'),
81 {'default': None, 'dest': 'service_name', 'help': 'Specifies the service name.'}),
82
83 (('-t', '--token'),
84 {'default': None, 'dest': 'secret_token', 'help': 'Specifies the secret token.'})
85 )
86
87 args = 'test check'
88
89 def add_arguments(self, parser):
90 parser.add_argument('subcommand')
91 for args, kwargs in self.arguments:
92 parser.add_argument(*args, **kwargs)
93
94 def handle(self, *args, **options):
95 if 'subcommand' in options:
96 subcommand = options['subcommand']
97 else:
98 return self.handle_command_not_found('No command specified.')
99 if subcommand not in self.dispatch:
100 self.handle_command_not_found('No such command "%s".' % subcommand)
101 else:
102 self.dispatch.get(
103 subcommand,
104 self.handle_command_not_found
105 )(self, subcommand, **options)
106
107 def handle_test(self, command, **options):
108 """Send a test error to APM Server"""
109 # can't be async for testing
110 config = {'async_mode': False, }
111 for key in ('service_name', 'secret_token'):
112 if options.get(key):
113 config[key] = options[key]
114 client = DjangoClient(**config)
115 client.error_logger = ColoredLogger(self.stderr)
116 client.logger = ColoredLogger(self.stderr)
117 client.state.logger = client.logger
118 client.state.error_logger = client.error_logger
119 self.write(
120 "Trying to send a test error to APM Server using these settings:\n\n"
121 "SERVICE_NAME:\t%s\n"
122 "SECRET_TOKEN:\t%s\n"
123 "SERVER:\t\t%s\n\n" % (
124 client.config.service_name,
125 client.config.secret_token,
126 client.config.server_url,
127 )
128 )
129
130 try:
131 raise TestException('Hi there!')
132 except TestException:
133 client.capture_exception()
134 if not client.error_logger.errors:
135 self.write(
136 'Success! We tracked the error successfully! You should be'
137 ' able to see it in a few seconds at the above URL'
138 )
139 finally:
140 client.close()
141
142 def handle_check(self, command, **options):
143 """Check your settings for common misconfigurations"""
144 passed = True
145 client = DjangoClient()
146 # check if org/app and token are set:
147 is_set = lambda x: x and x != 'None'
148 values = [client.config.service_name, client.config.secret_token]
149 if all(map(is_set, values)):
150 self.write(
151 'Service name and secret token are set, good job!',
152 green
153 )
154 else:
155 passed = False
156 self.write(
157 'Configuration errors detected!', red, ending='\n\n'
158 )
159 if not is_set(client.config.service_name):
160 self.write(" * SERVICE_NAME not set! ", red, ending='\n')
161 if not is_set(client.config.secret_token):
162 self.write(" * SECRET_TOKEN not set!", red, ending='\n')
163 self.write(CONFIG_EXAMPLE)
164 self.write('')
165
166 server_url = client.config.server_url
167 if server_url:
168 parsed_url = urlparse.urlparse(server_url)
169 if parsed_url.scheme.lower() in ('http', 'https', ):
170 # parse netloc, making sure people did not supply basic auth
171 if '@' in parsed_url.netloc:
172 credentials, _, path = parsed_url.netloc.rpartition('@')
173 passed = False
174 self.write(
175 'Configuration errors detected!', red, ending='\n\n'
176 )
177 if ':' in credentials:
178 self.write(' * SERVER_URL cannot contain authentication '
179 'credentials', red, ending='\n')
180 else:
181 self.write(' * SERVER_URL contains an unexpected at-sign!'
182 ' This is usually used for basic authentication, '
183 'but the colon is left out', red, ending='\n')
184 else:
185 self.write('SERVER_URL {0} looks fine'.format(server_url, ), green)
186 else:
187 self.write(' * SERVER_URL has scheme {0} and we require '
188 'http or https!'.format(parsed_url.scheme, ), red, ending='\n')
189 passed = False
190 else:
191 self.write(
192 'Configuration errors detected!', red, ending='\n\n'
193 )
194 self.write(" * SERVER_URL appears to be empty", red, ending='\n')
195 passed = False
196 self.write('')
197
198 # check if we're disabled due to DEBUG:
199 if settings.DEBUG:
200 if getattr(settings, 'ELASTIC_APM', {}).get('DEBUG'):
201 self.write(
202 'Note: even though you are running in DEBUG mode, we will '
203 'send data to the APM Server, because you set ELASTIC_APM["DEBUG"] to '
204 'True. You can disable ElasticAPM while in DEBUG mode like this'
205 '\n\n',
206 yellow
207 )
208 self.write(
209 ' ELASTIC_APM = {\n'
210 ' "DEBUG": False,\n'
211 ' # your other ELASTIC_APM settings\n'
212 ' }'
213
214 )
215 else:
216 self.write(
217 'Looks like you\'re running in DEBUG mode. ElasticAPM will NOT '
218 'gather any data while DEBUG is set to True.\n\n',
219 red,
220 )
221 self.write(
222 'If you want to test ElasticAPM while DEBUG is set to True, you'
223 ' can force ElasticAPM to gather data by setting'
224 ' ELASTIC_APM["DEBUG"] to True, like this\n\n'
225 ' ELASTIC_APM = {\n'
226 ' "DEBUG": True,\n'
227 ' # your other ELASTIC_APM settings\n'
228 ' }'
229 )
230 passed = False
231 else:
232 self.write(
233 'DEBUG mode is disabled! Looking good!',
234 green
235 )
236 self.write('')
237
238 # check if middleware is set, and if it is at the first position
239 middleware_attr = 'MIDDLEWARE' if getattr(settings, 'MIDDLEWARE', None) is not None else 'MIDDLEWARE_CLASSES'
240 middleware = list(getattr(settings, middleware_attr))
241 try:
242 pos = middleware.index('elasticapm.contrib.django.middleware.TracingMiddleware')
243 if pos == 0:
244 self.write('Tracing middleware is configured! Awesome!', green)
245 else:
246 self.write('Tracing middleware is configured, but not at the first position\n', yellow)
247 self.write('ElasticAPM works best if you add it at the top of your %s setting' % middleware_attr)
248 except ValueError:
249 self.write('Tracing middleware not configured!', red)
250 self.write(
251 '\n'
252 'Add it to your %(name)s setting like this:\n\n'
253 ' %(name)s = (\n'
254 ' "elasticapm.contrib.django.middleware.TracingMiddleware",\n'
255 ' # your other middleware classes\n'
256 ' )\n' % {'name': middleware_attr}
257 )
258 self.write('')
259 if passed:
260 self.write('Looks like everything should be ready!', green)
261 else:
262 self.write(
263 'Please fix the above errors.',
264 red
265 )
266 self.write('')
267 return passed
268
269 def handle_command_not_found(self, message):
270 self.write(message, red, ending='')
271 self.write(
272 ' Please use one of the following commands:\n\n',
273 red
274 )
275 self.write(
276 ''.join(
277 ' * %s\t%s\n' % (k.ljust(8), v.__doc__)
278 for k, v in self.dispatch.items()
279 )
280 )
281 self.write('\n')
282 argv = self._get_argv()
283 self.write(
284 'Usage:\n\t%s elasticapm <command>' % (
285 ' '.join(argv[:argv.index('elasticapm')])
286 )
287 )
288
289 def write(self, msg, style_func=None, ending=None, stream=None):
290 """
291 wrapper around self.stdout/stderr to ensure Django 1.4 compatibility
292 """
293 if stream is None:
294 stream = self.stdout
295 if OutputWrapper is None:
296 ending = '\n' if ending is None else ending
297 msg += ending
298 stream.write(msg)
299 else:
300 stream.write(msg, style_func=style_func, ending=ending)
301
302 def _get_argv(self):
303 """allow cleaner mocking of sys.argv"""
304 return sys.argv
305
306 dispatch = {
307 'test': handle_test,
308 'check': handle_check,
309 }
310
[end of elasticapm/contrib/django/management/commands/elasticapm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/elasticapm/contrib/django/management/commands/elasticapm.py b/elasticapm/contrib/django/management/commands/elasticapm.py
--- a/elasticapm/contrib/django/management/commands/elasticapm.py
+++ b/elasticapm/contrib/django/management/commands/elasticapm.py
@@ -143,12 +143,14 @@
"""Check your settings for common misconfigurations"""
passed = True
client = DjangoClient()
- # check if org/app and token are set:
- is_set = lambda x: x and x != 'None'
- values = [client.config.service_name, client.config.secret_token]
- if all(map(is_set, values)):
+
+ def is_set(x):
+ return x and x != 'None'
+
+ # check if org/app is set:
+ if is_set(client.config.service_name):
self.write(
- 'Service name and secret token are set, good job!',
+ 'Service name is set, good job!',
green
)
else:
@@ -156,11 +158,12 @@
self.write(
'Configuration errors detected!', red, ending='\n\n'
)
- if not is_set(client.config.service_name):
- self.write(" * SERVICE_NAME not set! ", red, ending='\n')
- if not is_set(client.config.secret_token):
- self.write(" * SECRET_TOKEN not set!", red, ending='\n')
+ self.write(" * SERVICE_NAME not set! ", red, ending='\n')
self.write(CONFIG_EXAMPLE)
+
+ # secret token is optional but recommended
+ if not is_set(client.config.secret_token):
+ self.write(" * optional SECRET_TOKEN not set", yellow, ending='\n')
self.write('')
server_url = client.config.server_url
@@ -183,6 +186,9 @@
'but the colon is left out', red, ending='\n')
else:
self.write('SERVER_URL {0} looks fine'.format(server_url, ), green)
+ # secret token in the clear not recommended
+ if is_set(client.config.secret_token) and parsed_url.scheme.lower() == 'http':
+ self.write(" * SECRET_TOKEN set but server not using https", yellow, ending='\n')
else:
self.write(' * SERVER_URL has scheme {0} and we require '
'http or https!'.format(parsed_url.scheme, ), red, ending='\n')
|
{"golden_diff": "diff --git a/elasticapm/contrib/django/management/commands/elasticapm.py b/elasticapm/contrib/django/management/commands/elasticapm.py\n--- a/elasticapm/contrib/django/management/commands/elasticapm.py\n+++ b/elasticapm/contrib/django/management/commands/elasticapm.py\n@@ -143,12 +143,14 @@\n \"\"\"Check your settings for common misconfigurations\"\"\"\n passed = True\n client = DjangoClient()\n- # check if org/app and token are set:\n- is_set = lambda x: x and x != 'None'\n- values = [client.config.service_name, client.config.secret_token]\n- if all(map(is_set, values)):\n+\n+ def is_set(x):\n+ return x and x != 'None'\n+\n+ # check if org/app is set:\n+ if is_set(client.config.service_name):\n self.write(\n- 'Service name and secret token are set, good job!',\n+ 'Service name is set, good job!',\n green\n )\n else:\n@@ -156,11 +158,12 @@\n self.write(\n 'Configuration errors detected!', red, ending='\\n\\n'\n )\n- if not is_set(client.config.service_name):\n- self.write(\" * SERVICE_NAME not set! \", red, ending='\\n')\n- if not is_set(client.config.secret_token):\n- self.write(\" * SECRET_TOKEN not set!\", red, ending='\\n')\n+ self.write(\" * SERVICE_NAME not set! \", red, ending='\\n')\n self.write(CONFIG_EXAMPLE)\n+\n+ # secret token is optional but recommended\n+ if not is_set(client.config.secret_token):\n+ self.write(\" * optional SECRET_TOKEN not set\", yellow, ending='\\n')\n self.write('')\n \n server_url = client.config.server_url\n@@ -183,6 +186,9 @@\n 'but the colon is left out', red, ending='\\n')\n else:\n self.write('SERVER_URL {0} looks fine'.format(server_url, ), green)\n+ # secret token in the clear not recommended\n+ if is_set(client.config.secret_token) and parsed_url.scheme.lower() == 'http':\n+ self.write(\" * SECRET_TOKEN set but server not using https\", yellow, ending='\\n')\n else:\n self.write(' * SERVER_URL has scheme {0} and we require '\n 'http or https!'.format(parsed_url.scheme, ), red, ending='\\n')\n", "issue": "`check` cmd output differs from documentation. \nRunning `python manage.py elasticapm check` returns an error if no `Secret_Token` is configured. Documentation states that configuration is optional. 
\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.core.management.color import color_style\nfrom django.utils import termcolors\n\nfrom elasticapm.contrib.django.client import DjangoClient\nfrom elasticapm.utils.compat import urlparse\n\ntry:\n from django.core.management.base import OutputWrapper\nexcept ImportError:\n OutputWrapper = None\n\n\nblue = termcolors.make_style(opts=('bold',), fg='blue')\ncyan = termcolors.make_style(opts=('bold',), fg='cyan')\ngreen = termcolors.make_style(fg='green')\nmagenta = termcolors.make_style(opts=('bold',), fg='magenta')\nred = termcolors.make_style(opts=('bold',), fg='red')\nwhite = termcolors.make_style(opts=('bold',), fg='white')\nyellow = termcolors.make_style(opts=('bold',), fg='yellow')\n\n\nclass TestException(Exception):\n pass\n\n\nclass ColoredLogger(object):\n def __init__(self, stream):\n self.stream = stream\n self.errors = []\n self.color = color_style()\n\n def log(self, level, *args, **kwargs):\n style = kwargs.pop('style', self.color.NOTICE)\n msg = ' '.join((level.upper(), args[0] % args[1:], '\\n'))\n if OutputWrapper is None:\n self.stream.write(msg)\n else:\n self.stream.write(msg, style_func=style)\n\n def error(self, *args, **kwargs):\n kwargs['style'] = red\n self.log('error', *args, **kwargs)\n self.errors.append((args,))\n\n def warning(self, *args, **kwargs):\n kwargs['style'] = yellow\n self.log('warning', *args, **kwargs)\n\n def info(self, *args, **kwargs):\n kwargs['style'] = green\n self.log('info', *args, **kwargs)\n\n\nCONFIG_EXAMPLE = \"\"\"\n\nYou can set it in your settings file:\n\n ELASTIC_APM = {\n 'SERVICE_NAME': '<YOUR-SERVICE-NAME>',\n 'SECRET_TOKEN': '<YOUR-SECRET-TOKEN>',\n }\n\nor with environment variables:\n\n $ export ELASTIC_APM_SERVICE_NAME=\"<YOUR-SERVICE-NAME>\"\n $ export ELASTIC_APM_SECRET_TOKEN=\"<YOUR-SECRET-TOKEN>\"\n $ python manage.py elasticapm check\n\n\"\"\"\n\n\nclass Command(BaseCommand):\n arguments = (\n (('-s', '--service-name'),\n {'default': None, 'dest': 'service_name', 'help': 'Specifies the service name.'}),\n\n (('-t', '--token'),\n {'default': None, 'dest': 'secret_token', 'help': 'Specifies the secret token.'})\n )\n\n args = 'test check'\n\n def add_arguments(self, parser):\n parser.add_argument('subcommand')\n for args, kwargs in self.arguments:\n parser.add_argument(*args, **kwargs)\n\n def handle(self, *args, **options):\n if 'subcommand' in options:\n subcommand = options['subcommand']\n else:\n return self.handle_command_not_found('No command specified.')\n if subcommand not in self.dispatch:\n self.handle_command_not_found('No such command \"%s\".' 
% subcommand)\n else:\n self.dispatch.get(\n subcommand,\n self.handle_command_not_found\n )(self, subcommand, **options)\n\n def handle_test(self, command, **options):\n \"\"\"Send a test error to APM Server\"\"\"\n # can't be async for testing\n config = {'async_mode': False, }\n for key in ('service_name', 'secret_token'):\n if options.get(key):\n config[key] = options[key]\n client = DjangoClient(**config)\n client.error_logger = ColoredLogger(self.stderr)\n client.logger = ColoredLogger(self.stderr)\n client.state.logger = client.logger\n client.state.error_logger = client.error_logger\n self.write(\n \"Trying to send a test error to APM Server using these settings:\\n\\n\"\n \"SERVICE_NAME:\\t%s\\n\"\n \"SECRET_TOKEN:\\t%s\\n\"\n \"SERVER:\\t\\t%s\\n\\n\" % (\n client.config.service_name,\n client.config.secret_token,\n client.config.server_url,\n )\n )\n\n try:\n raise TestException('Hi there!')\n except TestException:\n client.capture_exception()\n if not client.error_logger.errors:\n self.write(\n 'Success! We tracked the error successfully! You should be'\n ' able to see it in a few seconds at the above URL'\n )\n finally:\n client.close()\n\n def handle_check(self, command, **options):\n \"\"\"Check your settings for common misconfigurations\"\"\"\n passed = True\n client = DjangoClient()\n # check if org/app and token are set:\n is_set = lambda x: x and x != 'None'\n values = [client.config.service_name, client.config.secret_token]\n if all(map(is_set, values)):\n self.write(\n 'Service name and secret token are set, good job!',\n green\n )\n else:\n passed = False\n self.write(\n 'Configuration errors detected!', red, ending='\\n\\n'\n )\n if not is_set(client.config.service_name):\n self.write(\" * SERVICE_NAME not set! \", red, ending='\\n')\n if not is_set(client.config.secret_token):\n self.write(\" * SECRET_TOKEN not set!\", red, ending='\\n')\n self.write(CONFIG_EXAMPLE)\n self.write('')\n\n server_url = client.config.server_url\n if server_url:\n parsed_url = urlparse.urlparse(server_url)\n if parsed_url.scheme.lower() in ('http', 'https', ):\n # parse netloc, making sure people did not supply basic auth\n if '@' in parsed_url.netloc:\n credentials, _, path = parsed_url.netloc.rpartition('@')\n passed = False\n self.write(\n 'Configuration errors detected!', red, ending='\\n\\n'\n )\n if ':' in credentials:\n self.write(' * SERVER_URL cannot contain authentication '\n 'credentials', red, ending='\\n')\n else:\n self.write(' * SERVER_URL contains an unexpected at-sign!'\n ' This is usually used for basic authentication, '\n 'but the colon is left out', red, ending='\\n')\n else:\n self.write('SERVER_URL {0} looks fine'.format(server_url, ), green)\n else:\n self.write(' * SERVER_URL has scheme {0} and we require '\n 'http or https!'.format(parsed_url.scheme, ), red, ending='\\n')\n passed = False\n else:\n self.write(\n 'Configuration errors detected!', red, ending='\\n\\n'\n )\n self.write(\" * SERVER_URL appears to be empty\", red, ending='\\n')\n passed = False\n self.write('')\n\n # check if we're disabled due to DEBUG:\n if settings.DEBUG:\n if getattr(settings, 'ELASTIC_APM', {}).get('DEBUG'):\n self.write(\n 'Note: even though you are running in DEBUG mode, we will '\n 'send data to the APM Server, because you set ELASTIC_APM[\"DEBUG\"] to '\n 'True. 
You can disable ElasticAPM while in DEBUG mode like this'\n '\\n\\n',\n yellow\n )\n self.write(\n ' ELASTIC_APM = {\\n'\n ' \"DEBUG\": False,\\n'\n ' # your other ELASTIC_APM settings\\n'\n ' }'\n\n )\n else:\n self.write(\n 'Looks like you\\'re running in DEBUG mode. ElasticAPM will NOT '\n 'gather any data while DEBUG is set to True.\\n\\n',\n red,\n )\n self.write(\n 'If you want to test ElasticAPM while DEBUG is set to True, you'\n ' can force ElasticAPM to gather data by setting'\n ' ELASTIC_APM[\"DEBUG\"] to True, like this\\n\\n'\n ' ELASTIC_APM = {\\n'\n ' \"DEBUG\": True,\\n'\n ' # your other ELASTIC_APM settings\\n'\n ' }'\n )\n passed = False\n else:\n self.write(\n 'DEBUG mode is disabled! Looking good!',\n green\n )\n self.write('')\n\n # check if middleware is set, and if it is at the first position\n middleware_attr = 'MIDDLEWARE' if getattr(settings, 'MIDDLEWARE', None) is not None else 'MIDDLEWARE_CLASSES'\n middleware = list(getattr(settings, middleware_attr))\n try:\n pos = middleware.index('elasticapm.contrib.django.middleware.TracingMiddleware')\n if pos == 0:\n self.write('Tracing middleware is configured! Awesome!', green)\n else:\n self.write('Tracing middleware is configured, but not at the first position\\n', yellow)\n self.write('ElasticAPM works best if you add it at the top of your %s setting' % middleware_attr)\n except ValueError:\n self.write('Tracing middleware not configured!', red)\n self.write(\n '\\n'\n 'Add it to your %(name)s setting like this:\\n\\n'\n ' %(name)s = (\\n'\n ' \"elasticapm.contrib.django.middleware.TracingMiddleware\",\\n'\n ' # your other middleware classes\\n'\n ' )\\n' % {'name': middleware_attr}\n )\n self.write('')\n if passed:\n self.write('Looks like everything should be ready!', green)\n else:\n self.write(\n 'Please fix the above errors.',\n red\n )\n self.write('')\n return passed\n\n def handle_command_not_found(self, message):\n self.write(message, red, ending='')\n self.write(\n ' Please use one of the following commands:\\n\\n',\n red\n )\n self.write(\n ''.join(\n ' * %s\\t%s\\n' % (k.ljust(8), v.__doc__)\n for k, v in self.dispatch.items()\n )\n )\n self.write('\\n')\n argv = self._get_argv()\n self.write(\n 'Usage:\\n\\t%s elasticapm <command>' % (\n ' '.join(argv[:argv.index('elasticapm')])\n )\n )\n\n def write(self, msg, style_func=None, ending=None, stream=None):\n \"\"\"\n wrapper around self.stdout/stderr to ensure Django 1.4 compatibility\n \"\"\"\n if stream is None:\n stream = self.stdout\n if OutputWrapper is None:\n ending = '\\n' if ending is None else ending\n msg += ending\n stream.write(msg)\n else:\n stream.write(msg, style_func=style_func, ending=ending)\n\n def _get_argv(self):\n \"\"\"allow cleaner mocking of sys.argv\"\"\"\n return sys.argv\n\n dispatch = {\n 'test': handle_test,\n 'check': handle_check,\n }\n", "path": "elasticapm/contrib/django/management/commands/elasticapm.py"}]}
| 3,809 | 564 |
gh_patches_debug_35500
|
rasdani/github-patches
|
git_diff
|
mozilla__bugbug-328
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Store pipeline tasks in the index, and update Taskcluster URLs in scripts
The scripts are still using the old URLs from releng-services.
</issue>
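For orientation, the Taskcluster index serves artifacts at URLs of the form `https://index.taskcluster.net/v1/task/<route>/artifacts/public/<file>`. The sketch below builds one route per task; the `project.relman.bugbug.data_{}.latest` template mirrors the patch shown further down in this record, and the helper itself is an illustration rather than project code:

```python
# Sketch: one index route per data-generating task instead of the single
# releng-services route.
BASE_URL = (
    "https://index.taskcluster.net/v1/task/"
    "project.relman.bugbug.data_{}.latest/artifacts/public"
)


def artifact_url(task_name: str, file_name: str) -> str:
    return f"{BASE_URL.format(task_name)}/{file_name}"


print(artifact_url("bugs", "bugs.json.xz"))
print(artifact_url("commits", "commits.json.xz"))
```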
<code>
[start of scripts/trainer.py]
1 # -*- coding: utf-8 -*-
2
3 import argparse
4 import lzma
5 import os
6 import shutil
7 from logging import INFO, basicConfig, getLogger
8 from urllib.request import urlretrieve
9
10 from bugbug.models.component import ComponentModel
11 from bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel
12 from bugbug.models.regression import RegressionModel
13 from bugbug.models.tracking import TrackingModel
14
15 basicConfig(level=INFO)
16 logger = getLogger(__name__)
17
18 BASE_URL = "https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_data.latest/artifacts/public"
19
20
21 class Trainer(object):
22 def decompress_file(self, path):
23 with lzma.open(f"{path}.xz", "rb") as input_f:
24 with open(path, "wb") as output_f:
25 shutil.copyfileobj(input_f, output_f)
26
27 def compress_file(self, path):
28 with open(path, "rb") as input_f:
29 with lzma.open(f"{path}.xz", "wb") as output_f:
30 shutil.copyfileobj(input_f, output_f)
31
32 def train_defect_enhancement_task(self):
33 logger.info("Training *defect vs enhancement vs task* model")
34 model = DefectEnhancementTaskModel()
35 model.train()
36 self.compress_file("defectenhancementtaskmodel")
37
38 def train_component(self):
39 logger.info("Training *component* model")
40 model = ComponentModel()
41 model.train()
42 self.compress_file("componentmodel")
43
44 def train_regression(self):
45 logger.info("Training *regression vs non-regression* model")
46 model = RegressionModel()
47 model.train()
48 self.compress_file("regressionmodel")
49
50 def train_tracking(self):
51 logger.info("Training *tracking* model")
52 model = TrackingModel()
53 model.train()
54 self.compress_file("trackingmodel")
55
56 def go(self, model):
57 # TODO: Stop hard-coding them
58 valid_models = ["defect", "component", "regression", "tracking"]
59
60 if model not in valid_models:
61 exception = (
62 f"Invalid model {model!r} name, use one of {valid_models!r} instead"
63 )
64 raise ValueError(exception)
65
66 # Download datasets that were built by bugbug_data.
67 os.makedirs("data", exist_ok=True)
68
69 # Bugs.json
70 logger.info("Downloading bugs database")
71 urlretrieve(f"{BASE_URL}/bugs.json.xz", "data/bugs.json.xz")
72 logger.info("Decompressing bugs database")
73 self.decompress_file("data/bugs.json")
74
75 # Commits.json
76 logger.info("Downloading commits database")
77 urlretrieve(f"{BASE_URL}/commits.json.xz", "data/commits.json.xz")
78 logger.info("Decompressing commits database")
79 self.decompress_file("data/commits.json")
80
81 if model == "defect":
82 # Train classifier for defect-vs-enhancement-vs-task.
83 self.train_defect_enhancement_task()
84 elif model == "component":
85 # Train classifier for the component of a bug.
86 self.train_component()
87 elif model == "regression":
88 # Train classifier for regression-vs-nonregression.
89 self.train_regression()
90 elif model == "tracking":
91 # Train classifier for tracking bugs.
92 self.train_tracking()
93 else:
94 # We shouldn't be here
95 raise Exception("valid_models is likely not up-to-date anymore")
96
97
98 def main():
99 description = "Train the models"
100 parser = argparse.ArgumentParser(description=description)
101
102 parser.add_argument("model", help="Which model to train.")
103
104 args = parser.parse_args()
105
106 retriever = Trainer()
107 retriever.go(args.model)
108
[end of scripts/trainer.py]
[start of http_service/download_models.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import logging
7 import lzma
8 import os
9 import shutil
10 from urllib.request import urlretrieve
11
12 import requests
13
14 logging.basicConfig(level=logging.INFO)
15 LOGGER = logging.getLogger()
16
17 BASE_URL = "https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_train.latest/artifacts/public"
18
19 MODELS_NAMES = ("defectenhancementtask", "component", "regression")
20 MODELS_DIR = os.path.join(os.path.dirname(__file__), "models")
21
22
23 def retrieve_model(name):
24 os.makedirs(MODELS_DIR, exist_ok=True)
25
26 file_name = f"{name}model"
27 file_path = os.path.join(MODELS_DIR, file_name)
28
29 model_url = f"{BASE_URL}/{file_name}.xz"
30 LOGGER.info(f"Checking ETAG of {model_url}")
31 r = requests.head(model_url, allow_redirects=True)
32 r.raise_for_status()
33 new_etag = r.headers["ETag"]
34
35 try:
36 with open(f"{file_path}.etag", "r") as f:
37 old_etag = f.read()
38 except IOError:
39 old_etag = None
40
41 if old_etag != new_etag:
42 LOGGER.info(f"Downloading the model from {model_url}")
43 urlretrieve(model_url, f"{file_path}.xz")
44
45 with lzma.open(f"{file_path}.xz", "rb") as input_f:
46 with open(file_path, "wb") as output_f:
47 shutil.copyfileobj(input_f, output_f)
48 LOGGER.info(f"Written model in {file_path}")
49
50 with open(f"{file_path}.etag", "w") as f:
51 f.write(new_etag)
52 else:
53 LOGGER.info(f"ETAG for {model_url} is ok")
54
55 return file_path
56
57
58 def preload_models():
59 for model_name in MODELS_NAMES:
60 retrieve_model(model_name)
61
62
63 if __name__ == "__main__":
64 preload_models()
65
[end of http_service/download_models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/http_service/download_models.py b/http_service/download_models.py
--- a/http_service/download_models.py
+++ b/http_service/download_models.py
@@ -14,7 +14,7 @@
logging.basicConfig(level=logging.INFO)
LOGGER = logging.getLogger()
-BASE_URL = "https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_train.latest/artifacts/public"
+BASE_URL = "https://index.taskcluster.net/v1/task/project.relman.bugbug.train_{}.latest/artifacts/public"
MODELS_NAMES = ("defectenhancementtask", "component", "regression")
MODELS_DIR = os.path.join(os.path.dirname(__file__), "models")
@@ -26,7 +26,8 @@
file_name = f"{name}model"
file_path = os.path.join(MODELS_DIR, file_name)
- model_url = f"{BASE_URL}/{file_name}.xz"
+ base_model_url = BASE_URL.format(name)
+ model_url = f"{base_model_url}/{file_name}.xz"
LOGGER.info(f"Checking ETAG of {model_url}")
r = requests.head(model_url, allow_redirects=True)
r.raise_for_status()
diff --git a/scripts/trainer.py b/scripts/trainer.py
--- a/scripts/trainer.py
+++ b/scripts/trainer.py
@@ -15,7 +15,7 @@
basicConfig(level=INFO)
logger = getLogger(__name__)
-BASE_URL = "https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_data.latest/artifacts/public"
+BASE_URL = "https://index.taskcluster.net/v1/task/project.relman.bugbug.data_{}.latest/artifacts/public"
class Trainer(object):
@@ -68,13 +68,15 @@
# Bugs.json
logger.info("Downloading bugs database")
- urlretrieve(f"{BASE_URL}/bugs.json.xz", "data/bugs.json.xz")
+ bugs_url = BASE_URL.format("bugs")
+ urlretrieve(f"{bugs_url}/bugs.json.xz", "data/bugs.json.xz")
logger.info("Decompressing bugs database")
self.decompress_file("data/bugs.json")
# Commits.json
logger.info("Downloading commits database")
- urlretrieve(f"{BASE_URL}/commits.json.xz", "data/commits.json.xz")
+ commits_url = BASE_URL.format("commits")
+ urlretrieve(f"{commits_url}/commits.json.xz", "data/commits.json.xz")
logger.info("Decompressing commits database")
self.decompress_file("data/commits.json")
|
{"golden_diff": "diff --git a/http_service/download_models.py b/http_service/download_models.py\n--- a/http_service/download_models.py\n+++ b/http_service/download_models.py\n@@ -14,7 +14,7 @@\n logging.basicConfig(level=logging.INFO)\n LOGGER = logging.getLogger()\n \n-BASE_URL = \"https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_train.latest/artifacts/public\"\n+BASE_URL = \"https://index.taskcluster.net/v1/task/project.relman.bugbug.train_{}.latest/artifacts/public\"\n \n MODELS_NAMES = (\"defectenhancementtask\", \"component\", \"regression\")\n MODELS_DIR = os.path.join(os.path.dirname(__file__), \"models\")\n@@ -26,7 +26,8 @@\n file_name = f\"{name}model\"\n file_path = os.path.join(MODELS_DIR, file_name)\n \n- model_url = f\"{BASE_URL}/{file_name}.xz\"\n+ base_model_url = BASE_URL.format(name)\n+ model_url = f\"{base_model_url}/{file_name}.xz\"\n LOGGER.info(f\"Checking ETAG of {model_url}\")\n r = requests.head(model_url, allow_redirects=True)\n r.raise_for_status()\ndiff --git a/scripts/trainer.py b/scripts/trainer.py\n--- a/scripts/trainer.py\n+++ b/scripts/trainer.py\n@@ -15,7 +15,7 @@\n basicConfig(level=INFO)\n logger = getLogger(__name__)\n \n-BASE_URL = \"https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_data.latest/artifacts/public\"\n+BASE_URL = \"https://index.taskcluster.net/v1/task/project.relman.bugbug.data_{}.latest/artifacts/public\"\n \n \n class Trainer(object):\n@@ -68,13 +68,15 @@\n \n # Bugs.json\n logger.info(\"Downloading bugs database\")\n- urlretrieve(f\"{BASE_URL}/bugs.json.xz\", \"data/bugs.json.xz\")\n+ bugs_url = BASE_URL.format(\"bugs\")\n+ urlretrieve(f\"{bugs_url}/bugs.json.xz\", \"data/bugs.json.xz\")\n logger.info(\"Decompressing bugs database\")\n self.decompress_file(\"data/bugs.json\")\n \n # Commits.json\n logger.info(\"Downloading commits database\")\n- urlretrieve(f\"{BASE_URL}/commits.json.xz\", \"data/commits.json.xz\")\n+ commits_url = BASE_URL.format(\"commits\")\n+ urlretrieve(f\"{commits_url}/commits.json.xz\", \"data/commits.json.xz\")\n logger.info(\"Decompressing commits database\")\n self.decompress_file(\"data/commits.json\")\n", "issue": "Store pipeline tasks in the index, and update Taskcluster URLs in scripts\nThe scripts are still using the old URLs from releng-services.\nStore pipeline tasks in the index, and update Taskcluster URLs in scripts\nThe scripts are still using the old URLs from releng-services.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport argparse\nimport lzma\nimport os\nimport shutil\nfrom logging import INFO, basicConfig, getLogger\nfrom urllib.request import urlretrieve\n\nfrom bugbug.models.component import ComponentModel\nfrom bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel\nfrom bugbug.models.regression import RegressionModel\nfrom bugbug.models.tracking import TrackingModel\n\nbasicConfig(level=INFO)\nlogger = getLogger(__name__)\n\nBASE_URL = \"https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_data.latest/artifacts/public\"\n\n\nclass Trainer(object):\n def decompress_file(self, path):\n with lzma.open(f\"{path}.xz\", \"rb\") as input_f:\n with open(path, \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n\n def compress_file(self, path):\n with open(path, \"rb\") as input_f:\n with lzma.open(f\"{path}.xz\", \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n\n def train_defect_enhancement_task(self):\n logger.info(\"Training *defect 
vs enhancement vs task* model\")\n model = DefectEnhancementTaskModel()\n model.train()\n self.compress_file(\"defectenhancementtaskmodel\")\n\n def train_component(self):\n logger.info(\"Training *component* model\")\n model = ComponentModel()\n model.train()\n self.compress_file(\"componentmodel\")\n\n def train_regression(self):\n logger.info(\"Training *regression vs non-regression* model\")\n model = RegressionModel()\n model.train()\n self.compress_file(\"regressionmodel\")\n\n def train_tracking(self):\n logger.info(\"Training *tracking* model\")\n model = TrackingModel()\n model.train()\n self.compress_file(\"trackingmodel\")\n\n def go(self, model):\n # TODO: Stop hard-coding them\n valid_models = [\"defect\", \"component\", \"regression\", \"tracking\"]\n\n if model not in valid_models:\n exception = (\n f\"Invalid model {model!r} name, use one of {valid_models!r} instead\"\n )\n raise ValueError(exception)\n\n # Download datasets that were built by bugbug_data.\n os.makedirs(\"data\", exist_ok=True)\n\n # Bugs.json\n logger.info(\"Downloading bugs database\")\n urlretrieve(f\"{BASE_URL}/bugs.json.xz\", \"data/bugs.json.xz\")\n logger.info(\"Decompressing bugs database\")\n self.decompress_file(\"data/bugs.json\")\n\n # Commits.json\n logger.info(\"Downloading commits database\")\n urlretrieve(f\"{BASE_URL}/commits.json.xz\", \"data/commits.json.xz\")\n logger.info(\"Decompressing commits database\")\n self.decompress_file(\"data/commits.json\")\n\n if model == \"defect\":\n # Train classifier for defect-vs-enhancement-vs-task.\n self.train_defect_enhancement_task()\n elif model == \"component\":\n # Train classifier for the component of a bug.\n self.train_component()\n elif model == \"regression\":\n # Train classifier for regression-vs-nonregression.\n self.train_regression()\n elif model == \"tracking\":\n # Train classifier for tracking bugs.\n self.train_tracking()\n else:\n # We shouldn't be here\n raise Exception(\"valid_models is likely not up-to-date anymore\")\n\n\ndef main():\n description = \"Train the models\"\n parser = argparse.ArgumentParser(description=description)\n\n parser.add_argument(\"model\", help=\"Which model to train.\")\n\n args = parser.parse_args()\n\n retriever = Trainer()\n retriever.go(args.model)\n", "path": "scripts/trainer.py"}, {"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\nimport lzma\nimport os\nimport shutil\nfrom urllib.request import urlretrieve\n\nimport requests\n\nlogging.basicConfig(level=logging.INFO)\nLOGGER = logging.getLogger()\n\nBASE_URL = \"https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_train.latest/artifacts/public\"\n\nMODELS_NAMES = (\"defectenhancementtask\", \"component\", \"regression\")\nMODELS_DIR = os.path.join(os.path.dirname(__file__), \"models\")\n\n\ndef retrieve_model(name):\n os.makedirs(MODELS_DIR, exist_ok=True)\n\n file_name = f\"{name}model\"\n file_path = os.path.join(MODELS_DIR, file_name)\n\n model_url = f\"{BASE_URL}/{file_name}.xz\"\n LOGGER.info(f\"Checking ETAG of {model_url}\")\n r = requests.head(model_url, allow_redirects=True)\n r.raise_for_status()\n new_etag = r.headers[\"ETag\"]\n\n try:\n with open(f\"{file_path}.etag\", \"r\") as f:\n old_etag = f.read()\n except IOError:\n old_etag = None\n\n if old_etag != new_etag:\n LOGGER.info(f\"Downloading the model from {model_url}\")\n urlretrieve(model_url, f\"{file_path}.xz\")\n\n with lzma.open(f\"{file_path}.xz\", \"rb\") as input_f:\n with open(file_path, \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n LOGGER.info(f\"Written model in {file_path}\")\n\n with open(f\"{file_path}.etag\", \"w\") as f:\n f.write(new_etag)\n else:\n LOGGER.info(f\"ETAG for {model_url} is ok\")\n\n return file_path\n\n\ndef preload_models():\n for model_name in MODELS_NAMES:\n retrieve_model(model_name)\n\n\nif __name__ == \"__main__\":\n preload_models()\n", "path": "http_service/download_models.py"}]}
| 2,225 | 579 |
gh_patches_debug_38877
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-2323
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
After the 0.135.0 release, invalid GraphQL queries are not logged as errors
## Describe the Bug
Before the 0.135.0 release, invalid queries were logged as errors. For example, given the following schema:
```python
import strawberry

@strawberry.type
class Query:
    @strawberry.field
    def version(self) -> int:
        return 139

schema = strawberry.schema.Schema(query=Query)
```
When executing the query `query { v }`, the following error was logged:
```
Cannot query field 'v' on type 'Query'.
GraphQL request:1:9
1 | query { v }
| ^
Stack (most recent call last):
...
```
After the update to 0.135.0, the log is silent on this.
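A quick way to confirm the regression is to run an invalid query and check whether anything reaches the logging system. A minimal sketch (the pytest `caplog` check assumes the default `Schema.process_errors` logs through the standard `logging` module, which is not stated here):
```python
import logging

import strawberry

@strawberry.type
class Query:
    @strawberry.field
    def version(self) -> int:
        return 139

schema = strawberry.Schema(query=Query)

def test_invalid_query_is_logged(caplog):
    # "v" is not a field on Query, so validation fails
    with caplog.at_level(logging.ERROR):
        result = schema.execute_sync("query { v }")
    assert result.errors  # the validation error is still returned to the caller
    assert caplog.records, "expected the validation error to also be logged"
```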
## System Information
- Operating system: Fedora Linux 36
- Strawberry version (if applicable): >=0.135.0
</issue>
<code>
[start of strawberry/schema/execute.py]
1 from asyncio import ensure_future
2 from inspect import isawaitable
3 from typing import (
4 Awaitable,
5 Callable,
6 Iterable,
7 List,
8 Optional,
9 Sequence,
10 Tuple,
11 Type,
12 Union,
13 cast,
14 )
15
16 from graphql import (
17 ExecutionContext as GraphQLExecutionContext,
18 ExecutionResult as GraphQLExecutionResult,
19 GraphQLError,
20 GraphQLSchema,
21 execute as original_execute,
22 parse,
23 )
24 from graphql.language import DocumentNode
25 from graphql.validation import ASTValidationRule, validate
26
27 from strawberry.extensions import Extension
28 from strawberry.extensions.runner import ExtensionsRunner
29 from strawberry.types import ExecutionContext, ExecutionResult
30 from strawberry.types.graphql import OperationType
31
32 from .exceptions import InvalidOperationTypeError
33
34
35 def parse_document(query: str) -> DocumentNode:
36 return parse(query)
37
38
39 def validate_document(
40 schema: GraphQLSchema,
41 document: DocumentNode,
42 validation_rules: Tuple[Type[ASTValidationRule], ...],
43 ) -> List[GraphQLError]:
44 return validate(
45 schema,
46 document,
47 validation_rules,
48 )
49
50
51 def _run_validation(execution_context: ExecutionContext) -> None:
52 # Check if there are any validation rules or if validation has
53 # already been run by an extension
54 if len(execution_context.validation_rules) > 0 and execution_context.errors is None:
55 assert execution_context.graphql_document
56 execution_context.errors = validate_document(
57 execution_context.schema._schema,
58 execution_context.graphql_document,
59 execution_context.validation_rules,
60 )
61
62
63 async def execute(
64 schema: GraphQLSchema,
65 query: str,
66 *,
67 allowed_operation_types: Iterable[OperationType],
68 extensions: Sequence[Union[Type[Extension], Extension]],
69 execution_context: ExecutionContext,
70 execution_context_class: Optional[Type[GraphQLExecutionContext]] = None,
71 process_errors: Callable[[List[GraphQLError], Optional[ExecutionContext]], None],
72 ) -> ExecutionResult:
73 extensions_runner = ExtensionsRunner(
74 execution_context=execution_context,
75 extensions=list(extensions),
76 )
77
78 async with extensions_runner.request():
79 # Note: In graphql-core the schema would be validated here but in
80 # Strawberry we are validating it at initialisation time instead
81
82 async with extensions_runner.parsing():
83 try:
84 if not execution_context.graphql_document:
85 execution_context.graphql_document = parse_document(query)
86
87 except GraphQLError as error:
88 execution_context.errors = [error]
89 return ExecutionResult(
90 data=None,
91 errors=[error],
92 extensions=await extensions_runner.get_extensions_results(),
93 )
94
95 except Exception as error: # pragma: no cover
96 error = GraphQLError(str(error), original_error=error)
97
98 execution_context.errors = [error]
99 return ExecutionResult(
100 data=None,
101 errors=[error],
102 extensions=await extensions_runner.get_extensions_results(),
103 )
104
105 if execution_context.operation_type not in allowed_operation_types:
106 raise InvalidOperationTypeError(execution_context.operation_type)
107
108 async with extensions_runner.validation():
109 _run_validation(execution_context)
110 if execution_context.errors:
111 return ExecutionResult(data=None, errors=execution_context.errors)
112
113 async with extensions_runner.executing():
114 if not execution_context.result:
115 result = original_execute(
116 schema,
117 execution_context.graphql_document,
118 root_value=execution_context.root_value,
119 middleware=extensions_runner.as_middleware_manager(),
120 variable_values=execution_context.variables,
121 operation_name=execution_context.operation_name,
122 context_value=execution_context.context,
123 execution_context_class=execution_context_class,
124 )
125
126 if isawaitable(result):
127 result = await cast(Awaitable[GraphQLExecutionResult], result)
128
129 result = cast(GraphQLExecutionResult, result)
130 execution_context.result = result
131 # Also set errors on the execution_context so that it's easier
132 # to access in extensions
133 if result.errors:
134 execution_context.errors = result.errors
135
136 # Run the `Schema.process_errors` function here before
137 # extensions have a chance to modify them (see the MaskErrors
138 # extension). That way we can log the original errors but
139 # only return a sanitised version to the client.
140 process_errors(result.errors, execution_context)
141
142 return ExecutionResult(
143 data=execution_context.result.data,
144 errors=execution_context.result.errors,
145 extensions=await extensions_runner.get_extensions_results(),
146 )
147
148
149 def execute_sync(
150 schema: GraphQLSchema,
151 query: str,
152 *,
153 allowed_operation_types: Iterable[OperationType],
154 extensions: Sequence[Union[Type[Extension], Extension]],
155 execution_context: ExecutionContext,
156 execution_context_class: Optional[Type[GraphQLExecutionContext]] = None,
157 process_errors: Callable[[List[GraphQLError], Optional[ExecutionContext]], None],
158 ) -> ExecutionResult:
159 extensions_runner = ExtensionsRunner(
160 execution_context=execution_context,
161 extensions=list(extensions),
162 )
163
164 with extensions_runner.request():
165 # Note: In graphql-core the schema would be validated here but in
166 # Strawberry we are validating it at initialisation time instead
167
168 with extensions_runner.parsing():
169 try:
170 if not execution_context.graphql_document:
171 execution_context.graphql_document = parse_document(query)
172
173 except GraphQLError as error:
174 execution_context.errors = [error]
175 return ExecutionResult(
176 data=None,
177 errors=[error],
178 extensions=extensions_runner.get_extensions_results_sync(),
179 )
180
181 except Exception as error: # pragma: no cover
182 error = GraphQLError(str(error), original_error=error)
183
184 execution_context.errors = [error]
185 return ExecutionResult(
186 data=None,
187 errors=[error],
188 extensions=extensions_runner.get_extensions_results_sync(),
189 )
190
191 if execution_context.operation_type not in allowed_operation_types:
192 raise InvalidOperationTypeError(execution_context.operation_type)
193
194 with extensions_runner.validation():
195 _run_validation(execution_context)
196 if execution_context.errors:
197 return ExecutionResult(data=None, errors=execution_context.errors)
198
199 with extensions_runner.executing():
200 if not execution_context.result:
201 result = original_execute(
202 schema,
203 execution_context.graphql_document,
204 root_value=execution_context.root_value,
205 middleware=extensions_runner.as_middleware_manager(),
206 variable_values=execution_context.variables,
207 operation_name=execution_context.operation_name,
208 context_value=execution_context.context,
209 execution_context_class=execution_context_class,
210 )
211
212 if isawaitable(result):
213 result = cast(Awaitable[GraphQLExecutionResult], result)
214 ensure_future(result).cancel()
215 raise RuntimeError(
216 "GraphQL execution failed to complete synchronously."
217 )
218
219 result = cast(GraphQLExecutionResult, result)
220 execution_context.result = result
221 # Also set errors on the execution_context so that it's easier
222 # to access in extensions
223 if result.errors:
224 execution_context.errors = result.errors
225
226 # Run the `Schema.process_errors` function here before
227 # extensions have a chance to modify them (see the MaskErrors
228 # extension). That way we can log the original errors but
229 # only return a sanitised version to the client.
230 process_errors(result.errors, execution_context)
231
232 return ExecutionResult(
233 data=execution_context.result.data,
234 errors=execution_context.result.errors,
235 extensions=extensions_runner.get_extensions_results_sync(),
236 )
237
[end of strawberry/schema/execute.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/strawberry/schema/execute.py b/strawberry/schema/execute.py
--- a/strawberry/schema/execute.py
+++ b/strawberry/schema/execute.py
@@ -86,6 +86,7 @@
except GraphQLError as error:
execution_context.errors = [error]
+ process_errors([error], execution_context)
return ExecutionResult(
data=None,
errors=[error],
@@ -96,6 +97,8 @@
error = GraphQLError(str(error), original_error=error)
execution_context.errors = [error]
+ process_errors([error], execution_context)
+
return ExecutionResult(
data=None,
errors=[error],
@@ -108,6 +111,7 @@
async with extensions_runner.validation():
_run_validation(execution_context)
if execution_context.errors:
+ process_errors(execution_context.errors, execution_context)
return ExecutionResult(data=None, errors=execution_context.errors)
async with extensions_runner.executing():
@@ -172,6 +176,7 @@
except GraphQLError as error:
execution_context.errors = [error]
+ process_errors([error], execution_context)
return ExecutionResult(
data=None,
errors=[error],
@@ -182,6 +187,7 @@
error = GraphQLError(str(error), original_error=error)
execution_context.errors = [error]
+ process_errors([error], execution_context)
return ExecutionResult(
data=None,
errors=[error],
@@ -194,6 +200,7 @@
with extensions_runner.validation():
_run_validation(execution_context)
if execution_context.errors:
+ process_errors(execution_context.errors, execution_context)
return ExecutionResult(data=None, errors=execution_context.errors)
with extensions_runner.executing():
|
{"golden_diff": "diff --git a/strawberry/schema/execute.py b/strawberry/schema/execute.py\n--- a/strawberry/schema/execute.py\n+++ b/strawberry/schema/execute.py\n@@ -86,6 +86,7 @@\n \n except GraphQLError as error:\n execution_context.errors = [error]\n+ process_errors([error], execution_context)\n return ExecutionResult(\n data=None,\n errors=[error],\n@@ -96,6 +97,8 @@\n error = GraphQLError(str(error), original_error=error)\n \n execution_context.errors = [error]\n+ process_errors([error], execution_context)\n+\n return ExecutionResult(\n data=None,\n errors=[error],\n@@ -108,6 +111,7 @@\n async with extensions_runner.validation():\n _run_validation(execution_context)\n if execution_context.errors:\n+ process_errors(execution_context.errors, execution_context)\n return ExecutionResult(data=None, errors=execution_context.errors)\n \n async with extensions_runner.executing():\n@@ -172,6 +176,7 @@\n \n except GraphQLError as error:\n execution_context.errors = [error]\n+ process_errors([error], execution_context)\n return ExecutionResult(\n data=None,\n errors=[error],\n@@ -182,6 +187,7 @@\n error = GraphQLError(str(error), original_error=error)\n \n execution_context.errors = [error]\n+ process_errors([error], execution_context)\n return ExecutionResult(\n data=None,\n errors=[error],\n@@ -194,6 +200,7 @@\n with extensions_runner.validation():\n _run_validation(execution_context)\n if execution_context.errors:\n+ process_errors(execution_context.errors, execution_context)\n return ExecutionResult(data=None, errors=execution_context.errors)\n \n with extensions_runner.executing():\n", "issue": "After the 0.135.0 release, invalid GraphQL queries are not logged as errors\n## Describe the Bug\r\nBefore the 0.135.0 invalid queries were logged as errors. 
For example, given following schema:\r\n```python\r\nimport strawberry\r\n\r\[email protected]\r\nclass Query:\r\n @strawberry.field\r\n def version(self) -> int:\r\n return 139\r\n\r\n\r\nschema = strawberry.schema.Schema(query=Query)\r\n```\r\nWhen executing a query: `query { v }` the error will be logged:\r\n```\r\nCannot query field 'v' on type 'Query'.\r\n\r\nGraphQL request:1:9\r\n1 | query { v }\r\n | ^\r\nStack (most recent call last):\r\n...\r\n```\r\nAfter the update to 0.135.0 the log is silent on this.\r\n\r\n## System Information\r\n\r\n - Operating system: Fedora Linux 36\r\n - Strawberry version (if applicable): >=0.135.0\r\n\n", "before_files": [{"content": "from asyncio import ensure_future\nfrom inspect import isawaitable\nfrom typing import (\n Awaitable,\n Callable,\n Iterable,\n List,\n Optional,\n Sequence,\n Tuple,\n Type,\n Union,\n cast,\n)\n\nfrom graphql import (\n ExecutionContext as GraphQLExecutionContext,\n ExecutionResult as GraphQLExecutionResult,\n GraphQLError,\n GraphQLSchema,\n execute as original_execute,\n parse,\n)\nfrom graphql.language import DocumentNode\nfrom graphql.validation import ASTValidationRule, validate\n\nfrom strawberry.extensions import Extension\nfrom strawberry.extensions.runner import ExtensionsRunner\nfrom strawberry.types import ExecutionContext, ExecutionResult\nfrom strawberry.types.graphql import OperationType\n\nfrom .exceptions import InvalidOperationTypeError\n\n\ndef parse_document(query: str) -> DocumentNode:\n return parse(query)\n\n\ndef validate_document(\n schema: GraphQLSchema,\n document: DocumentNode,\n validation_rules: Tuple[Type[ASTValidationRule], ...],\n) -> List[GraphQLError]:\n return validate(\n schema,\n document,\n validation_rules,\n )\n\n\ndef _run_validation(execution_context: ExecutionContext) -> None:\n # Check if there are any validation rules or if validation has\n # already been run by an extension\n if len(execution_context.validation_rules) > 0 and execution_context.errors is None:\n assert execution_context.graphql_document\n execution_context.errors = validate_document(\n execution_context.schema._schema,\n execution_context.graphql_document,\n execution_context.validation_rules,\n )\n\n\nasync def execute(\n schema: GraphQLSchema,\n query: str,\n *,\n allowed_operation_types: Iterable[OperationType],\n extensions: Sequence[Union[Type[Extension], Extension]],\n execution_context: ExecutionContext,\n execution_context_class: Optional[Type[GraphQLExecutionContext]] = None,\n process_errors: Callable[[List[GraphQLError], Optional[ExecutionContext]], None],\n) -> ExecutionResult:\n extensions_runner = ExtensionsRunner(\n execution_context=execution_context,\n extensions=list(extensions),\n )\n\n async with extensions_runner.request():\n # Note: In graphql-core the schema would be validated here but in\n # Strawberry we are validating it at initialisation time instead\n\n async with extensions_runner.parsing():\n try:\n if not execution_context.graphql_document:\n execution_context.graphql_document = parse_document(query)\n\n except GraphQLError as error:\n execution_context.errors = [error]\n return ExecutionResult(\n data=None,\n errors=[error],\n extensions=await extensions_runner.get_extensions_results(),\n )\n\n except Exception as error: # pragma: no cover\n error = GraphQLError(str(error), original_error=error)\n\n execution_context.errors = [error]\n return ExecutionResult(\n data=None,\n errors=[error],\n extensions=await extensions_runner.get_extensions_results(),\n )\n\n if 
execution_context.operation_type not in allowed_operation_types:\n raise InvalidOperationTypeError(execution_context.operation_type)\n\n async with extensions_runner.validation():\n _run_validation(execution_context)\n if execution_context.errors:\n return ExecutionResult(data=None, errors=execution_context.errors)\n\n async with extensions_runner.executing():\n if not execution_context.result:\n result = original_execute(\n schema,\n execution_context.graphql_document,\n root_value=execution_context.root_value,\n middleware=extensions_runner.as_middleware_manager(),\n variable_values=execution_context.variables,\n operation_name=execution_context.operation_name,\n context_value=execution_context.context,\n execution_context_class=execution_context_class,\n )\n\n if isawaitable(result):\n result = await cast(Awaitable[GraphQLExecutionResult], result)\n\n result = cast(GraphQLExecutionResult, result)\n execution_context.result = result\n # Also set errors on the execution_context so that it's easier\n # to access in extensions\n if result.errors:\n execution_context.errors = result.errors\n\n # Run the `Schema.process_errors` function here before\n # extensions have a chance to modify them (see the MaskErrors\n # extension). That way we can log the original errors but\n # only return a sanitised version to the client.\n process_errors(result.errors, execution_context)\n\n return ExecutionResult(\n data=execution_context.result.data,\n errors=execution_context.result.errors,\n extensions=await extensions_runner.get_extensions_results(),\n )\n\n\ndef execute_sync(\n schema: GraphQLSchema,\n query: str,\n *,\n allowed_operation_types: Iterable[OperationType],\n extensions: Sequence[Union[Type[Extension], Extension]],\n execution_context: ExecutionContext,\n execution_context_class: Optional[Type[GraphQLExecutionContext]] = None,\n process_errors: Callable[[List[GraphQLError], Optional[ExecutionContext]], None],\n) -> ExecutionResult:\n extensions_runner = ExtensionsRunner(\n execution_context=execution_context,\n extensions=list(extensions),\n )\n\n with extensions_runner.request():\n # Note: In graphql-core the schema would be validated here but in\n # Strawberry we are validating it at initialisation time instead\n\n with extensions_runner.parsing():\n try:\n if not execution_context.graphql_document:\n execution_context.graphql_document = parse_document(query)\n\n except GraphQLError as error:\n execution_context.errors = [error]\n return ExecutionResult(\n data=None,\n errors=[error],\n extensions=extensions_runner.get_extensions_results_sync(),\n )\n\n except Exception as error: # pragma: no cover\n error = GraphQLError(str(error), original_error=error)\n\n execution_context.errors = [error]\n return ExecutionResult(\n data=None,\n errors=[error],\n extensions=extensions_runner.get_extensions_results_sync(),\n )\n\n if execution_context.operation_type not in allowed_operation_types:\n raise InvalidOperationTypeError(execution_context.operation_type)\n\n with extensions_runner.validation():\n _run_validation(execution_context)\n if execution_context.errors:\n return ExecutionResult(data=None, errors=execution_context.errors)\n\n with extensions_runner.executing():\n if not execution_context.result:\n result = original_execute(\n schema,\n execution_context.graphql_document,\n root_value=execution_context.root_value,\n middleware=extensions_runner.as_middleware_manager(),\n variable_values=execution_context.variables,\n operation_name=execution_context.operation_name,\n 
context_value=execution_context.context,\n execution_context_class=execution_context_class,\n )\n\n if isawaitable(result):\n result = cast(Awaitable[GraphQLExecutionResult], result)\n ensure_future(result).cancel()\n raise RuntimeError(\n \"GraphQL execution failed to complete synchronously.\"\n )\n\n result = cast(GraphQLExecutionResult, result)\n execution_context.result = result\n # Also set errors on the execution_context so that it's easier\n # to access in extensions\n if result.errors:\n execution_context.errors = result.errors\n\n # Run the `Schema.process_errors` function here before\n # extensions have a chance to modify them (see the MaskErrors\n # extension). That way we can log the original errors but\n # only return a sanitised version to the client.\n process_errors(result.errors, execution_context)\n\n return ExecutionResult(\n data=execution_context.result.data,\n errors=execution_context.result.errors,\n extensions=extensions_runner.get_extensions_results_sync(),\n )\n", "path": "strawberry/schema/execute.py"}]}
| 2,886 | 409 |
gh_patches_debug_1302
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-3803
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Signer/Verifier deprecation warning has wrong stacklevel
Seeing this with Cryptography 2.0:
```
.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:477: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
_warn_sign_verify_deprecated()
.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:382: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
_warn_sign_verify_deprecated()
```
I see a few open issues related to deprecations (e.g. #3794), but I'm not sure if any of them cover this particular message.
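For context, `stacklevel` selects which stack frame the warning is attributed to; since the message is emitted from a helper function, one extra level is needed for the warning to point at the caller's code instead of the helper's own call site. A minimal sketch (illustrative names, not the cryptography source):
```python
import warnings

def _warn_deprecated():
    # stacklevel=2 attributes the warning to the library function that called
    # this helper; stacklevel=3 attributes it one frame higher, i.e. to the
    # user's call site.
    warnings.warn("signer is deprecated", DeprecationWarning, stacklevel=3)

def signer():  # library wrapper that triggers the deprecation
    _warn_deprecated()

signer()  # with stacklevel=3 the warning is reported against this line
```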
</issue>
<code>
[start of src/cryptography/hazmat/backends/openssl/utils.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import warnings
8
9 from cryptography import utils
10 from cryptography.hazmat.primitives import hashes
11 from cryptography.hazmat.primitives.asymmetric.utils import Prehashed
12
13
14 def _calculate_digest_and_algorithm(backend, data, algorithm):
15 if not isinstance(algorithm, Prehashed):
16 hash_ctx = hashes.Hash(algorithm, backend)
17 hash_ctx.update(data)
18 data = hash_ctx.finalize()
19 else:
20 algorithm = algorithm._algorithm
21
22 if len(data) != algorithm.digest_size:
23 raise ValueError(
24 "The provided data must be the same length as the hash "
25 "algorithm's digest size."
26 )
27
28 return (data, algorithm)
29
30
31 def _check_not_prehashed(signature_algorithm):
32 if isinstance(signature_algorithm, Prehashed):
33 raise TypeError(
34 "Prehashed is only supported in the sign and verify methods. "
35 "It cannot be used with signer or verifier."
36 )
37
38
39 def _warn_sign_verify_deprecated():
40 warnings.warn(
41 "signer and verifier have been deprecated. Please use sign "
42 "and verify instead.",
43 utils.PersistentlyDeprecated,
44 stacklevel=2
45 )
46
[end of src/cryptography/hazmat/backends/openssl/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py
--- a/src/cryptography/hazmat/backends/openssl/utils.py
+++ b/src/cryptography/hazmat/backends/openssl/utils.py
@@ -41,5 +41,5 @@
"signer and verifier have been deprecated. Please use sign "
"and verify instead.",
utils.PersistentlyDeprecated,
- stacklevel=2
+ stacklevel=3
)
|
{"golden_diff": "diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py\n--- a/src/cryptography/hazmat/backends/openssl/utils.py\n+++ b/src/cryptography/hazmat/backends/openssl/utils.py\n@@ -41,5 +41,5 @@\n \"signer and verifier have been deprecated. Please use sign \"\n \"and verify instead.\",\n utils.PersistentlyDeprecated,\n- stacklevel=2\n+ stacklevel=3\n )\n", "issue": "Signer/Verifier deprecation warning has wrong stacklevel\nSeeing this with Cryptography 2.0:\r\n\r\n```\r\n.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:477: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.\r\n _warn_sign_verify_deprecated()\r\n.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:382: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.\r\n _warn_sign_verify_deprecated()\r\n```\r\n\r\nI see a few open issues related to deprecations (e.g. #3794), but I'm not sure if any of them cover this particular message.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport warnings\n\nfrom cryptography import utils\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric.utils import Prehashed\n\n\ndef _calculate_digest_and_algorithm(backend, data, algorithm):\n if not isinstance(algorithm, Prehashed):\n hash_ctx = hashes.Hash(algorithm, backend)\n hash_ctx.update(data)\n data = hash_ctx.finalize()\n else:\n algorithm = algorithm._algorithm\n\n if len(data) != algorithm.digest_size:\n raise ValueError(\n \"The provided data must be the same length as the hash \"\n \"algorithm's digest size.\"\n )\n\n return (data, algorithm)\n\n\ndef _check_not_prehashed(signature_algorithm):\n if isinstance(signature_algorithm, Prehashed):\n raise TypeError(\n \"Prehashed is only supported in the sign and verify methods. \"\n \"It cannot be used with signer or verifier.\"\n )\n\n\ndef _warn_sign_verify_deprecated():\n warnings.warn(\n \"signer and verifier have been deprecated. Please use sign \"\n \"and verify instead.\",\n utils.PersistentlyDeprecated,\n stacklevel=2\n )\n", "path": "src/cryptography/hazmat/backends/openssl/utils.py"}]}
| 1,095 | 115 |
gh_patches_debug_39453
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-5789
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`st.metric` should have `label_visibility` parameter
We had that in the original spec but probably forgot it during implementation. Let's fix it. Not super urgent, but it would be good to do soon so we don't forget.
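For reference, the expected usage would presumably mirror the `label_visibility` parameter on other widgets (accepting `"visible"`, `"hidden"`, or `"collapsed"`), e.g.:

```python
import streamlit as st

# Hypothetical call once the parameter exists; mirrors other widgets' API.
st.metric("Temperature", "70 °F", "1.2 °F", label_visibility="collapsed")
```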
---
Community voting on feature requests enables the Streamlit team to understand which features are most important to our users.
**If you'd like the Streamlit team to prioritize this feature request, please use the 👍 (thumbs up emoji) reaction in response to the initial post.**
</issue>
<code>
[start of lib/streamlit/elements/metric.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from textwrap import dedent
17 from typing import TYPE_CHECKING, Optional, Union, cast
18
19 from typing_extensions import Literal, TypeAlias
20
21 from streamlit.errors import StreamlitAPIException
22 from streamlit.proto.Metric_pb2 import Metric as MetricProto
23 from streamlit.runtime.metrics_util import gather_metrics
24 from streamlit.string_util import clean_text
25
26 if TYPE_CHECKING:
27 import numpy as np
28
29 from streamlit.delta_generator import DeltaGenerator
30
31
32 Value: TypeAlias = Union["np.integer", "np.floating", float, int, str, None]
33 Delta: TypeAlias = Union[float, int, str, None]
34 DeltaColor: TypeAlias = Literal["normal", "inverse", "off"]
35
36
37 @dataclass(frozen=True)
38 class MetricColorAndDirection:
39 color: "MetricProto.MetricColor.ValueType"
40 direction: "MetricProto.MetricDirection.ValueType"
41
42
43 class MetricMixin:
44 @gather_metrics("metric")
45 def metric(
46 self,
47 label: str,
48 value: Value,
49 delta: Delta = None,
50 delta_color: DeltaColor = "normal",
51 help: Optional[str] = None,
52 ) -> "DeltaGenerator":
53 """Display a metric in big bold font, with an optional indicator of how the metric changed.
54
55 Tip: If you want to display a large number, it may be a good idea to
56 shorten it using packages like `millify <https://github.com/azaitsev/millify>`_
57 or `numerize <https://github.com/davidsa03/numerize>`_. E.g. ``1234`` can be
58 displayed as ``1.2k`` using ``st.metric("Short number", millify(1234))``.
59
60 Parameters
61 ----------
62 label : str
63 The header or title for the metric. The label can optionally contain
64 Markdown and supports the following elements: Bold, Italics,
65 Strikethroughs, Inline Code, Emojis, and Links.
66 value : int, float, str, or None
67 Value of the metric. None is rendered as a long dash.
68 delta : int, float, str, or None
69 Indicator of how the metric changed, rendered with an arrow below
70 the metric. If delta is negative (int/float) or starts with a minus
71 sign (str), the arrow points down and the text is red; else the
72 arrow points up and the text is green. If None (default), no delta
73 indicator is shown.
74 delta_color : str
75 If "normal" (default), the delta indicator is shown as described
76 above. If "inverse", it is red when positive and green when
77 negative. This is useful when a negative change is considered
78 good, e.g. if cost decreased. If "off", delta is shown in gray
79 regardless of its value.
80 help : str
81 An optional tooltip that gets displayed next to the metric label.
82
83 Example
84 -------
85 >>> st.metric(label="Temperature", value="70 °F", delta="1.2 °F")
86
87 .. output::
88 https://doc-metric-example1.streamlitapp.com/
89 height: 210px
90
91 ``st.metric`` looks especially nice in combination with ``st.columns``:
92
93 >>> col1, col2, col3 = st.columns(3)
94 >>> col1.metric("Temperature", "70 °F", "1.2 °F")
95 >>> col2.metric("Wind", "9 mph", "-8%")
96 >>> col3.metric("Humidity", "86%", "4%")
97
98 .. output::
99 https://doc-metric-example2.streamlitapp.com/
100 height: 210px
101
102 The delta indicator color can also be inverted or turned off:
103
104 >>> st.metric(label="Gas price", value=4, delta=-0.5,
105 ... delta_color="inverse")
106 >>>
107 >>> st.metric(label="Active developers", value=123, delta=123,
108 ... delta_color="off")
109
110 .. output::
111 https://doc-metric-example3.streamlitapp.com/
112 height: 320px
113
114 """
115 metric_proto = MetricProto()
116 metric_proto.body = self.parse_value(value)
117 metric_proto.label = self.parse_label(label)
118 metric_proto.delta = self.parse_delta(delta)
119 if help is not None:
120 metric_proto.help = dedent(help)
121
122 color_and_direction = self.determine_delta_color_and_direction(
123 cast(DeltaColor, clean_text(delta_color)), delta
124 )
125 metric_proto.color = color_and_direction.color
126 metric_proto.direction = color_and_direction.direction
127
128 return self.dg._enqueue("metric", metric_proto)
129
130 @staticmethod
131 def parse_label(label: str) -> str:
132 if not isinstance(label, str):
133 raise TypeError(
134 f"'{str(label)}' is of type {str(type(label))}, which is not an accepted type."
135 " label only accepts: str. Please convert the label to an accepted type."
136 )
137 return label
138
139 @staticmethod
140 def parse_value(value: Value) -> str:
141 if value is None:
142 return "—"
143 if isinstance(value, int) or isinstance(value, float) or isinstance(value, str):
144 return str(value)
145 elif hasattr(value, "item"):
146 # Add support for numpy values (e.g. int16, float64, etc.)
147 try:
148 # Item could also be just a variable, so we use try, except
149 if isinstance(value.item(), float) or isinstance(value.item(), int):
150 return str(value.item())
151 except Exception:
152 # If the numpy item is not a valid value, the TypeError below will be raised.
153 pass
154
155 raise TypeError(
156 f"'{str(value)}' is of type {str(type(value))}, which is not an accepted type."
157 " value only accepts: int, float, str, or None."
158 " Please convert the value to an accepted type."
159 )
160
161 @staticmethod
162 def parse_delta(delta: Delta) -> str:
163 if delta is None or delta == "":
164 return ""
165 if isinstance(delta, str):
166 return dedent(delta)
167 elif isinstance(delta, int) or isinstance(delta, float):
168 return str(delta)
169 else:
170 raise TypeError(
171 f"'{str(delta)}' is of type {str(type(delta))}, which is not an accepted type."
172 " delta only accepts: int, float, str, or None."
173 " Please convert the value to an accepted type."
174 )
175
176 def determine_delta_color_and_direction(
177 self,
178 delta_color: DeltaColor,
179 delta: Delta,
180 ) -> MetricColorAndDirection:
181 if delta_color not in {"normal", "inverse", "off"}:
182 raise StreamlitAPIException(
183 f"'{str(delta_color)}' is not an accepted value. delta_color only accepts: "
184 "'normal', 'inverse', or 'off'"
185 )
186
187 if delta is None or delta == "":
188 return MetricColorAndDirection(
189 color=MetricProto.MetricColor.GRAY,
190 direction=MetricProto.MetricDirection.NONE,
191 )
192
193 if self.is_negative(delta):
194 if delta_color == "normal":
195 cd_color = MetricProto.MetricColor.RED
196 elif delta_color == "inverse":
197 cd_color = MetricProto.MetricColor.GREEN
198 else:
199 cd_color = MetricProto.MetricColor.GRAY
200 cd_direction = MetricProto.MetricDirection.DOWN
201 else:
202 if delta_color == "normal":
203 cd_color = MetricProto.MetricColor.GREEN
204 elif delta_color == "inverse":
205 cd_color = MetricProto.MetricColor.RED
206 else:
207 cd_color = MetricProto.MetricColor.GRAY
208 cd_direction = MetricProto.MetricDirection.UP
209
210 return MetricColorAndDirection(
211 color=cd_color,
212 direction=cd_direction,
213 )
214
215 @staticmethod
216 def is_negative(delta: Delta) -> bool:
217 return dedent(str(delta)).startswith("-")
218
219 @property
220 def dg(self) -> "DeltaGenerator":
221 return cast("DeltaGenerator", self)
222
[end of lib/streamlit/elements/metric.py]
[start of e2e/scripts/st_metric.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 col1, col2, col3 = st.columns(3)
18
19 with col1:
20 st.metric("User growth", 123, 123, "normal")
21 with col2:
22 st.metric("S&P 500", -4.56, -50)
23 with col3:
24 st.metric("Apples I've eaten", "23k", " -20", "off")
25
26 " "
27
28 col1, col2, col3 = st.columns(3)
29
30 with col1:
31 st.selectbox("Pick one", [])
32 with col2:
33 st.metric("Test 2", -4.56, 1.23, "inverse")
34 with col3:
35 st.slider("Pick another")
36
[end of e2e/scripts/st_metric.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/e2e/scripts/st_metric.py b/e2e/scripts/st_metric.py
--- a/e2e/scripts/st_metric.py
+++ b/e2e/scripts/st_metric.py
@@ -33,3 +33,11 @@
st.metric("Test 2", -4.56, 1.23, "inverse")
with col3:
st.slider("Pick another")
+
+
+with col1:
+ st.metric("Test 3", -4.56, 1.23, label_visibility="visible")
+with col2:
+ st.metric("Test 4", -4.56, 1.23, label_visibility="hidden")
+with col3:
+ st.metric("Test 5", -4.56, 1.23, label_visibility="collapsed")
diff --git a/lib/streamlit/elements/metric.py b/lib/streamlit/elements/metric.py
--- a/lib/streamlit/elements/metric.py
+++ b/lib/streamlit/elements/metric.py
@@ -18,10 +18,12 @@
from typing_extensions import Literal, TypeAlias
+from streamlit.elements.utils import get_label_visibility_proto_value
from streamlit.errors import StreamlitAPIException
from streamlit.proto.Metric_pb2 import Metric as MetricProto
from streamlit.runtime.metrics_util import gather_metrics
from streamlit.string_util import clean_text
+from streamlit.type_util import LabelVisibility, maybe_raise_label_warnings
if TYPE_CHECKING:
import numpy as np
@@ -49,6 +51,7 @@
delta: Delta = None,
delta_color: DeltaColor = "normal",
help: Optional[str] = None,
+ label_visibility: LabelVisibility = "visible",
) -> "DeltaGenerator":
"""Display a metric in big bold font, with an optional indicator of how the metric changed.
@@ -79,6 +82,11 @@
regardless of its value.
help : str
An optional tooltip that gets displayed next to the metric label.
+ label_visibility : "visible" or "hidden" or "collapsed"
+ The visibility of the label. If "hidden", the label doesn't show but there
+ is still empty space for it (equivalent to label="").
+ If "collapsed", both the label and the space are removed. Default is
+ "visible". This argument can only be supplied by keyword.
Example
-------
@@ -112,6 +120,8 @@
height: 320px
"""
+ maybe_raise_label_warnings(label, label_visibility)
+
metric_proto = MetricProto()
metric_proto.body = self.parse_value(value)
metric_proto.label = self.parse_label(label)
@@ -124,6 +134,9 @@
)
metric_proto.color = color_and_direction.color
metric_proto.direction = color_and_direction.direction
+ metric_proto.label_visibility.value = get_label_visibility_proto_value(
+ label_visibility
+ )
return self.dg._enqueue("metric", metric_proto)
|
{"golden_diff": "diff --git a/e2e/scripts/st_metric.py b/e2e/scripts/st_metric.py\n--- a/e2e/scripts/st_metric.py\n+++ b/e2e/scripts/st_metric.py\n@@ -33,3 +33,11 @@\n st.metric(\"Test 2\", -4.56, 1.23, \"inverse\")\n with col3:\n st.slider(\"Pick another\")\n+\n+\n+with col1:\n+ st.metric(\"Test 3\", -4.56, 1.23, label_visibility=\"visible\")\n+with col2:\n+ st.metric(\"Test 4\", -4.56, 1.23, label_visibility=\"hidden\")\n+with col3:\n+ st.metric(\"Test 5\", -4.56, 1.23, label_visibility=\"collapsed\")\ndiff --git a/lib/streamlit/elements/metric.py b/lib/streamlit/elements/metric.py\n--- a/lib/streamlit/elements/metric.py\n+++ b/lib/streamlit/elements/metric.py\n@@ -18,10 +18,12 @@\n \n from typing_extensions import Literal, TypeAlias\n \n+from streamlit.elements.utils import get_label_visibility_proto_value\n from streamlit.errors import StreamlitAPIException\n from streamlit.proto.Metric_pb2 import Metric as MetricProto\n from streamlit.runtime.metrics_util import gather_metrics\n from streamlit.string_util import clean_text\n+from streamlit.type_util import LabelVisibility, maybe_raise_label_warnings\n \n if TYPE_CHECKING:\n import numpy as np\n@@ -49,6 +51,7 @@\n delta: Delta = None,\n delta_color: DeltaColor = \"normal\",\n help: Optional[str] = None,\n+ label_visibility: LabelVisibility = \"visible\",\n ) -> \"DeltaGenerator\":\n \"\"\"Display a metric in big bold font, with an optional indicator of how the metric changed.\n \n@@ -79,6 +82,11 @@\n regardless of its value.\n help : str\n An optional tooltip that gets displayed next to the metric label.\n+ label_visibility : \"visible\" or \"hidden\" or \"collapsed\"\n+ The visibility of the label. If \"hidden\", the label doesn't show but there\n+ is still empty space for it (equivalent to label=\"\").\n+ If \"collapsed\", both the label and the space are removed. Default is\n+ \"visible\". This argument can only be supplied by keyword.\n \n Example\n -------\n@@ -112,6 +120,8 @@\n height: 320px\n \n \"\"\"\n+ maybe_raise_label_warnings(label, label_visibility)\n+\n metric_proto = MetricProto()\n metric_proto.body = self.parse_value(value)\n metric_proto.label = self.parse_label(label)\n@@ -124,6 +134,9 @@\n )\n metric_proto.color = color_and_direction.color\n metric_proto.direction = color_and_direction.direction\n+ metric_proto.label_visibility.value = get_label_visibility_proto_value(\n+ label_visibility\n+ )\n \n return self.dg._enqueue(\"metric\", metric_proto)\n", "issue": "`st.metric` should have `label_visibility` parameter\nWe had that in the original spec but probably forgot it during implementation. Let's fix it. Not super urgent but would be good to do soon so we don't forget. \r\n\r\n---\r\n\r\nCommunity voting on feature requests enables the Streamlit team to understand which features are most important to our users.\r\n\r\n**If you'd like the Streamlit team to prioritize this feature request, please use the \ud83d\udc4d (thumbs up emoji) reaction in response to the initial post.**\r\n\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom dataclasses import dataclass\nfrom textwrap import dedent\nfrom typing import TYPE_CHECKING, Optional, Union, cast\n\nfrom typing_extensions import Literal, TypeAlias\n\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto.Metric_pb2 import Metric as MetricProto\nfrom streamlit.runtime.metrics_util import gather_metrics\nfrom streamlit.string_util import clean_text\n\nif TYPE_CHECKING:\n import numpy as np\n\n from streamlit.delta_generator import DeltaGenerator\n\n\nValue: TypeAlias = Union[\"np.integer\", \"np.floating\", float, int, str, None]\nDelta: TypeAlias = Union[float, int, str, None]\nDeltaColor: TypeAlias = Literal[\"normal\", \"inverse\", \"off\"]\n\n\n@dataclass(frozen=True)\nclass MetricColorAndDirection:\n color: \"MetricProto.MetricColor.ValueType\"\n direction: \"MetricProto.MetricDirection.ValueType\"\n\n\nclass MetricMixin:\n @gather_metrics(\"metric\")\n def metric(\n self,\n label: str,\n value: Value,\n delta: Delta = None,\n delta_color: DeltaColor = \"normal\",\n help: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display a metric in big bold font, with an optional indicator of how the metric changed.\n\n Tip: If you want to display a large number, it may be a good idea to\n shorten it using packages like `millify <https://github.com/azaitsev/millify>`_\n or `numerize <https://github.com/davidsa03/numerize>`_. E.g. ``1234`` can be\n displayed as ``1.2k`` using ``st.metric(\"Short number\", millify(1234))``.\n\n Parameters\n ----------\n label : str\n The header or title for the metric. The label can optionally contain\n Markdown and supports the following elements: Bold, Italics,\n Strikethroughs, Inline Code, Emojis, and Links.\n value : int, float, str, or None\n Value of the metric. None is rendered as a long dash.\n delta : int, float, str, or None\n Indicator of how the metric changed, rendered with an arrow below\n the metric. If delta is negative (int/float) or starts with a minus\n sign (str), the arrow points down and the text is red; else the\n arrow points up and the text is green. If None (default), no delta\n indicator is shown.\n delta_color : str\n If \"normal\" (default), the delta indicator is shown as described\n above. If \"inverse\", it is red when positive and green when\n negative. This is useful when a negative change is considered\n good, e.g. if cost decreased. If \"off\", delta is shown in gray\n regardless of its value.\n help : str\n An optional tooltip that gets displayed next to the metric label.\n\n Example\n -------\n >>> st.metric(label=\"Temperature\", value=\"70 \u00b0F\", delta=\"1.2 \u00b0F\")\n\n .. 
output::\n https://doc-metric-example1.streamlitapp.com/\n height: 210px\n\n ``st.metric`` looks especially nice in combination with ``st.columns``:\n\n >>> col1, col2, col3 = st.columns(3)\n >>> col1.metric(\"Temperature\", \"70 \u00b0F\", \"1.2 \u00b0F\")\n >>> col2.metric(\"Wind\", \"9 mph\", \"-8%\")\n >>> col3.metric(\"Humidity\", \"86%\", \"4%\")\n\n .. output::\n https://doc-metric-example2.streamlitapp.com/\n height: 210px\n\n The delta indicator color can also be inverted or turned off:\n\n >>> st.metric(label=\"Gas price\", value=4, delta=-0.5,\n ... delta_color=\"inverse\")\n >>>\n >>> st.metric(label=\"Active developers\", value=123, delta=123,\n ... delta_color=\"off\")\n\n .. output::\n https://doc-metric-example3.streamlitapp.com/\n height: 320px\n\n \"\"\"\n metric_proto = MetricProto()\n metric_proto.body = self.parse_value(value)\n metric_proto.label = self.parse_label(label)\n metric_proto.delta = self.parse_delta(delta)\n if help is not None:\n metric_proto.help = dedent(help)\n\n color_and_direction = self.determine_delta_color_and_direction(\n cast(DeltaColor, clean_text(delta_color)), delta\n )\n metric_proto.color = color_and_direction.color\n metric_proto.direction = color_and_direction.direction\n\n return self.dg._enqueue(\"metric\", metric_proto)\n\n @staticmethod\n def parse_label(label: str) -> str:\n if not isinstance(label, str):\n raise TypeError(\n f\"'{str(label)}' is of type {str(type(label))}, which is not an accepted type.\"\n \" label only accepts: str. Please convert the label to an accepted type.\"\n )\n return label\n\n @staticmethod\n def parse_value(value: Value) -> str:\n if value is None:\n return \"\u2014\"\n if isinstance(value, int) or isinstance(value, float) or isinstance(value, str):\n return str(value)\n elif hasattr(value, \"item\"):\n # Add support for numpy values (e.g. int16, float64, etc.)\n try:\n # Item could also be just a variable, so we use try, except\n if isinstance(value.item(), float) or isinstance(value.item(), int):\n return str(value.item())\n except Exception:\n # If the numpy item is not a valid value, the TypeError below will be raised.\n pass\n\n raise TypeError(\n f\"'{str(value)}' is of type {str(type(value))}, which is not an accepted type.\"\n \" value only accepts: int, float, str, or None.\"\n \" Please convert the value to an accepted type.\"\n )\n\n @staticmethod\n def parse_delta(delta: Delta) -> str:\n if delta is None or delta == \"\":\n return \"\"\n if isinstance(delta, str):\n return dedent(delta)\n elif isinstance(delta, int) or isinstance(delta, float):\n return str(delta)\n else:\n raise TypeError(\n f\"'{str(delta)}' is of type {str(type(delta))}, which is not an accepted type.\"\n \" delta only accepts: int, float, str, or None.\"\n \" Please convert the value to an accepted type.\"\n )\n\n def determine_delta_color_and_direction(\n self,\n delta_color: DeltaColor,\n delta: Delta,\n ) -> MetricColorAndDirection:\n if delta_color not in {\"normal\", \"inverse\", \"off\"}:\n raise StreamlitAPIException(\n f\"'{str(delta_color)}' is not an accepted value. 
delta_color only accepts: \"\n \"'normal', 'inverse', or 'off'\"\n )\n\n if delta is None or delta == \"\":\n return MetricColorAndDirection(\n color=MetricProto.MetricColor.GRAY,\n direction=MetricProto.MetricDirection.NONE,\n )\n\n if self.is_negative(delta):\n if delta_color == \"normal\":\n cd_color = MetricProto.MetricColor.RED\n elif delta_color == \"inverse\":\n cd_color = MetricProto.MetricColor.GREEN\n else:\n cd_color = MetricProto.MetricColor.GRAY\n cd_direction = MetricProto.MetricDirection.DOWN\n else:\n if delta_color == \"normal\":\n cd_color = MetricProto.MetricColor.GREEN\n elif delta_color == \"inverse\":\n cd_color = MetricProto.MetricColor.RED\n else:\n cd_color = MetricProto.MetricColor.GRAY\n cd_direction = MetricProto.MetricDirection.UP\n\n return MetricColorAndDirection(\n color=cd_color,\n direction=cd_direction,\n )\n\n @staticmethod\n def is_negative(delta: Delta) -> bool:\n return dedent(str(delta)).startswith(\"-\")\n\n @property\n def dg(self) -> \"DeltaGenerator\":\n return cast(\"DeltaGenerator\", self)\n", "path": "lib/streamlit/elements/metric.py"}, {"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\ncol1, col2, col3 = st.columns(3)\n\nwith col1:\n st.metric(\"User growth\", 123, 123, \"normal\")\nwith col2:\n st.metric(\"S&P 500\", -4.56, -50)\nwith col3:\n st.metric(\"Apples I've eaten\", \"23k\", \" -20\", \"off\")\n\n\" \"\n\ncol1, col2, col3 = st.columns(3)\n\nwith col1:\n st.selectbox(\"Pick one\", [])\nwith col2:\n st.metric(\"Test 2\", -4.56, 1.23, \"inverse\")\nwith col3:\n st.slider(\"Pick another\")\n", "path": "e2e/scripts/st_metric.py"}]}
| 3,537 | 671 |
gh_patches_debug_25797
|
rasdani/github-patches
|
git_diff
|
nipy__nipype-2757
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'MeshFixInputSpec' object has no attribute 'save_as_vrml'
### Summary
Hi, I'm a newbie and am currently trying to revive a [4-year-old repository](https://github.com/CyclotronResearchCentre/forward). This code makes use of the MeshFix interface. When running an example, I get an `AttributeError`.
### Actual behavior
The code where the issue occurs is the following:
```
input_to_fsmesh = pe.Node(interface=mf.MeshFix(), name='input_to_fsmesh')
input_to_fsmesh.inputs.dont_clean = True
input_to_fsmesh.inputs.save_as_freesurfer_mesh = True
```
It stops running at the last line with the error:
```AttributeError: 'MeshFixInputSpec' object has no attribute 'save_as_vrml'```
The complete message is the following:
```
Exception occurred in traits notification handler for object:
args = <undefined>
cut_inner = <undefined>
cut_outer = <undefined>
decouple_inin = <undefined>
decouple_outin = <undefined>
decouple_outout = <undefined>
dilation = <undefined>
dont_clean = True
environ = {}
epsilon_angle = <undefined>
finetuning_distance = <undefined>
finetuning_inwards = <undefined>
finetuning_outwards = <undefined>
finetuning_substeps = <undefined>
in_file1 = <undefined>
in_file2 = <undefined>
join_closest_components = <undefined>
join_overlapping_largest_components = <undefined>
laplacian_smoothing_steps = <undefined>
number_of_biggest_shells = <undefined>
out_filename = <undefined>
output_type = off
quiet_mode = <undefined>
remove_handles = <undefined>
save_as_freesurfer_mesh = True
save_as_stl = <undefined>
save_as_vmrl = <undefined>
set_intersections_to_one = <undefined>
uniform_remeshing_steps = <undefined>
uniform_remeshing_vertices = <undefined>
x_shift = <undefined>
, trait: save_as_freesurfer_mesh, old value: <undefined>, new value: True
Traceback (most recent call last):
File "/home/martin/local/miniconda3/lib/python3.5/site-packages/traits/trait_notifiers.py", line 519,
in _dispatch_change_event
self.dispatch( handler, *args )
File "/home/martin/local/miniconda3/lib/python3.5/site-packages/traits/trait_notifiers.py", line 482,
in dispatch
handler( *args )
File "/home/martin/local/miniconda3/lib/python3.5/site-packages/nipype/interfaces/base/specs.py", line 112,
in _xor_warn
if isdefined(getattr(self, trait_name)):
AttributeError: 'MeshFixInputSpec' object has no attribute 'save_as_vrml'
Exception occurred in traits notification handler.
```
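The traceback points at the `xor` handling: the spec (see the `meshfix.py` listing below) declares the trait as `save_as_vmrl`, while `save_as_freesurfer_mesh` lists `save_as_vrml` in its `xor` metadata, so the handler's `getattr` on the misspelled name fails. A minimal sketch of the mismatch (illustrative, not the nipype internals):
```python
# Trait names actually declared on the input spec ...
declared_traits = {"save_as_stl", "save_as_vmrl", "save_as_freesurfer_mesh"}
# ... versus the names referenced by save_as_freesurfer_mesh's xor list:
xor_names = ["save_as_vrml", "save_as_stl"]  # note 'vrml' vs the declared 'vmrl'

missing = [name for name in xor_names if name not in declared_traits]
print(missing)  # ['save_as_vrml'] -> getattr(spec, 'save_as_vrml') raises AttributeError
```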
### Platform details:
```
{'commit_hash': '%h',
'commit_source': 'archive substitution',
'networkx_version': '2.1',
'nibabel_version': '2.3.1',
'nipype_version': '1.1.3',
'numpy_version': '1.15.2',
'pkg_path': '/home/martin/local/miniconda3/lib/python3.5/site-packages/nipype',
'scipy_version': '1.1.0',
'sys_executable': '/home/martin/local/miniconda3/bin/python',
'sys_platform': 'linux',
'sys_version': '3.5.6 |Anaconda, Inc.| (default, Aug 26 2018, 21:41:56) \n'
'[GCC 7.3.0]',
'traits_version': '4.6.0'}
```
To tell the exact truth, my platform is WSL (Ubuntu 18.04), as I have a Windows 10 laptop.
### Execution environment
My Python environment, outside a container.
</issue>
<code>
[start of nipype/interfaces/meshfix.py]
1 # -*- coding: utf-8 -*-
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4 """ Fixes meshes:
5 """
6 from __future__ import (print_function, division, unicode_literals,
7 absolute_import)
8
9 import os.path as op
10 from ..utils.filemanip import split_filename
11 from .base import (CommandLine, CommandLineInputSpec, traits, TraitedSpec,
12 isdefined, File)
13
14
15 class MeshFixInputSpec(CommandLineInputSpec):
16 number_of_biggest_shells = traits.Int(
17 argstr='--shells %d', desc="Only the N biggest shells are kept")
18
19 epsilon_angle = traits.Range(
20 argstr='-a %f',
21 low=0.0,
22 high=2.0,
23 desc="Epsilon angle in degrees (must be between 0 and 2)")
24
25 join_overlapping_largest_components = traits.Bool(
26 argstr='-j',
27 xor=['join_closest_components'],
28 desc='Join 2 biggest components if they overlap, remove the rest.')
29
30 join_closest_components = traits.Bool(
31 argstr='-jc',
32 xor=['join_closest_components'],
33 desc='Join the closest pair of components.')
34
35 quiet_mode = traits.Bool(
36 argstr='-q', desc="Quiet mode, don't write much to stdout.")
37
38 dont_clean = traits.Bool(argstr='--no-clean', desc="Don't Clean")
39
40 save_as_stl = traits.Bool(
41 xor=['save_as_vmrl', 'save_as_freesurfer_mesh'],
42 argstr='--stl',
43 desc="Result is saved in stereolithographic format (.stl)")
44 save_as_vmrl = traits.Bool(
45 argstr='--wrl',
46 xor=['save_as_stl', 'save_as_freesurfer_mesh'],
47 desc="Result is saved in VRML1.0 format (.wrl)")
48 save_as_freesurfer_mesh = traits.Bool(
49 argstr='--fsmesh',
50 xor=['save_as_vrml', 'save_as_stl'],
51 desc="Result is saved in freesurfer mesh format")
52
53 remove_handles = traits.Bool(
54 argstr='--remove-handles', desc="Remove handles")
55
56 uniform_remeshing_steps = traits.Int(
57 argstr='-u %d',
58 requires=['uniform_remeshing_vertices'],
59 desc="Number of steps for uniform remeshing of the whole mesh")
60
61 uniform_remeshing_vertices = traits.Int(
62 argstr='--vertices %d',
63 requires=['uniform_remeshing_steps'],
64 desc="Constrains the number of vertices."
65 "Must be used with uniform_remeshing_steps")
66
67 laplacian_smoothing_steps = traits.Int(
68 argstr='--smooth %d',
69 desc="The number of laplacian smoothing steps to apply")
70
71 x_shift = traits.Int(
72 argstr='--smooth %d',
73 desc=
74 "Shifts the coordinates of the vertices when saving. Output must be in FreeSurfer format"
75 )
76
77 # Cutting, decoupling, dilation
78 cut_outer = traits.Int(
79 argstr='--cut-outer %d',
80 desc="Remove triangles of 1st that are outside of the 2nd shell.")
81 cut_inner = traits.Int(
82 argstr='--cut-inner %d',
83 desc=
84 "Remove triangles of 1st that are inside of the 2nd shell. Dilate 2nd by N; Fill holes and keep only 1st afterwards."
85 )
86 decouple_inin = traits.Int(
87 argstr='--decouple-inin %d',
88 desc="Treat 1st file as inner, 2nd file as outer component."
89 "Resolve overlaps by moving inners triangles inwards. Constrain the min distance between the components > d."
90 )
91 decouple_outin = traits.Int(
92 argstr='--decouple-outin %d',
93 desc="Treat 1st file as outer, 2nd file as inner component."
94 "Resolve overlaps by moving outers triangles inwards. Constrain the min distance between the components > d."
95 )
96 decouple_outout = traits.Int(
97 argstr='--decouple-outout %d',
98 desc="Treat 1st file as outer, 2nd file as inner component."
99 "Resolve overlaps by moving outers triangles outwards. Constrain the min distance between the components > d."
100 )
101
102 finetuning_inwards = traits.Bool(
103 argstr='--fineTuneIn ',
104 requires=['finetuning_distance', 'finetuning_substeps'])
105 finetuning_outwards = traits.Bool(
106 argstr='--fineTuneIn ',
107 requires=['finetuning_distance', 'finetuning_substeps'],
108 xor=['finetuning_inwards'],
109 desc=
110 'Similar to finetuning_inwards, but ensures minimal distance in the other direction'
111 )
112 finetuning_distance = traits.Float(
113 argstr='%f',
114 requires=['finetuning_substeps'],
115 desc="Used to fine-tune the minimal distance between surfaces."
116 "A minimal distance d is ensured, and reached in n substeps. When using the surfaces for subsequent volume meshing by gmsh, this step prevent too flat tetrahedra2)"
117 )
118 finetuning_substeps = traits.Int(
119 argstr='%d',
120 requires=['finetuning_distance'],
121 desc="Used to fine-tune the minimal distance between surfaces."
122 "A minimal distance d is ensured, and reached in n substeps. When using the surfaces for subsequent volume meshing by gmsh, this step prevent too flat tetrahedra2)"
123 )
124
125 dilation = traits.Int(
126 argstr='--dilate %d',
127 desc="Dilate the surface by d. d < 0 means shrinking.")
128 set_intersections_to_one = traits.Bool(
129 argstr='--intersect',
130 desc="If the mesh contains intersections, return value = 1."
131 "If saved in gmsh format, intersections will be highlighted.")
132
133 in_file1 = File(exists=True, argstr="%s", position=1, mandatory=True)
134 in_file2 = File(exists=True, argstr="%s", position=2)
135 output_type = traits.Enum(
136 'off', ['stl', 'msh', 'wrl', 'vrml', 'fs', 'off'],
137 usedefault=True,
138 desc='The output type to save the file as.')
139 out_filename = File(
140 genfile=True,
141 argstr="-o %s",
142 desc='The output filename for the fixed mesh file')
143
144
145 class MeshFixOutputSpec(TraitedSpec):
146 mesh_file = File(exists=True, desc='The output mesh file')
147
148
149 class MeshFix(CommandLine):
150 """
151 MeshFix v1.2-alpha - by Marco Attene, Mirko Windhoff, Axel Thielscher.
152
153 .. seealso::
154
155 http://jmeshlib.sourceforge.net
156 Sourceforge page
157
158 http://simnibs.de/installation/meshfixandgetfem
159 Ubuntu installation instructions
160
161 If MeshFix is used for research purposes, please cite the following paper:
162 M. Attene - A lightweight approach to repairing digitized polygon meshes.
163 The Visual Computer, 2010. (c) Springer.
164
165 Accepted input formats are OFF, PLY and STL.
166 Other formats (like .msh for gmsh) are supported only partially.
167
168 Example
169 -------
170
171 >>> import nipype.interfaces.meshfix as mf
172 >>> fix = mf.MeshFix()
173 >>> fix.inputs.in_file1 = 'lh-pial.stl'
174 >>> fix.inputs.in_file2 = 'rh-pial.stl'
175 >>> fix.run() # doctest: +SKIP
176 >>> fix.cmdline
177 'meshfix lh-pial.stl rh-pial.stl -o lh-pial_fixed.off'
178 """
179 _cmd = 'meshfix'
180 input_spec = MeshFixInputSpec
181 output_spec = MeshFixOutputSpec
182
183 def _list_outputs(self):
184 outputs = self.output_spec().get()
185 if isdefined(self.inputs.out_filename):
186 path, name, ext = split_filename(self.inputs.out_filename)
187 ext = ext.replace('.', '')
188 out_types = ['stl', 'msh', 'wrl', 'vrml', 'fs', 'off']
189 # Make sure that the output filename uses one of the possible file types
190 if any(ext == out_type.lower() for out_type in out_types):
191 outputs['mesh_file'] = op.abspath(self.inputs.out_filename)
192 else:
193 outputs['mesh_file'] = op.abspath(
194 name + '.' + self.inputs.output_type)
195 else:
196 outputs['mesh_file'] = op.abspath(self._gen_outfilename())
197 return outputs
198
199 def _gen_filename(self, name):
200 if name == 'out_filename':
201 return self._gen_outfilename()
202 else:
203 return None
204
205 def _gen_outfilename(self):
206 _, name, _ = split_filename(self.inputs.in_file1)
207 if self.inputs.save_as_freesurfer_mesh or self.inputs.output_type == 'fs':
208 self.inputs.output_type = 'fs'
209 self.inputs.save_as_freesurfer_mesh = True
210 if self.inputs.save_as_stl or self.inputs.output_type == 'stl':
211 self.inputs.output_type = 'stl'
212 self.inputs.save_as_stl = True
213 if self.inputs.save_as_vmrl or self.inputs.output_type == 'vmrl':
214 self.inputs.output_type = 'vmrl'
215 self.inputs.save_as_vmrl = True
216 return name + '_fixed.' + self.inputs.output_type
217
[end of nipype/interfaces/meshfix.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nipype/interfaces/meshfix.py b/nipype/interfaces/meshfix.py
--- a/nipype/interfaces/meshfix.py
+++ b/nipype/interfaces/meshfix.py
@@ -38,10 +38,10 @@
dont_clean = traits.Bool(argstr='--no-clean', desc="Don't Clean")
save_as_stl = traits.Bool(
- xor=['save_as_vmrl', 'save_as_freesurfer_mesh'],
+ xor=['save_as_vrml', 'save_as_freesurfer_mesh'],
argstr='--stl',
desc="Result is saved in stereolithographic format (.stl)")
- save_as_vmrl = traits.Bool(
+ save_as_vrml = traits.Bool(
argstr='--wrl',
xor=['save_as_stl', 'save_as_freesurfer_mesh'],
desc="Result is saved in VRML1.0 format (.wrl)")
@@ -210,7 +210,7 @@
if self.inputs.save_as_stl or self.inputs.output_type == 'stl':
self.inputs.output_type = 'stl'
self.inputs.save_as_stl = True
- if self.inputs.save_as_vmrl or self.inputs.output_type == 'vmrl':
- self.inputs.output_type = 'vmrl'
- self.inputs.save_as_vmrl = True
+ if self.inputs.save_as_vrml or self.inputs.output_type == 'vrml':
+ self.inputs.output_type = 'vrml'
+ self.inputs.save_as_vrml = True
return name + '_fixed.' + self.inputs.output_type
|
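For reference, once a rename like the one above is applied, callers would use the corrected trait name. A hypothetical usage sketch (assuming nipype with this patch applied and the `meshfix` binary available):

```python
import nipype.interfaces.meshfix as mf

fix = mf.MeshFix()
fix.inputs.in_file1 = 'lh-pial.stl'
fix.inputs.save_as_vrml = True   # corrected spelling; 'save_as_vmrl' no longer exists after the patch
# fix.run()  # uncomment to execute; requires the meshfix executable on PATH
```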
{"golden_diff": "diff --git a/nipype/interfaces/meshfix.py b/nipype/interfaces/meshfix.py\n--- a/nipype/interfaces/meshfix.py\n+++ b/nipype/interfaces/meshfix.py\n@@ -38,10 +38,10 @@\n dont_clean = traits.Bool(argstr='--no-clean', desc=\"Don't Clean\")\n \n save_as_stl = traits.Bool(\n- xor=['save_as_vmrl', 'save_as_freesurfer_mesh'],\n+ xor=['save_as_vrml', 'save_as_freesurfer_mesh'],\n argstr='--stl',\n desc=\"Result is saved in stereolithographic format (.stl)\")\n- save_as_vmrl = traits.Bool(\n+ save_as_vrml = traits.Bool(\n argstr='--wrl',\n xor=['save_as_stl', 'save_as_freesurfer_mesh'],\n desc=\"Result is saved in VRML1.0 format (.wrl)\")\n@@ -210,7 +210,7 @@\n if self.inputs.save_as_stl or self.inputs.output_type == 'stl':\n self.inputs.output_type = 'stl'\n self.inputs.save_as_stl = True\n- if self.inputs.save_as_vmrl or self.inputs.output_type == 'vmrl':\n- self.inputs.output_type = 'vmrl'\n- self.inputs.save_as_vmrl = True\n+ if self.inputs.save_as_vrml or self.inputs.output_type == 'vrml':\n+ self.inputs.output_type = 'vrml'\n+ self.inputs.save_as_vrml = True\n return name + '_fixed.' + self.inputs.output_type\n", "issue": "'MeshFixInputSpec' object has no attribute 'save_as_vrml'\n### Summary\r\n\r\nHi, I'm a newbie and am currently trying to revive a [4 years old repository](https://github.com/CyclotronResearchCentre/forward). This code makes use of MeshFix interface. When running an example I get an `AttributeError`.\r\n\r\n### Actual behavior\r\n\r\nThe code where the issue occures is the following:\r\n```\r\ninput_to_fsmesh = pe.Node(interface=mf.MeshFix(), name='input_to_fsmesh')\r\ninput_to_fsmesh.inputs.dont_clean = True\r\ninput_to_fsmesh.inputs.save_as_freesurfer_mesh = True\r\n```\r\nIt stops running at the last line with the error: \r\n```AttributeError: 'MeshFixInputSpec' object has no attribute 'save_as_vrml'```\r\nThe complete message is the following:\r\n```\r\nException occurred in traits notification handler for object: \r\nargs = <undefined>\r\ncut_inner = <undefined>\r\ncut_outer = <undefined>\r\ndecouple_inin = <undefined>\r\ndecouple_outin = <undefined>\r\ndecouple_outout = <undefined>\r\ndilation = <undefined>\r\ndont_clean = True\r\nenviron = {}\r\nepsilon_angle = <undefined>\r\nfinetuning_distance = <undefined>\r\nfinetuning_inwards = <undefined>\r\nfinetuning_outwards = <undefined>\r\nfinetuning_substeps = <undefined>\r\nin_file1 = <undefined>\r\nin_file2 = <undefined>\r\njoin_closest_components = <undefined>\r\njoin_overlapping_largest_components = <undefined>\r\nlaplacian_smoothing_steps = <undefined>\r\nnumber_of_biggest_shells = <undefined>\r\nout_filename = <undefined>\r\noutput_type = off\r\nquiet_mode = <undefined>\r\nremove_handles = <undefined>\r\nsave_as_freesurfer_mesh = True\r\nsave_as_stl = <undefined>\r\nsave_as_vmrl = <undefined>\r\nset_intersections_to_one = <undefined>\r\nuniform_remeshing_steps = <undefined>\r\nuniform_remeshing_vertices = <undefined>\r\nx_shift = <undefined>\r\n, trait: save_as_freesurfer_mesh, old value: <undefined>, new value: True\r\n\r\n\r\nTraceback (most recent call last):\r\nFile \"/home/martin/local/miniconda3/lib/python3.5/site-packages/traits/trait_notifiers.py\", line 519, \r\nin _dispatch_change_event\r\n self.dispatch( handler, *args ) \r\nFile \"/home/martin/local/miniconda3/lib/python3.5/site-packages/traits/trait_notifiers.py\", line 482, \r\nin dispatch\r\n\thandler( *args ) \r\nFile \"/home/martin/local/miniconda3/lib/python3.5/site-packages/nipype/interfaces/base/specs.py\", line 112, \r\nin 
_xor_warn \r\n if isdefined(getattr(self, trait_name)):\r\nAttributeError: 'MeshFixInputSpec' object has no attribute 'save_as_vrml'\r\nException occurred in traits notification handler.\r\n```\r\n\r\n### Platform details:\r\n```\r\n{'commit_hash': '%h',\r\n 'commit_source': 'archive substitution',\r\n 'networkx_version': '2.1',\r\n 'nibabel_version': '2.3.1',\r\n 'nipype_version': '1.1.3',\r\n 'numpy_version': '1.15.2',\r\n 'pkg_path': '/home/martin/local/miniconda3/lib/python3.5/site-packages/nipype',\r\n 'scipy_version': '1.1.0',\r\n 'sys_executable': '/home/martin/local/miniconda3/bin/python',\r\n 'sys_platform': 'linux',\r\n 'sys_version': '3.5.6 |Anaconda, Inc.| (default, Aug 26 2018, 21:41:56) \\n'\r\n '[GCC 7.3.0]',\r\n 'traits_version': '4.6.0'}\r\n```\r\nTo tell the exact truth my platform is WSL Ubuntu 18.04 as I have a Windows 10 laptop.\r\n\r\n### Execution environment\r\n\r\nMy python environment outside container.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\" Fixes meshes:\n\"\"\"\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\n\nimport os.path as op\nfrom ..utils.filemanip import split_filename\nfrom .base import (CommandLine, CommandLineInputSpec, traits, TraitedSpec,\n isdefined, File)\n\n\nclass MeshFixInputSpec(CommandLineInputSpec):\n number_of_biggest_shells = traits.Int(\n argstr='--shells %d', desc=\"Only the N biggest shells are kept\")\n\n epsilon_angle = traits.Range(\n argstr='-a %f',\n low=0.0,\n high=2.0,\n desc=\"Epsilon angle in degrees (must be between 0 and 2)\")\n\n join_overlapping_largest_components = traits.Bool(\n argstr='-j',\n xor=['join_closest_components'],\n desc='Join 2 biggest components if they overlap, remove the rest.')\n\n join_closest_components = traits.Bool(\n argstr='-jc',\n xor=['join_closest_components'],\n desc='Join the closest pair of components.')\n\n quiet_mode = traits.Bool(\n argstr='-q', desc=\"Quiet mode, don't write much to stdout.\")\n\n dont_clean = traits.Bool(argstr='--no-clean', desc=\"Don't Clean\")\n\n save_as_stl = traits.Bool(\n xor=['save_as_vmrl', 'save_as_freesurfer_mesh'],\n argstr='--stl',\n desc=\"Result is saved in stereolithographic format (.stl)\")\n save_as_vmrl = traits.Bool(\n argstr='--wrl',\n xor=['save_as_stl', 'save_as_freesurfer_mesh'],\n desc=\"Result is saved in VRML1.0 format (.wrl)\")\n save_as_freesurfer_mesh = traits.Bool(\n argstr='--fsmesh',\n xor=['save_as_vrml', 'save_as_stl'],\n desc=\"Result is saved in freesurfer mesh format\")\n\n remove_handles = traits.Bool(\n argstr='--remove-handles', desc=\"Remove handles\")\n\n uniform_remeshing_steps = traits.Int(\n argstr='-u %d',\n requires=['uniform_remeshing_vertices'],\n desc=\"Number of steps for uniform remeshing of the whole mesh\")\n\n uniform_remeshing_vertices = traits.Int(\n argstr='--vertices %d',\n requires=['uniform_remeshing_steps'],\n desc=\"Constrains the number of vertices.\"\n \"Must be used with uniform_remeshing_steps\")\n\n laplacian_smoothing_steps = traits.Int(\n argstr='--smooth %d',\n desc=\"The number of laplacian smoothing steps to apply\")\n\n x_shift = traits.Int(\n argstr='--smooth %d',\n desc=\n \"Shifts the coordinates of the vertices when saving. 
Output must be in FreeSurfer format\"\n )\n\n # Cutting, decoupling, dilation\n cut_outer = traits.Int(\n argstr='--cut-outer %d',\n desc=\"Remove triangles of 1st that are outside of the 2nd shell.\")\n cut_inner = traits.Int(\n argstr='--cut-inner %d',\n desc=\n \"Remove triangles of 1st that are inside of the 2nd shell. Dilate 2nd by N; Fill holes and keep only 1st afterwards.\"\n )\n decouple_inin = traits.Int(\n argstr='--decouple-inin %d',\n desc=\"Treat 1st file as inner, 2nd file as outer component.\"\n \"Resolve overlaps by moving inners triangles inwards. Constrain the min distance between the components > d.\"\n )\n decouple_outin = traits.Int(\n argstr='--decouple-outin %d',\n desc=\"Treat 1st file as outer, 2nd file as inner component.\"\n \"Resolve overlaps by moving outers triangles inwards. Constrain the min distance between the components > d.\"\n )\n decouple_outout = traits.Int(\n argstr='--decouple-outout %d',\n desc=\"Treat 1st file as outer, 2nd file as inner component.\"\n \"Resolve overlaps by moving outers triangles outwards. Constrain the min distance between the components > d.\"\n )\n\n finetuning_inwards = traits.Bool(\n argstr='--fineTuneIn ',\n requires=['finetuning_distance', 'finetuning_substeps'])\n finetuning_outwards = traits.Bool(\n argstr='--fineTuneIn ',\n requires=['finetuning_distance', 'finetuning_substeps'],\n xor=['finetuning_inwards'],\n desc=\n 'Similar to finetuning_inwards, but ensures minimal distance in the other direction'\n )\n finetuning_distance = traits.Float(\n argstr='%f',\n requires=['finetuning_substeps'],\n desc=\"Used to fine-tune the minimal distance between surfaces.\"\n \"A minimal distance d is ensured, and reached in n substeps. When using the surfaces for subsequent volume meshing by gmsh, this step prevent too flat tetrahedra2)\"\n )\n finetuning_substeps = traits.Int(\n argstr='%d',\n requires=['finetuning_distance'],\n desc=\"Used to fine-tune the minimal distance between surfaces.\"\n \"A minimal distance d is ensured, and reached in n substeps. When using the surfaces for subsequent volume meshing by gmsh, this step prevent too flat tetrahedra2)\"\n )\n\n dilation = traits.Int(\n argstr='--dilate %d',\n desc=\"Dilate the surface by d. d < 0 means shrinking.\")\n set_intersections_to_one = traits.Bool(\n argstr='--intersect',\n desc=\"If the mesh contains intersections, return value = 1.\"\n \"If saved in gmsh format, intersections will be highlighted.\")\n\n in_file1 = File(exists=True, argstr=\"%s\", position=1, mandatory=True)\n in_file2 = File(exists=True, argstr=\"%s\", position=2)\n output_type = traits.Enum(\n 'off', ['stl', 'msh', 'wrl', 'vrml', 'fs', 'off'],\n usedefault=True,\n desc='The output type to save the file as.')\n out_filename = File(\n genfile=True,\n argstr=\"-o %s\",\n desc='The output filename for the fixed mesh file')\n\n\nclass MeshFixOutputSpec(TraitedSpec):\n mesh_file = File(exists=True, desc='The output mesh file')\n\n\nclass MeshFix(CommandLine):\n \"\"\"\n MeshFix v1.2-alpha - by Marco Attene, Mirko Windhoff, Axel Thielscher.\n\n .. seealso::\n\n http://jmeshlib.sourceforge.net\n Sourceforge page\n\n http://simnibs.de/installation/meshfixandgetfem\n Ubuntu installation instructions\n\n If MeshFix is used for research purposes, please cite the following paper:\n M. Attene - A lightweight approach to repairing digitized polygon meshes.\n The Visual Computer, 2010. 
(c) Springer.\n\n Accepted input formats are OFF, PLY and STL.\n Other formats (like .msh for gmsh) are supported only partially.\n\n Example\n -------\n\n >>> import nipype.interfaces.meshfix as mf\n >>> fix = mf.MeshFix()\n >>> fix.inputs.in_file1 = 'lh-pial.stl'\n >>> fix.inputs.in_file2 = 'rh-pial.stl'\n >>> fix.run() # doctest: +SKIP\n >>> fix.cmdline\n 'meshfix lh-pial.stl rh-pial.stl -o lh-pial_fixed.off'\n \"\"\"\n _cmd = 'meshfix'\n input_spec = MeshFixInputSpec\n output_spec = MeshFixOutputSpec\n\n def _list_outputs(self):\n outputs = self.output_spec().get()\n if isdefined(self.inputs.out_filename):\n path, name, ext = split_filename(self.inputs.out_filename)\n ext = ext.replace('.', '')\n out_types = ['stl', 'msh', 'wrl', 'vrml', 'fs', 'off']\n # Make sure that the output filename uses one of the possible file types\n if any(ext == out_type.lower() for out_type in out_types):\n outputs['mesh_file'] = op.abspath(self.inputs.out_filename)\n else:\n outputs['mesh_file'] = op.abspath(\n name + '.' + self.inputs.output_type)\n else:\n outputs['mesh_file'] = op.abspath(self._gen_outfilename())\n return outputs\n\n def _gen_filename(self, name):\n if name == 'out_filename':\n return self._gen_outfilename()\n else:\n return None\n\n def _gen_outfilename(self):\n _, name, _ = split_filename(self.inputs.in_file1)\n if self.inputs.save_as_freesurfer_mesh or self.inputs.output_type == 'fs':\n self.inputs.output_type = 'fs'\n self.inputs.save_as_freesurfer_mesh = True\n if self.inputs.save_as_stl or self.inputs.output_type == 'stl':\n self.inputs.output_type = 'stl'\n self.inputs.save_as_stl = True\n if self.inputs.save_as_vmrl or self.inputs.output_type == 'vmrl':\n self.inputs.output_type = 'vmrl'\n self.inputs.save_as_vmrl = True\n return name + '_fixed.' + self.inputs.output_type\n", "path": "nipype/interfaces/meshfix.py"}]}