problem_id (string, lengths 18-22) | source (string, 1 value) | task_type (string, 1 value) | in_source_id (string, lengths 13-58) | prompt (string, lengths 1.71k-18.9k) | golden_diff (string, lengths 145-5.13k) | verification_info (string, lengths 465-23.6k) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_40630 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-6617 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docs.opnfv.org is 'Not Found.'
## Details
The entire documentation site for OPNFV is missing. The most recent builds have succeeded, and as far as I know DNS hasn't changed recently.
* Read the Docs project URL: opnfvdocsdemo.readthedocs.io
* Build URL (if applicable): https://readthedocs.org/projects/opnfvdocsdemo/builds/
* Read the Docs username (if applicable):
## Expected Result
Going to https://docs.opnfv.org/ returns the documentation site.
## Actual Result
```
curl -i -L https://opnfvdocsdemo.readthedocs.io/
HTTP/1.1 302 Found
Content-Type: text/html; charset=utf-8
Location: http://docs.opnfv.org/en/stable-hunter/
Server: nginx
X-Frame-Options: DENY
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
X-Served: Django-Proxito
X-Deity: web04
Strict-Transport-Security: max-age=31536000; includeSubDomains
Date: Wed, 29 Jan 2020 23:13:29 GMT
Content-Length: 0
HTTP/1.1 301 Moved Permanently
Server: CloudFront
Date: Wed, 29 Jan 2020 23:13:29 GMT
Content-Type: text/html
Content-Length: 183
Connection: keep-alive
Location: https://docs.opnfv.org/en/stable-hunter/
X-Cache: Redirect from cloudfront
Via: 1.1 5ab5dc09da67e3ea794ec8a82992cc89.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: HIO50-C1
X-Amz-Cf-Id: 0_rJ9aN8nFAFm6M9VPcWPWHa7B8QOaSW1_Y3Llttz31ZTaK03cTaYQ==
HTTP/2 404
content-type: text/html; charset=utf-8
content-length: 10
server: nginx
x-frame-options: DENY
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
x-served: Proxito-404
x-deity: web03
strict-transport-security: max-age=0
date: Wed, 29 Jan 2020 23:13:30 GMT
x-cache: Miss from cloudfront
via: 1.1 1b0911478686968732f973d6e5e31d11.cloudfront.net (CloudFront)
x-amz-cf-pop: HIO50-C1
x-amz-cf-id: sRmKIeU3LyXtKb93316GUwkxqiChktuq227k3nhDcOPqU-78E7JFTA==
Not Found.
```
</issue>
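For context, the host-to-project mapping implicated by the `X-Served: Django-Proxito` / `x-served: Proxito-404` headers lives in the middleware included below, and it can be exercised directly. The sketch that follows is hypothetical and not part of the report: it assumes a configured Django test environment with the Read the Docs settings loaded, and the hostnames are the ones from the trace above.

```python
# Hypothetical reproduction sketch: call map_host_to_project_slug() directly to
# see which branch a host takes (public-domain subdomain, Domain/CNAME lookup,
# or a dns-404 response).
from django.test import RequestFactory, override_settings

from readthedocs.proxito.middleware import map_host_to_project_slug


def check_host(host):
    request = RequestFactory().get('/', HTTP_HOST=host)
    with override_settings(PUBLIC_DOMAIN='readthedocs.io', ALLOWED_HOSTS=['*']):
        result = map_host_to_project_slug(request)
    # A plain string is a resolved project slug; an HttpResponse means the
    # middleware answered with the dns-404 page (status 400 or 404).
    print(host, '->', getattr(result, 'status_code', result))


check_host('opnfvdocsdemo.readthedocs.io')  # expected: the project slug
check_host('docs.opnfv.org')                # a 404 here means no matching Domain row
```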
<code>
[start of readthedocs/proxito/middleware.py]
1 """
2 Middleware for Proxito.
3
4 This is used to take the request and map the host to the proper project slug.
5
6 Additional processing is done to get the project from the URL in the ``views.py`` as well.
7 """
8 import logging
9
10 from django.conf import settings
11 from django.shortcuts import render
12 from django.utils.deprecation import MiddlewareMixin
13
14 from readthedocs.projects.models import Domain
15
16 log = logging.getLogger(__name__) # noqa
17
18
19 def map_host_to_project_slug(request):
20 """
21 Take the request and map the host to the proper project slug.
22
23 We check, in order:
24
25 * The ``HTTP_X_RTD_SLUG`` host header for explicit Project mapping
26 - This sets ``request.rtdheader`` True
27 * The ``PUBLIC_DOMAIN`` where we can use the subdomain as the project name
28 - This sets ``request.subdomain`` True
29 * The hostname without port information, which maps to ``Domain`` objects
30 - This sets ``request.cname`` True
31 """
32
33 host = request.get_host().lower().split(':')[0]
34 public_domain = settings.PUBLIC_DOMAIN.lower().split(':')[0]
35 host_parts = host.split('.')
36 public_domain_parts = public_domain.split('.')
37
38 project_slug = None
39
40 # Explicit Project slug being passed in
41 if 'HTTP_X_RTD_SLUG' in request.META:
42 project_slug = request.META['HTTP_X_RTD_SLUG'].lower()
43 request.rtdheader = True
44 log.info('Setting project based on X_RTD_SLUG header: %s' % project_slug)
45
46 elif public_domain in host or host == 'proxito':
47 # Serve from the PUBLIC_DOMAIN, ensuring it looks like `foo.PUBLIC_DOMAIN`
48 if public_domain_parts == host_parts[1:]:
49 project_slug = host_parts[0]
50 request.subdomain = True
51 log.debug('Proxito Public Domain: host=%s', host)
52 else:
53 # TODO: This can catch some possibly valid domains (docs.readthedocs.io.com) for example
54 # But these feel like they might be phishing, etc. so let's block them for now.
55 log.warning('Weird variation on our hostname: host=%s', host)
56 return render(
57 request, 'core/dns-404.html', context={'host': host}, status=400
58 )
59
60 # Serve CNAMEs
61 else:
62 domain = Domain.objects.filter(domain=host).first()
63 if domain:
64 project_slug = domain.project.slug
65 request.cname = True
66 log.debug('Proxito CNAME: host=%s', host)
67 else:
68 # Some person is CNAMEing to us without configuring a domain - 404.
69 log.debug('CNAME 404: host=%s', host)
70 return render(
71 request, 'core/dns-404.html', context={'host': host}, status=404
72 )
73
74 log.debug('Proxito Project: slug=%s', project_slug)
75 return project_slug
76
77
78 class ProxitoMiddleware(MiddlewareMixin):
79
80 """The actual middleware we'll be using in prod."""
81
82 def process_request(self, request): # noqa
83 if any([not settings.USE_SUBDOMAIN, 'localhost' in request.get_host(),
84 'testserver' in request.get_host()]):
85 log.debug('Not processing Proxito middleware')
86 return None
87
88 ret = map_host_to_project_slug(request)
89
90 # Handle returning a response
91 if hasattr(ret, 'status_code'):
92 return ret
93
94 # Otherwise set the slug on the request
95 request.host_project_slug = request.slug = ret
96
97 return None
98
[end of readthedocs/proxito/middleware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/readthedocs/proxito/middleware.py b/readthedocs/proxito/middleware.py
--- a/readthedocs/proxito/middleware.py
+++ b/readthedocs/proxito/middleware.py
@@ -11,7 +11,7 @@
from django.shortcuts import render
from django.utils.deprecation import MiddlewareMixin
-from readthedocs.projects.models import Domain
+from readthedocs.projects.models import Domain, Project
log = logging.getLogger(__name__) # noqa
@@ -40,39 +40,38 @@
# Explicit Project slug being passed in
if 'HTTP_X_RTD_SLUG' in request.META:
project_slug = request.META['HTTP_X_RTD_SLUG'].lower()
- request.rtdheader = True
- log.info('Setting project based on X_RTD_SLUG header: %s' % project_slug)
+ if Project.objects.filter(slug=project_slug).exists():
+ request.rtdheader = True
+ log.info('Setting project based on X_RTD_SLUG header: %s', project_slug)
+ return project_slug
- elif public_domain in host or host == 'proxito':
+ if public_domain in host or host == 'proxito':
# Serve from the PUBLIC_DOMAIN, ensuring it looks like `foo.PUBLIC_DOMAIN`
if public_domain_parts == host_parts[1:]:
project_slug = host_parts[0]
request.subdomain = True
log.debug('Proxito Public Domain: host=%s', host)
- else:
- # TODO: This can catch some possibly valid domains (docs.readthedocs.io.com) for example
- # But these feel like they might be phishing, etc. so let's block them for now.
- log.warning('Weird variation on our hostname: host=%s', host)
- return render(
- request, 'core/dns-404.html', context={'host': host}, status=400
- )
+ return project_slug
+ # TODO: This can catch some possibly valid domains (docs.readthedocs.io.com) for example
+ # But these feel like they might be phishing, etc. so let's block them for now.
+ log.warning('Weird variation on our hostname: host=%s', host)
+ return render(
+ request, 'core/dns-404.html', context={'host': host}, status=400
+ )
# Serve CNAMEs
- else:
- domain = Domain.objects.filter(domain=host).first()
- if domain:
- project_slug = domain.project.slug
- request.cname = True
- log.debug('Proxito CNAME: host=%s', host)
- else:
- # Some person is CNAMEing to us without configuring a domain - 404.
- log.debug('CNAME 404: host=%s', host)
- return render(
- request, 'core/dns-404.html', context={'host': host}, status=404
- )
-
- log.debug('Proxito Project: slug=%s', project_slug)
- return project_slug
+ domain = Domain.objects.filter(domain=host).first()
+ if domain:
+ project_slug = domain.project.slug
+ request.cname = True
+ log.debug('Proxito CNAME: host=%s', host)
+ return project_slug
+
+ # Some person is CNAMEing to us without configuring a domain - 404.
+ log.debug('CNAME 404: host=%s', host)
+ return render(
+ request, 'core/dns-404.html', context={'host': host}, status=404
+ )
class ProxitoMiddleware(MiddlewareMixin):
@@ -91,6 +90,8 @@
if hasattr(ret, 'status_code'):
return ret
+ log.debug('Proxito Project: slug=%s', ret)
+
# Otherwise set the slug on the request
request.host_project_slug = request.slug = ret
| {"golden_diff": "diff --git a/readthedocs/proxito/middleware.py b/readthedocs/proxito/middleware.py\n--- a/readthedocs/proxito/middleware.py\n+++ b/readthedocs/proxito/middleware.py\n@@ -11,7 +11,7 @@\n from django.shortcuts import render\n from django.utils.deprecation import MiddlewareMixin\n \n-from readthedocs.projects.models import Domain\n+from readthedocs.projects.models import Domain, Project\n \n log = logging.getLogger(__name__) # noqa\n \n@@ -40,39 +40,38 @@\n # Explicit Project slug being passed in\n if 'HTTP_X_RTD_SLUG' in request.META:\n project_slug = request.META['HTTP_X_RTD_SLUG'].lower()\n- request.rtdheader = True\n- log.info('Setting project based on X_RTD_SLUG header: %s' % project_slug)\n+ if Project.objects.filter(slug=project_slug).exists():\n+ request.rtdheader = True\n+ log.info('Setting project based on X_RTD_SLUG header: %s', project_slug)\n+ return project_slug\n \n- elif public_domain in host or host == 'proxito':\n+ if public_domain in host or host == 'proxito':\n # Serve from the PUBLIC_DOMAIN, ensuring it looks like `foo.PUBLIC_DOMAIN`\n if public_domain_parts == host_parts[1:]:\n project_slug = host_parts[0]\n request.subdomain = True\n log.debug('Proxito Public Domain: host=%s', host)\n- else:\n- # TODO: This can catch some possibly valid domains (docs.readthedocs.io.com) for example\n- # But these feel like they might be phishing, etc. so let's block them for now.\n- log.warning('Weird variation on our hostname: host=%s', host)\n- return render(\n- request, 'core/dns-404.html', context={'host': host}, status=400\n- )\n+ return project_slug\n+ # TODO: This can catch some possibly valid domains (docs.readthedocs.io.com) for example\n+ # But these feel like they might be phishing, etc. so let's block them for now.\n+ log.warning('Weird variation on our hostname: host=%s', host)\n+ return render(\n+ request, 'core/dns-404.html', context={'host': host}, status=400\n+ )\n \n # Serve CNAMEs\n- else:\n- domain = Domain.objects.filter(domain=host).first()\n- if domain:\n- project_slug = domain.project.slug\n- request.cname = True\n- log.debug('Proxito CNAME: host=%s', host)\n- else:\n- # Some person is CNAMEing to us without configuring a domain - 404.\n- log.debug('CNAME 404: host=%s', host)\n- return render(\n- request, 'core/dns-404.html', context={'host': host}, status=404\n- )\n-\n- log.debug('Proxito Project: slug=%s', project_slug)\n- return project_slug\n+ domain = Domain.objects.filter(domain=host).first()\n+ if domain:\n+ project_slug = domain.project.slug\n+ request.cname = True\n+ log.debug('Proxito CNAME: host=%s', host)\n+ return project_slug\n+\n+ # Some person is CNAMEing to us without configuring a domain - 404.\n+ log.debug('CNAME 404: host=%s', host)\n+ return render(\n+ request, 'core/dns-404.html', context={'host': host}, status=404\n+ )\n \n \n class ProxitoMiddleware(MiddlewareMixin):\n@@ -91,6 +90,8 @@\n if hasattr(ret, 'status_code'):\n return ret\n \n+ log.debug('Proxito Project: slug=%s', ret)\n+\n # Otherwise set the slug on the request\n request.host_project_slug = request.slug = ret\n", "issue": "docs.opnfv.org is 'Not Found.'\n## Details\r\n\r\nThe entire documentation site for OPNFV is missing. 
The most recent builds have succeeded, and as far as I know DNS hasn't changed recently.\r\n\r\n* Read the Docs project URL: opnfvdocsdemo.readthedocs.io\r\n* Build URL (if applicable): https://readthedocs.org/projects/opnfvdocsdemo/builds/\r\n* Read the Docs username (if applicable):\r\n\r\n## Expected Result\r\n\r\nGoing to https://docs.opnfv.org/ returns the documentation site.\r\n\r\n## Actual Result\r\n```\r\ncurl -i -L https://opnfvdocsdemo.readthedocs.io/\r\nHTTP/1.1 302 Found\r\nContent-Type: text/html; charset=utf-8\r\nLocation: http://docs.opnfv.org/en/stable-hunter/\r\nServer: nginx\r\nX-Frame-Options: DENY\r\nx-content-type-options: nosniff\r\nx-xss-protection: 1; mode=block\r\nX-Served: Django-Proxito\r\nX-Deity: web04\r\nStrict-Transport-Security: max-age=31536000; includeSubDomains\r\nDate: Wed, 29 Jan 2020 23:13:29 GMT\r\nContent-Length: 0\r\n\r\nHTTP/1.1 301 Moved Permanently\r\nServer: CloudFront\r\nDate: Wed, 29 Jan 2020 23:13:29 GMT\r\nContent-Type: text/html\r\nContent-Length: 183\r\nConnection: keep-alive\r\nLocation: https://docs.opnfv.org/en/stable-hunter/\r\nX-Cache: Redirect from cloudfront\r\nVia: 1.1 5ab5dc09da67e3ea794ec8a82992cc89.cloudfront.net (CloudFront)\r\nX-Amz-Cf-Pop: HIO50-C1\r\nX-Amz-Cf-Id: 0_rJ9aN8nFAFm6M9VPcWPWHa7B8QOaSW1_Y3Llttz31ZTaK03cTaYQ==\r\n\r\nHTTP/2 404 \r\ncontent-type: text/html; charset=utf-8\r\ncontent-length: 10\r\nserver: nginx\r\nx-frame-options: DENY\r\nx-content-type-options: nosniff\r\nx-xss-protection: 1; mode=block\r\nx-served: Proxito-404\r\nx-deity: web03\r\nstrict-transport-security: max-age=0\r\ndate: Wed, 29 Jan 2020 23:13:30 GMT\r\nx-cache: Miss from cloudfront\r\nvia: 1.1 1b0911478686968732f973d6e5e31d11.cloudfront.net (CloudFront)\r\nx-amz-cf-pop: HIO50-C1\r\nx-amz-cf-id: sRmKIeU3LyXtKb93316GUwkxqiChktuq227k3nhDcOPqU-78E7JFTA==\r\n\r\nNot Found.\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nMiddleware for Proxito.\n\nThis is used to take the request and map the host to the proper project slug.\n\nAdditional processing is done to get the project from the URL in the ``views.py`` as well.\n\"\"\"\nimport logging\n\nfrom django.conf import settings\nfrom django.shortcuts import render\nfrom django.utils.deprecation import MiddlewareMixin\n\nfrom readthedocs.projects.models import Domain\n\nlog = logging.getLogger(__name__) # noqa\n\n\ndef map_host_to_project_slug(request):\n \"\"\"\n Take the request and map the host to the proper project slug.\n\n We check, in order:\n\n * The ``HTTP_X_RTD_SLUG`` host header for explicit Project mapping\n - This sets ``request.rtdheader`` True\n * The ``PUBLIC_DOMAIN`` where we can use the subdomain as the project name\n - This sets ``request.subdomain`` True\n * The hostname without port information, which maps to ``Domain`` objects\n - This sets ``request.cname`` True\n \"\"\"\n\n host = request.get_host().lower().split(':')[0]\n public_domain = settings.PUBLIC_DOMAIN.lower().split(':')[0]\n host_parts = host.split('.')\n public_domain_parts = public_domain.split('.')\n\n project_slug = None\n\n # Explicit Project slug being passed in\n if 'HTTP_X_RTD_SLUG' in request.META:\n project_slug = request.META['HTTP_X_RTD_SLUG'].lower()\n request.rtdheader = True\n log.info('Setting project based on X_RTD_SLUG header: %s' % project_slug)\n\n elif public_domain in host or host == 'proxito':\n # Serve from the PUBLIC_DOMAIN, ensuring it looks like `foo.PUBLIC_DOMAIN`\n if public_domain_parts == host_parts[1:]:\n project_slug = host_parts[0]\n request.subdomain = True\n log.debug('Proxito 
Public Domain: host=%s', host)\n else:\n # TODO: This can catch some possibly valid domains (docs.readthedocs.io.com) for example\n # But these feel like they might be phishing, etc. so let's block them for now.\n log.warning('Weird variation on our hostname: host=%s', host)\n return render(\n request, 'core/dns-404.html', context={'host': host}, status=400\n )\n\n # Serve CNAMEs\n else:\n domain = Domain.objects.filter(domain=host).first()\n if domain:\n project_slug = domain.project.slug\n request.cname = True\n log.debug('Proxito CNAME: host=%s', host)\n else:\n # Some person is CNAMEing to us without configuring a domain - 404.\n log.debug('CNAME 404: host=%s', host)\n return render(\n request, 'core/dns-404.html', context={'host': host}, status=404\n )\n\n log.debug('Proxito Project: slug=%s', project_slug)\n return project_slug\n\n\nclass ProxitoMiddleware(MiddlewareMixin):\n\n \"\"\"The actual middleware we'll be using in prod.\"\"\"\n\n def process_request(self, request): # noqa\n if any([not settings.USE_SUBDOMAIN, 'localhost' in request.get_host(),\n 'testserver' in request.get_host()]):\n log.debug('Not processing Proxito middleware')\n return None\n\n ret = map_host_to_project_slug(request)\n\n # Handle returning a response\n if hasattr(ret, 'status_code'):\n return ret\n\n # Otherwise set the slug on the request\n request.host_project_slug = request.slug = ret\n\n return None\n", "path": "readthedocs/proxito/middleware.py"}]} | 2,234 | 897 |
gh_patches_debug_42841 | rasdani/github-patches | git_diff | learningequality__kolibri-6868 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
names of CSV files should be internationalized
### Observed behavior
The names of files when exporting and downloading from the facility data page are all in English:
> [screenshots omitted: the export menu and the downloaded CSV file names are shown in English]
ref: https://github.com/learningequality/kolibri/pull/6835
### Expected behavior
Names of files should be in the user's currently selected language. This is the same behavior as currently exists in Coach CSV export.
### User-facing consequences
file name meaning is not understandable in the user's preferred language
### Context
0.13.2
</issue>
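For background on what a fix involves, Django already ships the needed i18n pieces. The sketch below is illustrative only (the helper name and translator-context string are invented; it is not Kolibri's actual code): activate the language of the request, translate the base name with `pgettext`, then make it filesystem-safe with `slugify`.

```python
# Illustrative sketch, not Kolibri code: build a translated, filesystem-safe
# CSV file name for the current request's language.
from django.template.defaultfilters import slugify
from django.utils import translation
from django.utils.translation import get_language_from_request, pgettext


def localized_csv_filename(request, base_name):
    # Serve the download under the language the request asked for.
    translation.activate(get_language_from_request(request))
    try:
        # pgettext() gives translators context for an otherwise bare string.
        translated = pgettext(
            "Default name for an exported CSV file; keep underscores between words.",
            base_name,
        )
        # slugify() strips unsafe characters; hyphens are swapped back to underscores.
        return slugify(translated).replace("-", "_") + ".csv"
    finally:
        translation.deactivate()
```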
<code>
[start of kolibri/plugins/facility/views.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 import io
6 import os
7 from datetime import datetime
8
9 from django.http import Http404
10 from django.http.response import FileResponse
11 from django.utils.decorators import method_decorator
12 from django.views.generic.base import TemplateView
13
14 from kolibri.core.decorators import cache_no_user_data
15 from kolibri.utils import conf
16
17
18 @method_decorator(cache_no_user_data, name="dispatch")
19 class FacilityManagementView(TemplateView):
20 template_name = "facility_management.html"
21
22
23 def download_csv_file(request, filename):
24 filepath = os.path.join(conf.KOLIBRI_HOME, "temp", filename)
25
26 # if the file does not exist on disk, return a 404
27 if filepath is None or not os.path.exists(filepath):
28 raise Http404("Creation of users export file has failed")
29
30 # generate a file response
31 response = FileResponse(io.open(filepath, "rb"))
32 # set the content-type by guessing from the filename
33 response["Content-Type"] = "text/csv"
34
35 # set the content-disposition as attachment to force download
36 response["Content-Disposition"] = "attachment; filename=users_{}.csv".format(
37 datetime.now().strftime("%Y%m%d_%H%M%S")
38 )
39
40 # set the content-length to the file size
41 response["Content-Length"] = os.path.getsize(filepath)
42
43 return response
44
[end of kolibri/plugins/facility/views.py]
[start of kolibri/core/logger/csv_export.py]
1 from __future__ import unicode_literals
2
3 import csv
4 import io
5 import json
6 import logging
7 import math
8 import os
9 import sys
10 from collections import OrderedDict
11
12 from django.core.cache import cache
13 from django.http import Http404
14 from django.http import HttpResponse
15 from django.http.response import FileResponse
16
17 from .models import ContentSessionLog
18 from .models import ContentSummaryLog
19 from kolibri.core.content.models import ChannelMetadata
20 from kolibri.core.content.models import ContentNode
21 from kolibri.utils import conf
22
23
24 logger = logging.getLogger(__name__)
25
26
27 def cache_channel_name(obj):
28 channel_id = obj["channel_id"]
29 key = "{id}_ChannelMetadata_name".format(id=channel_id)
30 channel_name = cache.get(key)
31 if channel_name is None:
32 try:
33 channel_name = ChannelMetadata.objects.get(id=channel_id)
34 except ChannelMetadata.DoesNotExist:
35 channel_name = ""
36 cache.set(key, channel_name, 60 * 10)
37 return channel_name
38
39
40 def cache_content_title(obj):
41 content_id = obj["content_id"]
42 key = "{id}_ContentNode_title".format(id=content_id)
43 title = cache.get(key)
44 if title is None:
45 node = ContentNode.objects.filter(content_id=content_id).first()
46 if node:
47 title = node.title
48 else:
49 title = ""
50 cache.set(key, title, 60 * 10)
51 return title
52
53
54 mappings = {
55 "channel_name": cache_channel_name,
56 "content_title": cache_content_title,
57 "time_spent": lambda x: "{:.1f}".format(round(x["time_spent"], 1)),
58 "progress": lambda x: "{:.4f}".format(math.floor(x["progress"] * 10000.0) / 10000),
59 }
60
61 labels = OrderedDict(
62 (
63 ("user__facility__name", "Facility name"),
64 ("user__username", "Username"),
65 ("channel_id", "Channel id"),
66 ("channel_name", "Channel name"),
67 ("content_id", "Content id"),
68 ("content_title", "Content title"),
69 ("start_timestamp", "Time of first interaction"),
70 ("end_timestamp", "Time of last interaction"),
71 ("completion_timestamp", "Time of completion"),
72 ("time_spent", "Time Spent (sec)"),
73 ("progress", "Progress (0-1)"),
74 ("kind", "Content kind"),
75 )
76 )
77
78
79 def map_object(obj):
80 mapped_obj = {}
81 for header, label in labels.items():
82 if header in mappings:
83 mapped_obj[label] = mappings[header](obj)
84 elif header in obj:
85 mapped_obj[label] = obj[header]
86 return mapped_obj
87
88
89 classes_info = {
90 "session": {
91 "queryset": ContentSessionLog.objects.all(),
92 "filename": "content_session_logs.csv",
93 "db_columns": (
94 "user__username",
95 "user__facility__name",
96 "channel_id",
97 "content_id",
98 "start_timestamp",
99 "end_timestamp",
100 "time_spent",
101 "progress",
102 "kind",
103 ),
104 },
105 "summary": {
106 "queryset": ContentSummaryLog.objects.all(),
107 "filename": "content_summary_logs.csv",
108 "db_columns": (
109 "user__username",
110 "user__facility__name",
111 "content_id",
112 "channel_id",
113 "start_timestamp",
114 "end_timestamp",
115 "completion_timestamp",
116 "time_spent",
117 "progress",
118 "kind",
119 ),
120 },
121 }
122
123
124 def csv_file_generator(log_type, filepath, overwrite=False):
125 if log_type not in ("summary", "session"):
126 raise ValueError(
127 "Impossible to create a csv export file for {}".format(log_type)
128 )
129
130 log_info = classes_info[log_type]
131
132 if not overwrite and os.path.exists(filepath):
133 raise ValueError("{} already exists".format(filepath))
134 queryset = log_info["queryset"]
135
136 # Exclude completion timestamp for the sessionlog CSV
137 header_labels = tuple(
138 label
139 for label in labels.values()
140 if log_type == "summary" or label != "completion_timestamp"
141 )
142
143 if sys.version_info[0] < 3:
144 csv_file = io.open(filepath, "wb")
145 else:
146 csv_file = io.open(filepath, "w", newline="")
147
148 with csv_file as f:
149 writer = csv.DictWriter(f, header_labels)
150 logger.info("Creating csv file {filename}".format(filename=filepath))
151 writer.writeheader()
152 for item in queryset.select_related("user", "user__facility").values(
153 *log_info["db_columns"]
154 ):
155 writer.writerow(map_object(item))
156 yield
157
158
159 def exported_logs_info(request):
160 """
161 Get the last modification timestamp of the summary logs exported
162
163 :returns: An object with the files informatin
164 """
165
166 logs_dir = os.path.join(conf.KOLIBRI_HOME, "log_export")
167 csv_statuses = {}
168 csv_export_filenames = {
169 "session": "content_session_logs.csv",
170 "summary": "content_summary_logs.csv",
171 }
172 for log_type in csv_export_filenames.keys():
173 log_path = os.path.join(logs_dir, csv_export_filenames[log_type])
174 if os.path.exists(log_path):
175 csv_statuses[log_type] = os.path.getmtime(log_path)
176 else:
177 csv_statuses[log_type] = None
178
179 return HttpResponse(json.dumps(csv_statuses), content_type="application/json")
180
181
182 def download_csv_file(request, log_type):
183 csv_export_filenames = {
184 "session": "content_session_logs.csv",
185 "summary": "content_summary_logs.csv",
186 }
187 if log_type in csv_export_filenames.keys():
188 filepath = os.path.join(
189 conf.KOLIBRI_HOME, "log_export", csv_export_filenames[log_type]
190 )
191 else:
192 filepath = None
193
194 # if the file does not exist on disk, return a 404
195 if filepath is None or not os.path.exists(filepath):
196 raise Http404("There is no csv export file for {} available".format(log_type))
197
198 # generate a file response
199 response = FileResponse(io.open(filepath, "rb"))
200 # set the content-type by guessing from the filename
201 response["Content-Type"] = "text/csv"
202
203 # set the content-disposition as attachment to force download
204 response["Content-Disposition"] = "attachment; filename={}".format(
205 csv_export_filenames[log_type]
206 )
207
208 # set the content-length to the file size
209 response["Content-Length"] = os.path.getsize(filepath)
210
211 return response
212
[end of kolibri/core/logger/csv_export.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/core/logger/csv_export.py b/kolibri/core/logger/csv_export.py
--- a/kolibri/core/logger/csv_export.py
+++ b/kolibri/core/logger/csv_export.py
@@ -13,6 +13,10 @@
from django.http import Http404
from django.http import HttpResponse
from django.http.response import FileResponse
+from django.template.defaultfilters import slugify
+from django.utils import translation
+from django.utils.translation import get_language_from_request
+from django.utils.translation import pgettext
from .models import ContentSessionLog
from .models import ContentSummaryLog
@@ -180,6 +184,29 @@
def download_csv_file(request, log_type):
+ locale = get_language_from_request(request)
+ translation.activate(locale)
+
+ csv_translated_filenames = {
+ "session": (
+ slugify(
+ pgettext(
+ "Default name for the exported CSV file with content session logs. Please keep the underscores between words in the translation",
+ "content_session_logs",
+ )
+ )
+ + ".csv"
+ ).replace("-", "_"),
+ "summary": (
+ slugify(
+ pgettext(
+ "Default name for the exported CSV file with content summary logs. Please keep the underscores between words in the translation",
+ "content_summary_logs",
+ )
+ )
+ + ".csv"
+ ).replace("-", "_"),
+ }
csv_export_filenames = {
"session": "content_session_logs.csv",
"summary": "content_summary_logs.csv",
@@ -202,8 +229,9 @@
# set the content-disposition as attachment to force download
response["Content-Disposition"] = "attachment; filename={}".format(
- csv_export_filenames[log_type]
+ str(csv_translated_filenames[log_type])
)
+ translation.deactivate()
# set the content-length to the file size
response["Content-Length"] = os.path.getsize(filepath)
diff --git a/kolibri/plugins/facility/views.py b/kolibri/plugins/facility/views.py
--- a/kolibri/plugins/facility/views.py
+++ b/kolibri/plugins/facility/views.py
@@ -8,7 +8,11 @@
from django.http import Http404
from django.http.response import FileResponse
+from django.template.defaultfilters import slugify
+from django.utils import translation
from django.utils.decorators import method_decorator
+from django.utils.translation import get_language_from_request
+from django.utils.translation import pgettext
from django.views.generic.base import TemplateView
from kolibri.core.decorators import cache_no_user_data
@@ -21,6 +25,8 @@
def download_csv_file(request, filename):
+ locale = get_language_from_request(request)
+ translation.activate(locale)
filepath = os.path.join(conf.KOLIBRI_HOME, "temp", filename)
# if the file does not exist on disk, return a 404
@@ -33,11 +39,21 @@
response["Content-Type"] = "text/csv"
# set the content-disposition as attachment to force download
- response["Content-Disposition"] = "attachment; filename=users_{}.csv".format(
- datetime.now().strftime("%Y%m%d_%H%M%S")
+ exported_filename = (
+ slugify(
+ pgettext(
+ "Default name for the exported CSV file of facility user data. Please keep the underscore between words in the translation",
+ "users_{}",
+ ).format(datetime.now().strftime("%Y%m%d_%H%M%S"))
+ ).replace("-", "_")
+ + ".csv"
+ )
+ response["Content-Disposition"] = "attachment; filename={}".format(
+ str(exported_filename)
)
# set the content-length to the file size
response["Content-Length"] = os.path.getsize(filepath)
+ translation.deactivate()
return response
| {"golden_diff": "diff --git a/kolibri/core/logger/csv_export.py b/kolibri/core/logger/csv_export.py\n--- a/kolibri/core/logger/csv_export.py\n+++ b/kolibri/core/logger/csv_export.py\n@@ -13,6 +13,10 @@\n from django.http import Http404\n from django.http import HttpResponse\n from django.http.response import FileResponse\n+from django.template.defaultfilters import slugify\n+from django.utils import translation\n+from django.utils.translation import get_language_from_request\n+from django.utils.translation import pgettext\n \n from .models import ContentSessionLog\n from .models import ContentSummaryLog\n@@ -180,6 +184,29 @@\n \n \n def download_csv_file(request, log_type):\n+ locale = get_language_from_request(request)\n+ translation.activate(locale)\n+\n+ csv_translated_filenames = {\n+ \"session\": (\n+ slugify(\n+ pgettext(\n+ \"Default name for the exported CSV file with content session logs. Please keep the underscores between words in the translation\",\n+ \"content_session_logs\",\n+ )\n+ )\n+ + \".csv\"\n+ ).replace(\"-\", \"_\"),\n+ \"summary\": (\n+ slugify(\n+ pgettext(\n+ \"Default name for the exported CSV file with content summary logs. Please keep the underscores between words in the translation\",\n+ \"content_summary_logs\",\n+ )\n+ )\n+ + \".csv\"\n+ ).replace(\"-\", \"_\"),\n+ }\n csv_export_filenames = {\n \"session\": \"content_session_logs.csv\",\n \"summary\": \"content_summary_logs.csv\",\n@@ -202,8 +229,9 @@\n \n # set the content-disposition as attachment to force download\n response[\"Content-Disposition\"] = \"attachment; filename={}\".format(\n- csv_export_filenames[log_type]\n+ str(csv_translated_filenames[log_type])\n )\n+ translation.deactivate()\n \n # set the content-length to the file size\n response[\"Content-Length\"] = os.path.getsize(filepath)\ndiff --git a/kolibri/plugins/facility/views.py b/kolibri/plugins/facility/views.py\n--- a/kolibri/plugins/facility/views.py\n+++ b/kolibri/plugins/facility/views.py\n@@ -8,7 +8,11 @@\n \n from django.http import Http404\n from django.http.response import FileResponse\n+from django.template.defaultfilters import slugify\n+from django.utils import translation\n from django.utils.decorators import method_decorator\n+from django.utils.translation import get_language_from_request\n+from django.utils.translation import pgettext\n from django.views.generic.base import TemplateView\n \n from kolibri.core.decorators import cache_no_user_data\n@@ -21,6 +25,8 @@\n \n \n def download_csv_file(request, filename):\n+ locale = get_language_from_request(request)\n+ translation.activate(locale)\n filepath = os.path.join(conf.KOLIBRI_HOME, \"temp\", filename)\n \n # if the file does not exist on disk, return a 404\n@@ -33,11 +39,21 @@\n response[\"Content-Type\"] = \"text/csv\"\n \n # set the content-disposition as attachment to force download\n- response[\"Content-Disposition\"] = \"attachment; filename=users_{}.csv\".format(\n- datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n+ exported_filename = (\n+ slugify(\n+ pgettext(\n+ \"Default name for the exported CSV file of facility user data. 
Please keep the underscore between words in the translation\",\n+ \"users_{}\",\n+ ).format(datetime.now().strftime(\"%Y%m%d_%H%M%S\"))\n+ ).replace(\"-\", \"_\")\n+ + \".csv\"\n+ )\n+ response[\"Content-Disposition\"] = \"attachment; filename={}\".format(\n+ str(exported_filename)\n )\n \n # set the content-length to the file size\n response[\"Content-Length\"] = os.path.getsize(filepath)\n+ translation.deactivate()\n \n return response\n", "issue": "names of CSV files should be internationalized\n\r\n### Observed behavior\r\n\r\nThe names of files when exporting and downloading from the facility data page are all in English:\r\n\r\n> \r\n\r\n\r\n\r\nref: https://github.com/learningequality/kolibri/pull/6835\r\n\r\n### Expected behavior\r\n\r\nNames of files should be in the user's currently selected language. This is the same behavior as currently exists in Coach CSV export.\r\n\r\n### User-facing consequences\r\n\r\nfile name meaning is not understandable in the user's preferred language\r\n\r\n### Context\r\n\r\n0.13.2\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport os\nfrom datetime import datetime\n\nfrom django.http import Http404\nfrom django.http.response import FileResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic.base import TemplateView\n\nfrom kolibri.core.decorators import cache_no_user_data\nfrom kolibri.utils import conf\n\n\n@method_decorator(cache_no_user_data, name=\"dispatch\")\nclass FacilityManagementView(TemplateView):\n template_name = \"facility_management.html\"\n\n\ndef download_csv_file(request, filename):\n filepath = os.path.join(conf.KOLIBRI_HOME, \"temp\", filename)\n\n # if the file does not exist on disk, return a 404\n if filepath is None or not os.path.exists(filepath):\n raise Http404(\"Creation of users export file has failed\")\n\n # generate a file response\n response = FileResponse(io.open(filepath, \"rb\"))\n # set the content-type by guessing from the filename\n response[\"Content-Type\"] = \"text/csv\"\n\n # set the content-disposition as attachment to force download\n response[\"Content-Disposition\"] = \"attachment; filename=users_{}.csv\".format(\n datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n )\n\n # set the content-length to the file size\n response[\"Content-Length\"] = os.path.getsize(filepath)\n\n return response\n", "path": "kolibri/plugins/facility/views.py"}, {"content": "from __future__ import unicode_literals\n\nimport csv\nimport io\nimport json\nimport logging\nimport math\nimport os\nimport sys\nfrom collections import OrderedDict\n\nfrom django.core.cache import cache\nfrom django.http import Http404\nfrom django.http import HttpResponse\nfrom django.http.response import FileResponse\n\nfrom .models import ContentSessionLog\nfrom .models import ContentSummaryLog\nfrom kolibri.core.content.models import ChannelMetadata\nfrom kolibri.core.content.models import ContentNode\nfrom kolibri.utils import conf\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef cache_channel_name(obj):\n channel_id = obj[\"channel_id\"]\n key = \"{id}_ChannelMetadata_name\".format(id=channel_id)\n channel_name = cache.get(key)\n if channel_name is None:\n try:\n channel_name = ChannelMetadata.objects.get(id=channel_id)\n except ChannelMetadata.DoesNotExist:\n channel_name = \"\"\n cache.set(key, channel_name, 60 * 10)\n return channel_name\n\n\ndef cache_content_title(obj):\n content_id = 
obj[\"content_id\"]\n key = \"{id}_ContentNode_title\".format(id=content_id)\n title = cache.get(key)\n if title is None:\n node = ContentNode.objects.filter(content_id=content_id).first()\n if node:\n title = node.title\n else:\n title = \"\"\n cache.set(key, title, 60 * 10)\n return title\n\n\nmappings = {\n \"channel_name\": cache_channel_name,\n \"content_title\": cache_content_title,\n \"time_spent\": lambda x: \"{:.1f}\".format(round(x[\"time_spent\"], 1)),\n \"progress\": lambda x: \"{:.4f}\".format(math.floor(x[\"progress\"] * 10000.0) / 10000),\n}\n\nlabels = OrderedDict(\n (\n (\"user__facility__name\", \"Facility name\"),\n (\"user__username\", \"Username\"),\n (\"channel_id\", \"Channel id\"),\n (\"channel_name\", \"Channel name\"),\n (\"content_id\", \"Content id\"),\n (\"content_title\", \"Content title\"),\n (\"start_timestamp\", \"Time of first interaction\"),\n (\"end_timestamp\", \"Time of last interaction\"),\n (\"completion_timestamp\", \"Time of completion\"),\n (\"time_spent\", \"Time Spent (sec)\"),\n (\"progress\", \"Progress (0-1)\"),\n (\"kind\", \"Content kind\"),\n )\n)\n\n\ndef map_object(obj):\n mapped_obj = {}\n for header, label in labels.items():\n if header in mappings:\n mapped_obj[label] = mappings[header](obj)\n elif header in obj:\n mapped_obj[label] = obj[header]\n return mapped_obj\n\n\nclasses_info = {\n \"session\": {\n \"queryset\": ContentSessionLog.objects.all(),\n \"filename\": \"content_session_logs.csv\",\n \"db_columns\": (\n \"user__username\",\n \"user__facility__name\",\n \"channel_id\",\n \"content_id\",\n \"start_timestamp\",\n \"end_timestamp\",\n \"time_spent\",\n \"progress\",\n \"kind\",\n ),\n },\n \"summary\": {\n \"queryset\": ContentSummaryLog.objects.all(),\n \"filename\": \"content_summary_logs.csv\",\n \"db_columns\": (\n \"user__username\",\n \"user__facility__name\",\n \"content_id\",\n \"channel_id\",\n \"start_timestamp\",\n \"end_timestamp\",\n \"completion_timestamp\",\n \"time_spent\",\n \"progress\",\n \"kind\",\n ),\n },\n}\n\n\ndef csv_file_generator(log_type, filepath, overwrite=False):\n if log_type not in (\"summary\", \"session\"):\n raise ValueError(\n \"Impossible to create a csv export file for {}\".format(log_type)\n )\n\n log_info = classes_info[log_type]\n\n if not overwrite and os.path.exists(filepath):\n raise ValueError(\"{} already exists\".format(filepath))\n queryset = log_info[\"queryset\"]\n\n # Exclude completion timestamp for the sessionlog CSV\n header_labels = tuple(\n label\n for label in labels.values()\n if log_type == \"summary\" or label != \"completion_timestamp\"\n )\n\n if sys.version_info[0] < 3:\n csv_file = io.open(filepath, \"wb\")\n else:\n csv_file = io.open(filepath, \"w\", newline=\"\")\n\n with csv_file as f:\n writer = csv.DictWriter(f, header_labels)\n logger.info(\"Creating csv file {filename}\".format(filename=filepath))\n writer.writeheader()\n for item in queryset.select_related(\"user\", \"user__facility\").values(\n *log_info[\"db_columns\"]\n ):\n writer.writerow(map_object(item))\n yield\n\n\ndef exported_logs_info(request):\n \"\"\"\n Get the last modification timestamp of the summary logs exported\n\n :returns: An object with the files informatin\n \"\"\"\n\n logs_dir = os.path.join(conf.KOLIBRI_HOME, \"log_export\")\n csv_statuses = {}\n csv_export_filenames = {\n \"session\": \"content_session_logs.csv\",\n \"summary\": \"content_summary_logs.csv\",\n }\n for log_type in csv_export_filenames.keys():\n log_path = os.path.join(logs_dir, 
csv_export_filenames[log_type])\n if os.path.exists(log_path):\n csv_statuses[log_type] = os.path.getmtime(log_path)\n else:\n csv_statuses[log_type] = None\n\n return HttpResponse(json.dumps(csv_statuses), content_type=\"application/json\")\n\n\ndef download_csv_file(request, log_type):\n csv_export_filenames = {\n \"session\": \"content_session_logs.csv\",\n \"summary\": \"content_summary_logs.csv\",\n }\n if log_type in csv_export_filenames.keys():\n filepath = os.path.join(\n conf.KOLIBRI_HOME, \"log_export\", csv_export_filenames[log_type]\n )\n else:\n filepath = None\n\n # if the file does not exist on disk, return a 404\n if filepath is None or not os.path.exists(filepath):\n raise Http404(\"There is no csv export file for {} available\".format(log_type))\n\n # generate a file response\n response = FileResponse(io.open(filepath, \"rb\"))\n # set the content-type by guessing from the filename\n response[\"Content-Type\"] = \"text/csv\"\n\n # set the content-disposition as attachment to force download\n response[\"Content-Disposition\"] = \"attachment; filename={}\".format(\n csv_export_filenames[log_type]\n )\n\n # set the content-length to the file size\n response[\"Content-Length\"] = os.path.getsize(filepath)\n\n return response\n", "path": "kolibri/core/logger/csv_export.py"}]} | 3,156 | 875 |
gh_patches_debug_43447 | rasdani/github-patches | git_diff | beetbox__beets-4008 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Subsonicupdate plugin broken in multiple places
### Problem
Running the latest git version of beets causes the subsonicupdate plugin to fail with a generic "Error: 1".
The current code is broken in 2 places:
1. `def __get_version(self):`
The code tries to retrieve the server version in order to decide whether to authenticate with a token or a password: a much better choice than my original plan of assuming everyone would use an updated version and/or prefer token over password for the added security.
In the current code, however, the REST API is called with only the 'c' and 'f' parameters: the request is incomplete (the required authentication and version parameters are missing), so the call fails and the plugin stops processing.
2. `def start_scan(self):`
Assuming the authentication went through, this call would fail anyway, since the current code splits the API version into 3 values and passes them separately, e.g. `&v=1&v=61&v=1`, rather than `&v=1.61.1`.
I have always been running an ancient version of beets with my original code for the plugin, and I only noticed this today as I set up a new machine and installed beets from the git repo through the AUR.
It also seems I am the only user of this plugin, given that no other feedback has been filed so far, so I can understand if this fix doesn't get high priority.
### Setup
* OS: Arch Linux
* Python version: 3.9.6
* beets version: 1.5.0
* Turning off plugins made problem go away (yes/no): disabling subsonicupdate plugin will prevent the error from happening
### Additional Info
Subsonic API reference: http://www.subsonic.org/pages/api.jsp
</issue>
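To make point 2 concrete, here is a small standalone illustration of the query-string difference (the base URL is the plugin's default `http://localhost:4040`; nothing is actually sent, the request is only prepared):

```python
# Shows how `requests` encodes a tuple-valued parameter as repeated keys,
# versus the single dotted version string the Subsonic API expects.
import requests

version_tuple = (1, 61, 1)

broken = requests.Request(
    'GET', 'http://localhost:4040/rest/startScan.view', params={'v': version_tuple}
).prepare().url
print(broken)  # ...?v=1&v=61&v=1  -- what the current code sends

fixed = requests.Request(
    'GET', 'http://localhost:4040/rest/startScan.view',
    params={'v': '.'.join(str(part) for part in version_tuple)},
).prepare().url
print(fixed)   # ...?v=1.61.1      -- what the server expects
```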
<code>
[start of beetsplug/subsonicupdate.py]
1 # -*- coding: utf-8 -*-
2 # This file is part of beets.
3 # Copyright 2016, Adrian Sampson.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 """Updates Subsonic library on Beets import
17 Your Beets configuration file should contain
18 a "subsonic" section like the following:
19 subsonic:
20 url: https://mydomain.com:443/subsonic
21 user: username
22 pass: password
23 """
24 from __future__ import division, absolute_import, print_function
25
26 import hashlib
27 import random
28 import string
29
30 import requests
31
32 from binascii import hexlify
33 from beets import config
34 from beets.plugins import BeetsPlugin
35
36 __author__ = 'https://github.com/maffo999'
37 AUTH_TOKEN_VERSION = (1, 12)
38
39
40 class SubsonicUpdate(BeetsPlugin):
41 def __init__(self):
42 super(SubsonicUpdate, self).__init__()
43 # Set default configuration values
44 config['subsonic'].add({
45 'user': 'admin',
46 'pass': 'admin',
47 'url': 'http://localhost:4040',
48 })
49 config['subsonic']['pass'].redact = True
50 self._version = None
51 self._auth = None
52 self.register_listener('import', self.start_scan)
53
54 @property
55 def version(self):
56 if self._version is None:
57 self._version = self.__get_version()
58 return self._version
59
60 @property
61 def auth(self):
62 if self._auth is None:
63 if self.version is not None:
64 if self.version > AUTH_TOKEN_VERSION:
65 self._auth = "token"
66 else:
67 self._auth = "password"
68 self._log.info(
69 u"using '{}' authentication method".format(self._auth))
70 return self._auth
71
72 @staticmethod
73 def __create_token():
74 """Create salt and token from given password.
75
76 :return: The generated salt and hashed token
77 """
78 password = config['subsonic']['pass'].as_str()
79
80 # Pick the random sequence and salt the password
81 r = string.ascii_letters + string.digits
82 salt = "".join([random.choice(r) for _ in range(6)])
83 salted_password = password + salt
84 token = hashlib.md5(salted_password.encode('utf-8')).hexdigest()
85
86 # Put together the payload of the request to the server and the URL
87 return salt, token
88
89 @staticmethod
90 def __format_url(endpoint):
91 """Get the Subsonic URL to trigger the given endpoint.
92 Uses either the url config option or the deprecated host, port,
93 and context_path config options together.
94
95 :return: Endpoint for updating Subsonic
96 """
97
98 url = config['subsonic']['url'].as_str()
99 if url and url.endswith('/'):
100 url = url[:-1]
101
102 # @deprecated("Use url config option instead")
103 if not url:
104 host = config['subsonic']['host'].as_str()
105 port = config['subsonic']['port'].get(int)
106 context_path = config['subsonic']['contextpath'].as_str()
107 if context_path == '/':
108 context_path = ''
109 url = "http://{}:{}{}".format(host, port, context_path)
110
111 return url + '/rest/{}'.format(endpoint)
112
113 def __get_version(self):
114 url = self.__format_url("ping.view")
115 payload = {
116 'c': 'beets',
117 'f': 'json'
118 }
119 try:
120 response = requests.get(url, params=payload)
121 if response.status_code == 200:
122 json = response.json()
123 version = json['subsonic-response']['version']
124 self._log.info(
125 u'subsonic version:{0} '.format(version))
126 return tuple(int(s) for s in version.split('.'))
127 else:
128 self._log.error(u'Error: {0}', json)
129 return None
130 except Exception as error:
131 self._log.error(u'Error: {0}'.format(error))
132 return None
133
134 def start_scan(self):
135 user = config['subsonic']['user'].as_str()
136 url = self.__format_url("startScan.view")
137
138 if self.auth == 'token':
139 salt, token = self.__create_token()
140 payload = {
141 'u': user,
142 't': token,
143 's': salt,
144 'v': self.version, # Subsonic 6.1 and newer.
145 'c': 'beets',
146 'f': 'json'
147 }
148 elif self.auth == 'password':
149 password = config['subsonic']['pass'].as_str()
150 encpass = hexlify(password.encode()).decode()
151 payload = {
152 'u': user,
153 'p': 'enc:{}'.format(encpass),
154 'v': self.version,
155 'c': 'beets',
156 'f': 'json'
157 }
158 else:
159 return
160 try:
161 response = requests.get(url, params=payload)
162 json = response.json()
163
164 if response.status_code == 200 and \
165 json['subsonic-response']['status'] == "ok":
166 count = json['subsonic-response']['scanStatus']['count']
167 self._log.info(
168 u'Updating Subsonic; scanning {0} tracks'.format(count))
169 elif response.status_code == 200 and \
170 json['subsonic-response']['status'] == "failed":
171 error_message = json['subsonic-response']['error']['message']
172 self._log.error(u'Error: {0}'.format(error_message))
173 else:
174 self._log.error(u'Error: {0}', json)
175 except Exception as error:
176 self._log.error(u'Error: {0}'.format(error))
177
[end of beetsplug/subsonicupdate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beetsplug/subsonicupdate.py b/beetsplug/subsonicupdate.py
--- a/beetsplug/subsonicupdate.py
+++ b/beetsplug/subsonicupdate.py
@@ -20,6 +20,14 @@
url: https://mydomain.com:443/subsonic
user: username
pass: password
+ auth: token
+For older Subsonic versions, token authentication
+is not supported, use password instead:
+ subsonic:
+ url: https://mydomain.com:443/subsonic
+ user: username
+ pass: password
+ auth: pass
"""
from __future__ import division, absolute_import, print_function
@@ -34,7 +42,6 @@
from beets.plugins import BeetsPlugin
__author__ = 'https://github.com/maffo999'
-AUTH_TOKEN_VERSION = (1, 12)
class SubsonicUpdate(BeetsPlugin):
@@ -45,30 +52,11 @@
'user': 'admin',
'pass': 'admin',
'url': 'http://localhost:4040',
+ 'auth': 'token',
})
config['subsonic']['pass'].redact = True
- self._version = None
- self._auth = None
self.register_listener('import', self.start_scan)
- @property
- def version(self):
- if self._version is None:
- self._version = self.__get_version()
- return self._version
-
- @property
- def auth(self):
- if self._auth is None:
- if self.version is not None:
- if self.version > AUTH_TOKEN_VERSION:
- self._auth = "token"
- else:
- self._auth = "password"
- self._log.info(
- u"using '{}' authentication method".format(self._auth))
- return self._auth
-
@staticmethod
def __create_token():
"""Create salt and token from given password.
@@ -110,48 +98,30 @@
return url + '/rest/{}'.format(endpoint)
- def __get_version(self):
- url = self.__format_url("ping.view")
- payload = {
- 'c': 'beets',
- 'f': 'json'
- }
- try:
- response = requests.get(url, params=payload)
- if response.status_code == 200:
- json = response.json()
- version = json['subsonic-response']['version']
- self._log.info(
- u'subsonic version:{0} '.format(version))
- return tuple(int(s) for s in version.split('.'))
- else:
- self._log.error(u'Error: {0}', json)
- return None
- except Exception as error:
- self._log.error(u'Error: {0}'.format(error))
- return None
-
def start_scan(self):
user = config['subsonic']['user'].as_str()
- url = self.__format_url("startScan.view")
+ auth = config['subsonic']['auth'].as_str()
+ url = self.__format_url("startScan")
+ self._log.debug(u'URL is {0}', url)
+ self._log.debug(u'auth type is {0}', config['subsonic']['auth'])
- if self.auth == 'token':
+ if auth == "token":
salt, token = self.__create_token()
payload = {
'u': user,
't': token,
's': salt,
- 'v': self.version, # Subsonic 6.1 and newer.
+ 'v': '1.13.0', # Subsonic 5.3 and newer
'c': 'beets',
'f': 'json'
}
- elif self.auth == 'password':
+ elif auth == "password":
password = config['subsonic']['pass'].as_str()
encpass = hexlify(password.encode()).decode()
payload = {
'u': user,
'p': 'enc:{}'.format(encpass),
- 'v': self.version,
+ 'v': '1.12.0',
'c': 'beets',
'f': 'json'
}
| {"golden_diff": "diff --git a/beetsplug/subsonicupdate.py b/beetsplug/subsonicupdate.py\n--- a/beetsplug/subsonicupdate.py\n+++ b/beetsplug/subsonicupdate.py\n@@ -20,6 +20,14 @@\n url: https://mydomain.com:443/subsonic\n user: username\n pass: password\n+ auth: token\n+For older Subsonic versions, token authentication\n+is not supported, use password instead:\n+ subsonic:\n+ url: https://mydomain.com:443/subsonic\n+ user: username\n+ pass: password\n+ auth: pass\n \"\"\"\n from __future__ import division, absolute_import, print_function\n \n@@ -34,7 +42,6 @@\n from beets.plugins import BeetsPlugin\n \n __author__ = 'https://github.com/maffo999'\n-AUTH_TOKEN_VERSION = (1, 12)\n \n \n class SubsonicUpdate(BeetsPlugin):\n@@ -45,30 +52,11 @@\n 'user': 'admin',\n 'pass': 'admin',\n 'url': 'http://localhost:4040',\n+ 'auth': 'token',\n })\n config['subsonic']['pass'].redact = True\n- self._version = None\n- self._auth = None\n self.register_listener('import', self.start_scan)\n \n- @property\n- def version(self):\n- if self._version is None:\n- self._version = self.__get_version()\n- return self._version\n-\n- @property\n- def auth(self):\n- if self._auth is None:\n- if self.version is not None:\n- if self.version > AUTH_TOKEN_VERSION:\n- self._auth = \"token\"\n- else:\n- self._auth = \"password\"\n- self._log.info(\n- u\"using '{}' authentication method\".format(self._auth))\n- return self._auth\n-\n @staticmethod\n def __create_token():\n \"\"\"Create salt and token from given password.\n@@ -110,48 +98,30 @@\n \n return url + '/rest/{}'.format(endpoint)\n \n- def __get_version(self):\n- url = self.__format_url(\"ping.view\")\n- payload = {\n- 'c': 'beets',\n- 'f': 'json'\n- }\n- try:\n- response = requests.get(url, params=payload)\n- if response.status_code == 200:\n- json = response.json()\n- version = json['subsonic-response']['version']\n- self._log.info(\n- u'subsonic version:{0} '.format(version))\n- return tuple(int(s) for s in version.split('.'))\n- else:\n- self._log.error(u'Error: {0}', json)\n- return None\n- except Exception as error:\n- self._log.error(u'Error: {0}'.format(error))\n- return None\n-\n def start_scan(self):\n user = config['subsonic']['user'].as_str()\n- url = self.__format_url(\"startScan.view\")\n+ auth = config['subsonic']['auth'].as_str()\n+ url = self.__format_url(\"startScan\")\n+ self._log.debug(u'URL is {0}', url)\n+ self._log.debug(u'auth type is {0}', config['subsonic']['auth'])\n \n- if self.auth == 'token':\n+ if auth == \"token\":\n salt, token = self.__create_token()\n payload = {\n 'u': user,\n 't': token,\n 's': salt,\n- 'v': self.version, # Subsonic 6.1 and newer.\n+ 'v': '1.13.0', # Subsonic 5.3 and newer\n 'c': 'beets',\n 'f': 'json'\n }\n- elif self.auth == 'password':\n+ elif auth == \"password\":\n password = config['subsonic']['pass'].as_str()\n encpass = hexlify(password.encode()).decode()\n payload = {\n 'u': user,\n 'p': 'enc:{}'.format(encpass),\n- 'v': self.version,\n+ 'v': '1.12.0',\n 'c': 'beets',\n 'f': 'json'\n }\n", "issue": "Subsonicupdate plugin broken in multiple paces\n### Problem\r\n\r\nRunning the latest git version of beets causes the subsonicupdate plugin to fail with a generic \"Error: 1\".\r\nThe current code is broken in 2 places:\r\n\r\n1. 
`def __get_version(self):`\r\nCode is trying to retrieve the version in order to decide whether to authenticate with token or password: a much better choice than my original plan of assuming everyone would use an updated version and/or prefer token over password due to additional security.\r\nIn the current code, the REST API is called by passing only parameters 'c' and 'f': this API fails because it's not complete and the plugin stops processing.\r\n\r\n2. `def start_scan(self):`\r\nAssuming the authentication would go through, this call would fail anyway since the current code is splitting the API version in 3 values and passing them separately, e.g. `&v=1&v=61&v=1`, rather than `&v=1.61.1`\r\n\r\nI have always been running an ancient version of beets with my original code for the plugin and I noticed this just today as I set up a new machine and installed beets form git repo through AUR.\r\nIt also seems I am the only user of this plugin given no other feedback has been filed so far, so I can understand if this fix doesn't get high priority.\r\n\r\n### Setup\r\n\r\n* OS: Arch Linux\r\n* Python version: 3.9.6\r\n* beets version: 1.5.0\r\n* Turning off plugins made problem go away (yes/no): disabling subsonicupdate plugin will prevent the error from happening\r\n\r\n### Additional Info\r\nSubsonic API reference: http://www.subsonic.org/pages/api.jsp\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Updates Subsonic library on Beets import\nYour Beets configuration file should contain\na \"subsonic\" section like the following:\n subsonic:\n url: https://mydomain.com:443/subsonic\n user: username\n pass: password\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nimport hashlib\nimport random\nimport string\n\nimport requests\n\nfrom binascii import hexlify\nfrom beets import config\nfrom beets.plugins import BeetsPlugin\n\n__author__ = 'https://github.com/maffo999'\nAUTH_TOKEN_VERSION = (1, 12)\n\n\nclass SubsonicUpdate(BeetsPlugin):\n def __init__(self):\n super(SubsonicUpdate, self).__init__()\n # Set default configuration values\n config['subsonic'].add({\n 'user': 'admin',\n 'pass': 'admin',\n 'url': 'http://localhost:4040',\n })\n config['subsonic']['pass'].redact = True\n self._version = None\n self._auth = None\n self.register_listener('import', self.start_scan)\n\n @property\n def version(self):\n if self._version is None:\n self._version = self.__get_version()\n return self._version\n\n @property\n def auth(self):\n if self._auth is None:\n if self.version is not None:\n if self.version > AUTH_TOKEN_VERSION:\n self._auth = \"token\"\n else:\n self._auth = \"password\"\n self._log.info(\n u\"using '{}' authentication method\".format(self._auth))\n return self._auth\n\n @staticmethod\n def __create_token():\n \"\"\"Create salt and token from given password.\n\n :return: The generated 
salt and hashed token\n \"\"\"\n password = config['subsonic']['pass'].as_str()\n\n # Pick the random sequence and salt the password\n r = string.ascii_letters + string.digits\n salt = \"\".join([random.choice(r) for _ in range(6)])\n salted_password = password + salt\n token = hashlib.md5(salted_password.encode('utf-8')).hexdigest()\n\n # Put together the payload of the request to the server and the URL\n return salt, token\n\n @staticmethod\n def __format_url(endpoint):\n \"\"\"Get the Subsonic URL to trigger the given endpoint.\n Uses either the url config option or the deprecated host, port,\n and context_path config options together.\n\n :return: Endpoint for updating Subsonic\n \"\"\"\n\n url = config['subsonic']['url'].as_str()\n if url and url.endswith('/'):\n url = url[:-1]\n\n # @deprecated(\"Use url config option instead\")\n if not url:\n host = config['subsonic']['host'].as_str()\n port = config['subsonic']['port'].get(int)\n context_path = config['subsonic']['contextpath'].as_str()\n if context_path == '/':\n context_path = ''\n url = \"http://{}:{}{}\".format(host, port, context_path)\n\n return url + '/rest/{}'.format(endpoint)\n\n def __get_version(self):\n url = self.__format_url(\"ping.view\")\n payload = {\n 'c': 'beets',\n 'f': 'json'\n }\n try:\n response = requests.get(url, params=payload)\n if response.status_code == 200:\n json = response.json()\n version = json['subsonic-response']['version']\n self._log.info(\n u'subsonic version:{0} '.format(version))\n return tuple(int(s) for s in version.split('.'))\n else:\n self._log.error(u'Error: {0}', json)\n return None\n except Exception as error:\n self._log.error(u'Error: {0}'.format(error))\n return None\n\n def start_scan(self):\n user = config['subsonic']['user'].as_str()\n url = self.__format_url(\"startScan.view\")\n\n if self.auth == 'token':\n salt, token = self.__create_token()\n payload = {\n 'u': user,\n 't': token,\n 's': salt,\n 'v': self.version, # Subsonic 6.1 and newer.\n 'c': 'beets',\n 'f': 'json'\n }\n elif self.auth == 'password':\n password = config['subsonic']['pass'].as_str()\n encpass = hexlify(password.encode()).decode()\n payload = {\n 'u': user,\n 'p': 'enc:{}'.format(encpass),\n 'v': self.version,\n 'c': 'beets',\n 'f': 'json'\n }\n else:\n return\n try:\n response = requests.get(url, params=payload)\n json = response.json()\n\n if response.status_code == 200 and \\\n json['subsonic-response']['status'] == \"ok\":\n count = json['subsonic-response']['scanStatus']['count']\n self._log.info(\n u'Updating Subsonic; scanning {0} tracks'.format(count))\n elif response.status_code == 200 and \\\n json['subsonic-response']['status'] == \"failed\":\n error_message = json['subsonic-response']['error']['message']\n self._log.error(u'Error: {0}'.format(error_message))\n else:\n self._log.error(u'Error: {0}', json)\n except Exception as error:\n self._log.error(u'Error: {0}'.format(error))\n", "path": "beetsplug/subsonicupdate.py"}]} | 2,722 | 988 |
gh_patches_debug_40000 | rasdani/github-patches | git_diff | liberapay__liberapay.com-610 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Take throttling is too complicated
https://witches.town/@Alda/2122717
</issue>
<code>
[start of liberapay/models/_mixin_team.py]
1 """Teams are groups of participants.
2 """
3 from __future__ import division, print_function, unicode_literals
4
5 from collections import OrderedDict
6 from decimal import Decimal, ROUND_UP
7 from statistics import median
8
9 from liberapay.constants import D_CENT, D_INF, D_UNIT, D_ZERO
10
11
12 class MemberLimitReached(Exception): pass
13
14
15 class InactiveParticipantAdded(Exception): pass
16
17
18 class MixinTeam(object):
19
20 def invite(self, invitee, inviter):
21 assert self.kind == 'group'
22 with self.db.get_cursor() as c:
23 n_id = invitee.notify(
24 'team_invite',
25 team=self.username,
26 team_url=self.url(),
27 inviter=inviter.username,
28 )
29 payload = dict(invitee=invitee.id, notification_id=n_id)
30 self.add_event(c, 'invite', payload, inviter.id)
31
32 def add_member(self, member, cursor=None):
33 """Add a member to this team.
34 """
35 if len(self.get_current_takes()) == 149:
36 raise MemberLimitReached
37 if member.status != 'active':
38 raise InactiveParticipantAdded
39 self.set_take_for(member, D_ZERO, self, cursor=cursor)
40
41 def remove_all_members(self, cursor=None):
42 (cursor or self.db).run("""
43 INSERT INTO takes (ctime, member, team, amount, recorder) (
44 SELECT ctime, member, %(id)s, NULL, %(id)s
45 FROM current_takes
46 WHERE team=%(id)s
47 );
48 """, dict(id=self.id))
49
50 def member_of(self, team):
51 """Given a Participant object, return a boolean.
52 """
53 assert team.kind == 'group'
54 return self.db.one("""
55 SELECT true
56 FROM current_takes
57 WHERE team=%s AND member=%s
58 """, (team.id, self.id), default=False)
59
60 def get_takes_last_week(self):
61 """Get the users' nominal takes last week. Used in throttling.
62 """
63 assert self.kind == 'group'
64 takes = {t.member: t.amount for t in self.db.all("""
65
66 SELECT DISTINCT ON (member) member, amount, mtime
67 FROM takes
68 WHERE team=%s
69 AND mtime < (
70 SELECT ts_start
71 FROM paydays
72 WHERE ts_end > ts_start
73 ORDER BY ts_start DESC LIMIT 1
74 )
75 ORDER BY member, mtime DESC
76
77 """, (self.id,)) if t.amount}
78 return takes
79
80 def get_take_for(self, member):
81 """Return the nominal take for this member, or None.
82 """
83 return self.db.one(
84 "SELECT amount FROM current_takes WHERE member = %s AND team = %s",
85 (member.id, self.id)
86 )
87
88 def compute_max_this_week(self, member_id, last_week):
89 """2x the member's take last week, or the member's take last week + a
90 proportional share of the leftover, or a minimum based on last week's
91 median take, or 1.
92 """
93 sum_last_week = sum(last_week.values())
94 initial_leftover = self.receiving - sum_last_week
95 nonzero_last_week = [a for a in last_week.values() if a]
96 member_last_week = last_week.get(member_id, 0)
97 leftover_share = member_last_week / (sum_last_week or D_INF)
98 leftover_share = max(leftover_share, D_UNIT / self.nmembers)
99 return max(
100 member_last_week * 2,
101 member_last_week + initial_leftover * leftover_share,
102 median(nonzero_last_week or (0,)),
103 D_UNIT
104 )
105
106 def set_take_for(self, member, take, recorder, check_max=True, cursor=None):
107 """Sets member's take from the team pool.
108 """
109 assert self.kind == 'group'
110
111 if recorder.id != self.id:
112 cur_take = self.get_take_for(member)
113 if cur_take is None:
114 return None
115
116 if not isinstance(take, (None.__class__, Decimal)):
117 take = Decimal(take)
118
119 if take and check_max and take > 1:
120 last_week = self.get_takes_last_week()
121 max_this_week = self.compute_max_this_week(member.id, last_week)
122 if take > max_this_week:
123 take = max_this_week
124
125 with self.db.get_cursor(cursor) as cursor:
126 # Lock to avoid race conditions
127 cursor.run("LOCK TABLE takes IN EXCLUSIVE MODE")
128 # Compute the current takes
129 old_takes = self.compute_actual_takes(cursor)
130 # Insert the new take
131 cursor.run("""
132
133 INSERT INTO takes (ctime, member, team, amount, recorder)
134 VALUES ( COALESCE (( SELECT ctime
135 FROM takes
136 WHERE member=%(member)s
137 AND team=%(team)s
138 LIMIT 1
139 ), CURRENT_TIMESTAMP)
140 , %(member)s
141 , %(team)s
142 , %(amount)s
143 , %(recorder)s
144 )
145
146 """, dict(member=member.id, team=self.id, amount=take,
147 recorder=recorder.id))
148 # Compute the new takes
149 new_takes = self.compute_actual_takes(cursor)
150 # Update receiving amounts in the participants table
151 self.update_taking(old_takes, new_takes, cursor, member)
152 # Update is_funded on member's tips
153 member.update_giving(cursor)
154
155 return take
156
157 def update_taking(self, old_takes, new_takes, cursor=None, member=None):
158 """Update `taking` amounts based on the difference between `old_takes`
159 and `new_takes`.
160 """
161 for p_id in set(old_takes.keys()).union(new_takes.keys()):
162 old = old_takes.get(p_id, {}).get('actual_amount', D_ZERO)
163 new = new_takes.get(p_id, {}).get('actual_amount', D_ZERO)
164 diff = new - old
165 if diff != 0:
166 (cursor or self.db).run("""
167 UPDATE participants
168 SET taking = (taking + %(diff)s)
169 , receiving = (receiving + %(diff)s)
170 WHERE id=%(p_id)s
171 """, dict(p_id=p_id, diff=diff))
172 if member and p_id == member.id:
173 r = (cursor or self.db).one(
174 "SELECT taking, receiving FROM participants WHERE id = %s",
175 (p_id,)
176 )
177 member.set_attributes(**r._asdict())
178
179 def get_current_takes(self, cursor=None):
180 """Return a list of member takes for a team.
181 """
182 assert self.kind == 'group'
183 TAKES = """
184 SELECT p.id AS member_id, p.username AS member_name, p.avatar_url
185 , (p.mangopay_user_id IS NOT NULL) AS is_identified
186 , t.amount, t.ctime, t.mtime
187 FROM current_takes t
188 JOIN participants p ON p.id = member
189 WHERE t.team=%(team)s
190 ORDER BY p.username
191 """
192 records = (cursor or self.db).all(TAKES, dict(team=self.id))
193 return [r._asdict() for r in records]
194
195 def compute_actual_takes(self, cursor=None):
196 """Get the takes, compute the actual amounts, and return an OrderedDict.
197 """
198 actual_takes = OrderedDict()
199 nominal_takes = self.get_current_takes(cursor=cursor)
200 balance = self.receiving
201 total_takes = sum(t['amount'] for t in nominal_takes if t['is_identified'])
202 ratio = min(balance / total_takes, 1) if total_takes else 0
203 for take in nominal_takes:
204 nominal = take['nominal_take'] = take.pop('amount')
205 actual = take['actual_amount'] = min(
206 (nominal * ratio).quantize(D_CENT, rounding=ROUND_UP),
207 balance
208 ) if take['is_identified'] else D_ZERO
209 balance -= actual
210 actual_takes[take['member_id']] = take
211 actual_takes.leftover = balance
212 return actual_takes
213
214 @property
215 def nmembers(self):
216 assert self.kind == 'group'
217 return self.db.one("""
218 SELECT COUNT(*)
219 FROM current_takes
220 WHERE team=%s
221 """, (self.id,))
222
223 def get_members(self):
224 """Return an OrderedDict of member dicts.
225 """
226 takes = self.compute_actual_takes()
227 last_week = self.get_takes_last_week()
228 members = OrderedDict()
229 members.leftover = takes.leftover
230 for take in takes.values():
231 member = {}
232 m_id = member['id'] = take['member_id']
233 member['username'] = take['member_name']
234 member['nominal_take'] = take['nominal_take']
235 member['actual_amount'] = take['actual_amount']
236 member['last_week'] = last_week.get(m_id, D_ZERO)
237 member['max_this_week'] = self.compute_max_this_week(m_id, last_week)
238 members[member['id']] = member
239 return members
240
241 @property
242 def closed_by(self):
243 assert self.status == 'closed'
244 return self.db.one("""
245 SELECT member
246 FROM takes
247 WHERE team = %s
248 ORDER BY mtime DESC
249 LIMIT 1
250 """, (self.id,))
251
[end of liberapay/models/_mixin_team.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/liberapay/models/_mixin_team.py b/liberapay/models/_mixin_team.py
--- a/liberapay/models/_mixin_team.py
+++ b/liberapay/models/_mixin_team.py
@@ -86,19 +86,21 @@
)
def compute_max_this_week(self, member_id, last_week):
- """2x the member's take last week, or the member's take last week + a
- proportional share of the leftover, or a minimum based on last week's
- median take, or 1.
+ """2x the member's take last week, or the member's take last week + the
+ leftover, or last week's median take, or 1.00, or infinity if the takes
+ were all zero last week or if throttling is disabled.
"""
+ if not self.throttle_takes:
+ return D_INF
sum_last_week = sum(last_week.values())
+ if sum_last_week == 0:
+ return D_INF
initial_leftover = self.receiving - sum_last_week
nonzero_last_week = [a for a in last_week.values() if a]
member_last_week = last_week.get(member_id, 0)
- leftover_share = member_last_week / (sum_last_week or D_INF)
- leftover_share = max(leftover_share, D_UNIT / self.nmembers)
return max(
member_last_week * 2,
- member_last_week + initial_leftover * leftover_share,
+ member_last_week + initial_leftover,
median(nonzero_last_week or (0,)),
D_UNIT
)
@@ -116,17 +118,17 @@
if not isinstance(take, (None.__class__, Decimal)):
take = Decimal(take)
- if take and check_max and take > 1:
- last_week = self.get_takes_last_week()
- max_this_week = self.compute_max_this_week(member.id, last_week)
- if take > max_this_week:
- take = max_this_week
-
with self.db.get_cursor(cursor) as cursor:
# Lock to avoid race conditions
cursor.run("LOCK TABLE takes IN EXCLUSIVE MODE")
# Compute the current takes
old_takes = self.compute_actual_takes(cursor)
+ # Throttle the new take, if there is more than one member
+ if take and check_max and len(old_takes) > 1 and take > 1:
+ last_week = self.get_takes_last_week()
+ max_this_week = self.compute_max_this_week(member.id, last_week)
+ if take > max_this_week:
+ take = max_this_week
# Insert the new take
cursor.run("""
@@ -234,7 +236,8 @@
member['nominal_take'] = take['nominal_take']
member['actual_amount'] = take['actual_amount']
member['last_week'] = last_week.get(m_id, D_ZERO)
- member['max_this_week'] = self.compute_max_this_week(m_id, last_week)
+ x = self.compute_max_this_week(m_id, last_week)
+ member['max_this_week'] = x if x.is_finite() else None
members[member['id']] = member
return members
| {"golden_diff": "diff --git a/liberapay/models/_mixin_team.py b/liberapay/models/_mixin_team.py\n--- a/liberapay/models/_mixin_team.py\n+++ b/liberapay/models/_mixin_team.py\n@@ -86,19 +86,21 @@\n )\n \n def compute_max_this_week(self, member_id, last_week):\n- \"\"\"2x the member's take last week, or the member's take last week + a\n- proportional share of the leftover, or a minimum based on last week's\n- median take, or 1.\n+ \"\"\"2x the member's take last week, or the member's take last week + the\n+ leftover, or last week's median take, or 1.00, or infinity if the takes\n+ were all zero last week or if throttling is disabled.\n \"\"\"\n+ if not self.throttle_takes:\n+ return D_INF\n sum_last_week = sum(last_week.values())\n+ if sum_last_week == 0:\n+ return D_INF\n initial_leftover = self.receiving - sum_last_week\n nonzero_last_week = [a for a in last_week.values() if a]\n member_last_week = last_week.get(member_id, 0)\n- leftover_share = member_last_week / (sum_last_week or D_INF)\n- leftover_share = max(leftover_share, D_UNIT / self.nmembers)\n return max(\n member_last_week * 2,\n- member_last_week + initial_leftover * leftover_share,\n+ member_last_week + initial_leftover,\n median(nonzero_last_week or (0,)),\n D_UNIT\n )\n@@ -116,17 +118,17 @@\n if not isinstance(take, (None.__class__, Decimal)):\n take = Decimal(take)\n \n- if take and check_max and take > 1:\n- last_week = self.get_takes_last_week()\n- max_this_week = self.compute_max_this_week(member.id, last_week)\n- if take > max_this_week:\n- take = max_this_week\n-\n with self.db.get_cursor(cursor) as cursor:\n # Lock to avoid race conditions\n cursor.run(\"LOCK TABLE takes IN EXCLUSIVE MODE\")\n # Compute the current takes\n old_takes = self.compute_actual_takes(cursor)\n+ # Throttle the new take, if there is more than one member\n+ if take and check_max and len(old_takes) > 1 and take > 1:\n+ last_week = self.get_takes_last_week()\n+ max_this_week = self.compute_max_this_week(member.id, last_week)\n+ if take > max_this_week:\n+ take = max_this_week\n # Insert the new take\n cursor.run(\"\"\"\n \n@@ -234,7 +236,8 @@\n member['nominal_take'] = take['nominal_take']\n member['actual_amount'] = take['actual_amount']\n member['last_week'] = last_week.get(m_id, D_ZERO)\n- member['max_this_week'] = self.compute_max_this_week(m_id, last_week)\n+ x = self.compute_max_this_week(m_id, last_week)\n+ member['max_this_week'] = x if x.is_finite() else None\n members[member['id']] = member\n return members\n", "issue": "Take throttling is too complicated\nhttps://witches.town/@Alda/2122717\n", "before_files": [{"content": "\"\"\"Teams are groups of participants.\n\"\"\"\nfrom __future__ import division, print_function, unicode_literals\n\nfrom collections import OrderedDict\nfrom decimal import Decimal, ROUND_UP\nfrom statistics import median\n\nfrom liberapay.constants import D_CENT, D_INF, D_UNIT, D_ZERO\n\n\nclass MemberLimitReached(Exception): pass\n\n\nclass InactiveParticipantAdded(Exception): pass\n\n\nclass MixinTeam(object):\n\n def invite(self, invitee, inviter):\n assert self.kind == 'group'\n with self.db.get_cursor() as c:\n n_id = invitee.notify(\n 'team_invite',\n team=self.username,\n team_url=self.url(),\n inviter=inviter.username,\n )\n payload = dict(invitee=invitee.id, notification_id=n_id)\n self.add_event(c, 'invite', payload, inviter.id)\n\n def add_member(self, member, cursor=None):\n \"\"\"Add a member to this team.\n \"\"\"\n if len(self.get_current_takes()) == 149:\n raise MemberLimitReached\n if 
member.status != 'active':\n raise InactiveParticipantAdded\n self.set_take_for(member, D_ZERO, self, cursor=cursor)\n\n def remove_all_members(self, cursor=None):\n (cursor or self.db).run(\"\"\"\n INSERT INTO takes (ctime, member, team, amount, recorder) (\n SELECT ctime, member, %(id)s, NULL, %(id)s\n FROM current_takes\n WHERE team=%(id)s\n );\n \"\"\", dict(id=self.id))\n\n def member_of(self, team):\n \"\"\"Given a Participant object, return a boolean.\n \"\"\"\n assert team.kind == 'group'\n return self.db.one(\"\"\"\n SELECT true\n FROM current_takes\n WHERE team=%s AND member=%s\n \"\"\", (team.id, self.id), default=False)\n\n def get_takes_last_week(self):\n \"\"\"Get the users' nominal takes last week. Used in throttling.\n \"\"\"\n assert self.kind == 'group'\n takes = {t.member: t.amount for t in self.db.all(\"\"\"\n\n SELECT DISTINCT ON (member) member, amount, mtime\n FROM takes\n WHERE team=%s\n AND mtime < (\n SELECT ts_start\n FROM paydays\n WHERE ts_end > ts_start\n ORDER BY ts_start DESC LIMIT 1\n )\n ORDER BY member, mtime DESC\n\n \"\"\", (self.id,)) if t.amount}\n return takes\n\n def get_take_for(self, member):\n \"\"\"Return the nominal take for this member, or None.\n \"\"\"\n return self.db.one(\n \"SELECT amount FROM current_takes WHERE member = %s AND team = %s\",\n (member.id, self.id)\n )\n\n def compute_max_this_week(self, member_id, last_week):\n \"\"\"2x the member's take last week, or the member's take last week + a\n proportional share of the leftover, or a minimum based on last week's\n median take, or 1.\n \"\"\"\n sum_last_week = sum(last_week.values())\n initial_leftover = self.receiving - sum_last_week\n nonzero_last_week = [a for a in last_week.values() if a]\n member_last_week = last_week.get(member_id, 0)\n leftover_share = member_last_week / (sum_last_week or D_INF)\n leftover_share = max(leftover_share, D_UNIT / self.nmembers)\n return max(\n member_last_week * 2,\n member_last_week + initial_leftover * leftover_share,\n median(nonzero_last_week or (0,)),\n D_UNIT\n )\n\n def set_take_for(self, member, take, recorder, check_max=True, cursor=None):\n \"\"\"Sets member's take from the team pool.\n \"\"\"\n assert self.kind == 'group'\n\n if recorder.id != self.id:\n cur_take = self.get_take_for(member)\n if cur_take is None:\n return None\n\n if not isinstance(take, (None.__class__, Decimal)):\n take = Decimal(take)\n\n if take and check_max and take > 1:\n last_week = self.get_takes_last_week()\n max_this_week = self.compute_max_this_week(member.id, last_week)\n if take > max_this_week:\n take = max_this_week\n\n with self.db.get_cursor(cursor) as cursor:\n # Lock to avoid race conditions\n cursor.run(\"LOCK TABLE takes IN EXCLUSIVE MODE\")\n # Compute the current takes\n old_takes = self.compute_actual_takes(cursor)\n # Insert the new take\n cursor.run(\"\"\"\n\n INSERT INTO takes (ctime, member, team, amount, recorder)\n VALUES ( COALESCE (( SELECT ctime\n FROM takes\n WHERE member=%(member)s\n AND team=%(team)s\n LIMIT 1\n ), CURRENT_TIMESTAMP)\n , %(member)s\n , %(team)s\n , %(amount)s\n , %(recorder)s\n )\n\n \"\"\", dict(member=member.id, team=self.id, amount=take,\n recorder=recorder.id))\n # Compute the new takes\n new_takes = self.compute_actual_takes(cursor)\n # Update receiving amounts in the participants table\n self.update_taking(old_takes, new_takes, cursor, member)\n # Update is_funded on member's tips\n member.update_giving(cursor)\n\n return take\n\n def update_taking(self, old_takes, new_takes, cursor=None, member=None):\n 
\"\"\"Update `taking` amounts based on the difference between `old_takes`\n and `new_takes`.\n \"\"\"\n for p_id in set(old_takes.keys()).union(new_takes.keys()):\n old = old_takes.get(p_id, {}).get('actual_amount', D_ZERO)\n new = new_takes.get(p_id, {}).get('actual_amount', D_ZERO)\n diff = new - old\n if diff != 0:\n (cursor or self.db).run(\"\"\"\n UPDATE participants\n SET taking = (taking + %(diff)s)\n , receiving = (receiving + %(diff)s)\n WHERE id=%(p_id)s\n \"\"\", dict(p_id=p_id, diff=diff))\n if member and p_id == member.id:\n r = (cursor or self.db).one(\n \"SELECT taking, receiving FROM participants WHERE id = %s\",\n (p_id,)\n )\n member.set_attributes(**r._asdict())\n\n def get_current_takes(self, cursor=None):\n \"\"\"Return a list of member takes for a team.\n \"\"\"\n assert self.kind == 'group'\n TAKES = \"\"\"\n SELECT p.id AS member_id, p.username AS member_name, p.avatar_url\n , (p.mangopay_user_id IS NOT NULL) AS is_identified\n , t.amount, t.ctime, t.mtime\n FROM current_takes t\n JOIN participants p ON p.id = member\n WHERE t.team=%(team)s\n ORDER BY p.username\n \"\"\"\n records = (cursor or self.db).all(TAKES, dict(team=self.id))\n return [r._asdict() for r in records]\n\n def compute_actual_takes(self, cursor=None):\n \"\"\"Get the takes, compute the actual amounts, and return an OrderedDict.\n \"\"\"\n actual_takes = OrderedDict()\n nominal_takes = self.get_current_takes(cursor=cursor)\n balance = self.receiving\n total_takes = sum(t['amount'] for t in nominal_takes if t['is_identified'])\n ratio = min(balance / total_takes, 1) if total_takes else 0\n for take in nominal_takes:\n nominal = take['nominal_take'] = take.pop('amount')\n actual = take['actual_amount'] = min(\n (nominal * ratio).quantize(D_CENT, rounding=ROUND_UP),\n balance\n ) if take['is_identified'] else D_ZERO\n balance -= actual\n actual_takes[take['member_id']] = take\n actual_takes.leftover = balance\n return actual_takes\n\n @property\n def nmembers(self):\n assert self.kind == 'group'\n return self.db.one(\"\"\"\n SELECT COUNT(*)\n FROM current_takes\n WHERE team=%s\n \"\"\", (self.id,))\n\n def get_members(self):\n \"\"\"Return an OrderedDict of member dicts.\n \"\"\"\n takes = self.compute_actual_takes()\n last_week = self.get_takes_last_week()\n members = OrderedDict()\n members.leftover = takes.leftover\n for take in takes.values():\n member = {}\n m_id = member['id'] = take['member_id']\n member['username'] = take['member_name']\n member['nominal_take'] = take['nominal_take']\n member['actual_amount'] = take['actual_amount']\n member['last_week'] = last_week.get(m_id, D_ZERO)\n member['max_this_week'] = self.compute_max_this_week(m_id, last_week)\n members[member['id']] = member\n return members\n\n @property\n def closed_by(self):\n assert self.status == 'closed'\n return self.db.one(\"\"\"\n SELECT member\n FROM takes\n WHERE team = %s\n ORDER BY mtime DESC\n LIMIT 1\n \"\"\", (self.id,))\n", "path": "liberapay/models/_mixin_team.py"}]} | 3,253 | 734 |
gh_patches_debug_685 | rasdani/github-patches | git_diff | pytorch__TensorRT-1849 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Test Suite for `torch.compile` backend Partitioning/Lowering Phases
- Add a robust test suite for the `torch.compile` backend, ensuring each phase functions correctly
- Add general-purpose utilities for test expansion as the backend evolves
</issue>
<code>
[start of py/torch_tensorrt/dynamo/torch_compile/utils.py]
1 import torch
2
3 from typing import Any, Union, Sequence, Dict
4 from torch_tensorrt import _Input, Device
5
6
7 def prepare_inputs(
8 inputs: Union[_Input.Input, torch.Tensor, Sequence, Dict],
9 device: torch.device = torch.device("cuda"),
10 ) -> Any:
11 if isinstance(inputs, _Input.Input):
12 if isinstance(inputs.shape, dict):
13 return inputs.example_tensor(optimization_profile_field="opt_shape").to(
14 device
15 )
16 else:
17 return inputs.example_tensor().to(device)
18
19 elif isinstance(inputs, torch.Tensor):
20 return inputs
21
22 elif isinstance(inputs, list):
23 prepared_input = list()
24
25 for input_obj in inputs:
26 prepared_input.append(prepare_inputs(input_obj))
27
28 return prepared_input
29
30 elif isinstance(inputs, tuple):
31 prepared_input = list()
32
33 for input_obj in inputs:
34 prepared_input.append(prepare_inputs(input_obj))
35
36 return tuple(prepared_input)
37
38 elif isinstance(inputs, dict):
39 prepared_input = dict()
40
41 for key, input_obj in inputs.items():
42 prepared_input[key] = prepare_inputs(input_obj)
43
44 return prepared_input
45
46 else:
47 raise ValueError(
48 f"Invalid input type {type(inputs)} encountered in the torch_compile input parsing. "
49 + "Allowed input types: {torch_tensorrt.Input, torch.Tensor, list, tuple, dict}"
50 )
51
52
53 def prepare_device(device: Union[Device, torch.device]) -> torch.device:
54 if isinstance(device, Device):
55 if device.gpu_id != -1:
56 device = torch.device(device.gpu_id)
57 else:
58 raise ValueError("Invalid GPU ID provided for the CUDA device provided")
59
60 elif isinstance(device, torch.device):
61 device = device
62
63 else:
64 raise ValueError(
65 "Invalid device provided. Supported options: torch.device | torch_tensorrt.Device"
66 )
67
[end of py/torch_tensorrt/dynamo/torch_compile/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/py/torch_tensorrt/dynamo/torch_compile/utils.py b/py/torch_tensorrt/dynamo/torch_compile/utils.py
--- a/py/torch_tensorrt/dynamo/torch_compile/utils.py
+++ b/py/torch_tensorrt/dynamo/torch_compile/utils.py
@@ -64,3 +64,5 @@
raise ValueError(
"Invalid device provided. Supported options: torch.device | torch_tensorrt.Device"
)
+
+ return device
| {"golden_diff": "diff --git a/py/torch_tensorrt/dynamo/torch_compile/utils.py b/py/torch_tensorrt/dynamo/torch_compile/utils.py\n--- a/py/torch_tensorrt/dynamo/torch_compile/utils.py\n+++ b/py/torch_tensorrt/dynamo/torch_compile/utils.py\n@@ -64,3 +64,5 @@\n raise ValueError(\n \"Invalid device provided. Supported options: torch.device | torch_tensorrt.Device\"\n )\n+\n+ return device\n", "issue": "Add Test Suite for `torch.compile` backend Partitioning/Lowering Phases\n- Add robust test suite for `torch.compile` backend, ensuring each phase functions correctly\r\n- Add general-purpose utilities for test expansion as the backend evolves\n", "before_files": [{"content": "import torch\n\nfrom typing import Any, Union, Sequence, Dict\nfrom torch_tensorrt import _Input, Device\n\n\ndef prepare_inputs(\n inputs: Union[_Input.Input, torch.Tensor, Sequence, Dict],\n device: torch.device = torch.device(\"cuda\"),\n) -> Any:\n if isinstance(inputs, _Input.Input):\n if isinstance(inputs.shape, dict):\n return inputs.example_tensor(optimization_profile_field=\"opt_shape\").to(\n device\n )\n else:\n return inputs.example_tensor().to(device)\n\n elif isinstance(inputs, torch.Tensor):\n return inputs\n\n elif isinstance(inputs, list):\n prepared_input = list()\n\n for input_obj in inputs:\n prepared_input.append(prepare_inputs(input_obj))\n\n return prepared_input\n\n elif isinstance(inputs, tuple):\n prepared_input = list()\n\n for input_obj in inputs:\n prepared_input.append(prepare_inputs(input_obj))\n\n return tuple(prepared_input)\n\n elif isinstance(inputs, dict):\n prepared_input = dict()\n\n for key, input_obj in inputs.items():\n prepared_input[key] = prepare_inputs(input_obj)\n\n return prepared_input\n\n else:\n raise ValueError(\n f\"Invalid input type {type(inputs)} encountered in the torch_compile input parsing. \"\n + \"Allowed input types: {torch_tensorrt.Input, torch.Tensor, list, tuple, dict}\"\n )\n\n\ndef prepare_device(device: Union[Device, torch.device]) -> torch.device:\n if isinstance(device, Device):\n if device.gpu_id != -1:\n device = torch.device(device.gpu_id)\n else:\n raise ValueError(\"Invalid GPU ID provided for the CUDA device provided\")\n\n elif isinstance(device, torch.device):\n device = device\n\n else:\n raise ValueError(\n \"Invalid device provided. Supported options: torch.device | torch_tensorrt.Device\"\n )\n", "path": "py/torch_tensorrt/dynamo/torch_compile/utils.py"}]} | 1,117 | 101 |
gh_patches_debug_38916 | rasdani/github-patches | git_diff | scrapy__scrapy-1944 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scrapy 1.1.0 RC3 - exception thrown with invalid ssl certificate
Hello,
I sometimes crawl websites that have an invalid SSL certificate. For example, Scrapy 1.1.0 RC3 fails to open when I do:
> scrapy shell https://www.directoriosanitario.com/directorio
> or
> scrapy shell https://saobinv.5go.cc/top/
and throws the following exception:
> twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure service_identity.exceptions.VerificationError: VerificationError(errors=[DNSMismatch(mismatched_id=DNS_ID(hostname=b'www.directoriosanitario.com'))])>]
I tried it with Scrapy 1.0.5 on python 2.7 and the spider opens but warns with:
> AttributeError: 'NoneType' object has no attribute 'failVerification'
Is there a way to force the spider to open with Scrapy 1.1.0 RC3?
</issue>
<code>
[start of scrapy/core/downloader/tls.py]
1 from OpenSSL import SSL
2
3
4 METHOD_SSLv3 = 'SSLv3'
5 METHOD_TLS = 'TLS'
6 METHOD_TLSv10 = 'TLSv1.0'
7 METHOD_TLSv11 = 'TLSv1.1'
8 METHOD_TLSv12 = 'TLSv1.2'
9
10 openssl_methods = {
11 METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)
12 METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)
13 METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only
14 METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only
15 METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only
16 }
17
[end of scrapy/core/downloader/tls.py]
[start of scrapy/core/downloader/contextfactory.py]
1 from OpenSSL import SSL
2 from twisted.internet.ssl import ClientContextFactory
3
4 try:
5
6 from zope.interface.declarations import implementer
7
8 # the following should be available from Twisted 14.0.0
9 from twisted.internet.ssl import optionsForClientTLS, CertificateOptions, platformTrust
10 from twisted.internet._sslverify import ClientTLSOptions
11 from twisted.web.client import BrowserLikePolicyForHTTPS
12 from twisted.web.iweb import IPolicyForHTTPS
13
14 @implementer(IPolicyForHTTPS)
15 class ScrapyClientContextFactory(BrowserLikePolicyForHTTPS):
16 """
17 Non-peer-certificate verifying HTTPS context factory
18
19 Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)
20 which allows TLS protocol negotiation
21
22 'A TLS/SSL connection established with [this method] may
23 understand the SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.'
24 """
25
26 def __init__(self, method=SSL.SSLv23_METHOD, *args, **kwargs):
27 super(ScrapyClientContextFactory, self).__init__(*args, **kwargs)
28 self._ssl_method = method
29
30 def getCertificateOptions(self):
31 # setting verify=True will require you to provide CAs
32 # to verify against; in other words: it's not that simple
33
34 # backward-compatible SSL/TLS method:
35 #
36 # * this will respect `method` attribute in often recommended
37 # `ScrapyClientContextFactory` subclass
38 # (https://github.com/scrapy/scrapy/issues/1429#issuecomment-131782133)
39 #
40 # * getattr() for `_ssl_method` attribute for context factories
41 # not calling super(..., self).__init__
42 return CertificateOptions(verify=False,
43 method=getattr(self, 'method',
44 getattr(self, '_ssl_method', None)))
45
46 # kept for old-style HTTP/1.0 downloader context twisted calls,
47 # e.g. connectSSL()
48 def getContext(self, hostname=None, port=None):
49 return self.getCertificateOptions().getContext()
50
51 def creatorForNetloc(self, hostname, port):
52 return ClientTLSOptions(hostname.decode("ascii"), self.getContext())
53
54
55 @implementer(IPolicyForHTTPS)
56 class BrowserLikeContextFactory(ScrapyClientContextFactory):
57 """
58 Twisted-recommended context factory for web clients.
59
60 Quoting http://twistedmatrix.com/documents/current/api/twisted.web.client.Agent.html:
61 "The default is to use a BrowserLikePolicyForHTTPS,
62 so unless you have special requirements you can leave this as-is."
63
64 creatorForNetloc() is the same as BrowserLikePolicyForHTTPS
65 except this context factory allows setting the TLS/SSL method to use.
66
67 Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)
68 which allows TLS protocol negotiation.
69 """
70 def creatorForNetloc(self, hostname, port):
71
72 # trustRoot set to platformTrust() will use the platform's root CAs.
73 #
74 # This means that a website like https://www.cacert.org will be rejected
75 # by default, since CAcert.org CA certificate is seldom shipped.
76 return optionsForClientTLS(hostname.decode("ascii"),
77 trustRoot=platformTrust(),
78 extraCertificateOptions={
79 'method': self._ssl_method,
80 })
81
82 except ImportError:
83
84 class ScrapyClientContextFactory(ClientContextFactory):
85 "A SSL context factory which is more permissive against SSL bugs."
86 # see https://github.com/scrapy/scrapy/issues/82
87 # and https://github.com/scrapy/scrapy/issues/26
88 # and https://github.com/scrapy/scrapy/issues/981
89
90 def __init__(self, method=SSL.SSLv23_METHOD):
91 self.method = method
92
93 def getContext(self, hostname=None, port=None):
94 ctx = ClientContextFactory.getContext(self)
95 # Enable all workarounds to SSL bugs as documented by
96 # http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html
97 ctx.set_options(SSL.OP_ALL)
98 return ctx
99
[end of scrapy/core/downloader/contextfactory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/core/downloader/contextfactory.py b/scrapy/core/downloader/contextfactory.py
--- a/scrapy/core/downloader/contextfactory.py
+++ b/scrapy/core/downloader/contextfactory.py
@@ -6,11 +6,16 @@
from zope.interface.declarations import implementer
# the following should be available from Twisted 14.0.0
- from twisted.internet.ssl import optionsForClientTLS, CertificateOptions, platformTrust
- from twisted.internet._sslverify import ClientTLSOptions
+ from twisted.internet.ssl import (optionsForClientTLS,
+ CertificateOptions,
+ platformTrust)
+
from twisted.web.client import BrowserLikePolicyForHTTPS
from twisted.web.iweb import IPolicyForHTTPS
+ from scrapy.core.downloader.tls import ScrapyClientTLSOptions
+
+
@implementer(IPolicyForHTTPS)
class ScrapyClientContextFactory(BrowserLikePolicyForHTTPS):
"""
@@ -49,7 +54,7 @@
return self.getCertificateOptions().getContext()
def creatorForNetloc(self, hostname, port):
- return ClientTLSOptions(hostname.decode("ascii"), self.getContext())
+ return ScrapyClientTLSOptions(hostname.decode("ascii"), self.getContext())
@implementer(IPolicyForHTTPS)
diff --git a/scrapy/core/downloader/tls.py b/scrapy/core/downloader/tls.py
--- a/scrapy/core/downloader/tls.py
+++ b/scrapy/core/downloader/tls.py
@@ -1,6 +1,9 @@
+import logging
from OpenSSL import SSL
+logger = logging.getLogger(__name__)
+
METHOD_SSLv3 = 'SSLv3'
METHOD_TLS = 'TLS'
METHOD_TLSv10 = 'TLSv1.0'
@@ -14,3 +17,36 @@
METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only
METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only
}
+
+# ClientTLSOptions requires a recent-enough version of Twisted
+try:
+
+ # taken from twisted/twisted/internet/_sslverify.py
+ try:
+ from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START
+ except ImportError:
+ SSL_CB_HANDSHAKE_START = 0x10
+ SSL_CB_HANDSHAKE_DONE = 0x20
+
+ from twisted.internet._sslverify import (ClientTLSOptions,
+ _maybeSetHostNameIndication,
+ verifyHostname,
+ VerificationError)
+
+ class ScrapyClientTLSOptions(ClientTLSOptions):
+ # same as Twisted's ClientTLSOptions,
+ # except that VerificationError is caught
+ # and doesn't close the connection
+ def _identityVerifyingInfoCallback(self, connection, where, ret):
+ if where & SSL_CB_HANDSHAKE_START:
+ _maybeSetHostNameIndication(connection, self._hostnameBytes)
+ elif where & SSL_CB_HANDSHAKE_DONE:
+ try:
+ verifyHostname(connection, self._hostnameASCII)
+ except VerificationError as e:
+ logger.warning(e)
+
+except ImportError:
+ # ImportError should not matter for older Twisted versions
+ # as the above is not used in the fallback ScrapyClientContextFactory
+ pass
| {"golden_diff": "diff --git a/scrapy/core/downloader/contextfactory.py b/scrapy/core/downloader/contextfactory.py\n--- a/scrapy/core/downloader/contextfactory.py\n+++ b/scrapy/core/downloader/contextfactory.py\n@@ -6,11 +6,16 @@\n from zope.interface.declarations import implementer\n \n # the following should be available from Twisted 14.0.0\n- from twisted.internet.ssl import optionsForClientTLS, CertificateOptions, platformTrust\n- from twisted.internet._sslverify import ClientTLSOptions\n+ from twisted.internet.ssl import (optionsForClientTLS,\n+ CertificateOptions,\n+ platformTrust)\n+\n from twisted.web.client import BrowserLikePolicyForHTTPS\n from twisted.web.iweb import IPolicyForHTTPS\n \n+ from scrapy.core.downloader.tls import ScrapyClientTLSOptions\n+\n+\n @implementer(IPolicyForHTTPS)\n class ScrapyClientContextFactory(BrowserLikePolicyForHTTPS):\n \"\"\"\n@@ -49,7 +54,7 @@\n return self.getCertificateOptions().getContext()\n \n def creatorForNetloc(self, hostname, port):\n- return ClientTLSOptions(hostname.decode(\"ascii\"), self.getContext())\n+ return ScrapyClientTLSOptions(hostname.decode(\"ascii\"), self.getContext())\n \n \n @implementer(IPolicyForHTTPS)\ndiff --git a/scrapy/core/downloader/tls.py b/scrapy/core/downloader/tls.py\n--- a/scrapy/core/downloader/tls.py\n+++ b/scrapy/core/downloader/tls.py\n@@ -1,6 +1,9 @@\n+import logging\n from OpenSSL import SSL\n \n \n+logger = logging.getLogger(__name__)\n+\n METHOD_SSLv3 = 'SSLv3'\n METHOD_TLS = 'TLS'\n METHOD_TLSv10 = 'TLSv1.0'\n@@ -14,3 +17,36 @@\n METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only\n METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only\n }\n+\n+# ClientTLSOptions requires a recent-enough version of Twisted\n+try:\n+\n+ # taken from twisted/twisted/internet/_sslverify.py\n+ try:\n+ from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START\n+ except ImportError:\n+ SSL_CB_HANDSHAKE_START = 0x10\n+ SSL_CB_HANDSHAKE_DONE = 0x20\n+\n+ from twisted.internet._sslverify import (ClientTLSOptions,\n+ _maybeSetHostNameIndication,\n+ verifyHostname,\n+ VerificationError)\n+\n+ class ScrapyClientTLSOptions(ClientTLSOptions):\n+ # same as Twisted's ClientTLSOptions,\n+ # except that VerificationError is caught\n+ # and doesn't close the connection\n+ def _identityVerifyingInfoCallback(self, connection, where, ret):\n+ if where & SSL_CB_HANDSHAKE_START:\n+ _maybeSetHostNameIndication(connection, self._hostnameBytes)\n+ elif where & SSL_CB_HANDSHAKE_DONE:\n+ try:\n+ verifyHostname(connection, self._hostnameASCII)\n+ except VerificationError as e:\n+ logger.warning(e)\n+\n+except ImportError:\n+ # ImportError should not matter for older Twisted versions\n+ # as the above is not used in the fallback ScrapyClientContextFactory\n+ pass\n", "issue": "Scrapy 1.1.0 RC3 - exception thrown with invalid ssl certificate\nHello,\n\nI am crawling sometimes websites with an invalid ssl certificate. 
For example, Scrapy 1.1.0 RC3 fails to open when I do:\n\n> scrapy shell https://www.directoriosanitario.com/directorio\n> or\n> scrapy shell https://saobinv.5go.cc/top/\n\nand throws the following exception:\n\n> twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure service_identity.exceptions.VerificationError: VerificationError(errors=[DNSMismatch(mismatched_id=DNS_ID(hostname=b'www.directoriosanitario.com'))])>]\n\nI tried it with Scrapy 1.0.5 on python 2.7 and the spider opens but warns with: \n\n> AttributeError: 'NoneType' object has no attribute 'failVerification'\n\nIs there a way to force the spider to open with Scrapy 1.1.0 RC3?\n\n", "before_files": [{"content": "from OpenSSL import SSL\n\n\nMETHOD_SSLv3 = 'SSLv3'\nMETHOD_TLS = 'TLS'\nMETHOD_TLSv10 = 'TLSv1.0'\nMETHOD_TLSv11 = 'TLSv1.1'\nMETHOD_TLSv12 = 'TLSv1.2'\n\nopenssl_methods = {\n METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)\n METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)\n METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only\n METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only\n METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only\n}\n", "path": "scrapy/core/downloader/tls.py"}, {"content": "from OpenSSL import SSL\nfrom twisted.internet.ssl import ClientContextFactory\n\ntry:\n\n from zope.interface.declarations import implementer\n\n # the following should be available from Twisted 14.0.0\n from twisted.internet.ssl import optionsForClientTLS, CertificateOptions, platformTrust\n from twisted.internet._sslverify import ClientTLSOptions\n from twisted.web.client import BrowserLikePolicyForHTTPS\n from twisted.web.iweb import IPolicyForHTTPS\n\n @implementer(IPolicyForHTTPS)\n class ScrapyClientContextFactory(BrowserLikePolicyForHTTPS):\n \"\"\"\n Non-peer-certificate verifying HTTPS context factory\n\n Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)\n which allows TLS protocol negotiation\n\n 'A TLS/SSL connection established with [this method] may\n understand the SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.'\n \"\"\"\n\n def __init__(self, method=SSL.SSLv23_METHOD, *args, **kwargs):\n super(ScrapyClientContextFactory, self).__init__(*args, **kwargs)\n self._ssl_method = method\n\n def getCertificateOptions(self):\n # setting verify=True will require you to provide CAs\n # to verify against; in other words: it's not that simple\n\n # backward-compatible SSL/TLS method:\n #\n # * this will respect `method` attribute in often recommended\n # `ScrapyClientContextFactory` subclass\n # (https://github.com/scrapy/scrapy/issues/1429#issuecomment-131782133)\n #\n # * getattr() for `_ssl_method` attribute for context factories\n # not calling super(..., self).__init__\n return CertificateOptions(verify=False,\n method=getattr(self, 'method',\n getattr(self, '_ssl_method', None)))\n\n # kept for old-style HTTP/1.0 downloader context twisted calls,\n # e.g. 
connectSSL()\n def getContext(self, hostname=None, port=None):\n return self.getCertificateOptions().getContext()\n\n def creatorForNetloc(self, hostname, port):\n return ClientTLSOptions(hostname.decode(\"ascii\"), self.getContext())\n\n\n @implementer(IPolicyForHTTPS)\n class BrowserLikeContextFactory(ScrapyClientContextFactory):\n \"\"\"\n Twisted-recommended context factory for web clients.\n\n Quoting http://twistedmatrix.com/documents/current/api/twisted.web.client.Agent.html:\n \"The default is to use a BrowserLikePolicyForHTTPS,\n so unless you have special requirements you can leave this as-is.\"\n\n creatorForNetloc() is the same as BrowserLikePolicyForHTTPS\n except this context factory allows setting the TLS/SSL method to use.\n\n Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)\n which allows TLS protocol negotiation.\n \"\"\"\n def creatorForNetloc(self, hostname, port):\n\n # trustRoot set to platformTrust() will use the platform's root CAs.\n #\n # This means that a website like https://www.cacert.org will be rejected\n # by default, since CAcert.org CA certificate is seldom shipped.\n return optionsForClientTLS(hostname.decode(\"ascii\"),\n trustRoot=platformTrust(),\n extraCertificateOptions={\n 'method': self._ssl_method,\n })\n\nexcept ImportError:\n\n class ScrapyClientContextFactory(ClientContextFactory):\n \"A SSL context factory which is more permissive against SSL bugs.\"\n # see https://github.com/scrapy/scrapy/issues/82\n # and https://github.com/scrapy/scrapy/issues/26\n # and https://github.com/scrapy/scrapy/issues/981\n\n def __init__(self, method=SSL.SSLv23_METHOD):\n self.method = method\n\n def getContext(self, hostname=None, port=None):\n ctx = ClientContextFactory.getContext(self)\n # Enable all workarounds to SSL bugs as documented by\n # http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html\n ctx.set_options(SSL.OP_ALL)\n return ctx\n", "path": "scrapy/core/downloader/contextfactory.py"}]} | 2,078 | 758 |
gh_patches_debug_34682 | rasdani/github-patches | git_diff | meltano__meltano-7983 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
feat: consider clearing the catalog cache when `--full-refresh` present
This has come up in slack a bunch of times but most recently in https://meltano.slack.com/archives/C01TCRBBJD7/p1689207554179589
Lots of users run into this where the catalog is being cached so the output they're seeing is not as expected; we end up recommending that they clear their `.meltano/run/tap-x` directory so it regenerates the catalog. If someone runs with the `--full-refresh` flag it's probably because something in the source changed, so they need to re-run the replication, but the cached catalog is blocking those changes from propagating.
Related to:
- https://github.com/meltano/meltano/issues/6292
- https://github.com/meltano/meltano/issues/6763
- https://github.com/meltano/meltano/issues/2856
- https://github.com/meltano/meltano/issues/2848
</issue>
<code>
[start of src/meltano/cli/__init__.py]
1 """Main entry point for the meltano CLI."""
2
3 from __future__ import annotations
4
5 import logging
6 import os
7 import sys
8 import typing as t
9
10 from meltano.cli import ( # noqa: WPS235
11 add,
12 config,
13 discovery,
14 docs,
15 dragon,
16 elt,
17 environment,
18 hub,
19 initialize,
20 install,
21 invoke,
22 job,
23 lock,
24 remove,
25 run,
26 schedule,
27 schema,
28 select,
29 state,
30 upgrade,
31 validate,
32 )
33 from meltano.cli import compile as compile_module
34 from meltano.cli.cli import cli
35 from meltano.cli.utils import CliError
36 from meltano.cloud.cli import cloud
37 from meltano.core.error import MeltanoError, ProjectReadonly
38 from meltano.core.logging import setup_logging
39
40 if t.TYPE_CHECKING:
41 from meltano.core.tracking.tracker import Tracker
42
43 cli.add_command(add.add)
44 cli.add_command(cloud)
45 cli.add_command(compile_module.compile_command)
46 cli.add_command(config.config)
47 cli.add_command(discovery.discover)
48 cli.add_command(docs.docs)
49 cli.add_command(dragon.dragon)
50 cli.add_command(elt.elt)
51 cli.add_command(environment.meltano_environment)
52 cli.add_command(hub.hub)
53 cli.add_command(initialize.init)
54 cli.add_command(install.install)
55 cli.add_command(invoke.invoke)
56 cli.add_command(lock.lock)
57 cli.add_command(remove.remove)
58 cli.add_command(schedule.schedule)
59 cli.add_command(schema.schema)
60 cli.add_command(select.select)
61 cli.add_command(state.meltano_state)
62 cli.add_command(upgrade.upgrade)
63 cli.add_command(run.run)
64 cli.add_command(validate.test)
65 cli.add_command(job.job)
66
67 # Holds the exit code for error reporting during process exiting. In
68 # particular, a function registered by the `atexit` module uses this value.
69 exit_code: None | int = None
70
71 atexit_handler_registered = False
72 exit_code_reported = False
73 exit_event_tracker: Tracker | None = None
74
75 setup_logging()
76
77 logger = logging.getLogger(__name__)
78
79 troubleshooting_message = """\
80 Need help fixing this problem? Visit http://melta.no/ for troubleshooting steps, or to
81 join our friendly Slack community.
82 """
83
84
85 def handle_meltano_error(error: MeltanoError) -> t.NoReturn:
86 """Handle a MeltanoError.
87
88 Args:
89 error: The error to handle.
90
91 Raises:
92 CliError: always.
93 """
94 raise CliError(str(error)) from error
95
96
97 def _run_cli():
98 """Run the Meltano CLI.
99
100 Raises:
101 KeyboardInterrupt: if caught.
102 """
103 try:
104 try: # noqa: WPS225, WPS505
105 cli(obj={"project": None})
106 except ProjectReadonly as err:
107 raise CliError(
108 f"The requested action could not be completed: {err}",
109 ) from err
110 except KeyboardInterrupt: # noqa: WPS329
111 raise
112 except MeltanoError as err:
113 handle_meltano_error(err)
114 except Exception as err:
115 raise CliError(f"{troubleshooting_message}\n{err}") from err
116 except CliError as cli_error:
117 cli_error.print()
118 sys.exit(1)
119
120
121 def main():
122 """Entry point for the meltano CLI."""
123 # Mark the current process as executed via the CLI
124 os.environ["MELTANO_JOB_TRIGGER"] = os.getenv("MELTANO_JOB_TRIGGER", "cli")
125 try:
126 _run_cli()
127 finally:
128 global exit_code
129 ex = sys.exc_info()[1]
130 if ex is None:
131 exit_code = 0 # noqa: WPS442
132 elif isinstance(ex, SystemExit):
133 exit_code = 0 if ex.code is None else ex.code # noqa: WPS442
134 else:
135 exit_code = 1 # noqa: WPS442
136 # Track the exit event now to provide more details via the exception context.
137 # We assume the process will exit practically immediately after `main` returns.
138 if exit_event_tracker is not None:
139 exit_event_tracker.track_exit_event()
140
[end of src/meltano/cli/__init__.py]
[start of src/meltano/cli/discovery.py]
1 """Discoverable Plugins CLI."""
2
3 from __future__ import annotations
4
5 import typing as t
6
7 import click
8
9 from meltano.cli.params import pass_project
10 from meltano.cli.utils import InstrumentedCmd
11 from meltano.core.plugin import PluginType
12
13 if t.TYPE_CHECKING:
14 from meltano.core.project import Project
15
16
17 @click.command(
18 cls=InstrumentedCmd,
19 short_help="List the available plugins in Meltano Hub and their variants.",
20 )
21 @click.argument(
22 "plugin_type",
23 type=click.Choice([*list(PluginType), "all"]),
24 default="all",
25 )
26 @pass_project()
27 def discover( # noqa: C901
28 project: Project,
29 plugin_type: str,
30 ):
31 """
32 List the available discoverable plugins and their variants.
33
34 \b\nRead more at https://docs.meltano.com/reference/command-line-interface#discover
35 """
36 if plugin_type == "all":
37 plugin_types = [
38 plugin_type for plugin_type in list(PluginType) if plugin_type.discoverable
39 ]
40 else:
41 plugin_types = [PluginType.from_cli_argument(plugin_type)]
42
43 for idx, discovered_plugin_type in enumerate(plugin_types):
44 if idx > 0:
45 click.echo()
46
47 click.secho(f"{str(discovered_plugin_type).capitalize()}", fg="green")
48
49 try:
50 plugin_type_index = project.hub_service.get_plugins_of_type(
51 discovered_plugin_type,
52 )
53 except Exception:
54 click.secho(
55 f"Can not retrieve {discovered_plugin_type} from the Hub",
56 fg="yellow",
57 err=True,
58 )
59 continue
60
61 for plugin_name, plugin in plugin_type_index.items():
62 click.echo(plugin_name, nl=False)
63
64 if len(plugin.variants) > 1:
65 click.echo(f", variants: {', '.join(plugin.variant_labels)}")
66 else:
67 click.echo()
68
[end of src/meltano/cli/discovery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/meltano/cli/__init__.py b/src/meltano/cli/__init__.py
--- a/src/meltano/cli/__init__.py
+++ b/src/meltano/cli/__init__.py
@@ -10,7 +10,6 @@
from meltano.cli import ( # noqa: WPS235
add,
config,
- discovery,
docs,
dragon,
elt,
@@ -44,7 +43,6 @@
cli.add_command(cloud)
cli.add_command(compile_module.compile_command)
cli.add_command(config.config)
-cli.add_command(discovery.discover)
cli.add_command(docs.docs)
cli.add_command(dragon.dragon)
cli.add_command(elt.elt)
diff --git a/src/meltano/cli/discovery.py b/src/meltano/cli/discovery.py
deleted file mode 100644
--- a/src/meltano/cli/discovery.py
+++ /dev/null
@@ -1,67 +0,0 @@
-"""Discoverable Plugins CLI."""
-
-from __future__ import annotations
-
-import typing as t
-
-import click
-
-from meltano.cli.params import pass_project
-from meltano.cli.utils import InstrumentedCmd
-from meltano.core.plugin import PluginType
-
-if t.TYPE_CHECKING:
- from meltano.core.project import Project
-
-
[email protected](
- cls=InstrumentedCmd,
- short_help="List the available plugins in Meltano Hub and their variants.",
-)
[email protected](
- "plugin_type",
- type=click.Choice([*list(PluginType), "all"]),
- default="all",
-)
-@pass_project()
-def discover( # noqa: C901
- project: Project,
- plugin_type: str,
-):
- """
- List the available discoverable plugins and their variants.
-
- \b\nRead more at https://docs.meltano.com/reference/command-line-interface#discover
- """
- if plugin_type == "all":
- plugin_types = [
- plugin_type for plugin_type in list(PluginType) if plugin_type.discoverable
- ]
- else:
- plugin_types = [PluginType.from_cli_argument(plugin_type)]
-
- for idx, discovered_plugin_type in enumerate(plugin_types):
- if idx > 0:
- click.echo()
-
- click.secho(f"{str(discovered_plugin_type).capitalize()}", fg="green")
-
- try:
- plugin_type_index = project.hub_service.get_plugins_of_type(
- discovered_plugin_type,
- )
- except Exception:
- click.secho(
- f"Can not retrieve {discovered_plugin_type} from the Hub",
- fg="yellow",
- err=True,
- )
- continue
-
- for plugin_name, plugin in plugin_type_index.items():
- click.echo(plugin_name, nl=False)
-
- if len(plugin.variants) > 1:
- click.echo(f", variants: {', '.join(plugin.variant_labels)}")
- else:
- click.echo()
| {"golden_diff": "diff --git a/src/meltano/cli/__init__.py b/src/meltano/cli/__init__.py\n--- a/src/meltano/cli/__init__.py\n+++ b/src/meltano/cli/__init__.py\n@@ -10,7 +10,6 @@\n from meltano.cli import ( # noqa: WPS235\n add,\n config,\n- discovery,\n docs,\n dragon,\n elt,\n@@ -44,7 +43,6 @@\n cli.add_command(cloud)\n cli.add_command(compile_module.compile_command)\n cli.add_command(config.config)\n-cli.add_command(discovery.discover)\n cli.add_command(docs.docs)\n cli.add_command(dragon.dragon)\n cli.add_command(elt.elt)\ndiff --git a/src/meltano/cli/discovery.py b/src/meltano/cli/discovery.py\ndeleted file mode 100644\n--- a/src/meltano/cli/discovery.py\n+++ /dev/null\n@@ -1,67 +0,0 @@\n-\"\"\"Discoverable Plugins CLI.\"\"\"\n-\n-from __future__ import annotations\n-\n-import typing as t\n-\n-import click\n-\n-from meltano.cli.params import pass_project\n-from meltano.cli.utils import InstrumentedCmd\n-from meltano.core.plugin import PluginType\n-\n-if t.TYPE_CHECKING:\n- from meltano.core.project import Project\n-\n-\[email protected](\n- cls=InstrumentedCmd,\n- short_help=\"List the available plugins in Meltano Hub and their variants.\",\n-)\[email protected](\n- \"plugin_type\",\n- type=click.Choice([*list(PluginType), \"all\"]),\n- default=\"all\",\n-)\n-@pass_project()\n-def discover( # noqa: C901\n- project: Project,\n- plugin_type: str,\n-):\n- \"\"\"\n- List the available discoverable plugins and their variants.\n-\n- \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#discover\n- \"\"\"\n- if plugin_type == \"all\":\n- plugin_types = [\n- plugin_type for plugin_type in list(PluginType) if plugin_type.discoverable\n- ]\n- else:\n- plugin_types = [PluginType.from_cli_argument(plugin_type)]\n-\n- for idx, discovered_plugin_type in enumerate(plugin_types):\n- if idx > 0:\n- click.echo()\n-\n- click.secho(f\"{str(discovered_plugin_type).capitalize()}\", fg=\"green\")\n-\n- try:\n- plugin_type_index = project.hub_service.get_plugins_of_type(\n- discovered_plugin_type,\n- )\n- except Exception:\n- click.secho(\n- f\"Can not retrieve {discovered_plugin_type} from the Hub\",\n- fg=\"yellow\",\n- err=True,\n- )\n- continue\n-\n- for plugin_name, plugin in plugin_type_index.items():\n- click.echo(plugin_name, nl=False)\n-\n- if len(plugin.variants) > 1:\n- click.echo(f\", variants: {', '.join(plugin.variant_labels)}\")\n- else:\n- click.echo()\n", "issue": "feat: consider clearing the catalog cache when `--full-refresh` present\nThis has come up in slack a bunch of times but most recently in https://meltano.slack.com/archives/C01TCRBBJD7/p1689207554179589\r\n\r\nLots of users run into this where the catalog is being cached so the output theyre seeing is not as expected, we end up recommending that they clear their `.meltano/run/tap-x` directory so it regenerates the catalog. 
If someone runs with the `--full-refresh` flag its probably because something in the source changed so they need to re-run the replication but the cached catalog is blocking those changes from propogating.\r\n\r\n\r\nRelated to:\r\n- https://github.com/meltano/meltano/issues/6292\r\n- https://github.com/meltano/meltano/issues/6763\r\n- https://github.com/meltano/meltano/issues/2856\r\n- https://github.com/meltano/meltano/issues/2848\r\n\r\n\n", "before_files": [{"content": "\"\"\"Main entry point for the meltano CLI.\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nimport os\nimport sys\nimport typing as t\n\nfrom meltano.cli import ( # noqa: WPS235\n add,\n config,\n discovery,\n docs,\n dragon,\n elt,\n environment,\n hub,\n initialize,\n install,\n invoke,\n job,\n lock,\n remove,\n run,\n schedule,\n schema,\n select,\n state,\n upgrade,\n validate,\n)\nfrom meltano.cli import compile as compile_module\nfrom meltano.cli.cli import cli\nfrom meltano.cli.utils import CliError\nfrom meltano.cloud.cli import cloud\nfrom meltano.core.error import MeltanoError, ProjectReadonly\nfrom meltano.core.logging import setup_logging\n\nif t.TYPE_CHECKING:\n from meltano.core.tracking.tracker import Tracker\n\ncli.add_command(add.add)\ncli.add_command(cloud)\ncli.add_command(compile_module.compile_command)\ncli.add_command(config.config)\ncli.add_command(discovery.discover)\ncli.add_command(docs.docs)\ncli.add_command(dragon.dragon)\ncli.add_command(elt.elt)\ncli.add_command(environment.meltano_environment)\ncli.add_command(hub.hub)\ncli.add_command(initialize.init)\ncli.add_command(install.install)\ncli.add_command(invoke.invoke)\ncli.add_command(lock.lock)\ncli.add_command(remove.remove)\ncli.add_command(schedule.schedule)\ncli.add_command(schema.schema)\ncli.add_command(select.select)\ncli.add_command(state.meltano_state)\ncli.add_command(upgrade.upgrade)\ncli.add_command(run.run)\ncli.add_command(validate.test)\ncli.add_command(job.job)\n\n# Holds the exit code for error reporting during process exiting. In\n# particular, a function registered by the `atexit` module uses this value.\nexit_code: None | int = None\n\natexit_handler_registered = False\nexit_code_reported = False\nexit_event_tracker: Tracker | None = None\n\nsetup_logging()\n\nlogger = logging.getLogger(__name__)\n\ntroubleshooting_message = \"\"\"\\\nNeed help fixing this problem? 
Visit http://melta.no/ for troubleshooting steps, or to\njoin our friendly Slack community.\n\"\"\"\n\n\ndef handle_meltano_error(error: MeltanoError) -> t.NoReturn:\n \"\"\"Handle a MeltanoError.\n\n Args:\n error: The error to handle.\n\n Raises:\n CliError: always.\n \"\"\"\n raise CliError(str(error)) from error\n\n\ndef _run_cli():\n \"\"\"Run the Meltano CLI.\n\n Raises:\n KeyboardInterrupt: if caught.\n \"\"\"\n try:\n try: # noqa: WPS225, WPS505\n cli(obj={\"project\": None})\n except ProjectReadonly as err:\n raise CliError(\n f\"The requested action could not be completed: {err}\",\n ) from err\n except KeyboardInterrupt: # noqa: WPS329\n raise\n except MeltanoError as err:\n handle_meltano_error(err)\n except Exception as err:\n raise CliError(f\"{troubleshooting_message}\\n{err}\") from err\n except CliError as cli_error:\n cli_error.print()\n sys.exit(1)\n\n\ndef main():\n \"\"\"Entry point for the meltano CLI.\"\"\"\n # Mark the current process as executed via the CLI\n os.environ[\"MELTANO_JOB_TRIGGER\"] = os.getenv(\"MELTANO_JOB_TRIGGER\", \"cli\")\n try:\n _run_cli()\n finally:\n global exit_code\n ex = sys.exc_info()[1]\n if ex is None:\n exit_code = 0 # noqa: WPS442\n elif isinstance(ex, SystemExit):\n exit_code = 0 if ex.code is None else ex.code # noqa: WPS442\n else:\n exit_code = 1 # noqa: WPS442\n # Track the exit event now to provide more details via the exception context.\n # We assume the process will exit practically immediately after `main` returns.\n if exit_event_tracker is not None:\n exit_event_tracker.track_exit_event()\n", "path": "src/meltano/cli/__init__.py"}, {"content": "\"\"\"Discoverable Plugins CLI.\"\"\"\n\nfrom __future__ import annotations\n\nimport typing as t\n\nimport click\n\nfrom meltano.cli.params import pass_project\nfrom meltano.cli.utils import InstrumentedCmd\nfrom meltano.core.plugin import PluginType\n\nif t.TYPE_CHECKING:\n from meltano.core.project import Project\n\n\[email protected](\n cls=InstrumentedCmd,\n short_help=\"List the available plugins in Meltano Hub and their variants.\",\n)\[email protected](\n \"plugin_type\",\n type=click.Choice([*list(PluginType), \"all\"]),\n default=\"all\",\n)\n@pass_project()\ndef discover( # noqa: C901\n project: Project,\n plugin_type: str,\n):\n \"\"\"\n List the available discoverable plugins and their variants.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#discover\n \"\"\"\n if plugin_type == \"all\":\n plugin_types = [\n plugin_type for plugin_type in list(PluginType) if plugin_type.discoverable\n ]\n else:\n plugin_types = [PluginType.from_cli_argument(plugin_type)]\n\n for idx, discovered_plugin_type in enumerate(plugin_types):\n if idx > 0:\n click.echo()\n\n click.secho(f\"{str(discovered_plugin_type).capitalize()}\", fg=\"green\")\n\n try:\n plugin_type_index = project.hub_service.get_plugins_of_type(\n discovered_plugin_type,\n )\n except Exception:\n click.secho(\n f\"Can not retrieve {discovered_plugin_type} from the Hub\",\n fg=\"yellow\",\n err=True,\n )\n continue\n\n for plugin_name, plugin in plugin_type_index.items():\n click.echo(plugin_name, nl=False)\n\n if len(plugin.variants) > 1:\n click.echo(f\", variants: {', '.join(plugin.variant_labels)}\")\n else:\n click.echo()\n", "path": "src/meltano/cli/discovery.py"}]} | 2,540 | 675 |
gh_patches_debug_5633 | rasdani/github-patches | git_diff | conda-forge__conda-smithy-16 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Calling conda smithy github-create with --user raises exception
When I invoke `conda smithy` to create a github repo (after setting the token correctly) I get this:
``` python
$ conda smithy github-create --user mwcraig mrjob-feedstock/
Traceback (most recent call last):
File "/Users/mcraig/miniconda/bin/conda-smithy", line 9, in <module>
load_entry_point('conda-smithy==0.1.0.dev0', 'console_scripts', 'conda-smithy')()
File "/Users/mcraig/miniconda/lib/python2.7/site-packages/conda_smithy-0.1.0.dev0-py2.7.egg/conda_smithy/conda_smithy.py", line 164, in main
args.subcommand_func(args)
File "/Users/mcraig/miniconda/lib/python2.7/site-packages/conda_smithy-0.1.0.dev0-py2.7.egg/conda_smithy/conda_smithy.py", line 94, in __call__
user_or_org.get_user(args.user)
UnboundLocalError: local variable 'user_or_org' referenced before assignment
```
</issue>
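The traceback above comes down to a name that is only ever bound on the `else` branch: when `--user` is given, `user_or_org` is read before anything has been assigned to it, so Python raises `UnboundLocalError`. A minimal sketch of the same failure mode, using hypothetical names rather than the repository's code, looks like this:

```python
# Hypothetical reduction of the bug: "user_or_org" is assigned only on the
# else branch, so the --user branch reads a local name that was never bound.
def pick_owner(user=None, organization="conda-forge"):
    if user is not None:
        # The assignment in the else branch makes "user_or_org" a local
        # variable for the whole function, but this branch never sets it.
        return user_or_org
    else:
        user_or_org = organization
        return user_or_org


try:
    pick_owner(user="mwcraig")
except UnboundLocalError as exc:
    print(exc)  # local variable 'user_or_org' referenced before assignment
```

The patch further below fixes it by binding `user_or_org` on the user branch as well.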
<code>
[start of conda_smithy/conda_smithy.py]
1 #!/usr/bin/env python
2 from __future__ import print_function, absolute_import
3
4 import os
5 import requests
6 import subprocess
7 import sys
8 import argparse
9
10 from conda_build.metadata import MetaData
11
12 import conda_smithy.configure_circle_ci as configure_circle_ci
13 import conda_smithy.configure_feedstock as configure_feedstock
14
15
16 def generate_feedstock_content(target_directory, recipe_dir):
17 target_recipe_dir = os.path.join(target_directory, 'recipe')
18 if not os.path.exists(target_recipe_dir):
19 os.makedirs(target_recipe_dir)
20 configure_feedstock.copytree(recipe_dir, target_recipe_dir)
21
22 forge_yml = os.path.join(target_directory, 'conda-forge.yml')
23 with open(forge_yml, 'w') as fh:
24 fh.write('[]')
25
26 configure_feedstock.main(target_directory)
27
28
29 def init_git_repo(target):
30 subprocess.check_call(['git', 'init'], cwd=target)
31
32
33 def create_git_repo(target, meta):
34 init_git_repo(target)
35 subprocess.check_call(['git', 'add', '*'], cwd=target)
36 msg = 'Initial commit of the {} feedstock.'.format(meta.name())
37 subprocess.check_call(['git', 'commit', '-m', msg], cwd=target)
38
39
40 class Subcommand(object):
41 #: The name of the subcommand
42 subcommand = None
43 def __init__(self, parser):
44 subcommand_parser = parser.add_parser(self.subcommand)
45 subcommand_parser.set_defaults(subcommand_func=self)
46 return subcommand_parser
47
48 def __call__(self, args):
49 pass
50
51
52 class Init(Subcommand):
53 subcommand = 'init'
54 def __init__(self, parser):
55 # conda-smithy init /path/to/udunits-recipe ./
56 subcommand_parser = Subcommand.__init__(self, parser)
57 subcommand_parser.add_argument("recipe_directory")
58 subcommand_parser.add_argument("--feedstock-directory",
59 default='./{package.name}-feedstock')
60 subcommand_parser.add_argument("--no-git-repo", action='store_true',
61 default=False)
62
63 def __call__(self, args):
64 meta = MetaData(args.recipe_directory)
65 feedstock_directory = args.feedstock_directory.format(package=argparse.Namespace(name=meta.name()))
66 generate_feedstock_content(feedstock_directory, args.recipe_directory)
67 if not args.no_git_repo:
68 create_git_repo(feedstock_directory, meta)
69
70
71 class GithubCreate(Subcommand):
72 subcommand = 'github-create'
73 def __init__(self, parser):
74 # conda-smithy github-create ./ --organization=conda-forge
75 subcommand_parser = Subcommand.__init__(self, parser)
76 subcommand_parser.add_argument("feedstock_directory")
77 group = subcommand_parser.add_mutually_exclusive_group()
78 group.add_argument("--user")
79 group.add_argument("--organization", default="conda-forge")
80
81 def __call__(self, args):
82 try:
83 with open(os.path.expanduser('~/.conda-smithy/github.token'), 'r') as fh:
84 token = fh.read().strip()
85 except IOError:
86 print('No github token. Put one in ~/.conda-smithy/github.token')
87 meta = configure_feedstock.meta_of_feedstock(args.feedstock_directory)
88
89 from github import Github
90 gh = Github(token)
91 if args.user is not None:
92 pass
93 # User has been defined, and organization has not.
94 user_or_org.get_user(args.user)
95 else:
96 # Use the organization provided.
97 user_or_org = gh.get_organization(args.organization)
98 repo = user_or_org.create_repo(os.path.basename(os.path.abspath(args.feedstock_directory)),
99 has_wiki=False,
100 description='A conda-smithy repository for {}.'.format(meta.name()))
101 print('Created {} on github'.format(repo.full_name))
102
103
104 class RegisterFeedstockCI(Subcommand):
105 subcommand = 'register-feedstock-ci'
106 def __init__(self, parser):
107 # conda-smithy register-feedstock-ci ./
108 subcommand_parser = Subcommand.__init__(self, parser)
109 subcommand_parser.add_argument("feedstock_directory")
110 group = subcommand_parser.add_mutually_exclusive_group()
111 group.add_argument("--user")
112 group.add_argument("--organization", default="conda-forge")
113
114 def add_project_to_appveyor(self, user, project):
115 headers = {'Authorization': 'Bearer {}'.format(appveyor_token),
116 'Content-Type': 'application/json'}
117 url = 'https://ci.appveyor.com/api/projects'
118
119 data = {'repositoryProvider': 'gitHub', 'repositoryName': '{}/{}'.format(user, project)}
120
121 response = requests.post(url, headers=headers, data=data)
122 response = requests.get(url, headers=headers)
123 if response.status_code != 201:
124 response.raise_for_status()
125
126 def __call__(self, args):
127 owner = args.user or args.organization
128 repo = os.path.basename(os.path.abspath(args.feedstock_directory))
129
130 print('CI Summary for {}/{} (may take some time):'.format(owner, repo))
131 configure_circle_ci.add_project_to_circle(owner, repo)
132 configure_circle_ci.add_project_to_appveyor(owner, repo)
133 configure_circle_ci.add_project_to_travis(owner, repo)
134
135
136 def main():
137 # UX:
138 # conda-smithy init /path/to/udunits-recipe ./
139 # conda-smithy github-create ./ --organization=conda-forge --remote-name=upstream
140 # conda-smithy register-feedstock-ci ./
141
142 # How about:
143 # conda smithy config
144 # conda smithy create-forge ./recipe
145
146 # conda smithy clone-all
147
148 parser = argparse.ArgumentParser("conda-smithy - conda recipe building, made powerful.")
149 subparser = parser.add_subparsers()
150 # TODO: Consider allowing plugins/extensions using entry_points.
151 # http://reinout.vanrees.org/weblog/2010/01/06/zest-releaser-entry-points.html
152 for subcommand in Subcommand.__subclasses__():
153 subcommand(subparser)
154
155 if not sys.argv[1:]:
156 # args = parser.parse_args(['--help'])
157 args = parser.parse_args(['init', '../udunits-feedstock/recipe',
158 '--feedstock-directory=../{package.name}-delme-feedstock'])
159 # args = parser.parse_args(['github-create', '../udunits-delme-feedstock'])
160 # args = parser.parse_args(['register-feedstock-ci', '../udunits-delme-feedstock'])
161 else:
162 args = parser.parse_args()
163
164 args.subcommand_func(args)
165
166
167 if __name__ == '__main__':
168 main()
169
[end of conda_smithy/conda_smithy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/conda_smithy/conda_smithy.py b/conda_smithy/conda_smithy.py
--- a/conda_smithy/conda_smithy.py
+++ b/conda_smithy/conda_smithy.py
@@ -91,7 +91,7 @@
if args.user is not None:
pass
# User has been defined, and organization has not.
- user_or_org.get_user(args.user)
+ user_or_org = gh.get_user()
else:
# Use the organization provided.
user_or_org = gh.get_organization(args.organization)
| {"golden_diff": "diff --git a/conda_smithy/conda_smithy.py b/conda_smithy/conda_smithy.py\n--- a/conda_smithy/conda_smithy.py\n+++ b/conda_smithy/conda_smithy.py\n@@ -91,7 +91,7 @@\n if args.user is not None:\n pass\n # User has been defined, and organization has not.\n- user_or_org.get_user(args.user)\n+ user_or_org = gh.get_user()\n else:\n # Use the organization provided.\n user_or_org = gh.get_organization(args.organization)\n", "issue": "Calling conda smithy github-create with --user raises exception\nWhen I invoke `oncda smithy` to create a github repo (after setting the token correctly) I get this:\n\n``` python\n$ conda smithy github-create --user mwcraig mrjob-feedstock/\nTraceback (most recent call last):\n File \"/Users/mcraig/miniconda/bin/conda-smithy\", line 9, in <module>\n load_entry_point('conda-smithy==0.1.0.dev0', 'console_scripts', 'conda-smithy')()\n File \"/Users/mcraig/miniconda/lib/python2.7/site-packages/conda_smithy-0.1.0.dev0-py2.7.egg/conda_smithy/conda_smithy.py\", line 164, in main\n args.subcommand_func(args)\n File \"/Users/mcraig/miniconda/lib/python2.7/site-packages/conda_smithy-0.1.0.dev0-py2.7.egg/conda_smithy/conda_smithy.py\", line 94, in __call__\n user_or_org.get_user(args.user)\nUnboundLocalError: local variable 'user_or_org' referenced before assignment\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import print_function, absolute_import\n\nimport os\nimport requests\nimport subprocess\nimport sys\nimport argparse\n\nfrom conda_build.metadata import MetaData\n\nimport conda_smithy.configure_circle_ci as configure_circle_ci\nimport conda_smithy.configure_feedstock as configure_feedstock\n\n\ndef generate_feedstock_content(target_directory, recipe_dir):\n target_recipe_dir = os.path.join(target_directory, 'recipe')\n if not os.path.exists(target_recipe_dir):\n os.makedirs(target_recipe_dir)\n configure_feedstock.copytree(recipe_dir, target_recipe_dir)\n\n forge_yml = os.path.join(target_directory, 'conda-forge.yml')\n with open(forge_yml, 'w') as fh:\n fh.write('[]')\n\n configure_feedstock.main(target_directory)\n\n\ndef init_git_repo(target):\n subprocess.check_call(['git', 'init'], cwd=target)\n\n\ndef create_git_repo(target, meta):\n init_git_repo(target)\n subprocess.check_call(['git', 'add', '*'], cwd=target)\n msg = 'Initial commit of the {} feedstock.'.format(meta.name())\n subprocess.check_call(['git', 'commit', '-m', msg], cwd=target)\n\n\nclass Subcommand(object):\n #: The name of the subcommand\n subcommand = None\n def __init__(self, parser):\n subcommand_parser = parser.add_parser(self.subcommand)\n subcommand_parser.set_defaults(subcommand_func=self)\n return subcommand_parser\n\n def __call__(self, args):\n pass\n\n\nclass Init(Subcommand):\n subcommand = 'init'\n def __init__(self, parser):\n # conda-smithy init /path/to/udunits-recipe ./\n subcommand_parser = Subcommand.__init__(self, parser)\n subcommand_parser.add_argument(\"recipe_directory\")\n subcommand_parser.add_argument(\"--feedstock-directory\",\n default='./{package.name}-feedstock')\n subcommand_parser.add_argument(\"--no-git-repo\", action='store_true',\n default=False)\n\n def __call__(self, args):\n meta = MetaData(args.recipe_directory)\n feedstock_directory = args.feedstock_directory.format(package=argparse.Namespace(name=meta.name()))\n generate_feedstock_content(feedstock_directory, args.recipe_directory)\n if not args.no_git_repo:\n create_git_repo(feedstock_directory, meta)\n\n\nclass GithubCreate(Subcommand):\n subcommand = 
'github-create'\n def __init__(self, parser):\n # conda-smithy github-create ./ --organization=conda-forge\n subcommand_parser = Subcommand.__init__(self, parser)\n subcommand_parser.add_argument(\"feedstock_directory\")\n group = subcommand_parser.add_mutually_exclusive_group()\n group.add_argument(\"--user\")\n group.add_argument(\"--organization\", default=\"conda-forge\")\n\n def __call__(self, args):\n try:\n with open(os.path.expanduser('~/.conda-smithy/github.token'), 'r') as fh:\n token = fh.read().strip()\n except IOError:\n print('No github token. Put one in ~/.conda-smithy/github.token')\n meta = configure_feedstock.meta_of_feedstock(args.feedstock_directory)\n\n from github import Github\n gh = Github(token)\n if args.user is not None:\n pass\n # User has been defined, and organization has not.\n user_or_org.get_user(args.user)\n else:\n # Use the organization provided.\n user_or_org = gh.get_organization(args.organization)\n repo = user_or_org.create_repo(os.path.basename(os.path.abspath(args.feedstock_directory)),\n has_wiki=False,\n description='A conda-smithy repository for {}.'.format(meta.name()))\n print('Created {} on github'.format(repo.full_name))\n\n\nclass RegisterFeedstockCI(Subcommand):\n subcommand = 'register-feedstock-ci'\n def __init__(self, parser):\n # conda-smithy register-feedstock-ci ./\n subcommand_parser = Subcommand.__init__(self, parser)\n subcommand_parser.add_argument(\"feedstock_directory\")\n group = subcommand_parser.add_mutually_exclusive_group()\n group.add_argument(\"--user\")\n group.add_argument(\"--organization\", default=\"conda-forge\")\n\n def add_project_to_appveyor(self, user, project):\n headers = {'Authorization': 'Bearer {}'.format(appveyor_token),\n 'Content-Type': 'application/json'}\n url = 'https://ci.appveyor.com/api/projects'\n\n data = {'repositoryProvider': 'gitHub', 'repositoryName': '{}/{}'.format(user, project)}\n\n response = requests.post(url, headers=headers, data=data)\n response = requests.get(url, headers=headers)\n if response.status_code != 201:\n response.raise_for_status()\n\n def __call__(self, args):\n owner = args.user or args.organization\n repo = os.path.basename(os.path.abspath(args.feedstock_directory))\n\n print('CI Summary for {}/{} (may take some time):'.format(owner, repo))\n configure_circle_ci.add_project_to_circle(owner, repo)\n configure_circle_ci.add_project_to_appveyor(owner, repo)\n configure_circle_ci.add_project_to_travis(owner, repo)\n\n\ndef main():\n# UX:\n# conda-smithy init /path/to/udunits-recipe ./\n# conda-smithy github-create ./ --organization=conda-forge --remote-name=upstream\n# conda-smithy register-feedstock-ci ./\n\n# How about:\n# conda smithy config\n# conda smithy create-forge ./recipe\n\n# conda smithy clone-all\n\n parser = argparse.ArgumentParser(\"conda-smithy - conda recipe building, made powerful.\")\n subparser = parser.add_subparsers()\n # TODO: Consider allowing plugins/extensions using entry_points.\n # http://reinout.vanrees.org/weblog/2010/01/06/zest-releaser-entry-points.html\n for subcommand in Subcommand.__subclasses__():\n subcommand(subparser)\n\n if not sys.argv[1:]:\n# args = parser.parse_args(['--help'])\n args = parser.parse_args(['init', '../udunits-feedstock/recipe',\n '--feedstock-directory=../{package.name}-delme-feedstock'])\n# args = parser.parse_args(['github-create', '../udunits-delme-feedstock'])\n# args = parser.parse_args(['register-feedstock-ci', '../udunits-delme-feedstock'])\n else:\n args = parser.parse_args()\n\n 
args.subcommand_func(args)\n\n\nif __name__ == '__main__':\n main()\n", "path": "conda_smithy/conda_smithy.py"}]} | 2,655 | 132 |
gh_patches_debug_21907 | rasdani/github-patches | git_diff | webkom__lego-1985 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Timezone email
Format dates in emails in the same language as the email template (Norwegian), and convert them to the proper timezone.


</issue>
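The code below drops `event.payment_due_date` straight into the email context, so the date in the rendered email is neither localized to the reader's timezone nor formatted for the Norwegian template. The fix further below localizes the value to Europe/Oslo and formats it before templating; a minimal standalone sketch of that conversion (the datetime here is made up, and the real patch uses `django.utils.timezone.localtime` for the same step) is:

```python
# Standalone sketch of the conversion the fix below performs: shift the stored
# UTC due date to Europe/Oslo, then format it for the Norwegian template text.
# The datetime value is an invented example, not project data.
from datetime import datetime, timezone

import pytz

payment_due_date = datetime(2019, 9, 25, 10, 0, tzinfo=timezone.utc)

oslo_due_date = payment_due_date.astimezone(pytz.timezone("Europe/Oslo"))
print(oslo_due_date.strftime("%d.%m.%y, kl. %H:%M"))  # 25.09.19, kl. 12:00
```

The strftime pattern matches the one introduced by the diff below, `%d.%m.%y, kl. %H:%M`.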
<code>
[start of lego/apps/events/notifications.py]
1 from lego.apps.notifications.constants import (
2 EVENT_ADMIN_REGISTRATION,
3 EVENT_ADMIN_UNREGISTRATION,
4 EVENT_BUMP,
5 EVENT_PAYMENT_OVERDUE,
6 EVENT_PAYMENT_OVERDUE_CREATOR,
7 )
8 from lego.apps.notifications.notification import Notification
9
10
11 class EventBumpNotification(Notification):
12
13 name = EVENT_BUMP
14
15 def generate_mail(self):
16 event = self.kwargs["event"]
17
18 return self._delay_mail(
19 to_email=self.user.email,
20 context={"event": event.title, "name": self.user.full_name, "id": event.id},
21 subject=f"Du er flyttet opp fra ventelisten på arrangementet {event.title}",
22 plain_template="events/email/bump.txt",
23 html_template="events/email/bump.html",
24 )
25
26 def generate_push(self):
27 event = self.kwargs["event"]
28
29 return self._delay_push(
30 template="events/push/bump.txt",
31 context={"event": event.title},
32 instance=event,
33 )
34
35
36 class EventPaymentOverdueNotification(Notification):
37
38 name = EVENT_PAYMENT_OVERDUE
39
40 def generate_mail(self):
41 event = self.kwargs["event"]
42
43 return self._delay_mail(
44 to_email=self.user.email,
45 context={
46 "event": event.title,
47 "name": self.user.full_name,
48 "due_date": event.payment_due_date,
49 "id": event.id,
50 },
51 subject=f"Du har ikke betalt påmeldingen på arrangementet {event.title}",
52 plain_template="events/email/payment_overdue.txt",
53 html_template="events/email/payment_overdue.html",
54 )
55
56 def generate_push(self):
57 event = self.kwargs["event"]
58
59 return self._delay_push(
60 template="events/push/payment_overdue.txt",
61 context={"event": event.title},
62 instance=event,
63 )
64
65
66 class EventPaymentOverdueCreatorNotification(Notification):
67
68 name = EVENT_PAYMENT_OVERDUE_CREATOR
69
70 def generate_mail(self):
71 event = self.kwargs["event"]
72 users = self.kwargs["users"]
73
74 return self._delay_mail(
75 to_email=self.user.email,
76 context={
77 "event": event.title,
78 "users": users,
79 "name": self.user.full_name,
80 "id": event.id,
81 },
82 subject=f"Følgende registrerte har ikke betalt påmeldingen til arrangementet"
83 f" {event.title}",
84 plain_template="events/email/payment_overdue_author.txt",
85 html_template="events/email/payment_overdue_author.html",
86 )
87
88
89 class EventAdminRegistrationNotification(Notification):
90
91 name = EVENT_ADMIN_REGISTRATION
92
93 def generate_mail(self):
94 event = self.kwargs["event"]
95 reason = self.kwargs["reason"]
96
97 return self._delay_mail(
98 to_email=self.user.email,
99 context={
100 "event": event.title,
101 "name": self.user.full_name,
102 "reason": reason,
103 "id": event.id,
104 },
105 subject=f"Du har blitt adminpåmeldt på arrangementet {event.title}",
106 plain_template="events/email/admin_registration.txt",
107 html_template="events/email/admin_registration.html",
108 )
109
110 def generate_push(self):
111 event = self.kwargs["event"]
112
113 return self._delay_push(
114 template="events/push/admin_registration.txt",
115 context={"event": event.title},
116 instance=event,
117 )
118
119
120 class EventAdminUnregistrationNotification(Notification):
121
122 name = EVENT_ADMIN_UNREGISTRATION
123
124 def generate_mail(self):
125 event = self.kwargs["event"]
126 creator = self.kwargs["creator"]
127 reason = self.kwargs["reason"]
128
129 return self._delay_mail(
130 to_email=self.user.email,
131 context={
132 "event": event.title,
133 "creator_name": creator.full_name,
134 "creator_email": creator.email,
135 "name": self.user.full_name,
136 "reason": reason,
137 "id": event.id,
138 },
139 subject=f"Du har blitt fjernet fra arrangementet {event.title}",
140 plain_template="events/email/admin_unregistration.txt",
141 html_template="events/email/admin_unregistration.html",
142 )
143
144 def generate_push(self):
145 event = self.kwargs["event"]
146
147 return self._delay_push(
148 template="events/push/admin_unregistration.txt",
149 context={"event": event.title},
150 instance=event,
151 )
152
[end of lego/apps/events/notifications.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/lego/apps/events/notifications.py b/lego/apps/events/notifications.py
--- a/lego/apps/events/notifications.py
+++ b/lego/apps/events/notifications.py
@@ -1,3 +1,7 @@
+from django.utils import timezone
+
+import pytz
+
from lego.apps.notifications.constants import (
EVENT_ADMIN_REGISTRATION,
EVENT_ADMIN_UNREGISTRATION,
@@ -40,12 +44,18 @@
def generate_mail(self):
event = self.kwargs["event"]
+ date = timezone.localtime(
+ value=event.payment_due_date, timezone=pytz.timezone("Europe/Oslo")
+ )
+
+ due_date = date.strftime("%d.%m.%y, kl. %H:%M")
+
return self._delay_mail(
to_email=self.user.email,
context={
"event": event.title,
"name": self.user.full_name,
- "due_date": event.payment_due_date,
+ "due_date": due_date,
"id": event.id,
},
subject=f"Du har ikke betalt påmeldingen på arrangementet {event.title}",
| {"golden_diff": "diff --git a/lego/apps/events/notifications.py b/lego/apps/events/notifications.py\n--- a/lego/apps/events/notifications.py\n+++ b/lego/apps/events/notifications.py\n@@ -1,3 +1,7 @@\n+from django.utils import timezone\n+\n+import pytz\n+\n from lego.apps.notifications.constants import (\n EVENT_ADMIN_REGISTRATION,\n EVENT_ADMIN_UNREGISTRATION,\n@@ -40,12 +44,18 @@\n def generate_mail(self):\n event = self.kwargs[\"event\"]\n \n+ date = timezone.localtime(\n+ value=event.payment_due_date, timezone=pytz.timezone(\"Europe/Oslo\")\n+ )\n+\n+ due_date = date.strftime(\"%d.%m.%y, kl. %H:%M\")\n+\n return self._delay_mail(\n to_email=self.user.email,\n context={\n \"event\": event.title,\n \"name\": self.user.full_name,\n- \"due_date\": event.payment_due_date,\n+ \"due_date\": due_date,\n \"id\": event.id,\n },\n subject=f\"Du har ikke betalt p\u00e5meldingen p\u00e5 arrangementet {event.title}\",\n", "issue": "Timezone email\nFormat dates in emails in the same language as the email template (Norwegian), and converted to the proper timezone. \r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from lego.apps.notifications.constants import (\n EVENT_ADMIN_REGISTRATION,\n EVENT_ADMIN_UNREGISTRATION,\n EVENT_BUMP,\n EVENT_PAYMENT_OVERDUE,\n EVENT_PAYMENT_OVERDUE_CREATOR,\n)\nfrom lego.apps.notifications.notification import Notification\n\n\nclass EventBumpNotification(Notification):\n\n name = EVENT_BUMP\n\n def generate_mail(self):\n event = self.kwargs[\"event\"]\n\n return self._delay_mail(\n to_email=self.user.email,\n context={\"event\": event.title, \"name\": self.user.full_name, \"id\": event.id},\n subject=f\"Du er flyttet opp fra ventelisten p\u00e5 arrangementet {event.title}\",\n plain_template=\"events/email/bump.txt\",\n html_template=\"events/email/bump.html\",\n )\n\n def generate_push(self):\n event = self.kwargs[\"event\"]\n\n return self._delay_push(\n template=\"events/push/bump.txt\",\n context={\"event\": event.title},\n instance=event,\n )\n\n\nclass EventPaymentOverdueNotification(Notification):\n\n name = EVENT_PAYMENT_OVERDUE\n\n def generate_mail(self):\n event = self.kwargs[\"event\"]\n\n return self._delay_mail(\n to_email=self.user.email,\n context={\n \"event\": event.title,\n \"name\": self.user.full_name,\n \"due_date\": event.payment_due_date,\n \"id\": event.id,\n },\n subject=f\"Du har ikke betalt p\u00e5meldingen p\u00e5 arrangementet {event.title}\",\n plain_template=\"events/email/payment_overdue.txt\",\n html_template=\"events/email/payment_overdue.html\",\n )\n\n def generate_push(self):\n event = self.kwargs[\"event\"]\n\n return self._delay_push(\n template=\"events/push/payment_overdue.txt\",\n context={\"event\": event.title},\n instance=event,\n )\n\n\nclass EventPaymentOverdueCreatorNotification(Notification):\n\n name = EVENT_PAYMENT_OVERDUE_CREATOR\n\n def generate_mail(self):\n event = self.kwargs[\"event\"]\n users = self.kwargs[\"users\"]\n\n return self._delay_mail(\n to_email=self.user.email,\n context={\n \"event\": event.title,\n \"users\": users,\n \"name\": self.user.full_name,\n \"id\": event.id,\n },\n subject=f\"F\u00f8lgende registrerte har ikke betalt p\u00e5meldingen til arrangementet\"\n f\" {event.title}\",\n plain_template=\"events/email/payment_overdue_author.txt\",\n html_template=\"events/email/payment_overdue_author.html\",\n )\n\n\nclass EventAdminRegistrationNotification(Notification):\n\n name = EVENT_ADMIN_REGISTRATION\n\n def generate_mail(self):\n event = self.kwargs[\"event\"]\n reason = 
self.kwargs[\"reason\"]\n\n return self._delay_mail(\n to_email=self.user.email,\n context={\n \"event\": event.title,\n \"name\": self.user.full_name,\n \"reason\": reason,\n \"id\": event.id,\n },\n subject=f\"Du har blitt adminp\u00e5meldt p\u00e5 arrangementet {event.title}\",\n plain_template=\"events/email/admin_registration.txt\",\n html_template=\"events/email/admin_registration.html\",\n )\n\n def generate_push(self):\n event = self.kwargs[\"event\"]\n\n return self._delay_push(\n template=\"events/push/admin_registration.txt\",\n context={\"event\": event.title},\n instance=event,\n )\n\n\nclass EventAdminUnregistrationNotification(Notification):\n\n name = EVENT_ADMIN_UNREGISTRATION\n\n def generate_mail(self):\n event = self.kwargs[\"event\"]\n creator = self.kwargs[\"creator\"]\n reason = self.kwargs[\"reason\"]\n\n return self._delay_mail(\n to_email=self.user.email,\n context={\n \"event\": event.title,\n \"creator_name\": creator.full_name,\n \"creator_email\": creator.email,\n \"name\": self.user.full_name,\n \"reason\": reason,\n \"id\": event.id,\n },\n subject=f\"Du har blitt fjernet fra arrangementet {event.title}\",\n plain_template=\"events/email/admin_unregistration.txt\",\n html_template=\"events/email/admin_unregistration.html\",\n )\n\n def generate_push(self):\n event = self.kwargs[\"event\"]\n\n return self._delay_push(\n template=\"events/push/admin_unregistration.txt\",\n context={\"event\": event.title},\n instance=event,\n )\n", "path": "lego/apps/events/notifications.py"}]} | 1,982 | 251 |
gh_patches_debug_38958 | rasdani/github-patches | git_diff | ansible-collections__community.general-2217 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enable no-bin-links for community.general.npm
### Summary
I'm working on getting a Vagrant development environment set up and am running into an issue where npm is having trouble installing on a synced folder. A solution would be to enable a flag that adds the --no-bin-links setting to disable symlinking executables, but there's no way to set this currently.
### Issue Type
Feature Idea
### Component Name
community.general.npm
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: NPM install for Login Service
community.general.npm:
path: "{{ project_root }}"
links: no
become_user: vagrant
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
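npm itself already exposes this behaviour as the `--no-bin-links` install option, which skips creating symlinks for package executables and is the usual workaround for VirtualBox/Vagrant synced folders that reject symlinks. Wiring it through the Ansible module is mostly a matter of appending the flag to the command the module builds; a rough, hypothetical sketch of that pattern (names are illustrative, not the module's real API) is:

```python
# Hypothetical sketch of the append-per-option style the module below uses to
# build its npm command line; parameter names here are illustrative only.
def build_npm_install_cmd(production=False, ignore_scripts=False, no_bin_links=False):
    cmd = ["npm", "install"]
    if production:
        cmd.append("--production")
    if ignore_scripts:
        cmd.append("--ignore-scripts")
    if no_bin_links:
        # Skip creating symlinks for package executables (node_modules/.bin).
        cmd.append("--no-bin-links")
    return cmd


print(build_npm_install_cmd(no_bin_links=True))
# ['npm', 'install', '--no-bin-links']
```

The actual change, shown in the diff further below, adds a `no_bin_links` module option and appends the flag inside the module's `_exec` helper.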
<code>
[start of plugins/modules/packaging/language/npm.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3 # Copyright (c) 2017 Chris Hoffman <[email protected]>
4 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
5
6 from __future__ import absolute_import, division, print_function
7 __metaclass__ = type
8
9
10 DOCUMENTATION = r'''
11 ---
12 module: npm
13 short_description: Manage node.js packages with npm
14 description:
15 - Manage node.js packages with Node Package Manager (npm).
16 author: "Chris Hoffman (@chrishoffman)"
17 options:
18 name:
19 description:
20 - The name of a node.js library to install.
21 type: str
22 required: false
23 path:
24 description:
25 - The base path where to install the node.js libraries.
26 type: path
27 required: false
28 version:
29 description:
30 - The version to be installed.
31 type: str
32 required: false
33 global:
34 description:
35 - Install the node.js library globally.
36 required: false
37 default: no
38 type: bool
39 executable:
40 description:
41 - The executable location for npm.
42 - This is useful if you are using a version manager, such as nvm.
43 type: path
44 required: false
45 ignore_scripts:
46 description:
47 - Use the C(--ignore-scripts) flag when installing.
48 required: false
49 type: bool
50 default: no
51 unsafe_perm:
52 description:
53 - Use the C(--unsafe-perm) flag when installing.
54 type: bool
55 default: no
56 ci:
57 description:
58 - Install packages based on package-lock file, same as running C(npm ci).
59 type: bool
60 default: no
61 production:
62 description:
63 - Install dependencies in production mode, excluding devDependencies.
64 required: false
65 type: bool
66 default: no
67 registry:
68 description:
69 - The registry to install modules from.
70 required: false
71 type: str
72 state:
73 description:
74 - The state of the node.js library.
75 required: false
76 type: str
77 default: present
78 choices: [ "present", "absent", "latest" ]
79 no_optional:
80 description:
81 - Use the C(--no-optional) flag when installing.
82 type: bool
83 default: no
84 version_added: 2.0.0
85 requirements:
86 - npm installed in bin path (recommended /usr/local/bin)
87 '''
88
89 EXAMPLES = r'''
90 - name: Install "coffee-script" node.js package.
91 community.general.npm:
92 name: coffee-script
93 path: /app/location
94
95 - name: Install "coffee-script" node.js package on version 1.6.1.
96 community.general.npm:
97 name: coffee-script
98 version: '1.6.1'
99 path: /app/location
100
101 - name: Install "coffee-script" node.js package globally.
102 community.general.npm:
103 name: coffee-script
104 global: yes
105
106 - name: Remove the globally package "coffee-script".
107 community.general.npm:
108 name: coffee-script
109 global: yes
110 state: absent
111
112 - name: Install "coffee-script" node.js package from custom registry.
113 community.general.npm:
114 name: coffee-script
115 registry: 'http://registry.mysite.com'
116
117 - name: Install packages based on package.json.
118 community.general.npm:
119 path: /app/location
120
121 - name: Update packages based on package.json to their latest version.
122 community.general.npm:
123 path: /app/location
124 state: latest
125
126 - name: Install packages based on package.json using the npm installed with nvm v0.10.1.
127 community.general.npm:
128 path: /app/location
129 executable: /opt/nvm/v0.10.1/bin/npm
130 state: present
131 '''
132
133 import json
134 import os
135 import re
136
137 from ansible.module_utils.basic import AnsibleModule
138 from ansible.module_utils._text import to_native
139
140
141 class Npm(object):
142 def __init__(self, module, **kwargs):
143 self.module = module
144 self.glbl = kwargs['glbl']
145 self.name = kwargs['name']
146 self.version = kwargs['version']
147 self.path = kwargs['path']
148 self.registry = kwargs['registry']
149 self.production = kwargs['production']
150 self.ignore_scripts = kwargs['ignore_scripts']
151 self.unsafe_perm = kwargs['unsafe_perm']
152 self.state = kwargs['state']
153 self.no_optional = kwargs['no_optional']
154
155 if kwargs['executable']:
156 self.executable = kwargs['executable'].split(' ')
157 else:
158 self.executable = [module.get_bin_path('npm', True)]
159
160 if kwargs['version'] and self.state != 'absent':
161 self.name_version = self.name + '@' + str(self.version)
162 else:
163 self.name_version = self.name
164
165 def _exec(self, args, run_in_check_mode=False, check_rc=True, add_package_name=True):
166 if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):
167 cmd = self.executable + args
168
169 if self.glbl:
170 cmd.append('--global')
171 if self.production and ('install' in cmd or 'update' in cmd):
172 cmd.append('--production')
173 if self.ignore_scripts:
174 cmd.append('--ignore-scripts')
175 if self.unsafe_perm:
176 cmd.append('--unsafe-perm')
177 if self.name and add_package_name:
178 cmd.append(self.name_version)
179 if self.registry:
180 cmd.append('--registry')
181 cmd.append(self.registry)
182 if self.no_optional:
183 cmd.append('--no-optional')
184
185 # If path is specified, cd into that path and run the command.
186 cwd = None
187 if self.path:
188 if not os.path.exists(self.path):
189 os.makedirs(self.path)
190 if not os.path.isdir(self.path):
191 self.module.fail_json(msg="path %s is not a directory" % self.path)
192 cwd = self.path
193
194 rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)
195 return out
196 return ''
197
198 def list(self):
199 cmd = ['list', '--json', '--long']
200
201 installed = list()
202 missing = list()
203 data = {}
204 try:
205 data = json.loads(self._exec(cmd, True, False, False) or '{}')
206 except (getattr(json, 'JSONDecodeError', ValueError)) as e:
207 self.module.fail_json(msg="Failed to parse NPM output with error %s" % to_native(e))
208 if 'dependencies' in data:
209 for dep in data['dependencies']:
210 if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']:
211 missing.append(dep)
212 elif 'invalid' in data['dependencies'][dep] and data['dependencies'][dep]['invalid']:
213 missing.append(dep)
214 else:
215 installed.append(dep)
216 if self.name and self.name not in installed:
217 missing.append(self.name)
218 # Named dependency not installed
219 else:
220 missing.append(self.name)
221
222 return installed, missing
223
224 def install(self):
225 return self._exec(['install'])
226
227 def ci_install(self):
228 return self._exec(['ci'])
229
230 def update(self):
231 return self._exec(['update'])
232
233 def uninstall(self):
234 return self._exec(['uninstall'])
235
236 def list_outdated(self):
237 outdated = list()
238 data = self._exec(['outdated'], True, False)
239 for dep in data.splitlines():
240 if dep:
241 # node.js v0.10.22 changed the `npm outdated` module separator
242 # from "@" to " ". Split on both for backwards compatibility.
243 pkg, other = re.split(r'\s|@', dep, 1)
244 outdated.append(pkg)
245
246 return outdated
247
248
249 def main():
250 arg_spec = dict(
251 name=dict(default=None, type='str'),
252 path=dict(default=None, type='path'),
253 version=dict(default=None, type='str'),
254 production=dict(default=False, type='bool'),
255 executable=dict(default=None, type='path'),
256 registry=dict(default=None, type='str'),
257 state=dict(default='present', choices=['present', 'absent', 'latest']),
258 ignore_scripts=dict(default=False, type='bool'),
259 unsafe_perm=dict(default=False, type='bool'),
260 ci=dict(default=False, type='bool'),
261 no_optional=dict(default=False, type='bool'),
262 )
263 arg_spec['global'] = dict(default=False, type='bool')
264 module = AnsibleModule(
265 argument_spec=arg_spec,
266 supports_check_mode=True
267 )
268
269 name = module.params['name']
270 path = module.params['path']
271 version = module.params['version']
272 glbl = module.params['global']
273 production = module.params['production']
274 executable = module.params['executable']
275 registry = module.params['registry']
276 state = module.params['state']
277 ignore_scripts = module.params['ignore_scripts']
278 unsafe_perm = module.params['unsafe_perm']
279 ci = module.params['ci']
280 no_optional = module.params['no_optional']
281
282 if not path and not glbl:
283 module.fail_json(msg='path must be specified when not using global')
284 if state == 'absent' and not name:
285 module.fail_json(msg='uninstalling a package is only available for named packages')
286
287 npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production,
288 executable=executable, registry=registry, ignore_scripts=ignore_scripts,
289 unsafe_perm=unsafe_perm, state=state, no_optional=no_optional)
290
291 changed = False
292 if ci:
293 npm.ci_install()
294 changed = True
295 elif state == 'present':
296 installed, missing = npm.list()
297 if missing:
298 changed = True
299 npm.install()
300 elif state == 'latest':
301 installed, missing = npm.list()
302 outdated = npm.list_outdated()
303 if missing:
304 changed = True
305 npm.install()
306 if outdated:
307 changed = True
308 npm.update()
309 else: # absent
310 installed, missing = npm.list()
311 if name in installed:
312 changed = True
313 npm.uninstall()
314
315 module.exit_json(changed=changed)
316
317
318 if __name__ == '__main__':
319 main()
320
[end of plugins/modules/packaging/language/npm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/plugins/modules/packaging/language/npm.py b/plugins/modules/packaging/language/npm.py
--- a/plugins/modules/packaging/language/npm.py
+++ b/plugins/modules/packaging/language/npm.py
@@ -82,6 +82,12 @@
type: bool
default: no
version_added: 2.0.0
+ no_bin_links:
+ description:
+ - Use the C(--no-bin-links) flag when installing.
+ type: bool
+ default: no
+ version_added: 2.5.0
requirements:
- npm installed in bin path (recommended /usr/local/bin)
'''
@@ -151,6 +157,7 @@
self.unsafe_perm = kwargs['unsafe_perm']
self.state = kwargs['state']
self.no_optional = kwargs['no_optional']
+ self.no_bin_links = kwargs['no_bin_links']
if kwargs['executable']:
self.executable = kwargs['executable'].split(' ')
@@ -181,6 +188,8 @@
cmd.append(self.registry)
if self.no_optional:
cmd.append('--no-optional')
+ if self.no_bin_links:
+ cmd.append('--no-bin-links')
# If path is specified, cd into that path and run the command.
cwd = None
@@ -259,6 +268,7 @@
unsafe_perm=dict(default=False, type='bool'),
ci=dict(default=False, type='bool'),
no_optional=dict(default=False, type='bool'),
+ no_bin_links=dict(default=False, type='bool'),
)
arg_spec['global'] = dict(default=False, type='bool')
module = AnsibleModule(
@@ -278,6 +288,7 @@
unsafe_perm = module.params['unsafe_perm']
ci = module.params['ci']
no_optional = module.params['no_optional']
+ no_bin_links = module.params['no_bin_links']
if not path and not glbl:
module.fail_json(msg='path must be specified when not using global')
@@ -286,7 +297,7 @@
npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production,
executable=executable, registry=registry, ignore_scripts=ignore_scripts,
- unsafe_perm=unsafe_perm, state=state, no_optional=no_optional)
+ unsafe_perm=unsafe_perm, state=state, no_optional=no_optional, no_bin_links=no_bin_links)
changed = False
if ci:
| {"golden_diff": "diff --git a/plugins/modules/packaging/language/npm.py b/plugins/modules/packaging/language/npm.py\n--- a/plugins/modules/packaging/language/npm.py\n+++ b/plugins/modules/packaging/language/npm.py\n@@ -82,6 +82,12 @@\n type: bool\n default: no\n version_added: 2.0.0\n+ no_bin_links:\n+ description:\n+ - Use the C(--no-bin-links) flag when installing.\n+ type: bool\n+ default: no\n+ version_added: 2.5.0\n requirements:\n - npm installed in bin path (recommended /usr/local/bin)\n '''\n@@ -151,6 +157,7 @@\n self.unsafe_perm = kwargs['unsafe_perm']\n self.state = kwargs['state']\n self.no_optional = kwargs['no_optional']\n+ self.no_bin_links = kwargs['no_bin_links']\n \n if kwargs['executable']:\n self.executable = kwargs['executable'].split(' ')\n@@ -181,6 +188,8 @@\n cmd.append(self.registry)\n if self.no_optional:\n cmd.append('--no-optional')\n+ if self.no_bin_links:\n+ cmd.append('--no-bin-links')\n \n # If path is specified, cd into that path and run the command.\n cwd = None\n@@ -259,6 +268,7 @@\n unsafe_perm=dict(default=False, type='bool'),\n ci=dict(default=False, type='bool'),\n no_optional=dict(default=False, type='bool'),\n+ no_bin_links=dict(default=False, type='bool'),\n )\n arg_spec['global'] = dict(default=False, type='bool')\n module = AnsibleModule(\n@@ -278,6 +288,7 @@\n unsafe_perm = module.params['unsafe_perm']\n ci = module.params['ci']\n no_optional = module.params['no_optional']\n+ no_bin_links = module.params['no_bin_links']\n \n if not path and not glbl:\n module.fail_json(msg='path must be specified when not using global')\n@@ -286,7 +297,7 @@\n \n npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production,\n executable=executable, registry=registry, ignore_scripts=ignore_scripts,\n- unsafe_perm=unsafe_perm, state=state, no_optional=no_optional)\n+ unsafe_perm=unsafe_perm, state=state, no_optional=no_optional, no_bin_links=no_bin_links)\n \n changed = False\n if ci:\n", "issue": "Enable no-bin-links for community.general.npm\n### Summary\n\nI'm working on getting a vagrant development environment setup and am running into an issue where npm is having trouble installing on a synced folder. 
A solution would be to enable a flag to add the --no-bin-links setting to disable symlinking executables but there's no way to set this currently.\n\n### Issue Type\n\nFeature Idea\n\n### Component Name\n\ncommunity.general.npm\n\n### Additional Information\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n- name: NPM install for Login Service\r\n community.general.npm:\r\n path: \"{{ project_root }}\"\r\n links: no\r\n become_user: vagrant\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n# Copyright (c) 2017 Chris Hoffman <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: npm\nshort_description: Manage node.js packages with npm\ndescription:\n - Manage node.js packages with Node Package Manager (npm).\nauthor: \"Chris Hoffman (@chrishoffman)\"\noptions:\n name:\n description:\n - The name of a node.js library to install.\n type: str\n required: false\n path:\n description:\n - The base path where to install the node.js libraries.\n type: path\n required: false\n version:\n description:\n - The version to be installed.\n type: str\n required: false\n global:\n description:\n - Install the node.js library globally.\n required: false\n default: no\n type: bool\n executable:\n description:\n - The executable location for npm.\n - This is useful if you are using a version manager, such as nvm.\n type: path\n required: false\n ignore_scripts:\n description:\n - Use the C(--ignore-scripts) flag when installing.\n required: false\n type: bool\n default: no\n unsafe_perm:\n description:\n - Use the C(--unsafe-perm) flag when installing.\n type: bool\n default: no\n ci:\n description:\n - Install packages based on package-lock file, same as running C(npm ci).\n type: bool\n default: no\n production:\n description:\n - Install dependencies in production mode, excluding devDependencies.\n required: false\n type: bool\n default: no\n registry:\n description:\n - The registry to install modules from.\n required: false\n type: str\n state:\n description:\n - The state of the node.js library.\n required: false\n type: str\n default: present\n choices: [ \"present\", \"absent\", \"latest\" ]\n no_optional:\n description:\n - Use the C(--no-optional) flag when installing.\n type: bool\n default: no\n version_added: 2.0.0\nrequirements:\n - npm installed in bin path (recommended /usr/local/bin)\n'''\n\nEXAMPLES = r'''\n- name: Install \"coffee-script\" node.js package.\n community.general.npm:\n name: coffee-script\n path: /app/location\n\n- name: Install \"coffee-script\" node.js package on version 1.6.1.\n community.general.npm:\n name: coffee-script\n version: '1.6.1'\n path: /app/location\n\n- name: Install \"coffee-script\" node.js package globally.\n community.general.npm:\n name: coffee-script\n global: yes\n\n- name: Remove the globally package \"coffee-script\".\n community.general.npm:\n name: coffee-script\n global: yes\n state: absent\n\n- name: Install \"coffee-script\" node.js package from custom registry.\n community.general.npm:\n name: coffee-script\n registry: 'http://registry.mysite.com'\n\n- name: Install packages based on package.json.\n community.general.npm:\n path: /app/location\n\n- name: Update packages based on package.json to 
their latest version.\n community.general.npm:\n path: /app/location\n state: latest\n\n- name: Install packages based on package.json using the npm installed with nvm v0.10.1.\n community.general.npm:\n path: /app/location\n executable: /opt/nvm/v0.10.1/bin/npm\n state: present\n'''\n\nimport json\nimport os\nimport re\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils._text import to_native\n\n\nclass Npm(object):\n def __init__(self, module, **kwargs):\n self.module = module\n self.glbl = kwargs['glbl']\n self.name = kwargs['name']\n self.version = kwargs['version']\n self.path = kwargs['path']\n self.registry = kwargs['registry']\n self.production = kwargs['production']\n self.ignore_scripts = kwargs['ignore_scripts']\n self.unsafe_perm = kwargs['unsafe_perm']\n self.state = kwargs['state']\n self.no_optional = kwargs['no_optional']\n\n if kwargs['executable']:\n self.executable = kwargs['executable'].split(' ')\n else:\n self.executable = [module.get_bin_path('npm', True)]\n\n if kwargs['version'] and self.state != 'absent':\n self.name_version = self.name + '@' + str(self.version)\n else:\n self.name_version = self.name\n\n def _exec(self, args, run_in_check_mode=False, check_rc=True, add_package_name=True):\n if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):\n cmd = self.executable + args\n\n if self.glbl:\n cmd.append('--global')\n if self.production and ('install' in cmd or 'update' in cmd):\n cmd.append('--production')\n if self.ignore_scripts:\n cmd.append('--ignore-scripts')\n if self.unsafe_perm:\n cmd.append('--unsafe-perm')\n if self.name and add_package_name:\n cmd.append(self.name_version)\n if self.registry:\n cmd.append('--registry')\n cmd.append(self.registry)\n if self.no_optional:\n cmd.append('--no-optional')\n\n # If path is specified, cd into that path and run the command.\n cwd = None\n if self.path:\n if not os.path.exists(self.path):\n os.makedirs(self.path)\n if not os.path.isdir(self.path):\n self.module.fail_json(msg=\"path %s is not a directory\" % self.path)\n cwd = self.path\n\n rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)\n return out\n return ''\n\n def list(self):\n cmd = ['list', '--json', '--long']\n\n installed = list()\n missing = list()\n data = {}\n try:\n data = json.loads(self._exec(cmd, True, False, False) or '{}')\n except (getattr(json, 'JSONDecodeError', ValueError)) as e:\n self.module.fail_json(msg=\"Failed to parse NPM output with error %s\" % to_native(e))\n if 'dependencies' in data:\n for dep in data['dependencies']:\n if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']:\n missing.append(dep)\n elif 'invalid' in data['dependencies'][dep] and data['dependencies'][dep]['invalid']:\n missing.append(dep)\n else:\n installed.append(dep)\n if self.name and self.name not in installed:\n missing.append(self.name)\n # Named dependency not installed\n else:\n missing.append(self.name)\n\n return installed, missing\n\n def install(self):\n return self._exec(['install'])\n\n def ci_install(self):\n return self._exec(['ci'])\n\n def update(self):\n return self._exec(['update'])\n\n def uninstall(self):\n return self._exec(['uninstall'])\n\n def list_outdated(self):\n outdated = list()\n data = self._exec(['outdated'], True, False)\n for dep in data.splitlines():\n if dep:\n # node.js v0.10.22 changed the `npm outdated` module separator\n # from \"@\" to \" \". 
Split on both for backwards compatibility.\n pkg, other = re.split(r'\\s|@', dep, 1)\n outdated.append(pkg)\n\n return outdated\n\n\ndef main():\n arg_spec = dict(\n name=dict(default=None, type='str'),\n path=dict(default=None, type='path'),\n version=dict(default=None, type='str'),\n production=dict(default=False, type='bool'),\n executable=dict(default=None, type='path'),\n registry=dict(default=None, type='str'),\n state=dict(default='present', choices=['present', 'absent', 'latest']),\n ignore_scripts=dict(default=False, type='bool'),\n unsafe_perm=dict(default=False, type='bool'),\n ci=dict(default=False, type='bool'),\n no_optional=dict(default=False, type='bool'),\n )\n arg_spec['global'] = dict(default=False, type='bool')\n module = AnsibleModule(\n argument_spec=arg_spec,\n supports_check_mode=True\n )\n\n name = module.params['name']\n path = module.params['path']\n version = module.params['version']\n glbl = module.params['global']\n production = module.params['production']\n executable = module.params['executable']\n registry = module.params['registry']\n state = module.params['state']\n ignore_scripts = module.params['ignore_scripts']\n unsafe_perm = module.params['unsafe_perm']\n ci = module.params['ci']\n no_optional = module.params['no_optional']\n\n if not path and not glbl:\n module.fail_json(msg='path must be specified when not using global')\n if state == 'absent' and not name:\n module.fail_json(msg='uninstalling a package is only available for named packages')\n\n npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production,\n executable=executable, registry=registry, ignore_scripts=ignore_scripts,\n unsafe_perm=unsafe_perm, state=state, no_optional=no_optional)\n\n changed = False\n if ci:\n npm.ci_install()\n changed = True\n elif state == 'present':\n installed, missing = npm.list()\n if missing:\n changed = True\n npm.install()\n elif state == 'latest':\n installed, missing = npm.list()\n outdated = npm.list_outdated()\n if missing:\n changed = True\n npm.install()\n if outdated:\n changed = True\n npm.update()\n else: # absent\n installed, missing = npm.list()\n if name in installed:\n changed = True\n npm.uninstall()\n\n module.exit_json(changed=changed)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/packaging/language/npm.py"}]} | 3,826 | 563 |
gh_patches_debug_30420 | rasdani/github-patches | git_diff | PrefectHQ__prefect-347 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add back `environment.yml` file
I realized why we might want to maintain an `environment.yml` file in parallel with our `requirements.txt` file: `requirements.txt` will be installed via `pip`, whereas if you create an environment via `conda`, the packages will be installed / maintained via `conda`. This can be useful for those who try to `conda install` everything (since it has different package version logic + handles non-python dependencies).
</issue>
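For a concrete sense of what keeping the two files in parallel means, here is a rough sketch — the helper below is hypothetical and not part of the Prefect codebase — of deriving a conda-style environment listing from the same `requirements.txt`:

```python
# Hypothetical helper, not part of Prefect: mirror requirements.txt into a
# conda environment.yml-style document so both installers see the same pins.
from pathlib import Path


def requirements_to_environment(req_path="requirements.txt", env_name="prefect"):
    reqs = [
        line.strip()
        for line in Path(req_path).read_text().splitlines()
        if line.strip() and not line.startswith("#")
    ]
    # Real conda files usually nest pip-only packages under a "- pip:" sub-list;
    # this sketch keeps everything flat for brevity.
    lines = ["name: {}".format(env_name), "dependencies:"]
    lines.extend("  - {}".format(req) for req in reqs)
    return "\n".join(lines)


if __name__ == "__main__":
    print(requirements_to_environment())
```

Whether such a file is generated or maintained by hand, the trade-off described above stands: conda users get conda's resolver and non-Python dependencies, while pip users keep `requirements.txt`.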
<code>
[start of setup.py]
1 # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula
2
3 from setuptools import find_packages, setup
4
5 import sys
6 import versioneer
7
8 install_requires = [
9 "click >= 6.7, < 7.0",
10 "cloudpickle >= 0.6.0",
11 "croniter >= 0.3.23, < 0.4",
12 "cryptography >= 2.2.2, < 3.0",
13 "dask >= 0.18, < 0.19",
14 "distributed >= 1.21.8, < 2.0",
15 "docker >= 3.4.1, < 3.5",
16 "marshmallow == 3.0.0b19",
17 "marshmallow-oneofschema >= 2.0.0b2, < 3.0",
18 "mypy >= 0.600, < 0.700",
19 "mypy_extensions >= 0.4.0, < 0.5",
20 "pendulum >= 2.0.4, < 3.0",
21 "python-dateutil >= 2.7.3, < 3.0",
22 "requests >= 2.20, < 3.0",
23 "toml >= 0.9.4, < 1.0",
24 "typing >= 3.6.4, < 4.0",
25 "typing_extensions >= 3.6.4, < 4.0",
26 "xxhash >= 1.2.0, < 2.0",
27 ]
28
29 templates = ["jinja2 >= 2.0, < 3.0"]
30 viz = ["bokeh == 0.13.0", "graphviz >= 0.8.3"]
31 dev = [
32 "pre-commit",
33 "pytest >= 3.8, < 4.0",
34 "pytest-cov",
35 "pytest-env",
36 "pytest-xdist",
37 "Pygments == 2.2.0",
38 ]
39
40 if sys.version_info >= (3, 6):
41 dev += ["black"]
42
43 extras = {
44 "dev": dev + viz,
45 "viz": viz,
46 "templates": templates,
47 "all_extras": dev + templates + viz,
48 }
49
50 setup(
51 name="prefect",
52 version=versioneer.get_version(),
53 cmdclass=versioneer.get_cmdclass(),
54 description="",
55 long_description=open("README.md").read(),
56 url="https://www.github.com/prefecthq/prefect",
57 author="Prefect Technologies, Inc.",
58 author_email="[email protected]",
59 install_requires=install_requires,
60 extras_require=extras,
61 scripts=[],
62 packages=find_packages(where="src"),
63 package_dir={"": "src"},
64 include_package_data=True,
65 entry_points={"console_scripts": ["prefect=prefect.cli:cli"]},
66 )
67
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -2,51 +2,39 @@
from setuptools import find_packages, setup
+import configparser
import sys
import versioneer
-install_requires = [
- "click >= 6.7, < 7.0",
- "cloudpickle >= 0.6.0",
- "croniter >= 0.3.23, < 0.4",
- "cryptography >= 2.2.2, < 3.0",
- "dask >= 0.18, < 0.19",
- "distributed >= 1.21.8, < 2.0",
- "docker >= 3.4.1, < 3.5",
- "marshmallow == 3.0.0b19",
- "marshmallow-oneofschema >= 2.0.0b2, < 3.0",
- "mypy >= 0.600, < 0.700",
- "mypy_extensions >= 0.4.0, < 0.5",
- "pendulum >= 2.0.4, < 3.0",
- "python-dateutil >= 2.7.3, < 3.0",
- "requests >= 2.20, < 3.0",
- "toml >= 0.9.4, < 1.0",
- "typing >= 3.6.4, < 4.0",
- "typing_extensions >= 3.6.4, < 4.0",
- "xxhash >= 1.2.0, < 2.0",
-]
+config = configparser.ConfigParser()
+config.read("requirements.ini")
-templates = ["jinja2 >= 2.0, < 3.0"]
-viz = ["bokeh == 0.13.0", "graphviz >= 0.8.3"]
-dev = [
- "pre-commit",
- "pytest >= 3.8, < 4.0",
- "pytest-cov",
- "pytest-env",
- "pytest-xdist",
- "Pygments == 2.2.0",
-]
+## base requirements
+install_requires = ["".join(req) for req in config["base"].items()]
-if sys.version_info >= (3, 6):
- dev += ["black"]
+## section dependencies
+includes = {}
+for section in config.sections():
+ includes[section] = config[section].pop("include", "").split(",")
extras = {
- "dev": dev + viz,
- "viz": viz,
- "templates": templates,
- "all_extras": dev + templates + viz,
+ "dev": ["".join(req) for req in config["dev"].items()],
+ "viz": ["".join(req) for req in config["viz"].items()],
+ "templates": ["".join(req) for req in config["templates"].items()],
}
+## process include keyword for related sections
+for section in extras:
+ for other in includes[section]:
+ extras[section] += extras.get(other.strip(), [])
+
+
+if sys.version_info >= (3, 6):
+ extras["dev"] += ["black"]
+
+extras["all_extras"] = extras["dev"] + extras["viz"] + extras["templates"]
+
+
setup(
name="prefect",
version=versioneer.get_version(),
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -2,51 +2,39 @@\n \n from setuptools import find_packages, setup\n \n+import configparser\n import sys\n import versioneer\n \n-install_requires = [\n- \"click >= 6.7, < 7.0\",\n- \"cloudpickle >= 0.6.0\",\n- \"croniter >= 0.3.23, < 0.4\",\n- \"cryptography >= 2.2.2, < 3.0\",\n- \"dask >= 0.18, < 0.19\",\n- \"distributed >= 1.21.8, < 2.0\",\n- \"docker >= 3.4.1, < 3.5\",\n- \"marshmallow == 3.0.0b19\",\n- \"marshmallow-oneofschema >= 2.0.0b2, < 3.0\",\n- \"mypy >= 0.600, < 0.700\",\n- \"mypy_extensions >= 0.4.0, < 0.5\",\n- \"pendulum >= 2.0.4, < 3.0\",\n- \"python-dateutil >= 2.7.3, < 3.0\",\n- \"requests >= 2.20, < 3.0\",\n- \"toml >= 0.9.4, < 1.0\",\n- \"typing >= 3.6.4, < 4.0\",\n- \"typing_extensions >= 3.6.4, < 4.0\",\n- \"xxhash >= 1.2.0, < 2.0\",\n-]\n+config = configparser.ConfigParser()\n+config.read(\"requirements.ini\")\n \n-templates = [\"jinja2 >= 2.0, < 3.0\"]\n-viz = [\"bokeh == 0.13.0\", \"graphviz >= 0.8.3\"]\n-dev = [\n- \"pre-commit\",\n- \"pytest >= 3.8, < 4.0\",\n- \"pytest-cov\",\n- \"pytest-env\",\n- \"pytest-xdist\",\n- \"Pygments == 2.2.0\",\n-]\n+## base requirements\n+install_requires = [\"\".join(req) for req in config[\"base\"].items()]\n \n-if sys.version_info >= (3, 6):\n- dev += [\"black\"]\n+## section dependencies\n+includes = {}\n+for section in config.sections():\n+ includes[section] = config[section].pop(\"include\", \"\").split(\",\")\n \n extras = {\n- \"dev\": dev + viz,\n- \"viz\": viz,\n- \"templates\": templates,\n- \"all_extras\": dev + templates + viz,\n+ \"dev\": [\"\".join(req) for req in config[\"dev\"].items()],\n+ \"viz\": [\"\".join(req) for req in config[\"viz\"].items()],\n+ \"templates\": [\"\".join(req) for req in config[\"templates\"].items()],\n }\n \n+## process include keyword for related sections\n+for section in extras:\n+ for other in includes[section]:\n+ extras[section] += extras.get(other.strip(), [])\n+\n+\n+if sys.version_info >= (3, 6):\n+ extras[\"dev\"] += [\"black\"]\n+\n+extras[\"all_extras\"] = extras[\"dev\"] + extras[\"viz\"] + extras[\"templates\"]\n+\n+\n setup(\n name=\"prefect\",\n version=versioneer.get_version(),\n", "issue": "Add back `environment.yml` file\nI realized why we might want to maintain an `environment.yml` file in parallel with our `requirements.txt` file: `requirements.txt` will be installed via `pip`, whereas if you create an environment via `conda`, the packages will be installed / maintained via `conda`. 
This can be useful for those who try to `conda install` everything (since it has different package version logic + handles non-python dependencies).\n", "before_files": [{"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\n\nfrom setuptools import find_packages, setup\n\nimport sys\nimport versioneer\n\ninstall_requires = [\n \"click >= 6.7, < 7.0\",\n \"cloudpickle >= 0.6.0\",\n \"croniter >= 0.3.23, < 0.4\",\n \"cryptography >= 2.2.2, < 3.0\",\n \"dask >= 0.18, < 0.19\",\n \"distributed >= 1.21.8, < 2.0\",\n \"docker >= 3.4.1, < 3.5\",\n \"marshmallow == 3.0.0b19\",\n \"marshmallow-oneofschema >= 2.0.0b2, < 3.0\",\n \"mypy >= 0.600, < 0.700\",\n \"mypy_extensions >= 0.4.0, < 0.5\",\n \"pendulum >= 2.0.4, < 3.0\",\n \"python-dateutil >= 2.7.3, < 3.0\",\n \"requests >= 2.20, < 3.0\",\n \"toml >= 0.9.4, < 1.0\",\n \"typing >= 3.6.4, < 4.0\",\n \"typing_extensions >= 3.6.4, < 4.0\",\n \"xxhash >= 1.2.0, < 2.0\",\n]\n\ntemplates = [\"jinja2 >= 2.0, < 3.0\"]\nviz = [\"bokeh == 0.13.0\", \"graphviz >= 0.8.3\"]\ndev = [\n \"pre-commit\",\n \"pytest >= 3.8, < 4.0\",\n \"pytest-cov\",\n \"pytest-env\",\n \"pytest-xdist\",\n \"Pygments == 2.2.0\",\n]\n\nif sys.version_info >= (3, 6):\n dev += [\"black\"]\n\nextras = {\n \"dev\": dev + viz,\n \"viz\": viz,\n \"templates\": templates,\n \"all_extras\": dev + templates + viz,\n}\n\nsetup(\n name=\"prefect\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n description=\"\",\n long_description=open(\"README.md\").read(),\n url=\"https://www.github.com/prefecthq/prefect\",\n author=\"Prefect Technologies, Inc.\",\n author_email=\"[email protected]\",\n install_requires=install_requires,\n extras_require=extras,\n scripts=[],\n packages=find_packages(where=\"src\"),\n package_dir={\"\": \"src\"},\n include_package_data=True,\n entry_points={\"console_scripts\": [\"prefect=prefect.cli:cli\"]},\n)\n", "path": "setup.py"}]} | 1,395 | 787 |
gh_patches_debug_7434 | rasdani/github-patches | git_diff | freedomofpress__securedrop-2640 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Brazil flag for nice display in the language picker
Since the `pt_BR` locale is ready to go (see #2630), we should add a Brazil flag to the language picker.
</issue>
<code>
[start of securedrop/version.py]
1 __version__ = '0.4.4'
2
[end of securedrop/version.py]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # SecureDrop documentation build configuration file, created by
4 # sphinx-quickstart on Tue Oct 13 12:08:52 2015.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import sys
16 import os
17 import shlex
18
19 # Detect if we're being built by Read the Docs
20 # https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs
21 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
22
23 # If extensions (or modules to document with autodoc) are in another directory,
24 # add these directories to sys.path here. If the directory is relative to the
25 # documentation root, use os.path.abspath to make it absolute, like shown here.
26 #sys.path.insert(0, os.path.abspath('.'))
27
28 # -- General configuration ------------------------------------------------
29
30 # If your documentation needs a minimal Sphinx version, state it here.
31 #needs_sphinx = '1.0'
32
33 # Add any Sphinx extension module names here, as strings. They can be
34 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
35 # ones.
36 extensions = ['sphinx.ext.todo', ]
37
38 # Add any paths that contain templates here, relative to this directory.
39 templates_path = ['_templates']
40
41 # The suffix(es) of source filenames.
42 # You can specify multiple suffix as a list of string:
43 # source_suffix = ['.rst', '.md']
44 source_suffix = '.rst'
45
46 # The encoding of source files.
47 #source_encoding = 'utf-8-sig'
48
49 # The master toctree document.
50 master_doc = 'index'
51
52 # General information about the project.
53 project = u'SecureDrop'
54 copyright = u'2017, Freedom of the Press Foundation'
55 author = u'SecureDrop Team and Contributors'
56
57 # The version info for the project you're documenting, acts as replacement for
58 # |version| and |release|, also used in various other places throughout the
59 # built documents.
60 #
61 # The short X.Y version.
62 version = '0.4.4'
63 # The full version, including alpha/beta/rc tags.
64 release = '0.4.4'
65
66 # The language for content autogenerated by Sphinx. Refer to documentation
67 # for a list of supported languages.
68 #
69 # This is also used if you do content translation via gettext catalogs.
70 # Usually you set "language" from the command line for these cases.
71 language = None
72
73 # There are two options for replacing |today|: either, you set today to some
74 # non-false value, then it is used:
75 #today = ''
76 # Else, today_fmt is used as the format for a strftime call.
77 #today_fmt = '%B %d, %Y'
78
79 # List of patterns, relative to source directory, that match files and
80 # directories to ignore when looking for source files.
81 exclude_patterns = ['_build']
82
83 # The reST default role (used for this markup: `text`) to use for all
84 # documents.
85 #default_role = None
86
87 # If true, '()' will be appended to :func: etc. cross-reference text.
88 #add_function_parentheses = True
89
90 # If true, the current module name will be prepended to all description
91 # unit titles (such as .. function::).
92 #add_module_names = True
93
94 # If true, sectionauthor and moduleauthor directives will be shown in the
95 # output. They are ignored by default.
96 #show_authors = False
97
98 # The name of the Pygments (syntax highlighting) style to use.
99 pygments_style = 'sphinx'
100
101 # A list of ignored prefixes for module index sorting.
102 #modindex_common_prefix = []
103
104 # If true, keep warnings as "system message" paragraphs in the built documents.
105 #keep_warnings = False
106
107 # If true, `todo` and `todoList` produce output, else they produce nothing.
108 todo_include_todos = False
109
110
111 # -- Options for HTML output ----------------------------------------------
112
113 # The theme to use for HTML and HTML Help pages. See the documentation for
114 # a list of builtin themes.
115 if on_rtd:
116 html_theme = 'default'
117 else:
118 try:
119 # If you want to build the docs locally using the RTD theme,
120 # you may need to install it: ``pip install sphinx_rtd_theme``.
121 # https://github.com/snide/sphinx_rtd_theme#via-package
122 import sphinx_rtd_theme
123 html_theme = "sphinx_rtd_theme"
124 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
125 except ImportError:
126 # This theme is included with Sphinx and is quite nice (based
127 # on the Pocoo themes), but since we're using the RTD theme
128 # for the production docs, it's best to use that to avoid
129 # issues due to discrepancies between the themes.
130 html_theme = 'alabaster'
131
132 # Theme options are theme-specific and customize the look and feel of a theme
133 # further. For a list of options available for each theme, see the
134 # documentation.
135 #html_theme_options = {}
136
137 # Add any paths that contain custom themes here, relative to this directory.
138 #html_theme_path = []
139
140 # The name for this set of Sphinx documents. If None, it defaults to
141 # "<project> v<release> documentation".
142 #html_title = None
143
144 # A shorter title for the navigation bar. Default is the same as html_title.
145 #html_short_title = None
146
147 # The name of an image file (relative to this directory) to place at the top
148 # of the sidebar.
149 html_logo = '../securedrop/static/i/favicon.png'
150
151 # The name of an image file (within the static path) to use as favicon of the
152 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
153 # pixels large.
154 #html_favicon = None
155
156 # Add any paths that contain custom static files (such as style sheets) here,
157 # relative to this directory. They are copied after the builtin static files,
158 # so a file named "default.css" will overwrite the builtin "default.css".
159 # html_static_path = ['_static']
160
161 # Add any extra paths that contain custom files (such as robots.txt or
162 # .htaccess) here, relative to this directory. These files are copied
163 # directly to the root of the documentation.
164 #html_extra_path = []
165
166 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
167 # using the given strftime format.
168 #html_last_updated_fmt = '%b %d, %Y'
169
170 # If true, SmartyPants will be used to convert quotes and dashes to
171 # typographically correct entities.
172 #html_use_smartypants = True
173
174 # Custom sidebar templates, maps document names to template names.
175 #html_sidebars = {}
176
177 # Additional templates that should be rendered to pages, maps page names to
178 # template names.
179 #html_additional_pages = {}
180
181 # If false, no module index is generated.
182 #html_domain_indices = True
183
184 # If false, no index is generated.
185 #html_use_index = True
186
187 # If true, the index is split into individual pages for each letter.
188 #html_split_index = False
189
190 # If true, links to the reST sources are added to the pages.
191 #html_show_sourcelink = True
192
193 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
194 #html_show_sphinx = True
195
196 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
197 #html_show_copyright = True
198
199 # If true, an OpenSearch description file will be output, and all pages will
200 # contain a <link> tag referring to it. The value of this option must be the
201 # base URL from which the finished HTML is served.
202 #html_use_opensearch = ''
203
204 # This is the file name suffix for HTML files (e.g. ".xhtml").
205 #html_file_suffix = None
206
207 # Language to be used for generating the HTML full-text search index.
208 # Sphinx supports the following languages:
209 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
210 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
211 #html_search_language = 'en'
212
213 # A dictionary with options for the search language support, empty by default.
214 # Now only 'ja' uses this config value
215 #html_search_options = {'type': 'default'}
216
217 # The name of a javascript file (relative to the configuration directory) that
218 # implements a search results scorer. If empty, the default will be used.
219 #html_search_scorer = 'scorer.js'
220
221 # Output file base name for HTML help builder.
222 htmlhelp_basename = 'SecureDropdoc'
223
224 # -- Options for LaTeX output ---------------------------------------------
225
226 latex_elements = {
227 # The paper size ('letterpaper' or 'a4paper').
228 #'papersize': 'letterpaper',
229
230 # The font size ('10pt', '11pt' or '12pt').
231 #'pointsize': '10pt',
232
233 # Additional stuff for the LaTeX preamble.
234 #'preamble': '',
235
236 # Latex figure (float) alignment
237 #'figure_align': 'htbp',
238 }
239
240 # Grouping the document tree into LaTeX files. List of tuples
241 # (source start file, target name, title,
242 # author, documentclass [howto, manual, or own class]).
243 latex_documents = [
244 (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',
245 author, 'manual'),
246 ]
247
248 # The name of an image file (relative to this directory) to place at the top of
249 # the title page.
250 #latex_logo = None
251
252 # For "manual" documents, if this is true, then toplevel headings are parts,
253 # not chapters.
254 #latex_use_parts = False
255
256 # If true, show page references after internal links.
257 #latex_show_pagerefs = False
258
259 # If true, show URL addresses after external links.
260 #latex_show_urls = False
261
262 # Documents to append as an appendix to all manuals.
263 #latex_appendices = []
264
265 # If false, no module index is generated.
266 #latex_domain_indices = True
267
268
269 # -- Options for manual page output ---------------------------------------
270
271 # One entry per manual page. List of tuples
272 # (source start file, name, description, authors, manual section).
273 man_pages = [
274 (master_doc, 'securedrop', u'SecureDrop Documentation',
275 [author], 1)
276 ]
277
278 # If true, show URL addresses after external links.
279 #man_show_urls = False
280
281
282 # -- Options for Texinfo output -------------------------------------------
283
284 # Grouping the document tree into Texinfo files. List of tuples
285 # (source start file, target name, title, author,
286 # dir menu entry, description, category)
287 texinfo_documents = [
288 (master_doc, 'SecureDrop', u'SecureDrop Documentation',
289 author, 'SecureDrop', 'One line description of project.',
290 'Miscellaneous'),
291 ]
292
293 # Documents to append as an appendix to all manuals.
294 #texinfo_appendices = []
295
296 # If false, no module index is generated.
297 #texinfo_domain_indices = True
298
299 # How to display URL addresses: 'footnote', 'no', or 'inline'.
300 #texinfo_show_urls = 'footnote'
301
302 # If true, do not generate a @detailmenu in the "Top" node's menu.
303 #texinfo_no_detailmenu = False
304
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -59,9 +59,9 @@
# built documents.
#
# The short X.Y version.
-version = '0.4.4'
+version = '0.5-rc2'
# The full version, including alpha/beta/rc tags.
-release = '0.4.4'
+release = '0.5-rc2'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = '0.4.4'
+__version__ = '0.5-rc2'
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -59,9 +59,9 @@\n # built documents.\n #\n # The short X.Y version.\n-version = '0.4.4'\n+version = '0.5-rc2'\n # The full version, including alpha/beta/rc tags.\n-release = '0.4.4'\n+release = '0.5-rc2'\n \n # The language for content autogenerated by Sphinx. Refer to documentation\n # for a list of supported languages.\ndiff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = '0.4.4'\n+__version__ = '0.5-rc2'\n", "issue": "Add Brazil flag for nice display in the language picker\nSince the `pt_BR` locale is ready to go (see #2630), we should add a Brazil flag to the language picker.\n", "before_files": [{"content": "__version__ = '0.4.4'\n", "path": "securedrop/version.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# SecureDrop documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 13 12:08:52 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\nimport shlex\n\n# Detect if we're being built by Read the Docs\n# https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.todo', ]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'SecureDrop'\ncopyright = u'2017, Freedom of the Press Foundation'\nauthor = u'SecureDrop Team and Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.4.4'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.4.4'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'default'\nelse:\n try:\n # If you want to build the docs locally using the RTD theme,\n # you may need to install it: ``pip install sphinx_rtd_theme``.\n # https://github.com/snide/sphinx_rtd_theme#via-package\n import sphinx_rtd_theme\n html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n except ImportError:\n # This theme is included with Sphinx and is quite nice (based\n # on the Pocoo themes), but since we're using the RTD theme\n # for the production docs, it's best to use that to avoid\n # issues due to discrepancies between the themes.\n html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = '../securedrop/static/i/favicon.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'SecureDropdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n\n# Latex figure (float) alignment\n#'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',\n author, 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'securedrop', u'SecureDrop Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'SecureDrop', u'SecureDrop Documentation',\n author, 'SecureDrop', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}]} | 3,963 | 185 |
gh_patches_debug_13957 | rasdani/github-patches | git_diff | opendatacube__datacube-core-680 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Errors when running against the impending sqlalchemy 1.3 release (in beta)
Originally reported in #667
Datacube consistently fails when run against the current beta version of sqlalchemy. [According to](https://www.sqlalchemy.org/blog/2019/02/08/sqlalchemy-1.3.0b3-released/) the sqlalchemy devs this release "1.3b3 should hopefully be the last beta release for 1.3, as no additional major changes are planned."
This isn't currently a problem, but it will break all of our builds and guides if not resolved before 1.3 is declared stable.
Manually reproduce the error with:
```
pip install sqlalchemy==1.3b3
datacube system init
```
- Either the sqlalchemy 1.3 beta has a bug, which we should report to them.
- Or our own code is doing something incorrect and we should fix it before 1.3 is declared stable.
</issue>
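The failure comes down to how schema-qualified SQL functions are registered with SQLAlchemy. As a condensed sketch of the pattern the accepted diff further down in this entry moves to — an unqualified `name` plus an explicit `packagenames` list — assuming only that SQLAlchemy is installed:

```python
from sqlalchemy import TIMESTAMP
from sqlalchemy.sql.functions import GenericFunction

SCHEMA_NAME = 'agdc'


class CommonTimestamp(GenericFunction):
    type = TIMESTAMP(timezone=True)
    package = 'agdc'
    identifier = 'common_timestamp'
    # Keep the function name unqualified and let packagenames carry the schema,
    # instead of baking "agdc.common_timestamp" into name as a single dotted string.
    name = 'common_timestamp'

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.packagenames = [SCHEMA_NAME]
```

Compiled via `func.agdc.common_timestamp(col)`, this renders as `agdc.common_timestamp(col)`; with the schema baked into `name`, newer SQLAlchemy releases treat the dotted string as one identifier to quote, which is consistent with the failure seen in `datacube system init`.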
<code>
[start of datacube/drivers/postgres/sql.py]
1 # coding=utf-8
2 """
3 Custom types for postgres & sqlalchemy
4 """
5
6 from sqlalchemy import TIMESTAMP
7 from sqlalchemy.dialects.postgresql.ranges import RangeOperators
8 from sqlalchemy.ext.compiler import compiles
9 from sqlalchemy.sql import sqltypes
10 from sqlalchemy.sql.expression import Executable, ClauseElement
11 from sqlalchemy.sql.functions import GenericFunction
12
13 SCHEMA_NAME = 'agdc'
14
15
16 class CreateView(Executable, ClauseElement):
17 def __init__(self, name, select):
18 self.name = name
19 self.select = select
20
21
22 @compiles(CreateView)
23 def visit_create_view(element, compiler, **kw):
24 return "CREATE VIEW %s AS %s" % (
25 element.name,
26 compiler.process(element.select, literal_binds=True)
27 )
28
29
30 TYPES_INIT_SQL = """
31 create or replace function {schema}.common_timestamp(text)
32 returns timestamp with time zone as $$
33 select ($1)::timestamp at time zone 'utc';
34 $$ language sql immutable returns null on null input;
35
36 create type {schema}.float8range as range (
37 subtype = float8,
38 subtype_diff = float8mi
39 );
40 """.format(schema=SCHEMA_NAME)
41
42
43 # pylint: disable=abstract-method
44 class FLOAT8RANGE(RangeOperators, sqltypes.TypeEngine):
45 __visit_name__ = 'FLOAT8RANGE'
46
47
48 @compiles(FLOAT8RANGE)
49 def visit_float8range(element, compiler, **kw):
50 return "FLOAT8RANGE"
51
52
53 # Register the function with SQLAlchemhy.
54 # pylint: disable=too-many-ancestors
55 class CommonTimestamp(GenericFunction):
56 type = TIMESTAMP(timezone=True)
57 package = 'agdc'
58 identifier = 'common_timestamp'
59
60 name = '%s.common_timestamp' % SCHEMA_NAME
61
62
63 # pylint: disable=too-many-ancestors
64 class Float8Range(GenericFunction):
65 type = FLOAT8RANGE
66 package = 'agdc'
67 identifier = 'float8range'
68
69 name = '%s.float8range' % SCHEMA_NAME
70
71
72 class PGNAME(sqltypes.Text):
73 """Postgres 'NAME' type."""
74 __visit_name__ = 'NAME'
75
76
77 @compiles(PGNAME)
78 def visit_name(element, compiler, **kw):
79 return "NAME"
80
81
82 def pg_exists(conn, name):
83 """
84 Does a postgres object exist?
85 :rtype bool
86 """
87 return conn.execute("SELECT to_regclass(%s)", name).scalar() is not None
88
89
90 def pg_column_exists(conn, table, column):
91 """
92 Does a postgres object exist?
93 :rtype bool
94 """
95 return conn.execute("""
96 SELECT 1 FROM pg_attribute
97 WHERE attrelid = to_regclass(%s)
98 AND attname = %s
99 AND NOT attisdropped
100 """, table, column).scalar() is not None
101
102
103 def escape_pg_identifier(engine, name):
104 """
105 Escape identifiers (tables, fields, roles, etc) for inclusion in SQL statements.
106
107 psycopg2 can safely merge query arguments, but cannot do the same for dynamically
108 generating queries.
109
110 See http://initd.org/psycopg/docs/sql.html for more information.
111 """
112 # New (2.7+) versions of psycopg2 have function: extensions.quote_ident()
113 # But it's too bleeding edge right now. We'll ask the server to escape instead, as
114 # these are not performance sensitive.
115 return engine.execute("select quote_ident(%s)", name).scalar()
116
[end of datacube/drivers/postgres/sql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/datacube/drivers/postgres/sql.py b/datacube/drivers/postgres/sql.py
--- a/datacube/drivers/postgres/sql.py
+++ b/datacube/drivers/postgres/sql.py
@@ -57,7 +57,11 @@
package = 'agdc'
identifier = 'common_timestamp'
- name = '%s.common_timestamp' % SCHEMA_NAME
+ name = 'common_timestamp'
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.packagenames = ['%s' % SCHEMA_NAME]
# pylint: disable=too-many-ancestors
@@ -66,7 +70,11 @@
package = 'agdc'
identifier = 'float8range'
- name = '%s.float8range' % SCHEMA_NAME
+ name = 'float8range'
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.packagenames = ['%s' % SCHEMA_NAME]
class PGNAME(sqltypes.Text):
| {"golden_diff": "diff --git a/datacube/drivers/postgres/sql.py b/datacube/drivers/postgres/sql.py\n--- a/datacube/drivers/postgres/sql.py\n+++ b/datacube/drivers/postgres/sql.py\n@@ -57,7 +57,11 @@\n package = 'agdc'\n identifier = 'common_timestamp'\n \n- name = '%s.common_timestamp' % SCHEMA_NAME\n+ name = 'common_timestamp'\n+\n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.packagenames = ['%s' % SCHEMA_NAME]\n \n \n # pylint: disable=too-many-ancestors\n@@ -66,7 +70,11 @@\n package = 'agdc'\n identifier = 'float8range'\n \n- name = '%s.float8range' % SCHEMA_NAME\n+ name = 'float8range'\n+\n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.packagenames = ['%s' % SCHEMA_NAME]\n \n \n class PGNAME(sqltypes.Text):\n", "issue": "Errors when running against the impending sqlalchemy 1.3 release (in beta)\nOriginally reported in #667\r\n\r\nDatacube consistently fails when run against the current beta version of sqlalchemy. [According to](https://www.sqlalchemy.org/blog/2019/02/08/sqlalchemy-1.3.0b3-released/) the sqlalchemy devs this release \"1.3b3 should hopefully be the last beta release for 1.3, as no additional major changes are planned.\"\r\n\r\nThis isn't currently a problem, but it will break all of our builds and guides if not resolved before 1.3 is declared stable.\r\n\r\nManually reproduce the error with:\r\n\r\n```\r\n pip install sqlalchemy==1.3b3\r\n datacube system init\r\n```\r\n\r\n- Either the sqlalchemy 1.3 beta has a bug, which we should report to them.\r\n- Or our own code is doing something incorrect and we should fix it before 1.3 is declared stable.\r\n\nErrors when running against the impending sqlalchemy 1.3 release (in beta)\nOriginally reported in #667\r\n\r\nDatacube consistently fails when run against the current beta version of sqlalchemy. 
[According to](https://www.sqlalchemy.org/blog/2019/02/08/sqlalchemy-1.3.0b3-released/) the sqlalchemy devs this release \"1.3b3 should hopefully be the last beta release for 1.3, as no additional major changes are planned.\"\r\n\r\nThis isn't currently a problem, but it will break all of our builds and guides if not resolved before 1.3 is declared stable.\r\n\r\nManually reproduce the error with:\r\n\r\n```\r\n pip install sqlalchemy==1.3b3\r\n datacube system init\r\n```\r\n\r\n- Either the sqlalchemy 1.3 beta has a bug, which we should report to them.\r\n- Or our own code is doing something incorrect and we should fix it before 1.3 is declared stable.\r\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nCustom types for postgres & sqlalchemy\n\"\"\"\n\nfrom sqlalchemy import TIMESTAMP\nfrom sqlalchemy.dialects.postgresql.ranges import RangeOperators\nfrom sqlalchemy.ext.compiler import compiles\nfrom sqlalchemy.sql import sqltypes\nfrom sqlalchemy.sql.expression import Executable, ClauseElement\nfrom sqlalchemy.sql.functions import GenericFunction\n\nSCHEMA_NAME = 'agdc'\n\n\nclass CreateView(Executable, ClauseElement):\n def __init__(self, name, select):\n self.name = name\n self.select = select\n\n\n@compiles(CreateView)\ndef visit_create_view(element, compiler, **kw):\n return \"CREATE VIEW %s AS %s\" % (\n element.name,\n compiler.process(element.select, literal_binds=True)\n )\n\n\nTYPES_INIT_SQL = \"\"\"\ncreate or replace function {schema}.common_timestamp(text)\nreturns timestamp with time zone as $$\nselect ($1)::timestamp at time zone 'utc';\n$$ language sql immutable returns null on null input;\n\ncreate type {schema}.float8range as range (\n subtype = float8,\n subtype_diff = float8mi\n);\n\"\"\".format(schema=SCHEMA_NAME)\n\n\n# pylint: disable=abstract-method\nclass FLOAT8RANGE(RangeOperators, sqltypes.TypeEngine):\n __visit_name__ = 'FLOAT8RANGE'\n\n\n@compiles(FLOAT8RANGE)\ndef visit_float8range(element, compiler, **kw):\n return \"FLOAT8RANGE\"\n\n\n# Register the function with SQLAlchemhy.\n# pylint: disable=too-many-ancestors\nclass CommonTimestamp(GenericFunction):\n type = TIMESTAMP(timezone=True)\n package = 'agdc'\n identifier = 'common_timestamp'\n\n name = '%s.common_timestamp' % SCHEMA_NAME\n\n\n# pylint: disable=too-many-ancestors\nclass Float8Range(GenericFunction):\n type = FLOAT8RANGE\n package = 'agdc'\n identifier = 'float8range'\n\n name = '%s.float8range' % SCHEMA_NAME\n\n\nclass PGNAME(sqltypes.Text):\n \"\"\"Postgres 'NAME' type.\"\"\"\n __visit_name__ = 'NAME'\n\n\n@compiles(PGNAME)\ndef visit_name(element, compiler, **kw):\n return \"NAME\"\n\n\ndef pg_exists(conn, name):\n \"\"\"\n Does a postgres object exist?\n :rtype bool\n \"\"\"\n return conn.execute(\"SELECT to_regclass(%s)\", name).scalar() is not None\n\n\ndef pg_column_exists(conn, table, column):\n \"\"\"\n Does a postgres object exist?\n :rtype bool\n \"\"\"\n return conn.execute(\"\"\"\n SELECT 1 FROM pg_attribute\n WHERE attrelid = to_regclass(%s)\n AND attname = %s\n AND NOT attisdropped\n \"\"\", table, column).scalar() is not None\n\n\ndef escape_pg_identifier(engine, name):\n \"\"\"\n Escape identifiers (tables, fields, roles, etc) for inclusion in SQL statements.\n\n psycopg2 can safely merge query arguments, but cannot do the same for dynamically\n generating queries.\n\n See http://initd.org/psycopg/docs/sql.html for more information.\n \"\"\"\n # New (2.7+) versions of psycopg2 have function: extensions.quote_ident()\n # But it's too bleeding edge right now. 
We'll ask the server to escape instead, as\n # these are not performance sensitive.\n return engine.execute(\"select quote_ident(%s)\", name).scalar()\n", "path": "datacube/drivers/postgres/sql.py"}]} | 1,944 | 248 |
gh_patches_debug_38519 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3343 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider lees_famous_recipe is broken
During the global build at 2021-10-20-14-42-48, spider **lees_famous_recipe** failed with **0 features** and **130 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/logs/lees_famous_recipe.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/output/lees_famous_recipe.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/output/lees_famous_recipe.geojson))
</issue>
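A build with 0 features and 130 errors suggests the page markup no longer matches what the spider's parsers expect. As a purely illustrative sketch — not the spider's actual code — of the tolerant hour-parsing the accepted diff below gestures at, widening the `except` beyond `KeyError` so one malformed line does not abort the whole page:

```python
# Illustrative only: a tolerant version of "Monday-Friday: 10:30am - 9:00pm" parsing.
DAYS = {'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th',
        'Friday': 'Fr', 'Saturday': 'Sa', 'Sunday': 'Su'}


def safe_store_hours(line):
    """Return 'Mo-Fr 10:30am-9:00pm' style text, or '' when the line is malformed."""
    try:
        days, hours = line.split(': ', 1)
        if '-' in days:
            start, end = days.split('-', 1)
            day_part = DAYS[start.strip()] + '-' + DAYS[end.strip()]
        else:
            day_part = DAYS[days.strip()]
        return day_part + ' ' + hours.replace(' ', '')
    except (KeyError, IndexError, ValueError):
        # Unexpected heading or reshuffled markup: skip the line instead of raising.
        return ""


print(safe_store_hours("Monday-Friday: 10:30am - 9:00pm"))  # -> Mo-Fr 10:30am-9:00pm
print(safe_store_hours("Holiday hours vary"))                # -> ""
```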
<code>
[start of locations/spiders/lees_famous_recipe.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 from locations.items import GeojsonPointItem
4 import re
5
6 daysKey = {
7 'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th',
8 'Friday': 'Fr', 'Saturday': 'Sa', 'Sunday': 'Su'
9 }
10
11
12 class LeesFamousRecipeSpider(scrapy.Spider):
13 name = "lees_famous_recipe"
14 item_attributes = { 'brand': "Lee's Famous Recipe Chicken" }
15 allowed_domains = ["www.leesfamousrecipe.com"]
16 start_urls = (
17 'https://www.leesfamousrecipe.com/locations/all',
18 )
19
20 def parse_phone(self, phone):
21 phone = phone.replace('.','')
22 phone = phone.replace(')','')
23 phone = phone.replace('(','')
24 phone = phone.replace('_','')
25 phone = phone.replace('-','')
26 phone = phone.replace('+','')
27 phone = phone.replace(' ','')
28 return phone
29
30 def store_hours(self, hours):
31 try:
32 days = hours.split(': ')[0].strip()
33 if('-' in days):
34 startDay = daysKey[days.split('-')[0]]
35 endDay = daysKey[days.split('-')[1]]
36 dayOutput = startDay + "-" + endDay
37 else:
38 dayOutput = daysKey[days]
39
40 bothHours = hours.split(': ')[1].replace(' ','')
41 openHours = bothHours.split("-")[0]
42 closeHours = bothHours.split("-")[1]
43
44 if("am" in openHours):
45 openHours = openHours.replace("am","")
46 if(":" in openHours):
47 openH = openHours.split(":")[0]
48 openM = openHours.split(":")[1]
49 else:
50 openH = openHours
51 openM = "00"
52 openHours = openH + ":" + openM
53
54 if("pm" in openHours):
55 openHours = openHours.replace("pm","")
56 if(":" in openHours):
57 openH = openHours.split(":")[0]
58 openM = openHours.split(":")[1]
59 else:
60 openH = openHours
61 openM = "00"
62 openH = str(int(openH) + 12)
63 openHours = openH + ":" + openM
64
65 if("am" in closeHours):
66 closeHours = closeHours.replace("am","")
67 if(":" in closeHours):
68 closeH = closeHours.split(":")[0]
69 closeM = closeHours.split(":")[1]
70 else:
71 closeH = closeHours
72 closeM = "00"
73 closeHours = closeH + ":" + closeM
74
75 if("pm" in closeHours):
76 closeHours = closeHours.replace("pm","")
77 if(":" in closeHours):
78 closeH = closeHours.split(":")[0]
79 closeM = closeHours.split(":")[1]
80 else:
81 closeH = closeHours
82 closeM = "00"
83 closeH = str(int(closeH) + 12)
84 closeHours = closeH + ":" + closeM
85 return dayOutput +' '+ openHours.replace(' ','') + "-" + closeHours + ';'
86 except KeyError:
87 return ""
88
89 def parse(self, response):
90 if("https://www.leesfamousrecipe.com/locations/all" == response.url):
91 for match in response.xpath("//div[contains(@class,'field-content')]/a/@href"):
92 request = scrapy.Request(match.extract())
93 yield request
94 else:
95 nameString = response.xpath("//h1[@class='node-title']/text()").extract_first().strip()
96 shortString = response.xpath("//h1[@class='node-title']/small/text()").extract_first()
97 if shortString is None:
98 shortString = ""
99 nameString = nameString + " " + shortString
100 nameString = nameString.strip()
101
102 scriptBody = response.xpath("//script[@type='text/javascript' and contains(.,'latitude')]/text()").extract_first()
103 latString = re.findall("latitude\":\"(.*?)\"", scriptBody)[0]
104 lonString = re.findall("longitude\":\"(.*?)\"", scriptBody)[0]
105
106 openingHoursString = ""
107 firstHourBlock = response.xpath("//div[contains(@class,'field-name-field-hours-summer')]/div/div/p/br/parent::p/text()")
108 for hourLine in firstHourBlock:
109 openingHoursString = openingHoursString +' '+self.store_hours(hourLine.extract())
110 openingHoursString = openingHoursString.strip(';').strip()
111
112
113 if("british-columbia" in response.url):
114 countryString = "CA"
115 stateString = "BC"
116 else:
117 countryString = "US"
118 mapUrl = response.xpath("//div[contains(@class,'map-link')]/div/a/@href").extract_first()
119 stateString = re.findall(r'(?<=\+)(.*?)(?=\+)', mapUrl)[len(re.findall(r'(?<=\+)(.*?)(?=\+)', mapUrl)) - 2].strip().replace('%2C','')
120
121 yield GeojsonPointItem(
122 ref=nameString,
123 addr_full=response.xpath("//div[@class='street-address']/text()").extract_first().strip(),
124 city=response.xpath("//div[@class='city-state-zip']/span[@class='locality']/text()").extract_first().strip(),
125 opening_hours=openingHoursString,
126 state=stateString,
127 postcode=response.xpath("//div[@class='city-state-zip']/span[@class='postal-code']/text()").extract_first().strip(),
128 phone=self.parse_phone(response.xpath("//div[contains(@class,'field-name-field-phone')]/div/div/text()").extract_first().strip()),
129 country = countryString,
130 lat=float(latString),
131 lon=float(lonString),
132 )
133
134
[end of locations/spiders/lees_famous_recipe.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/lees_famous_recipe.py b/locations/spiders/lees_famous_recipe.py
--- a/locations/spiders/lees_famous_recipe.py
+++ b/locations/spiders/lees_famous_recipe.py
@@ -83,7 +83,7 @@
closeH = str(int(closeH) + 12)
closeHours = closeH + ":" + closeM
return dayOutput +' '+ openHours.replace(' ','') + "-" + closeHours + ';'
- except KeyError:
+ except (KeyError, IndexError):
return ""
def parse(self, response):
@@ -99,9 +99,8 @@
nameString = nameString + " " + shortString
nameString = nameString.strip()
- scriptBody = response.xpath("//script[@type='text/javascript' and contains(.,'latitude')]/text()").extract_first()
- latString = re.findall("latitude\":\"(.*?)\"", scriptBody)[0]
- lonString = re.findall("longitude\":\"(.*?)\"", scriptBody)[0]
+ googleMapSrc = response.xpath("//*[@id='block-system-main']/div/div/iframe").extract_first()
+ [latString, lonString] = re.findall("center=(.*?)\"", googleMapSrc)[0].split(',')
openingHoursString = ""
firstHourBlock = response.xpath("//div[contains(@class,'field-name-field-hours-summer')]/div/div/p/br/parent::p/text()")
@@ -116,7 +115,7 @@
else:
countryString = "US"
mapUrl = response.xpath("//div[contains(@class,'map-link')]/div/a/@href").extract_first()
- stateString = re.findall(r'(?<=\+)(.*?)(?=\+)', mapUrl)[len(re.findall(r'(?<=\+)(.*?)(?=\+)', mapUrl)) - 2].strip().replace('%2C','')
+ stateString = response.xpath("//div[contains(@class,'adr')]/div[2]/span[2]/text()").extract_first()
yield GeojsonPointItem(
ref=nameString,
@@ -125,7 +124,7 @@
opening_hours=openingHoursString,
state=stateString,
postcode=response.xpath("//div[@class='city-state-zip']/span[@class='postal-code']/text()").extract_first().strip(),
- phone=self.parse_phone(response.xpath("//div[contains(@class,'field-name-field-phone')]/div/div/text()").extract_first().strip()),
+ phone=self.parse_phone(response.xpath("//div[contains(@class,'adr')]/div[3]/text()").extract_first().strip()),
country = countryString,
lat=float(latString),
lon=float(lonString),
| {"golden_diff": "diff --git a/locations/spiders/lees_famous_recipe.py b/locations/spiders/lees_famous_recipe.py\n--- a/locations/spiders/lees_famous_recipe.py\n+++ b/locations/spiders/lees_famous_recipe.py\n@@ -83,7 +83,7 @@\n closeH = str(int(closeH) + 12)\n closeHours = closeH + \":\" + closeM\n return dayOutput +' '+ openHours.replace(' ','') + \"-\" + closeHours + ';'\n- except KeyError:\n+ except (KeyError, IndexError):\n return \"\"\n \n def parse(self, response):\n@@ -99,9 +99,8 @@\n nameString = nameString + \" \" + shortString\n nameString = nameString.strip()\n \n- scriptBody = response.xpath(\"//script[@type='text/javascript' and contains(.,'latitude')]/text()\").extract_first()\n- latString = re.findall(\"latitude\\\":\\\"(.*?)\\\"\", scriptBody)[0]\n- lonString = re.findall(\"longitude\\\":\\\"(.*?)\\\"\", scriptBody)[0]\n+ googleMapSrc = response.xpath(\"//*[@id='block-system-main']/div/div/iframe\").extract_first()\n+ [latString, lonString] = re.findall(\"center=(.*?)\\\"\", googleMapSrc)[0].split(',')\n \n openingHoursString = \"\"\n firstHourBlock = response.xpath(\"//div[contains(@class,'field-name-field-hours-summer')]/div/div/p/br/parent::p/text()\")\n@@ -116,7 +115,7 @@\n else:\n countryString = \"US\"\n mapUrl = response.xpath(\"//div[contains(@class,'map-link')]/div/a/@href\").extract_first()\n- stateString = re.findall(r'(?<=\\+)(.*?)(?=\\+)', mapUrl)[len(re.findall(r'(?<=\\+)(.*?)(?=\\+)', mapUrl)) - 2].strip().replace('%2C','')\n+ stateString = response.xpath(\"//div[contains(@class,'adr')]/div[2]/span[2]/text()\").extract_first()\n \n yield GeojsonPointItem(\n ref=nameString,\n@@ -125,7 +124,7 @@\n opening_hours=openingHoursString,\n state=stateString,\n postcode=response.xpath(\"//div[@class='city-state-zip']/span[@class='postal-code']/text()\").extract_first().strip(),\n- phone=self.parse_phone(response.xpath(\"//div[contains(@class,'field-name-field-phone')]/div/div/text()\").extract_first().strip()),\n+ phone=self.parse_phone(response.xpath(\"//div[contains(@class,'adr')]/div[3]/text()\").extract_first().strip()),\n country = countryString,\n lat=float(latString),\n lon=float(lonString),\n", "issue": "Spider lees_famous_recipe is broken\nDuring the global build at 2021-10-20-14-42-48, spider **lees_famous_recipe** failed with **0 features** and **130 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/logs/lees_famous_recipe.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/output/lees_famous_recipe.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/output/lees_famous_recipe.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom locations.items import GeojsonPointItem\nimport re\n\ndaysKey = {\n 'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th',\n 'Friday': 'Fr', 'Saturday': 'Sa', 'Sunday': 'Su'\n}\n\n\nclass LeesFamousRecipeSpider(scrapy.Spider):\n name = \"lees_famous_recipe\"\n item_attributes = { 'brand': \"Lee's Famous Recipe Chicken\" }\n allowed_domains = [\"www.leesfamousrecipe.com\"]\n start_urls = (\n 'https://www.leesfamousrecipe.com/locations/all',\n )\n\n def parse_phone(self, phone):\n phone = phone.replace('.','')\n phone = phone.replace(')','')\n phone = phone.replace('(','')\n phone = phone.replace('_','')\n phone = phone.replace('-','')\n phone = phone.replace('+','')\n phone = phone.replace(' ','')\n return phone\n\n def store_hours(self, hours):\n try:\n days 
= hours.split(': ')[0].strip()\n if('-' in days):\n startDay = daysKey[days.split('-')[0]]\n endDay = daysKey[days.split('-')[1]]\n dayOutput = startDay + \"-\" + endDay\n else:\n dayOutput = daysKey[days]\n\n bothHours = hours.split(': ')[1].replace(' ','')\n openHours = bothHours.split(\"-\")[0]\n closeHours = bothHours.split(\"-\")[1]\n\n if(\"am\" in openHours):\n openHours = openHours.replace(\"am\",\"\")\n if(\":\" in openHours):\n openH = openHours.split(\":\")[0]\n openM = openHours.split(\":\")[1]\n else:\n openH = openHours\n openM = \"00\"\n openHours = openH + \":\" + openM\n\n if(\"pm\" in openHours):\n openHours = openHours.replace(\"pm\",\"\")\n if(\":\" in openHours):\n openH = openHours.split(\":\")[0]\n openM = openHours.split(\":\")[1]\n else:\n openH = openHours\n openM = \"00\"\n openH = str(int(openH) + 12)\n openHours = openH + \":\" + openM\n\n if(\"am\" in closeHours):\n closeHours = closeHours.replace(\"am\",\"\")\n if(\":\" in closeHours):\n closeH = closeHours.split(\":\")[0]\n closeM = closeHours.split(\":\")[1]\n else:\n closeH = closeHours\n closeM = \"00\"\n closeHours = closeH + \":\" + closeM\n\n if(\"pm\" in closeHours):\n closeHours = closeHours.replace(\"pm\",\"\")\n if(\":\" in closeHours):\n closeH = closeHours.split(\":\")[0]\n closeM = closeHours.split(\":\")[1]\n else:\n closeH = closeHours\n closeM = \"00\"\n closeH = str(int(closeH) + 12)\n closeHours = closeH + \":\" + closeM\n return dayOutput +' '+ openHours.replace(' ','') + \"-\" + closeHours + ';'\n except KeyError:\n return \"\"\n\n def parse(self, response):\n if(\"https://www.leesfamousrecipe.com/locations/all\" == response.url):\n for match in response.xpath(\"//div[contains(@class,'field-content')]/a/@href\"):\n request = scrapy.Request(match.extract())\n yield request\n else:\n nameString = response.xpath(\"//h1[@class='node-title']/text()\").extract_first().strip()\n shortString = response.xpath(\"//h1[@class='node-title']/small/text()\").extract_first()\n if shortString is None:\n shortString = \"\"\n nameString = nameString + \" \" + shortString\n nameString = nameString.strip()\n\n scriptBody = response.xpath(\"//script[@type='text/javascript' and contains(.,'latitude')]/text()\").extract_first()\n latString = re.findall(\"latitude\\\":\\\"(.*?)\\\"\", scriptBody)[0]\n lonString = re.findall(\"longitude\\\":\\\"(.*?)\\\"\", scriptBody)[0]\n\n openingHoursString = \"\"\n firstHourBlock = response.xpath(\"//div[contains(@class,'field-name-field-hours-summer')]/div/div/p/br/parent::p/text()\")\n for hourLine in firstHourBlock:\n openingHoursString = openingHoursString +' '+self.store_hours(hourLine.extract())\n openingHoursString = openingHoursString.strip(';').strip()\n\n\n if(\"british-columbia\" in response.url):\n countryString = \"CA\"\n stateString = \"BC\"\n else:\n countryString = \"US\"\n mapUrl = response.xpath(\"//div[contains(@class,'map-link')]/div/a/@href\").extract_first()\n stateString = re.findall(r'(?<=\\+)(.*?)(?=\\+)', mapUrl)[len(re.findall(r'(?<=\\+)(.*?)(?=\\+)', mapUrl)) - 2].strip().replace('%2C','')\n\n yield GeojsonPointItem(\n ref=nameString,\n addr_full=response.xpath(\"//div[@class='street-address']/text()\").extract_first().strip(),\n city=response.xpath(\"//div[@class='city-state-zip']/span[@class='locality']/text()\").extract_first().strip(),\n opening_hours=openingHoursString,\n state=stateString,\n postcode=response.xpath(\"//div[@class='city-state-zip']/span[@class='postal-code']/text()\").extract_first().strip(),\n 
phone=self.parse_phone(response.xpath(\"//div[contains(@class,'field-name-field-phone')]/div/div/text()\").extract_first().strip()),\n country = countryString,\n lat=float(latString),\n lon=float(lonString),\n )\n\n", "path": "locations/spiders/lees_famous_recipe.py"}]} | 2,307 | 606 |
gh_patches_debug_1473 | rasdani/github-patches | git_diff | ivy-llc__ivy-13177 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tril_indices_from
</issue>
<code>
[start of ivy/functional/frontends/jax/numpy/indexing.py]
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import (
4 to_ivy_arrays_and_back,
5 )
6
7
8 @to_ivy_arrays_and_back
9 def diagonal(a, offset=0, axis1=0, axis2=1):
10 return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)
11
12
13 @to_ivy_arrays_and_back
14 def diag(v, k=0):
15 return ivy.diag(v, k=k)
16
17
18 @to_ivy_arrays_and_back
19 def diag_indices(n, ndim=2):
20 idx = ivy.arange(n, dtype=int)
21 return (idx,) * ndim
22
23
24 # take_along_axis
25 @to_ivy_arrays_and_back
26 def take_along_axis(arr, indices, axis, mode="fill"):
27 return ivy.take_along_axis(arr, indices, axis, mode=mode)
28
29
30 @to_ivy_arrays_and_back
31 def tril_indices(n_rows, n_cols=None, k=0):
32 return ivy.tril_indices(n_rows, n_cols, k)
33
[end of ivy/functional/frontends/jax/numpy/indexing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/jax/numpy/indexing.py b/ivy/functional/frontends/jax/numpy/indexing.py
--- a/ivy/functional/frontends/jax/numpy/indexing.py
+++ b/ivy/functional/frontends/jax/numpy/indexing.py
@@ -30,3 +30,8 @@
@to_ivy_arrays_and_back
def tril_indices(n_rows, n_cols=None, k=0):
return ivy.tril_indices(n_rows, n_cols, k)
+
+
+@to_ivy_arrays_and_back
+def tril_indices_from(arr, k=0):
+ return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/indexing.py b/ivy/functional/frontends/jax/numpy/indexing.py\n--- a/ivy/functional/frontends/jax/numpy/indexing.py\n+++ b/ivy/functional/frontends/jax/numpy/indexing.py\n@@ -30,3 +30,8 @@\n @to_ivy_arrays_and_back\n def tril_indices(n_rows, n_cols=None, k=0):\n return ivy.tril_indices(n_rows, n_cols, k)\n+\n+\n+@to_ivy_arrays_and_back\n+def tril_indices_from(arr, k=0):\n+ return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)\n", "issue": "tril_indces_from\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@to_ivy_arrays_and_back\ndef diagonal(a, offset=0, axis1=0, axis2=1):\n return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)\n\n\n@to_ivy_arrays_and_back\ndef diag(v, k=0):\n return ivy.diag(v, k=k)\n\n\n@to_ivy_arrays_and_back\ndef diag_indices(n, ndim=2):\n idx = ivy.arange(n, dtype=int)\n return (idx,) * ndim\n\n\n# take_along_axis\n@to_ivy_arrays_and_back\ndef take_along_axis(arr, indices, axis, mode=\"fill\"):\n return ivy.take_along_axis(arr, indices, axis, mode=mode)\n\n\n@to_ivy_arrays_and_back\ndef tril_indices(n_rows, n_cols=None, k=0):\n return ivy.tril_indices(n_rows, n_cols, k)\n", "path": "ivy/functional/frontends/jax/numpy/indexing.py"}]} | 851 | 157 |
gh_patches_debug_54606 | rasdani/github-patches | git_diff | zulip__zulip-13843 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rename `subject_links` to `topic_links` in our API
This is an element of the broader `subject` -> `topic` migration (see #1192) that should be straightforward to change, because I believe the mobile apps don't access `subject_links` yet, so there's no compatibility work required. (What the data is used for in the webapp is the little in-topic-field links we show when there is a link or linkifier matching the topic line of the message).
@gnprice to confirm I'm reading the mobile codebase correctly that it's indeed not accessed.
Noticed in #13587; tagging as a priority since this sort of API migration gets more complex when delayed. We should be sure to look again at updating the docs as discussed in #13587 once this is complete.
</issue>
<code>
[start of zerver/lib/topic.py]
1 import datetime
2
3 from django.db import connection
4 from django.db.models.query import QuerySet, Q
5 from django.utils.timezone import now as timezone_now
6
7 from sqlalchemy.sql import (
8 column,
9 literal,
10 func,
11 )
12
13 from zerver.lib.request import REQ
14 from zerver.models import (
15 Message,
16 Recipient,
17 UserMessage,
18 UserProfile,
19 )
20
21 from typing import Any, Dict, List, Optional, Tuple
22
23 # Only use these constants for events.
24 ORIG_TOPIC = "orig_subject"
25 TOPIC_NAME = "subject"
26 TOPIC_LINKS = "subject_links"
27 MATCH_TOPIC = "match_subject"
28
29 # This constant is actually embedded into
30 # the JSON data for message edit history,
31 # so we'll always need to handle legacy data
32 # unless we do a pretty tricky migration.
33 LEGACY_PREV_TOPIC = "prev_subject"
34
35 # This constant is pretty closely coupled to the
36 # database, but it's the JSON field.
37 EXPORT_TOPIC_NAME = "subject"
38
39 '''
40 The following functions are for user-facing APIs
41 where we'll want to support "subject" for a while.
42 '''
43
44 def get_topic_from_message_info(message_info: Dict[str, Any]) -> str:
45 '''
46 Use this where you are getting dicts that are based off of messages
47 that may come from the outside world, especially from third party
48 APIs and bots.
49
50 We prefer 'topic' to 'subject' here. We expect at least one field
51 to be present (or the caller must know how to handle KeyError).
52 '''
53 if 'topic' in message_info:
54 return message_info['topic']
55
56 return message_info['subject']
57
58 def REQ_topic() -> Optional[str]:
59 # REQ handlers really return a REQ, but we
60 # lie to make the rest of the type matching work.
61 return REQ(
62 whence='topic',
63 aliases=['subject'],
64 converter=lambda x: x.strip(),
65 default=None,
66 )
67
68 '''
69 TRY TO KEEP THIS DIVIDING LINE.
70
71 Below this line we want to make it so that functions are only
72 using "subject" in the DB sense, and nothing customer facing.
73
74 '''
75
76 # This is used in low-level message functions in
77 # zerver/lib/message.py, and it's not user facing.
78 DB_TOPIC_NAME = "subject"
79 MESSAGE__TOPIC = 'message__subject'
80
81 def topic_match_sa(topic_name: str) -> Any:
82 # _sa is short for Sql Alchemy, which we use mostly for
83 # queries that search messages
84 topic_cond = func.upper(column("subject")) == func.upper(literal(topic_name))
85 return topic_cond
86
87 def topic_column_sa() -> Any:
88 return column("subject")
89
90 def filter_by_exact_message_topic(query: QuerySet, message: Message) -> QuerySet:
91 topic_name = message.topic_name()
92 return query.filter(subject=topic_name)
93
94 def filter_by_topic_name_via_message(query: QuerySet, topic_name: str) -> QuerySet:
95 return query.filter(message__subject__iexact=topic_name)
96
97 def messages_for_topic(stream_id: int, topic_name: str) -> QuerySet:
98 return Message.objects.filter(
99 recipient__type_id=stream_id,
100 subject__iexact=topic_name,
101 )
102
103 def save_message_for_edit_use_case(message: Message) -> None:
104 message.save(update_fields=[TOPIC_NAME, "content", "rendered_content",
105 "rendered_content_version", "last_edit_time",
106 "edit_history", "has_attachment", "has_image",
107 "has_link"])
108
109
110 def user_message_exists_for_topic(user_profile: UserProfile,
111 recipient: Recipient,
112 topic_name: str) -> bool:
113 return UserMessage.objects.filter(
114 user_profile=user_profile,
115 message__recipient=recipient,
116 message__subject__iexact=topic_name,
117 ).exists()
118
119 def update_messages_for_topic_edit(message: Message,
120 propagate_mode: str,
121 orig_topic_name: str,
122 topic_name: str) -> List[Message]:
123 propagate_query = Q(recipient = message.recipient, subject = orig_topic_name)
124 # We only change messages up to 7 days in the past, to avoid hammering our
125 # DB by changing an unbounded amount of messages
126 if propagate_mode == 'change_all':
127 before_bound = timezone_now() - datetime.timedelta(days=7)
128
129 propagate_query = (propagate_query & ~Q(id = message.id) &
130 Q(date_sent__range=(before_bound, timezone_now())))
131 if propagate_mode == 'change_later':
132 propagate_query = propagate_query & Q(id__gt = message.id)
133
134 messages = Message.objects.filter(propagate_query).select_related()
135
136 # Evaluate the query before running the update
137 messages_list = list(messages)
138 messages.update(subject=topic_name)
139
140 for m in messages_list:
141 # The cached ORM object is not changed by messages.update()
142 # and the remote cache update requires the new value
143 m.set_topic_name(topic_name)
144
145 return messages_list
146
147 def generate_topic_history_from_db_rows(rows: List[Tuple[str, int]]) -> List[Dict[str, Any]]:
148 canonical_topic_names = {} # type: Dict[str, Tuple[int, str]]
149
150 # Sort rows by max_message_id so that if a topic
151 # has many different casings, we use the most
152 # recent row.
153 rows = sorted(rows, key=lambda tup: tup[1])
154
155 for (topic_name, max_message_id) in rows:
156 canonical_name = topic_name.lower()
157 canonical_topic_names[canonical_name] = (max_message_id, topic_name)
158
159 history = []
160 for canonical_topic, (max_message_id, topic_name) in canonical_topic_names.items():
161 history.append(dict(
162 name=topic_name,
163 max_id=max_message_id)
164 )
165 return sorted(history, key=lambda x: -x['max_id'])
166
167 def get_topic_history_for_stream(user_profile: UserProfile,
168 recipient: Recipient,
169 public_history: bool) -> List[Dict[str, Any]]:
170 cursor = connection.cursor()
171 if public_history:
172 query = '''
173 SELECT
174 "zerver_message"."subject" as topic,
175 max("zerver_message".id) as max_message_id
176 FROM "zerver_message"
177 WHERE (
178 "zerver_message"."recipient_id" = %s
179 )
180 GROUP BY (
181 "zerver_message"."subject"
182 )
183 ORDER BY max("zerver_message".id) DESC
184 '''
185 cursor.execute(query, [recipient.id])
186 else:
187 query = '''
188 SELECT
189 "zerver_message"."subject" as topic,
190 max("zerver_message".id) as max_message_id
191 FROM "zerver_message"
192 INNER JOIN "zerver_usermessage" ON (
193 "zerver_usermessage"."message_id" = "zerver_message"."id"
194 )
195 WHERE (
196 "zerver_usermessage"."user_profile_id" = %s AND
197 "zerver_message"."recipient_id" = %s
198 )
199 GROUP BY (
200 "zerver_message"."subject"
201 )
202 ORDER BY max("zerver_message".id) DESC
203 '''
204 cursor.execute(query, [user_profile.id, recipient.id])
205 rows = cursor.fetchall()
206 cursor.close()
207
208 return generate_topic_history_from_db_rows(rows)
209
210 def get_topic_history_for_web_public_stream(recipient: Recipient) -> List[Dict[str, Any]]:
211 cursor = connection.cursor()
212 query = '''
213 SELECT
214 "zerver_message"."subject" as topic,
215 max("zerver_message".id) as max_message_id
216 FROM "zerver_message"
217 WHERE (
218 "zerver_message"."recipient_id" = %s
219 )
220 GROUP BY (
221 "zerver_message"."subject"
222 )
223 ORDER BY max("zerver_message".id) DESC
224 '''
225 cursor.execute(query, [recipient.id])
226 rows = cursor.fetchall()
227 cursor.close()
228
229 return generate_topic_history_from_db_rows(rows)
230
[end of zerver/lib/topic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zerver/lib/topic.py b/zerver/lib/topic.py
--- a/zerver/lib/topic.py
+++ b/zerver/lib/topic.py
@@ -23,7 +23,7 @@
# Only use these constants for events.
ORIG_TOPIC = "orig_subject"
TOPIC_NAME = "subject"
-TOPIC_LINKS = "subject_links"
+TOPIC_LINKS = "topic_links"
MATCH_TOPIC = "match_subject"
# This constant is actually embedded into
| {"golden_diff": "diff --git a/zerver/lib/topic.py b/zerver/lib/topic.py\n--- a/zerver/lib/topic.py\n+++ b/zerver/lib/topic.py\n@@ -23,7 +23,7 @@\n # Only use these constants for events.\n ORIG_TOPIC = \"orig_subject\"\n TOPIC_NAME = \"subject\"\n-TOPIC_LINKS = \"subject_links\"\n+TOPIC_LINKS = \"topic_links\"\n MATCH_TOPIC = \"match_subject\"\n \n # This constant is actually embedded into\n", "issue": "Rename `subject_links` to `topic_links` in our API\nThis is an element of the broader `subject` -> `topic` migration (see #1192) that should be straightforward to change, because I believe the mobile apps don't access `subject_links` yet, so there's no compatibility work required. (What the data is used for in the webapp is the little in-topic-field links we show when there is a link or linkifier matching the topic line of the message).\r\n\r\n@gnprice to confirm I'm reading the mobile codebase correctly that it's indeed not accessed.\r\n\r\nNoticed in #13587; tagging as a priority since this sort of API migration gets more complex when delayed. We should be sure to look again at updating the docs as discussed in #13587 once this is complete.\n", "before_files": [{"content": "import datetime\n\nfrom django.db import connection\nfrom django.db.models.query import QuerySet, Q\nfrom django.utils.timezone import now as timezone_now\n\nfrom sqlalchemy.sql import (\n column,\n literal,\n func,\n)\n\nfrom zerver.lib.request import REQ\nfrom zerver.models import (\n Message,\n Recipient,\n UserMessage,\n UserProfile,\n)\n\nfrom typing import Any, Dict, List, Optional, Tuple\n\n# Only use these constants for events.\nORIG_TOPIC = \"orig_subject\"\nTOPIC_NAME = \"subject\"\nTOPIC_LINKS = \"subject_links\"\nMATCH_TOPIC = \"match_subject\"\n\n# This constant is actually embedded into\n# the JSON data for message edit history,\n# so we'll always need to handle legacy data\n# unless we do a pretty tricky migration.\nLEGACY_PREV_TOPIC = \"prev_subject\"\n\n# This constant is pretty closely coupled to the\n# database, but it's the JSON field.\nEXPORT_TOPIC_NAME = \"subject\"\n\n'''\nThe following functions are for user-facing APIs\nwhere we'll want to support \"subject\" for a while.\n'''\n\ndef get_topic_from_message_info(message_info: Dict[str, Any]) -> str:\n '''\n Use this where you are getting dicts that are based off of messages\n that may come from the outside world, especially from third party\n APIs and bots.\n\n We prefer 'topic' to 'subject' here. 
We expect at least one field\n to be present (or the caller must know how to handle KeyError).\n '''\n if 'topic' in message_info:\n return message_info['topic']\n\n return message_info['subject']\n\ndef REQ_topic() -> Optional[str]:\n # REQ handlers really return a REQ, but we\n # lie to make the rest of the type matching work.\n return REQ(\n whence='topic',\n aliases=['subject'],\n converter=lambda x: x.strip(),\n default=None,\n )\n\n'''\nTRY TO KEEP THIS DIVIDING LINE.\n\nBelow this line we want to make it so that functions are only\nusing \"subject\" in the DB sense, and nothing customer facing.\n\n'''\n\n# This is used in low-level message functions in\n# zerver/lib/message.py, and it's not user facing.\nDB_TOPIC_NAME = \"subject\"\nMESSAGE__TOPIC = 'message__subject'\n\ndef topic_match_sa(topic_name: str) -> Any:\n # _sa is short for Sql Alchemy, which we use mostly for\n # queries that search messages\n topic_cond = func.upper(column(\"subject\")) == func.upper(literal(topic_name))\n return topic_cond\n\ndef topic_column_sa() -> Any:\n return column(\"subject\")\n\ndef filter_by_exact_message_topic(query: QuerySet, message: Message) -> QuerySet:\n topic_name = message.topic_name()\n return query.filter(subject=topic_name)\n\ndef filter_by_topic_name_via_message(query: QuerySet, topic_name: str) -> QuerySet:\n return query.filter(message__subject__iexact=topic_name)\n\ndef messages_for_topic(stream_id: int, topic_name: str) -> QuerySet:\n return Message.objects.filter(\n recipient__type_id=stream_id,\n subject__iexact=topic_name,\n )\n\ndef save_message_for_edit_use_case(message: Message) -> None:\n message.save(update_fields=[TOPIC_NAME, \"content\", \"rendered_content\",\n \"rendered_content_version\", \"last_edit_time\",\n \"edit_history\", \"has_attachment\", \"has_image\",\n \"has_link\"])\n\n\ndef user_message_exists_for_topic(user_profile: UserProfile,\n recipient: Recipient,\n topic_name: str) -> bool:\n return UserMessage.objects.filter(\n user_profile=user_profile,\n message__recipient=recipient,\n message__subject__iexact=topic_name,\n ).exists()\n\ndef update_messages_for_topic_edit(message: Message,\n propagate_mode: str,\n orig_topic_name: str,\n topic_name: str) -> List[Message]:\n propagate_query = Q(recipient = message.recipient, subject = orig_topic_name)\n # We only change messages up to 7 days in the past, to avoid hammering our\n # DB by changing an unbounded amount of messages\n if propagate_mode == 'change_all':\n before_bound = timezone_now() - datetime.timedelta(days=7)\n\n propagate_query = (propagate_query & ~Q(id = message.id) &\n Q(date_sent__range=(before_bound, timezone_now())))\n if propagate_mode == 'change_later':\n propagate_query = propagate_query & Q(id__gt = message.id)\n\n messages = Message.objects.filter(propagate_query).select_related()\n\n # Evaluate the query before running the update\n messages_list = list(messages)\n messages.update(subject=topic_name)\n\n for m in messages_list:\n # The cached ORM object is not changed by messages.update()\n # and the remote cache update requires the new value\n m.set_topic_name(topic_name)\n\n return messages_list\n\ndef generate_topic_history_from_db_rows(rows: List[Tuple[str, int]]) -> List[Dict[str, Any]]:\n canonical_topic_names = {} # type: Dict[str, Tuple[int, str]]\n\n # Sort rows by max_message_id so that if a topic\n # has many different casings, we use the most\n # recent row.\n rows = sorted(rows, key=lambda tup: tup[1])\n\n for (topic_name, max_message_id) in rows:\n canonical_name = 
topic_name.lower()\n canonical_topic_names[canonical_name] = (max_message_id, topic_name)\n\n history = []\n for canonical_topic, (max_message_id, topic_name) in canonical_topic_names.items():\n history.append(dict(\n name=topic_name,\n max_id=max_message_id)\n )\n return sorted(history, key=lambda x: -x['max_id'])\n\ndef get_topic_history_for_stream(user_profile: UserProfile,\n recipient: Recipient,\n public_history: bool) -> List[Dict[str, Any]]:\n cursor = connection.cursor()\n if public_history:\n query = '''\n SELECT\n \"zerver_message\".\"subject\" as topic,\n max(\"zerver_message\".id) as max_message_id\n FROM \"zerver_message\"\n WHERE (\n \"zerver_message\".\"recipient_id\" = %s\n )\n GROUP BY (\n \"zerver_message\".\"subject\"\n )\n ORDER BY max(\"zerver_message\".id) DESC\n '''\n cursor.execute(query, [recipient.id])\n else:\n query = '''\n SELECT\n \"zerver_message\".\"subject\" as topic,\n max(\"zerver_message\".id) as max_message_id\n FROM \"zerver_message\"\n INNER JOIN \"zerver_usermessage\" ON (\n \"zerver_usermessage\".\"message_id\" = \"zerver_message\".\"id\"\n )\n WHERE (\n \"zerver_usermessage\".\"user_profile_id\" = %s AND\n \"zerver_message\".\"recipient_id\" = %s\n )\n GROUP BY (\n \"zerver_message\".\"subject\"\n )\n ORDER BY max(\"zerver_message\".id) DESC\n '''\n cursor.execute(query, [user_profile.id, recipient.id])\n rows = cursor.fetchall()\n cursor.close()\n\n return generate_topic_history_from_db_rows(rows)\n\ndef get_topic_history_for_web_public_stream(recipient: Recipient) -> List[Dict[str, Any]]:\n cursor = connection.cursor()\n query = '''\n SELECT\n \"zerver_message\".\"subject\" as topic,\n max(\"zerver_message\".id) as max_message_id\n FROM \"zerver_message\"\n WHERE (\n \"zerver_message\".\"recipient_id\" = %s\n )\n GROUP BY (\n \"zerver_message\".\"subject\"\n )\n ORDER BY max(\"zerver_message\".id) DESC\n '''\n cursor.execute(query, [recipient.id])\n rows = cursor.fetchall()\n cursor.close()\n\n return generate_topic_history_from_db_rows(rows)\n", "path": "zerver/lib/topic.py"}]} | 3,001 | 103 |
gh_patches_debug_15871 | rasdani/github-patches | git_diff | Kinto__kinto-1405 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Memcached to dockerfile and docker compose setup.
Following the addition of Memcached as a cache backend.
</issue>
<code>
[start of kinto/core/__init__.py]
1 """Main entry point
2 """
3 import logging
4 import pkg_resources
5 import tempfile
6
7 from cornice import Service as CorniceService
8 from dockerflow import logging as dockerflow_logging
9 from pyramid.settings import aslist
10
11 from kinto.core import errors
12 from kinto.core import events
13 from kinto.core.initialization import ( # NOQA
14 initialize, install_middlewares,
15 load_default_settings)
16 from kinto.core.utils import (
17 follow_subrequest, current_service, current_resource_name,
18 prefixed_userid, prefixed_principals, log_context)
19
20
21 logger = logging.getLogger(__name__)
22
23
24 # Module version, as defined in PEP-0396.
25 __version__ = pkg_resources.get_distribution('kinto').version # FIXME?
26
27 DEFAULT_SETTINGS = {
28 'backoff': None,
29 'batch_max_requests': 25,
30 'cache_backend': '',
31 'cache_url': '',
32 'cache_pool_size': 25,
33 'cache_prefix': '',
34 'cache_max_size_bytes': 524288,
35 'cors_origins': '*',
36 'cors_max_age_seconds': 3600,
37 'eos': None,
38 'eos_message': None,
39 'eos_url': None,
40 'error_info_link': 'https://github.com/Kinto/kinto/issues/',
41 'http_host': None,
42 'http_scheme': None,
43 'id_generator': 'kinto.core.storage.generators.UUID4',
44 'includes': '',
45 'initialization_sequence': (
46 'kinto.core.initialization.setup_request_bound_data',
47 'kinto.core.initialization.setup_json_serializer',
48 'kinto.core.initialization.setup_logging',
49 'kinto.core.initialization.setup_storage',
50 'kinto.core.initialization.setup_permission',
51 'kinto.core.initialization.setup_cache',
52 'kinto.core.initialization.setup_requests_scheme',
53 'kinto.core.initialization.setup_version_redirection',
54 'kinto.core.initialization.setup_deprecation',
55 'kinto.core.initialization.setup_authentication',
56 'kinto.core.initialization.setup_backoff',
57 'kinto.core.initialization.setup_statsd',
58 'kinto.core.initialization.setup_listeners',
59 'kinto.core.events.setup_transaction_hook',
60 ),
61 'event_listeners': '',
62 'heartbeat_timeout_seconds': 10,
63 'newrelic_config': None,
64 'newrelic_env': 'dev',
65 'paginate_by': None,
66 'pagination_token_validity_seconds': 10 * 60,
67 'permission_backend': '',
68 'permission_url': '',
69 'permission_pool_size': 25,
70 'profiler_dir': tempfile.gettempdir(),
71 'profiler_enabled': False,
72 'project_docs': '',
73 'project_name': '',
74 'project_version': '',
75 'readonly': False,
76 'retry_after_seconds': 30,
77 'statsd_backend': 'kinto.core.statsd',
78 'statsd_prefix': 'kinto.core',
79 'statsd_url': None,
80 'storage_backend': '',
81 'storage_url': '',
82 'storage_max_fetch_size': 10000,
83 'storage_pool_size': 25,
84 'tm.annotate_user': False, # Do annotate transactions with the user-id.
85 'transaction_per_request': True,
86 'userid_hmac_secret': '',
87 'version_json_path': 'version.json',
88 'version_prefix_redirect_enabled': True,
89 'trailing_slash_redirect_enabled': True,
90 'multiauth.groupfinder': 'kinto.core.authorization.groupfinder',
91 'multiauth.policies': 'basicauth',
92 'multiauth.policy.basicauth.use': ('kinto.core.authentication.'
93 'BasicAuthAuthenticationPolicy'),
94 'multiauth.authorization_policy': ('kinto.core.authorization.'
95 'AuthorizationPolicy'),
96 }
97
98
99 class Service(CorniceService):
100 """Subclass of the default cornice service.
101
102 This is useful in order to attach specific behaviours without monkey
103 patching the default cornice service (which would impact other uses of it)
104 """
105 default_cors_headers = ('Backoff', 'Retry-After', 'Alert',
106 'Content-Length')
107
108 def error_handler(self, request):
109 return errors.json_error_handler(request)
110
111 @classmethod
112 def init_from_settings(cls, settings):
113 cls.cors_origins = tuple(aslist(settings['cors_origins']))
114 cors_max_age = settings['cors_max_age_seconds']
115 cls.cors_max_age = int(cors_max_age) if cors_max_age else None
116
117
118 class JsonLogFormatter(dockerflow_logging.JsonLogFormatter):
119 logger_name = 'kinto'
120
121 @classmethod
122 def init_from_settings(cls, settings):
123 cls.logger_name = settings['project_name']
124
125 def __init__(self, fmt=None, datefmt=None, style='%'):
126 # Do not let mozilla-cloud-services-logger constructor to improperly
127 # use style as the logger_name.
128 # See https://github.com/mozilla/mozilla-cloud-services-logger/issues/3
129 logger_name = self.logger_name
130 super().__init__(fmt, datefmt, style)
131 self.logger_name = logger_name
132
133
134 def includeme(config):
135 settings = config.get_settings()
136
137 # Heartbeat registry.
138 config.registry.heartbeats = {}
139
140 # Public settings registry.
141 config.registry.public_settings = {'batch_max_requests', 'readonly'}
142
143 # Directive to declare arbitrary API capabilities.
144 def add_api_capability(config, identifier, description='', url='', **kw):
145 existing = config.registry.api_capabilities.get(identifier)
146 if existing:
147 error_msg = "The '{}' API capability was already registered ({})."
148 raise ValueError(error_msg.format(identifier, existing))
149
150 capability = dict(description=description, url=url, **kw)
151 config.registry.api_capabilities[identifier] = capability
152
153 config.add_directive('add_api_capability', add_api_capability)
154 config.registry.api_capabilities = {}
155
156 # Resource events helpers.
157 config.add_request_method(events.get_resource_events,
158 name='get_resource_events')
159 config.add_request_method(events.notify_resource_event,
160 name='notify_resource_event')
161
162 # Setup cornice.
163 config.include('cornice')
164
165 # Setup cornice api documentation
166 config.include('cornice_swagger')
167
168 # Per-request transaction.
169 config.include('pyramid_tm')
170
171 # Add CORS settings to the base kinto.core Service class.
172 Service.init_from_settings(settings)
173
174 # Use the project name as the main logger name (Logger field in MozLog).
175 JsonLogFormatter.init_from_settings(settings)
176
177 # Setup components.
178 for step in aslist(settings['initialization_sequence']):
179 step_func = config.maybe_dotted(step)
180 step_func(config)
181
182 # Custom helpers.
183 config.add_request_method(log_context)
184 config.add_request_method(follow_subrequest)
185 config.add_request_method(prefixed_userid, property=True)
186 config.add_request_method(prefixed_principals, reify=True)
187 config.add_request_method(lambda r: {
188 'id': r.prefixed_userid,
189 'principals': r.prefixed_principals},
190 name='get_user_info')
191 config.add_request_method(current_resource_name, reify=True)
192 config.add_request_method(current_service, reify=True)
193 config.commit()
194
195 # Include plugins after init, unlike pyramid includes.
196 includes = aslist(settings['includes'])
197 for app in includes:
198 config.include(app)
199
200 # # Show settings to output.
201 # for key, value in settings.items():
202 # logger.info('Using {} = {}'.format(key, value))
203
204 # Scan views.
205 config.scan('kinto.core.views')
206
207 # Give sign of life.
208 msg = 'Running {project_name} {project_version}.'
209 logger.info(msg.format_map(settings))
210
[end of kinto/core/__init__.py]
[start of kinto/core/storage/postgresql/client.py]
1 import contextlib
2 import logging
3 import warnings
4 from collections import defaultdict
5
6 from kinto.core.storage import exceptions
7 from kinto.core.utils import sqlalchemy
8 import transaction as zope_transaction
9
10
11 logger = logging.getLogger(__name__)
12
13
14 class PostgreSQLClient:
15 def __init__(self, session_factory, commit_manually, invalidate):
16 self.session_factory = session_factory
17 self.commit_manually = commit_manually
18 self.invalidate = invalidate
19
20 @contextlib.contextmanager
21 def connect(self, readonly=False, force_commit=False):
22 """
23 Pulls a connection from the pool when context is entered and
24 returns it when context is exited.
25
26 A COMMIT is performed on the current transaction if everything went
27 well. Otherwise transaction is ROLLBACK, and everything cleaned up.
28 """
29 commit_manually = self.commit_manually and not readonly
30 session = None
31 try:
32 # Pull connection from pool.
33 session = self.session_factory()
34 # Start context
35 yield session
36 if not readonly and not self.commit_manually:
37 # Mark session as dirty.
38 self.invalidate(session)
39 # Success
40 if commit_manually:
41 session.commit()
42 elif force_commit:
43 # Commit like would do a succesful request.
44 zope_transaction.commit()
45
46 except sqlalchemy.exc.IntegrityError as e:
47 logger.error(e, exc_info=True)
48 if commit_manually: # pragma: no branch
49 session.rollback()
50 raise exceptions.IntegrityError(original=e) from e
51 except sqlalchemy.exc.SQLAlchemyError as e:
52 logger.error(e, exc_info=True)
53 if session and commit_manually:
54 session.rollback()
55 raise exceptions.BackendError(original=e) from e
56 finally:
57 if session and self.commit_manually:
58 # Give back to pool if commit done manually.
59 session.close()
60
61
62 # Reuse existing client if same URL.
63 _CLIENTS = defaultdict(dict)
64
65
66 def create_from_config(config, prefix='', with_transaction=True):
67 """Create a PostgreSQLClient client using settings in the provided config.
68 """
69 if sqlalchemy is None:
70 message = ('PostgreSQL SQLAlchemy dependency missing. '
71 'Refer to installation section in documentation.')
72 raise ImportWarning(message)
73
74 from zope.sqlalchemy import ZopeTransactionExtension, invalidate
75 from sqlalchemy.orm import sessionmaker, scoped_session
76
77 settings = {**config.get_settings()}
78 # Custom Kinto settings, unsupported by SQLAlchemy.
79 settings.pop(prefix + 'backend', None)
80 settings.pop(prefix + 'max_fetch_size', None)
81 settings.pop(prefix + 'max_size_bytes', None)
82 settings.pop(prefix + 'prefix', None)
83 settings.pop(prefix + 'strict_json', None)
84 transaction_per_request = with_transaction and settings.pop('transaction_per_request', False)
85
86 url = settings[prefix + 'url']
87 existing_client = _CLIENTS[transaction_per_request].get(url)
88 if existing_client:
89 msg = ('Reuse existing PostgreSQL connection. '
90 'Parameters {}* will be ignored.'.format(prefix))
91 warnings.warn(msg)
92 return existing_client
93
94 # Initialize SQLAlchemy engine from settings.
95 poolclass_key = prefix + 'poolclass'
96 settings.setdefault(poolclass_key, ('kinto.core.storage.postgresql.'
97 'pool.QueuePoolWithMaxBacklog'))
98 settings[poolclass_key] = config.maybe_dotted(settings[poolclass_key])
99 engine = sqlalchemy.engine_from_config(settings, prefix=prefix, url=url)
100
101 # Initialize thread-safe session factory.
102 options = {}
103 if transaction_per_request:
104 # Plug with Pyramid transaction manager
105 options['extension'] = ZopeTransactionExtension()
106 session_factory = scoped_session(sessionmaker(bind=engine, **options))
107
108 # Store one client per URI.
109 commit_manually = (not transaction_per_request)
110 client = PostgreSQLClient(session_factory, commit_manually, invalidate)
111 _CLIENTS[transaction_per_request][url] = client
112 return client
113
[end of kinto/core/storage/postgresql/client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py
--- a/kinto/core/__init__.py
+++ b/kinto/core/__init__.py
@@ -28,6 +28,7 @@
'backoff': None,
'batch_max_requests': 25,
'cache_backend': '',
+ 'cache_hosts': '',
'cache_url': '',
'cache_pool_size': 25,
'cache_prefix': '',
diff --git a/kinto/core/storage/postgresql/client.py b/kinto/core/storage/postgresql/client.py
--- a/kinto/core/storage/postgresql/client.py
+++ b/kinto/core/storage/postgresql/client.py
@@ -81,6 +81,7 @@
settings.pop(prefix + 'max_size_bytes', None)
settings.pop(prefix + 'prefix', None)
settings.pop(prefix + 'strict_json', None)
+ settings.pop(prefix + 'hosts', None)
transaction_per_request = with_transaction and settings.pop('transaction_per_request', False)
url = settings[prefix + 'url']
| {"golden_diff": "diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py\n--- a/kinto/core/__init__.py\n+++ b/kinto/core/__init__.py\n@@ -28,6 +28,7 @@\n 'backoff': None,\n 'batch_max_requests': 25,\n 'cache_backend': '',\n+ 'cache_hosts': '',\n 'cache_url': '',\n 'cache_pool_size': 25,\n 'cache_prefix': '',\ndiff --git a/kinto/core/storage/postgresql/client.py b/kinto/core/storage/postgresql/client.py\n--- a/kinto/core/storage/postgresql/client.py\n+++ b/kinto/core/storage/postgresql/client.py\n@@ -81,6 +81,7 @@\n settings.pop(prefix + 'max_size_bytes', None)\n settings.pop(prefix + 'prefix', None)\n settings.pop(prefix + 'strict_json', None)\n+ settings.pop(prefix + 'hosts', None)\n transaction_per_request = with_transaction and settings.pop('transaction_per_request', False)\n \n url = settings[prefix + 'url']\n", "issue": "Add Memcached to dockerfile and docker compose setup.\nFollowing the addition of Memcached as a cache backend.\n", "before_files": [{"content": "\"\"\"Main entry point\n\"\"\"\nimport logging\nimport pkg_resources\nimport tempfile\n\nfrom cornice import Service as CorniceService\nfrom dockerflow import logging as dockerflow_logging\nfrom pyramid.settings import aslist\n\nfrom kinto.core import errors\nfrom kinto.core import events\nfrom kinto.core.initialization import ( # NOQA\n initialize, install_middlewares,\n load_default_settings)\nfrom kinto.core.utils import (\n follow_subrequest, current_service, current_resource_name,\n prefixed_userid, prefixed_principals, log_context)\n\n\nlogger = logging.getLogger(__name__)\n\n\n# Module version, as defined in PEP-0396.\n__version__ = pkg_resources.get_distribution('kinto').version # FIXME?\n\nDEFAULT_SETTINGS = {\n 'backoff': None,\n 'batch_max_requests': 25,\n 'cache_backend': '',\n 'cache_url': '',\n 'cache_pool_size': 25,\n 'cache_prefix': '',\n 'cache_max_size_bytes': 524288,\n 'cors_origins': '*',\n 'cors_max_age_seconds': 3600,\n 'eos': None,\n 'eos_message': None,\n 'eos_url': None,\n 'error_info_link': 'https://github.com/Kinto/kinto/issues/',\n 'http_host': None,\n 'http_scheme': None,\n 'id_generator': 'kinto.core.storage.generators.UUID4',\n 'includes': '',\n 'initialization_sequence': (\n 'kinto.core.initialization.setup_request_bound_data',\n 'kinto.core.initialization.setup_json_serializer',\n 'kinto.core.initialization.setup_logging',\n 'kinto.core.initialization.setup_storage',\n 'kinto.core.initialization.setup_permission',\n 'kinto.core.initialization.setup_cache',\n 'kinto.core.initialization.setup_requests_scheme',\n 'kinto.core.initialization.setup_version_redirection',\n 'kinto.core.initialization.setup_deprecation',\n 'kinto.core.initialization.setup_authentication',\n 'kinto.core.initialization.setup_backoff',\n 'kinto.core.initialization.setup_statsd',\n 'kinto.core.initialization.setup_listeners',\n 'kinto.core.events.setup_transaction_hook',\n ),\n 'event_listeners': '',\n 'heartbeat_timeout_seconds': 10,\n 'newrelic_config': None,\n 'newrelic_env': 'dev',\n 'paginate_by': None,\n 'pagination_token_validity_seconds': 10 * 60,\n 'permission_backend': '',\n 'permission_url': '',\n 'permission_pool_size': 25,\n 'profiler_dir': tempfile.gettempdir(),\n 'profiler_enabled': False,\n 'project_docs': '',\n 'project_name': '',\n 'project_version': '',\n 'readonly': False,\n 'retry_after_seconds': 30,\n 'statsd_backend': 'kinto.core.statsd',\n 'statsd_prefix': 'kinto.core',\n 'statsd_url': None,\n 'storage_backend': '',\n 'storage_url': '',\n 'storage_max_fetch_size': 10000,\n 
'storage_pool_size': 25,\n 'tm.annotate_user': False, # Do annotate transactions with the user-id.\n 'transaction_per_request': True,\n 'userid_hmac_secret': '',\n 'version_json_path': 'version.json',\n 'version_prefix_redirect_enabled': True,\n 'trailing_slash_redirect_enabled': True,\n 'multiauth.groupfinder': 'kinto.core.authorization.groupfinder',\n 'multiauth.policies': 'basicauth',\n 'multiauth.policy.basicauth.use': ('kinto.core.authentication.'\n 'BasicAuthAuthenticationPolicy'),\n 'multiauth.authorization_policy': ('kinto.core.authorization.'\n 'AuthorizationPolicy'),\n}\n\n\nclass Service(CorniceService):\n \"\"\"Subclass of the default cornice service.\n\n This is useful in order to attach specific behaviours without monkey\n patching the default cornice service (which would impact other uses of it)\n \"\"\"\n default_cors_headers = ('Backoff', 'Retry-After', 'Alert',\n 'Content-Length')\n\n def error_handler(self, request):\n return errors.json_error_handler(request)\n\n @classmethod\n def init_from_settings(cls, settings):\n cls.cors_origins = tuple(aslist(settings['cors_origins']))\n cors_max_age = settings['cors_max_age_seconds']\n cls.cors_max_age = int(cors_max_age) if cors_max_age else None\n\n\nclass JsonLogFormatter(dockerflow_logging.JsonLogFormatter):\n logger_name = 'kinto'\n\n @classmethod\n def init_from_settings(cls, settings):\n cls.logger_name = settings['project_name']\n\n def __init__(self, fmt=None, datefmt=None, style='%'):\n # Do not let mozilla-cloud-services-logger constructor to improperly\n # use style as the logger_name.\n # See https://github.com/mozilla/mozilla-cloud-services-logger/issues/3\n logger_name = self.logger_name\n super().__init__(fmt, datefmt, style)\n self.logger_name = logger_name\n\n\ndef includeme(config):\n settings = config.get_settings()\n\n # Heartbeat registry.\n config.registry.heartbeats = {}\n\n # Public settings registry.\n config.registry.public_settings = {'batch_max_requests', 'readonly'}\n\n # Directive to declare arbitrary API capabilities.\n def add_api_capability(config, identifier, description='', url='', **kw):\n existing = config.registry.api_capabilities.get(identifier)\n if existing:\n error_msg = \"The '{}' API capability was already registered ({}).\"\n raise ValueError(error_msg.format(identifier, existing))\n\n capability = dict(description=description, url=url, **kw)\n config.registry.api_capabilities[identifier] = capability\n\n config.add_directive('add_api_capability', add_api_capability)\n config.registry.api_capabilities = {}\n\n # Resource events helpers.\n config.add_request_method(events.get_resource_events,\n name='get_resource_events')\n config.add_request_method(events.notify_resource_event,\n name='notify_resource_event')\n\n # Setup cornice.\n config.include('cornice')\n\n # Setup cornice api documentation\n config.include('cornice_swagger')\n\n # Per-request transaction.\n config.include('pyramid_tm')\n\n # Add CORS settings to the base kinto.core Service class.\n Service.init_from_settings(settings)\n\n # Use the project name as the main logger name (Logger field in MozLog).\n JsonLogFormatter.init_from_settings(settings)\n\n # Setup components.\n for step in aslist(settings['initialization_sequence']):\n step_func = config.maybe_dotted(step)\n step_func(config)\n\n # Custom helpers.\n config.add_request_method(log_context)\n config.add_request_method(follow_subrequest)\n config.add_request_method(prefixed_userid, property=True)\n config.add_request_method(prefixed_principals, reify=True)\n 
config.add_request_method(lambda r: {\n 'id': r.prefixed_userid,\n 'principals': r.prefixed_principals},\n name='get_user_info')\n config.add_request_method(current_resource_name, reify=True)\n config.add_request_method(current_service, reify=True)\n config.commit()\n\n # Include plugins after init, unlike pyramid includes.\n includes = aslist(settings['includes'])\n for app in includes:\n config.include(app)\n\n # # Show settings to output.\n # for key, value in settings.items():\n # logger.info('Using {} = {}'.format(key, value))\n\n # Scan views.\n config.scan('kinto.core.views')\n\n # Give sign of life.\n msg = 'Running {project_name} {project_version}.'\n logger.info(msg.format_map(settings))\n", "path": "kinto/core/__init__.py"}, {"content": "import contextlib\nimport logging\nimport warnings\nfrom collections import defaultdict\n\nfrom kinto.core.storage import exceptions\nfrom kinto.core.utils import sqlalchemy\nimport transaction as zope_transaction\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PostgreSQLClient:\n def __init__(self, session_factory, commit_manually, invalidate):\n self.session_factory = session_factory\n self.commit_manually = commit_manually\n self.invalidate = invalidate\n\n @contextlib.contextmanager\n def connect(self, readonly=False, force_commit=False):\n \"\"\"\n Pulls a connection from the pool when context is entered and\n returns it when context is exited.\n\n A COMMIT is performed on the current transaction if everything went\n well. Otherwise transaction is ROLLBACK, and everything cleaned up.\n \"\"\"\n commit_manually = self.commit_manually and not readonly\n session = None\n try:\n # Pull connection from pool.\n session = self.session_factory()\n # Start context\n yield session\n if not readonly and not self.commit_manually:\n # Mark session as dirty.\n self.invalidate(session)\n # Success\n if commit_manually:\n session.commit()\n elif force_commit:\n # Commit like would do a succesful request.\n zope_transaction.commit()\n\n except sqlalchemy.exc.IntegrityError as e:\n logger.error(e, exc_info=True)\n if commit_manually: # pragma: no branch\n session.rollback()\n raise exceptions.IntegrityError(original=e) from e\n except sqlalchemy.exc.SQLAlchemyError as e:\n logger.error(e, exc_info=True)\n if session and commit_manually:\n session.rollback()\n raise exceptions.BackendError(original=e) from e\n finally:\n if session and self.commit_manually:\n # Give back to pool if commit done manually.\n session.close()\n\n\n# Reuse existing client if same URL.\n_CLIENTS = defaultdict(dict)\n\n\ndef create_from_config(config, prefix='', with_transaction=True):\n \"\"\"Create a PostgreSQLClient client using settings in the provided config.\n \"\"\"\n if sqlalchemy is None:\n message = ('PostgreSQL SQLAlchemy dependency missing. 
'\n 'Refer to installation section in documentation.')\n raise ImportWarning(message)\n\n from zope.sqlalchemy import ZopeTransactionExtension, invalidate\n from sqlalchemy.orm import sessionmaker, scoped_session\n\n settings = {**config.get_settings()}\n # Custom Kinto settings, unsupported by SQLAlchemy.\n settings.pop(prefix + 'backend', None)\n settings.pop(prefix + 'max_fetch_size', None)\n settings.pop(prefix + 'max_size_bytes', None)\n settings.pop(prefix + 'prefix', None)\n settings.pop(prefix + 'strict_json', None)\n transaction_per_request = with_transaction and settings.pop('transaction_per_request', False)\n\n url = settings[prefix + 'url']\n existing_client = _CLIENTS[transaction_per_request].get(url)\n if existing_client:\n msg = ('Reuse existing PostgreSQL connection. '\n 'Parameters {}* will be ignored.'.format(prefix))\n warnings.warn(msg)\n return existing_client\n\n # Initialize SQLAlchemy engine from settings.\n poolclass_key = prefix + 'poolclass'\n settings.setdefault(poolclass_key, ('kinto.core.storage.postgresql.'\n 'pool.QueuePoolWithMaxBacklog'))\n settings[poolclass_key] = config.maybe_dotted(settings[poolclass_key])\n engine = sqlalchemy.engine_from_config(settings, prefix=prefix, url=url)\n\n # Initialize thread-safe session factory.\n options = {}\n if transaction_per_request:\n # Plug with Pyramid transaction manager\n options['extension'] = ZopeTransactionExtension()\n session_factory = scoped_session(sessionmaker(bind=engine, **options))\n\n # Store one client per URI.\n commit_manually = (not transaction_per_request)\n client = PostgreSQLClient(session_factory, commit_manually, invalidate)\n _CLIENTS[transaction_per_request][url] = client\n return client\n", "path": "kinto/core/storage/postgresql/client.py"}]} | 3,843 | 232 |
gh_patches_debug_3285 | rasdani/github-patches | git_diff | pyro-ppl__pyro-576 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TransformedDistribution's event_shape forwards to incorrect base distribution's method
Probably a typo:
https://github.com/uber/pyro/blob/51a2ccfe9445c7072c3150c4abe1ab1d2ac17246/pyro/distributions/transformed_distribution.py#L62
</issue>
<code>
[start of pyro/distributions/transformed_distribution.py]
1 from __future__ import absolute_import, division, print_function
2
3 import torch
4 import torch.nn as nn
5 from torch.autograd import Variable
6
7 from pyro.distributions.distribution import Distribution
8 from pyro.nn import AutoRegressiveNN
9
10
11 class TransformedDistribution(Distribution):
12 """
13 Transforms the base distribution by applying a sequence of `Bijector`s to it.
14 This results in a scorable distribution (i.e. it has a `log_pdf()` method).
15
16 :param base_distribution: a (continuous) base distribution; samples from this distribution
17 are passed through the sequence of `Bijector`s to yield a sample from the
18 `TransformedDistribution`
19 :type base_distribution: pyro.distribution.Distribution
20 :param bijectors: either a single Bijector or a sequence of Bijectors wrapped in a nn.ModuleList
21 :returns: the transformed distribution
22 """
23
24 def __init__(self, base_distribution, bijectors, *args, **kwargs):
25 super(TransformedDistribution, self).__init__(*args, **kwargs)
26 self.reparameterized = base_distribution.reparameterized
27 self.base_dist = base_distribution
28 if isinstance(bijectors, Bijector):
29 self.bijectors = nn.ModuleList([bijectors])
30 elif isinstance(bijectors, nn.ModuleList):
31 for bijector in bijectors:
32 assert isinstance(bijector, Bijector), \
33 "bijectors must be a Bijector or a nn.ModuleList of Bijectors"
34 self.bijectors = bijectors
35
36 def sample(self, *args, **kwargs):
37 """
38 :returns: a sample y
39 :rtype: torch.autograd.Variable
40
41 Sample from base distribution and pass through bijector(s)
42 """
43 x = self.base_dist.sample(*args, **kwargs)
44 next_input = x
45 for bijector in self.bijectors:
46 y = bijector(next_input)
47 if bijector.add_inverse_to_cache:
48 bijector._add_intermediate_to_cache(next_input, y, 'x')
49 next_input = y
50 return next_input
51
52 def batch_shape(self, x=None, *args, **kwargs):
53 """
54 Ref: :py:meth:`pyro.distributions.distribution.Distribution.batch_shape`
55 """
56 return self.base_dist.batch_shape(x, *args, **kwargs)
57
58 def event_shape(self, *args, **kwargs):
59 """
60 Ref: :py:meth:`pyro.distributions.distribution.Distribution.event_shape`
61 """
62 return self.base_dist.batch_shape(*args, **kwargs)
63
64 def log_pdf(self, y, *args, **kwargs):
65 """
66 :param y: a value sampled from the transformed distribution
67 :type y: torch.autograd.Variable
68
69 :returns: the score (the log pdf) of y
70 :rtype: torch.autograd.Variable
71
72 Scores the sample by inverting the bijector(s) and computing the score using the score
73 of the base distribution and the log det jacobian
74 """
75 inverses = []
76 next_to_invert = y
77 for bijector in reversed(self.bijectors):
78 inverse = bijector.inverse(next_to_invert)
79 inverses.append(inverse)
80 next_to_invert = inverse
81 log_pdf_base = self.base_dist.log_pdf(inverses[-1], *args, **kwargs)
82 log_det_jacobian = self.bijectors[-1].log_det_jacobian(y, *args, **kwargs)
83 for bijector, inverse in zip(list(reversed(self.bijectors))[1:], inverses[:-1]):
84 log_det_jacobian += bijector.log_det_jacobian(inverse, *args, **kwargs)
85 return log_pdf_base - log_det_jacobian
86
87 def batch_log_pdf(self, y, *args, **kwargs):
88 raise NotImplementedError("https://github.com/uber/pyro/issues/293")
89
90
91 class Bijector(nn.Module):
92 """
93 Abstract class `Bijector`. `Bijector` are bijective transformations with computable
94 log det jacobians. They are meant for use in `TransformedDistribution`.
95 """
96
97 def __init__(self, *args, **kwargs):
98 super(Bijector, self).__init__(*args, **kwargs)
99 self.add_inverse_to_cache = False
100
101 def __call__(self, *args, **kwargs):
102 """
103 Virtual forward method
104
105 Invokes the bijection x=>y
106 """
107 raise NotImplementedError()
108
109 def inverse(self, *args, **kwargs):
110 """
111 Virtual inverse method
112
113 Inverts the bijection y => x.
114 """
115 raise NotImplementedError()
116
117 def log_det_jacobian(self, *args, **kwargs):
118 """
119 Virtual logdet jacobian method.
120
121 Computes the log det jacobian `|dy/dx|`
122 """
123 raise NotImplementedError()
124
125
126 class InverseAutoregressiveFlow(Bijector):
127 """
128 An implementation of an Inverse Autoregressive Flow. Together with the `TransformedDistribution` this
129 provides a way to create richer variational approximations.
130
131 Example usage::
132
133 >>> base_dist = Normal(...)
134 >>> iaf = InverseAutoregressiveFlow(...)
135 >>> pyro.module("my_iaf", iaf)
136 >>> iaf_dist = TransformedDistribution(base_dist, iaf)
137
138 Note that this implementation is only meant to be used in settings where the inverse of the Bijector
139 is never explicitly computed (rather the result is cached from the forward call). In the context of
140 variational inference, this means that the InverseAutoregressiveFlow should only be used in the guide,
141 i.e. in the variational distribution. In other contexts the inverse could in principle be computed but
142 this would be a (potentially) costly computation that scales with the dimension of the input (and in
143 any case support for this is not included in this implementation).
144
145 :param input_dim: dimension of input
146 :type input_dim: int
147 :param hidden_dim: hidden dimension (number of hidden units)
148 :type hidden_dim: int
149 :param sigmoid_bias: bias on the hidden units fed into the sigmoid; default=`2.0`
150 :type sigmoid_bias: float
151 :param permutation: whether the order of the inputs should be permuted (by default the conditional
152 dependence structure of the autoregression follows the sequential order)
153 :type permutation: bool
154
155 References:
156
157 1. Improving Variational Inference with Inverse Autoregressive Flow [arXiv:1606.04934]
158 Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
159
160 2. Variational Inference with Normalizing Flows [arXiv:1505.05770]
161 Danilo Jimenez Rezende, Shakir Mohamed
162
163 3. MADE: Masked Autoencoder for Distribution Estimation [arXiv:1502.03509]
164 Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
165 """
166
167 def __init__(self, input_dim, hidden_dim, sigmoid_bias=2.0, permutation=None):
168 super(InverseAutoregressiveFlow, self).__init__()
169 self.input_dim = input_dim
170 self.hidden_dim = hidden_dim
171 self.arn = AutoRegressiveNN(input_dim, hidden_dim, output_dim_multiplier=2, permutation=permutation)
172 self.sigmoid = nn.Sigmoid()
173 self.sigmoid_bias = Variable(torch.Tensor([sigmoid_bias]))
174 self._intermediates_cache = {}
175 self.add_inverse_to_cache = True
176
177 def get_arn(self):
178 """
179 :rtype: pyro.nn.AutoRegressiveNN
180
181 Return the AutoRegressiveNN associated with the InverseAutoregressiveFlow
182 """
183 return self.arn
184
185 def __call__(self, x, *args, **kwargs):
186 """
187 :param x: the input into the bijection
188 :type x: torch.autograd.Variable
189
190 Invokes the bijection x=>y; in the prototypical context of a TransformedDistribution `x` is a
191 sample from the base distribution (or the output of a previous flow)
192 """
193 hidden = self.arn(x)
194 sigma = self.sigmoid(hidden[:, 0:self.input_dim] + self.sigmoid_bias.type_as(hidden))
195 mean = hidden[:, self.input_dim:]
196 y = sigma * x + (Variable(torch.ones(sigma.size())).type_as(sigma) - sigma) * mean
197 self._add_intermediate_to_cache(sigma, y, 'sigma')
198 return y
199
200 def inverse(self, y, *args, **kwargs):
201 """
202 :param y: the output of the bijection
203 :type y: torch.autograd.Variable
204
205 Inverts y => x. As noted above, this implementation is incapable of inverting arbitrary values
206 `y`; rather it assumes `y` is the result of a previously computed application of the bijector
207 to some `x` (which was cached on the forward call)
208 """
209 if (y, 'x') in self._intermediates_cache:
210 x = self._intermediates_cache.pop((y, 'x'))
211 return x
212 else:
213 raise KeyError("Bijector InverseAutoregressiveFlow expected to find" +
214 "key in intermediates cache but didn't")
215
216 def _add_intermediate_to_cache(self, intermediate, y, name):
217 """
218 Internal function used to cache intermediate results computed during the forward call
219 """
220 assert((y, name) not in self._intermediates_cache),\
221 "key collision in _add_intermediate_to_cache"
222 self._intermediates_cache[(y, name)] = intermediate
223
224 def log_det_jacobian(self, y, *args, **kwargs):
225 """
226 Calculates the determinant of the log jacobian
227 """
228 if (y, 'sigma') in self._intermediates_cache:
229 sigma = self._intermediates_cache.pop((y, 'sigma'))
230 else:
231 raise KeyError("Bijector InverseAutoregressiveFlow expected to find" +
232 "key in intermediates cache but didn't")
233 if 'log_pdf_mask' in kwargs:
234 return torch.sum(kwargs['log_pdf_mask'] * torch.log(sigma))
235 return torch.sum(torch.log(sigma))
236
[end of pyro/distributions/transformed_distribution.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyro/distributions/transformed_distribution.py b/pyro/distributions/transformed_distribution.py
--- a/pyro/distributions/transformed_distribution.py
+++ b/pyro/distributions/transformed_distribution.py
@@ -59,7 +59,7 @@
"""
Ref: :py:meth:`pyro.distributions.distribution.Distribution.event_shape`
"""
- return self.base_dist.batch_shape(*args, **kwargs)
+ return self.base_dist.event_shape(*args, **kwargs)
def log_pdf(self, y, *args, **kwargs):
"""
| {"golden_diff": "diff --git a/pyro/distributions/transformed_distribution.py b/pyro/distributions/transformed_distribution.py\n--- a/pyro/distributions/transformed_distribution.py\n+++ b/pyro/distributions/transformed_distribution.py\n@@ -59,7 +59,7 @@\n \"\"\"\n Ref: :py:meth:`pyro.distributions.distribution.Distribution.event_shape`\n \"\"\"\n- return self.base_dist.batch_shape(*args, **kwargs)\n+ return self.base_dist.event_shape(*args, **kwargs)\n \n def log_pdf(self, y, *args, **kwargs):\n \"\"\"\n", "issue": "TransformedDistribution's event_shape forwards to incorrect base distribution's method\nProbably a typo:\r\n\r\nhttps://github.com/uber/pyro/blob/51a2ccfe9445c7072c3150c4abe1ab1d2ac17246/pyro/distributions/transformed_distribution.py#L62\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\nfrom pyro.distributions.distribution import Distribution\nfrom pyro.nn import AutoRegressiveNN\n\n\nclass TransformedDistribution(Distribution):\n \"\"\"\n Transforms the base distribution by applying a sequence of `Bijector`s to it.\n This results in a scorable distribution (i.e. it has a `log_pdf()` method).\n\n :param base_distribution: a (continuous) base distribution; samples from this distribution\n are passed through the sequence of `Bijector`s to yield a sample from the\n `TransformedDistribution`\n :type base_distribution: pyro.distribution.Distribution\n :param bijectors: either a single Bijector or a sequence of Bijectors wrapped in a nn.ModuleList\n :returns: the transformed distribution\n \"\"\"\n\n def __init__(self, base_distribution, bijectors, *args, **kwargs):\n super(TransformedDistribution, self).__init__(*args, **kwargs)\n self.reparameterized = base_distribution.reparameterized\n self.base_dist = base_distribution\n if isinstance(bijectors, Bijector):\n self.bijectors = nn.ModuleList([bijectors])\n elif isinstance(bijectors, nn.ModuleList):\n for bijector in bijectors:\n assert isinstance(bijector, Bijector), \\\n \"bijectors must be a Bijector or a nn.ModuleList of Bijectors\"\n self.bijectors = bijectors\n\n def sample(self, *args, **kwargs):\n \"\"\"\n :returns: a sample y\n :rtype: torch.autograd.Variable\n\n Sample from base distribution and pass through bijector(s)\n \"\"\"\n x = self.base_dist.sample(*args, **kwargs)\n next_input = x\n for bijector in self.bijectors:\n y = bijector(next_input)\n if bijector.add_inverse_to_cache:\n bijector._add_intermediate_to_cache(next_input, y, 'x')\n next_input = y\n return next_input\n\n def batch_shape(self, x=None, *args, **kwargs):\n \"\"\"\n Ref: :py:meth:`pyro.distributions.distribution.Distribution.batch_shape`\n \"\"\"\n return self.base_dist.batch_shape(x, *args, **kwargs)\n\n def event_shape(self, *args, **kwargs):\n \"\"\"\n Ref: :py:meth:`pyro.distributions.distribution.Distribution.event_shape`\n \"\"\"\n return self.base_dist.batch_shape(*args, **kwargs)\n\n def log_pdf(self, y, *args, **kwargs):\n \"\"\"\n :param y: a value sampled from the transformed distribution\n :type y: torch.autograd.Variable\n\n :returns: the score (the log pdf) of y\n :rtype: torch.autograd.Variable\n\n Scores the sample by inverting the bijector(s) and computing the score using the score\n of the base distribution and the log det jacobian\n \"\"\"\n inverses = []\n next_to_invert = y\n for bijector in reversed(self.bijectors):\n inverse = bijector.inverse(next_to_invert)\n inverses.append(inverse)\n 
next_to_invert = inverse\n log_pdf_base = self.base_dist.log_pdf(inverses[-1], *args, **kwargs)\n log_det_jacobian = self.bijectors[-1].log_det_jacobian(y, *args, **kwargs)\n for bijector, inverse in zip(list(reversed(self.bijectors))[1:], inverses[:-1]):\n log_det_jacobian += bijector.log_det_jacobian(inverse, *args, **kwargs)\n return log_pdf_base - log_det_jacobian\n\n def batch_log_pdf(self, y, *args, **kwargs):\n raise NotImplementedError(\"https://github.com/uber/pyro/issues/293\")\n\n\nclass Bijector(nn.Module):\n \"\"\"\n Abstract class `Bijector`. `Bijector` are bijective transformations with computable\n log det jacobians. They are meant for use in `TransformedDistribution`.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super(Bijector, self).__init__(*args, **kwargs)\n self.add_inverse_to_cache = False\n\n def __call__(self, *args, **kwargs):\n \"\"\"\n Virtual forward method\n\n Invokes the bijection x=>y\n \"\"\"\n raise NotImplementedError()\n\n def inverse(self, *args, **kwargs):\n \"\"\"\n Virtual inverse method\n\n Inverts the bijection y => x.\n \"\"\"\n raise NotImplementedError()\n\n def log_det_jacobian(self, *args, **kwargs):\n \"\"\"\n Virtual logdet jacobian method.\n\n Computes the log det jacobian `|dy/dx|`\n \"\"\"\n raise NotImplementedError()\n\n\nclass InverseAutoregressiveFlow(Bijector):\n \"\"\"\n An implementation of an Inverse Autoregressive Flow. Together with the `TransformedDistribution` this\n provides a way to create richer variational approximations.\n\n Example usage::\n\n >>> base_dist = Normal(...)\n >>> iaf = InverseAutoregressiveFlow(...)\n >>> pyro.module(\"my_iaf\", iaf)\n >>> iaf_dist = TransformedDistribution(base_dist, iaf)\n\n Note that this implementation is only meant to be used in settings where the inverse of the Bijector\n is never explicitly computed (rather the result is cached from the forward call). In the context of\n variational inference, this means that the InverseAutoregressiveFlow should only be used in the guide,\n i.e. in the variational distribution. In other contexts the inverse could in principle be computed but\n this would be a (potentially) costly computation that scales with the dimension of the input (and in\n any case support for this is not included in this implementation).\n\n :param input_dim: dimension of input\n :type input_dim: int\n :param hidden_dim: hidden dimension (number of hidden units)\n :type hidden_dim: int\n :param sigmoid_bias: bias on the hidden units fed into the sigmoid; default=`2.0`\n :type sigmoid_bias: float\n :param permutation: whether the order of the inputs should be permuted (by default the conditional\n dependence structure of the autoregression follows the sequential order)\n :type permutation: bool\n\n References:\n\n 1. Improving Variational Inference with Inverse Autoregressive Flow [arXiv:1606.04934]\n Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling\n\n 2. Variational Inference with Normalizing Flows [arXiv:1505.05770]\n Danilo Jimenez Rezende, Shakir Mohamed\n\n 3. 
MADE: Masked Autoencoder for Distribution Estimation [arXiv:1502.03509]\n Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle\n \"\"\"\n\n def __init__(self, input_dim, hidden_dim, sigmoid_bias=2.0, permutation=None):\n super(InverseAutoregressiveFlow, self).__init__()\n self.input_dim = input_dim\n self.hidden_dim = hidden_dim\n self.arn = AutoRegressiveNN(input_dim, hidden_dim, output_dim_multiplier=2, permutation=permutation)\n self.sigmoid = nn.Sigmoid()\n self.sigmoid_bias = Variable(torch.Tensor([sigmoid_bias]))\n self._intermediates_cache = {}\n self.add_inverse_to_cache = True\n\n def get_arn(self):\n \"\"\"\n :rtype: pyro.nn.AutoRegressiveNN\n\n Return the AutoRegressiveNN associated with the InverseAutoregressiveFlow\n \"\"\"\n return self.arn\n\n def __call__(self, x, *args, **kwargs):\n \"\"\"\n :param x: the input into the bijection\n :type x: torch.autograd.Variable\n\n Invokes the bijection x=>y; in the prototypical context of a TransformedDistribution `x` is a\n sample from the base distribution (or the output of a previous flow)\n \"\"\"\n hidden = self.arn(x)\n sigma = self.sigmoid(hidden[:, 0:self.input_dim] + self.sigmoid_bias.type_as(hidden))\n mean = hidden[:, self.input_dim:]\n y = sigma * x + (Variable(torch.ones(sigma.size())).type_as(sigma) - sigma) * mean\n self._add_intermediate_to_cache(sigma, y, 'sigma')\n return y\n\n def inverse(self, y, *args, **kwargs):\n \"\"\"\n :param y: the output of the bijection\n :type y: torch.autograd.Variable\n\n Inverts y => x. As noted above, this implementation is incapable of inverting arbitrary values\n `y`; rather it assumes `y` is the result of a previously computed application of the bijector\n to some `x` (which was cached on the forward call)\n \"\"\"\n if (y, 'x') in self._intermediates_cache:\n x = self._intermediates_cache.pop((y, 'x'))\n return x\n else:\n raise KeyError(\"Bijector InverseAutoregressiveFlow expected to find\" +\n \"key in intermediates cache but didn't\")\n\n def _add_intermediate_to_cache(self, intermediate, y, name):\n \"\"\"\n Internal function used to cache intermediate results computed during the forward call\n \"\"\"\n assert((y, name) not in self._intermediates_cache),\\\n \"key collision in _add_intermediate_to_cache\"\n self._intermediates_cache[(y, name)] = intermediate\n\n def log_det_jacobian(self, y, *args, **kwargs):\n \"\"\"\n Calculates the determinant of the log jacobian\n \"\"\"\n if (y, 'sigma') in self._intermediates_cache:\n sigma = self._intermediates_cache.pop((y, 'sigma'))\n else:\n raise KeyError(\"Bijector InverseAutoregressiveFlow expected to find\" +\n \"key in intermediates cache but didn't\")\n if 'log_pdf_mask' in kwargs:\n return torch.sum(kwargs['log_pdf_mask'] * torch.log(sigma))\n return torch.sum(torch.log(sigma))\n", "path": "pyro/distributions/transformed_distribution.py"}]} | 3,508 | 128 |
gh_patches_debug_14265 | rasdani/github-patches | git_diff | CiviWiki__OpenCiviWiki-1493 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reimplement user categories feature
We recently removed some features from the user Profile view and now want to reimplement them using only Django.
This task will be to add a "categories" field to the User Profile ("settings") view.
## Task
All of these tasks should be done in the `accounts` app.
- [ ] determine how to add a multi-select field to a Django form
- [ ] users should be able to select zero or more categories when editing their profile
- [ ] when a user updates their categories, the categories should be listed on the user profile page
</issue>
<code>
[start of project/accounts/views.py]
1 """
2 Class based views.
3
4 This module will include views for the accounts app.
5 """
6
7 from accounts.authentication import account_activation_token, send_activation_email
8 from accounts.forms import ProfileEditForm, UserRegistrationForm
9 from accounts.models import Profile
10 from django.conf import settings
11 from django.contrib.auth import get_user_model, login
12 from django.contrib.auth import views as auth_views
13 from django.contrib.auth.decorators import login_required
14 from django.contrib.auth.mixins import LoginRequiredMixin
15 from django.contrib.sites.shortcuts import get_current_site
16 from django.http import HttpResponseRedirect
17 from django.shortcuts import get_object_or_404, redirect
18 from django.template.response import TemplateResponse
19 from django.urls import reverse, reverse_lazy
20 from django.utils.encoding import force_str
21 from django.utils.http import urlsafe_base64_decode
22 from django.views import View
23 from django.views.generic.edit import FormView, UpdateView
24
25
26 class ProfileFollow(LoginRequiredMixin, View):
27 def get(self, request, *args, **kwargs):
28 # Prevent users from following themselves.
29 if request.user.username == kwargs["username"]:
30 pass
31 else:
32 following_profile = Profile.objects.get(user__username=kwargs["username"])
33
34 self.request.user.profile.following.add(following_profile)
35
36 redirect_to = reverse("profile", kwargs={"username": kwargs["username"]})
37
38 return HttpResponseRedirect(redirect_to)
39
40
41 class ProfileUnfollow(LoginRequiredMixin, View):
42 def get(self, request, *args, **kwargs):
43 # Prevent users from following themselves.
44 if request.user.username == kwargs["username"]:
45 pass
46 else:
47 following_profile = Profile.objects.get(user__username=kwargs["username"])
48
49 self.request.user.profile.following.remove(following_profile)
50
51 redirect_to = reverse("profile", kwargs={"username": kwargs["username"]})
52
53 return HttpResponseRedirect(redirect_to)
54
55
56 class RegisterView(FormView):
57 """
58 A form view that handles user registration.
59 """
60
61 template_name = "accounts/register/register.html"
62 form_class = UserRegistrationForm
63 success_url = "/"
64
65 def _create_user(self, form):
66 username = form.cleaned_data["username"]
67 password = form.cleaned_data["password"]
68 email = form.cleaned_data["email"]
69 user = get_user_model().objects.create_user(username, email, password)
70 return user
71
72 def _send_email(self, user):
73 domain = get_current_site(self.request).domain
74 send_activation_email(user, domain)
75
76 def _login(self, user):
77 login(self.request, user)
78
79 def form_valid(self, form):
80 user = self._create_user(form)
81
82 self._send_email(user)
83 self._login(user)
84
85 return super(RegisterView, self).form_valid(form)
86
87
88 class ProfileActivationView(View):
89 """
90 This shows different views to the user when they are verifying
91 their account based on whether they are already verified or not.
92 """
93
94 def get(self, request, uidb64, token):
95
96 try:
97 uid = force_str(urlsafe_base64_decode(uidb64))
98 user = get_user_model().objects.get(pk=uid)
99
100 except (TypeError, ValueError, OverflowError, get_user_model().DoesNotExist):
101 user = None
102
103 redirect_link = {"href": "/", "label": "Back to Main"}
104
105 template_var = {
106 "link": redirect_link,
107 }
108
109 if user is not None and account_activation_token.check_token(user, token):
110 profile = user.profile
111
112 if profile.is_verified:
113 template_var["title"] = "Email Already Verified"
114 template_var["content"] = "You have already verified your email."
115 else:
116 profile.is_verified = True
117 profile.save()
118
119 template_var["title"] = "Email Verification Successful"
120 template_var["content"] = "Thank you for verifying your email."
121 else:
122 # invalid link
123 template_var["title"] = "Email Verification Error"
124 template_var["content"] = "Email could not be verified"
125
126 return TemplateResponse(request, "general_message.html", template_var)
127
128
129 class PasswordResetView(auth_views.PasswordResetView):
130 template_name = "accounts/users/password_reset.html"
131 email_template_name = "accounts/users/password_reset_email.html"
132 subject_template_name = "accounts/users/password_reset_subject.txt"
133 from_email = settings.EMAIL_HOST_USER
134 success_url = reverse_lazy("accounts_password_reset_done")
135
136
137 class PasswordResetDoneView(auth_views.PasswordResetDoneView):
138 template_name = "accounts/users/password_reset_done.html"
139
140
141 class PasswordResetConfirmView(auth_views.PasswordResetConfirmView):
142 template_name = "accounts/users/password_reset_confirm.html"
143 success_url = reverse_lazy("accounts_password_reset_complete")
144
145
146 class PasswordResetCompleteView(auth_views.PasswordResetCompleteView):
147 template_name = "accounts/users/password_reset_complete.html"
148
149
150 class SettingsView(LoginRequiredMixin, UpdateView):
151 """A form view to edit Profile"""
152
153 login_url = "accounts_login"
154 form_class = ProfileEditForm
155 success_url = reverse_lazy("accounts_settings")
156 template_name = "accounts/settings.html"
157
158 def get_object(self, queryset=None):
159 return Profile.objects.get(user=self.request.user)
160
161 def get_initial(self):
162 profile = Profile.objects.get(user=self.request.user)
163 self.initial.update(
164 {
165 "username": profile.user.username,
166 "email": profile.user.email,
167 "first_name": profile.first_name or None,
168 "last_name": profile.last_name or None,
169 "about_me": profile.about_me or None,
170 "profile_image": profile.profile_image or None,
171 }
172 )
173 return super(SettingsView, self).get_initial()
174
175
176 class UserProfileView(LoginRequiredMixin, View):
177 """A view that shows profile for authorized users"""
178
179 def get(self, request, username=None):
180 profile = get_object_or_404(Profile, user__username=username)
181
182 return TemplateResponse(
183 request,
184 "account.html",
185 {
186 "profile": profile,
187 },
188 )
189
190
191 class UserFollowers(LoginRequiredMixin, View):
192 """A view that shows the followers for authorized users"""
193
194 def get(self, request, username=None):
195 profile = get_object_or_404(Profile, user__username=username)
196
197 return TemplateResponse(
198 request,
199 "user_followers.html",
200 {
201 "profile": profile,
202 },
203 )
204
205
206 class ProfileFollowing(LoginRequiredMixin, View):
207 """
208 A view that shows list of profiles
209 that profile with given username is following
210 """
211
212 def get(self, request, username=None):
213 profile = get_object_or_404(Profile, user__username=username)
214
215 return TemplateResponse(
216 request,
217 "profile_following.html",
218 {
219 "profile": profile,
220 },
221 )
222
223
224 class UserCivis(LoginRequiredMixin, View):
225 """
226 A view that shows list of civis
227 that profile with given username created
228 """
229
230 def get(self, request, username=None):
231 profile = get_object_or_404(Profile, user__username=username)
232 user = profile.user
233 civis = user.civis.all()
234
235 return TemplateResponse(
236 request,
237 "user_civis.html",
238 {"profile": profile, "civis": civis},
239 )
240
241
242 @login_required
243 def expunge_user(request):
244 """
245 Delete User Information
246 """
247
248 user_model = get_user_model()
249 user = get_object_or_404(user_model, username=request.user.username)
250
251 profile = get_object_or_404(Profile, user=user)
252
253 # Expunge personally identifiable data in user
254 expunged_user_data = {
255 "is_active": False,
256 "email": "",
257 "first_name": "",
258 "last_name": "",
259 "username": f"expunged-{ user.id }",
260 }
261 user.__dict__.update(expunged_user_data)
262 user.save()
263
264 # Expunge personally identifiable data in profile
265 expunged_profile_data = {
266 "first_name": "",
267 "last_name": "",
268 "about_me": "",
269 }
270 profile.__dict__.update(expunged_profile_data)
271 profile.save()
272
273 return redirect("/")
274
275
276 class UserIssues(LoginRequiredMixin, View):
277 def get(self, request, username=None):
278 profile = get_object_or_404(Profile, user__username=username)
279 user = profile.user
280 civis = user.civis.all()
281 followers = profile.followers.all()
282
283 return TemplateResponse(
284 request,
285 "user_civis.html",
286 {"profile": profile, "followers": followers, "civis": civis},
287 )
288
[end of project/accounts/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/project/accounts/views.py b/project/accounts/views.py
--- a/project/accounts/views.py
+++ b/project/accounts/views.py
@@ -178,7 +178,6 @@
def get(self, request, username=None):
profile = get_object_or_404(Profile, user__username=username)
-
return TemplateResponse(
request,
"account.html",
@@ -193,7 +192,6 @@
def get(self, request, username=None):
profile = get_object_or_404(Profile, user__username=username)
-
return TemplateResponse(
request,
"user_followers.html",
| {"golden_diff": "diff --git a/project/accounts/views.py b/project/accounts/views.py\n--- a/project/accounts/views.py\n+++ b/project/accounts/views.py\n@@ -178,7 +178,6 @@\n \n def get(self, request, username=None):\n profile = get_object_or_404(Profile, user__username=username)\n-\n return TemplateResponse(\n request,\n \"account.html\",\n@@ -193,7 +192,6 @@\n \n def get(self, request, username=None):\n profile = get_object_or_404(Profile, user__username=username)\n-\n return TemplateResponse(\n request,\n \"user_followers.html\",\n", "issue": "Reimplement user categories feature\nWe recently removed some features from the user Profile view and now want to reimplement them using only Django.\r\n\r\nThis task will be to add a \"categories\" field to the User Profile (\"settings\") view).\r\n\r\n## Task\r\n\r\nAll of these tasks should be done in the `accounts` app.\r\n\r\n- [ ] determine how to add a multi-select field to a Django form\r\n- [ ] users should be able to select zero or more categories when editing their profile\r\n- [ ] when a user updates their categories, the categories should be listed on the user profile page\r\n\n", "before_files": [{"content": "\"\"\"\nClass based views.\n\nThis module will include views for the accounts app.\n\"\"\"\n\nfrom accounts.authentication import account_activation_token, send_activation_email\nfrom accounts.forms import ProfileEditForm, UserRegistrationForm\nfrom accounts.models import Profile\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model, login\nfrom django.contrib.auth import views as auth_views\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.sites.shortcuts import get_current_site\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse, reverse_lazy\nfrom django.utils.encoding import force_str\nfrom django.utils.http import urlsafe_base64_decode\nfrom django.views import View\nfrom django.views.generic.edit import FormView, UpdateView\n\n\nclass ProfileFollow(LoginRequiredMixin, View):\n def get(self, request, *args, **kwargs):\n # Prevent users from following themselves.\n if request.user.username == kwargs[\"username\"]:\n pass\n else:\n following_profile = Profile.objects.get(user__username=kwargs[\"username\"])\n\n self.request.user.profile.following.add(following_profile)\n\n redirect_to = reverse(\"profile\", kwargs={\"username\": kwargs[\"username\"]})\n\n return HttpResponseRedirect(redirect_to)\n\n\nclass ProfileUnfollow(LoginRequiredMixin, View):\n def get(self, request, *args, **kwargs):\n # Prevent users from following themselves.\n if request.user.username == kwargs[\"username\"]:\n pass\n else:\n following_profile = Profile.objects.get(user__username=kwargs[\"username\"])\n\n self.request.user.profile.following.remove(following_profile)\n\n redirect_to = reverse(\"profile\", kwargs={\"username\": kwargs[\"username\"]})\n\n return HttpResponseRedirect(redirect_to)\n\n\nclass RegisterView(FormView):\n \"\"\"\n A form view that handles user registration.\n \"\"\"\n\n template_name = \"accounts/register/register.html\"\n form_class = UserRegistrationForm\n success_url = \"/\"\n\n def _create_user(self, form):\n username = form.cleaned_data[\"username\"]\n password = form.cleaned_data[\"password\"]\n email = form.cleaned_data[\"email\"]\n user = 
get_user_model().objects.create_user(username, email, password)\n return user\n\n def _send_email(self, user):\n domain = get_current_site(self.request).domain\n send_activation_email(user, domain)\n\n def _login(self, user):\n login(self.request, user)\n\n def form_valid(self, form):\n user = self._create_user(form)\n\n self._send_email(user)\n self._login(user)\n\n return super(RegisterView, self).form_valid(form)\n\n\nclass ProfileActivationView(View):\n \"\"\"\n This shows different views to the user when they are verifying\n their account based on whether they are already verified or not.\n \"\"\"\n\n def get(self, request, uidb64, token):\n\n try:\n uid = force_str(urlsafe_base64_decode(uidb64))\n user = get_user_model().objects.get(pk=uid)\n\n except (TypeError, ValueError, OverflowError, get_user_model().DoesNotExist):\n user = None\n\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n\n template_var = {\n \"link\": redirect_link,\n }\n\n if user is not None and account_activation_token.check_token(user, token):\n profile = user.profile\n\n if profile.is_verified:\n template_var[\"title\"] = \"Email Already Verified\"\n template_var[\"content\"] = \"You have already verified your email.\"\n else:\n profile.is_verified = True\n profile.save()\n\n template_var[\"title\"] = \"Email Verification Successful\"\n template_var[\"content\"] = \"Thank you for verifying your email.\"\n else:\n # invalid link\n template_var[\"title\"] = \"Email Verification Error\"\n template_var[\"content\"] = \"Email could not be verified\"\n\n return TemplateResponse(request, \"general_message.html\", template_var)\n\n\nclass PasswordResetView(auth_views.PasswordResetView):\n template_name = \"accounts/users/password_reset.html\"\n email_template_name = \"accounts/users/password_reset_email.html\"\n subject_template_name = \"accounts/users/password_reset_subject.txt\"\n from_email = settings.EMAIL_HOST_USER\n success_url = reverse_lazy(\"accounts_password_reset_done\")\n\n\nclass PasswordResetDoneView(auth_views.PasswordResetDoneView):\n template_name = \"accounts/users/password_reset_done.html\"\n\n\nclass PasswordResetConfirmView(auth_views.PasswordResetConfirmView):\n template_name = \"accounts/users/password_reset_confirm.html\"\n success_url = reverse_lazy(\"accounts_password_reset_complete\")\n\n\nclass PasswordResetCompleteView(auth_views.PasswordResetCompleteView):\n template_name = \"accounts/users/password_reset_complete.html\"\n\n\nclass SettingsView(LoginRequiredMixin, UpdateView):\n \"\"\"A form view to edit Profile\"\"\"\n\n login_url = \"accounts_login\"\n form_class = ProfileEditForm\n success_url = reverse_lazy(\"accounts_settings\")\n template_name = \"accounts/settings.html\"\n\n def get_object(self, queryset=None):\n return Profile.objects.get(user=self.request.user)\n\n def get_initial(self):\n profile = Profile.objects.get(user=self.request.user)\n self.initial.update(\n {\n \"username\": profile.user.username,\n \"email\": profile.user.email,\n \"first_name\": profile.first_name or None,\n \"last_name\": profile.last_name or None,\n \"about_me\": profile.about_me or None,\n \"profile_image\": profile.profile_image or None,\n }\n )\n return super(SettingsView, self).get_initial()\n\n\nclass UserProfileView(LoginRequiredMixin, View):\n \"\"\"A view that shows profile for authorized users\"\"\"\n\n def get(self, request, username=None):\n profile = get_object_or_404(Profile, user__username=username)\n\n return TemplateResponse(\n request,\n \"account.html\",\n {\n 
\"profile\": profile,\n },\n )\n\n\nclass UserFollowers(LoginRequiredMixin, View):\n \"\"\"A view that shows the followers for authorized users\"\"\"\n\n def get(self, request, username=None):\n profile = get_object_or_404(Profile, user__username=username)\n\n return TemplateResponse(\n request,\n \"user_followers.html\",\n {\n \"profile\": profile,\n },\n )\n\n\nclass ProfileFollowing(LoginRequiredMixin, View):\n \"\"\"\n A view that shows list of profiles\n that profile with given username is following\n \"\"\"\n\n def get(self, request, username=None):\n profile = get_object_or_404(Profile, user__username=username)\n\n return TemplateResponse(\n request,\n \"profile_following.html\",\n {\n \"profile\": profile,\n },\n )\n\n\nclass UserCivis(LoginRequiredMixin, View):\n \"\"\"\n A view that shows list of civis\n that profile with given username created\n \"\"\"\n\n def get(self, request, username=None):\n profile = get_object_or_404(Profile, user__username=username)\n user = profile.user\n civis = user.civis.all()\n\n return TemplateResponse(\n request,\n \"user_civis.html\",\n {\"profile\": profile, \"civis\": civis},\n )\n\n\n@login_required\ndef expunge_user(request):\n \"\"\"\n Delete User Information\n \"\"\"\n\n user_model = get_user_model()\n user = get_object_or_404(user_model, username=request.user.username)\n\n profile = get_object_or_404(Profile, user=user)\n\n # Expunge personally identifiable data in user\n expunged_user_data = {\n \"is_active\": False,\n \"email\": \"\",\n \"first_name\": \"\",\n \"last_name\": \"\",\n \"username\": f\"expunged-{ user.id }\",\n }\n user.__dict__.update(expunged_user_data)\n user.save()\n\n # Expunge personally identifiable data in profile\n expunged_profile_data = {\n \"first_name\": \"\",\n \"last_name\": \"\",\n \"about_me\": \"\",\n }\n profile.__dict__.update(expunged_profile_data)\n profile.save()\n\n return redirect(\"/\")\n\n\nclass UserIssues(LoginRequiredMixin, View):\n def get(self, request, username=None):\n profile = get_object_or_404(Profile, user__username=username)\n user = profile.user\n civis = user.civis.all()\n followers = profile.followers.all()\n\n return TemplateResponse(\n request,\n \"user_civis.html\",\n {\"profile\": profile, \"followers\": followers, \"civis\": civis},\n )\n", "path": "project/accounts/views.py"}]} | 3,263 | 142 |
gh_patches_debug_336 | rasdani/github-patches | git_diff | piskvorky__gensim-919 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
import gensim fails since updating to Xcode 7.3
I just updated my version of Xcode to 7.3. When I run `pip install --upgrade gensim`, the process completes without any issues. However, when I try `import gensim` within the Python shell, the terminal barfs a bunch of C++ output with a block of execution errors that begins with:
`Exception: Compilation failed (return status=1): clang: error: unsupported option '-b mi2'. clang: error: unsupported option '-b mi'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-sse4a'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-tbm'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-target-feature'....`
I think this has something to do with where gensim is looking for its header files, but I'm somewhat at a loss. Any help debugging would be greatly appreciated.
</issue>
<code>
[start of gensim/corpora/__init__.py]
1 """
2 This package contains implementations of various streaming corpus I/O format.
3 """
4
5 # bring corpus classes directly into package namespace, to save some typing
6 from .indexedcorpus import IndexedCorpus # must appear before the other classes
7
8 from .mmcorpus import MmCorpus
9 from .bleicorpus import BleiCorpus
10 from .svmlightcorpus import SvmLightCorpus
11 from .lowcorpus import LowCorpus
12 from .dictionary import Dictionary
13 from .hashdictionary import HashDictionary
14 from .wikicorpus import WikiCorpus
15 from .textcorpus import TextCorpus
16 from .ucicorpus import UciCorpus
17 from .malletcorpus import MalletCorpus
18 from .sharded_corpus import ShardedCorpus
19
[end of gensim/corpora/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gensim/corpora/__init__.py b/gensim/corpora/__init__.py
--- a/gensim/corpora/__init__.py
+++ b/gensim/corpora/__init__.py
@@ -15,4 +15,3 @@
from .textcorpus import TextCorpus
from .ucicorpus import UciCorpus
from .malletcorpus import MalletCorpus
-from .sharded_corpus import ShardedCorpus
| {"golden_diff": "diff --git a/gensim/corpora/__init__.py b/gensim/corpora/__init__.py\n--- a/gensim/corpora/__init__.py\n+++ b/gensim/corpora/__init__.py\n@@ -15,4 +15,3 @@\n from .textcorpus import TextCorpus\n from .ucicorpus import UciCorpus\n from .malletcorpus import MalletCorpus\n-from .sharded_corpus import ShardedCorpus\n", "issue": "import gensim fails since updating to Xcode 7.3 \nI just updated my version of Xcode to 7.3. When I run `pip install --upgrade gensim` the process completed without any issues. However, when I try `import gensim` within the python shell the terminal barfs a bunch of C++ output with a block of execution errors that begins with: \n\n`Exception: Compilation failed (return status=1): clang: error: unsupported option '-b mi2'. clang: error: unsupported option '-b mi'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-sse4a'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-tbm'. clang: error: unknown argument: '-target-feature'. clang: error: unknown argument: '-target-feature'....`\n\nI think this has something to do with where gensim is looking for its header files, but I'm somewhat at a loss. Any help debugging would be greatly appreciated. \n\n", "before_files": [{"content": "\"\"\"\nThis package contains implementations of various streaming corpus I/O format.\n\"\"\"\n\n# bring corpus classes directly into package namespace, to save some typing\nfrom .indexedcorpus import IndexedCorpus # must appear before the other classes\n\nfrom .mmcorpus import MmCorpus\nfrom .bleicorpus import BleiCorpus\nfrom .svmlightcorpus import SvmLightCorpus\nfrom .lowcorpus import LowCorpus\nfrom .dictionary import Dictionary\nfrom .hashdictionary import HashDictionary\nfrom .wikicorpus import WikiCorpus\nfrom .textcorpus import TextCorpus\nfrom .ucicorpus import UciCorpus\nfrom .malletcorpus import MalletCorpus\nfrom .sharded_corpus import ShardedCorpus\n", "path": "gensim/corpora/__init__.py"}]} | 975 | 109 |
gh_patches_debug_1358 | rasdani/github-patches | git_diff | mirumee__ariadne-270 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade to GraphQL-core v3
I'm getting the following deprecation warning. Is this something that is already on your radar / that you are planning to resolve for the next release?
>**DeprecationWarning**: GraphQL-core-next has been discontinued. It is now released as GraphQL-core v3 and newer.
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2 import os
3 from setuptools import setup
4
5 CLASSIFIERS = [
6 "Development Status :: 4 - Beta",
7 "Intended Audience :: Developers",
8 "License :: OSI Approved :: BSD License",
9 "Operating System :: OS Independent",
10 "Programming Language :: Python",
11 "Programming Language :: Python :: 3.6",
12 "Programming Language :: Python :: 3.7",
13 "Programming Language :: Python :: 3.8",
14 "Topic :: Software Development :: Libraries :: Python Modules",
15 ]
16
17 README_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
18 with open(README_PATH, "r") as f:
19 README = f.read()
20
21 setup(
22 name="ariadne",
23 author="Mirumee Software",
24 author_email="[email protected]",
25 description="Ariadne is a Python library for implementing GraphQL servers.",
26 long_description=README,
27 long_description_content_type="text/markdown",
28 license="BSD",
29 version="0.8.0",
30 url="https://github.com/mirumee/ariadne",
31 packages=["ariadne"],
32 include_package_data=True,
33 install_requires=[
34 "graphql-core-next<3.0.0",
35 "starlette<0.14",
36 "typing_extensions>=3.6.0",
37 ],
38 extras_require={"asgi-file-uploads": ["python-multipart>=0.0.5"]},
39 classifiers=CLASSIFIERS,
40 platforms=["any"],
41 zip_safe=False,
42 )
43
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,7 @@
packages=["ariadne"],
include_package_data=True,
install_requires=[
- "graphql-core-next<3.0.0",
+ "graphql-core>=3.0.0",
"starlette<0.14",
"typing_extensions>=3.6.0",
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,7 +31,7 @@\n packages=[\"ariadne\"],\n include_package_data=True,\n install_requires=[\n- \"graphql-core-next<3.0.0\",\n+ \"graphql-core>=3.0.0\",\n \"starlette<0.14\",\n \"typing_extensions>=3.6.0\",\n ],\n", "issue": "Upgrade to GraphQL-core v3\nI'm getting the following deprecation warning. Is this something that is already on your radar / that you are planning to resolve for the next release?\r\n\r\n>**DeprecationWarning**: GraphQL-core-next has been discontinued. It is now released as GraphQL-core v3 and newer.\n", "before_files": [{"content": "#! /usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.8.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n include_package_data=True,\n install_requires=[\n \"graphql-core-next<3.0.0\",\n \"starlette<0.14\",\n \"typing_extensions>=3.6.0\",\n ],\n extras_require={\"asgi-file-uploads\": [\"python-multipart>=0.0.5\"]},\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 1,005 | 97 |
gh_patches_debug_39045 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-580 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Event callback should not check the existence of master pod
Currently, an error is logged when the event callback checks pod existence, like this: `Unknown pod name: elasticdl-master-test-mnist`. This is because `elasticdl-master-test-mnist` is the master pod, which is created by users rather than by `WorkerManager`. We should fix this since the callback should only check pod existence for worker pods.
Complete log:
```
2019-06-06 17:38:37,952 elasticdl.python.elasticdl.master.k8s_worker_manager INFO [k8s_worker_manager.py:121] Got event ADDED, phase Running for pod: elasticdl-master-test-mnist
2019-06-06 17:38:37,953 elasticdl.python.elasticdl.master.k8s_worker_manager INFO [k8s_worker_manager.py:71] Starting worker: 1
2019-06-06 17:38:37,953 elasticdl.python.elasticdl.master.k8s_worker_manager ERROR [k8s_worker_manager.py:127] Unknown pod name: elasticdl-master-test-mnist
2019-06-06 17:38:37,954 elasticdl.python.elasticdl.master.k8s_client INFO [k8s_client.py:105] Creating worker: 1
2019-06-06 17:38:37,955 elasticdl.python.elasticdl.master.k8s_worker_manager INFO [k8s_worker_manager.py:121] Got event ADDED, phase Pending for pod: elasticdl-worker-test-mnist-0
```
</issue>
<code>
[start of elasticdl/python/elasticdl/master/k8s_worker_manager.py]
1 import itertools
2 import logging
3 import threading
4
5 from collections import Counter
6 from elasticdl.python.elasticdl.master import k8s_client as k8s
7
8
9 class WorkerManager(object):
10 def __init__(
11 self,
12 task_q,
13 command,
14 args,
15 num_worker=1,
16 cpu_request="1000m",
17 cpu_limit="1000m",
18 memory_request="4096Mi",
19 memory_limit="4096Mi",
20 pod_priority=None,
21 mount_path=None,
22 volume_name=None,
23 image_pull_policy=None,
24 restart_policy="OnFailure",
25 **kwargs):
26 self._logger = logging.getLogger(__name__)
27 self._command = command
28 self._args = args
29 self._num_worker = num_worker
30 self._resource_requests = {
31 "cpu": cpu_request,
32 "memory": memory_request
33 }
34 self._resource_limits = {
35 "cpu": cpu_limit,
36 "memory": memory_limit
37 }
38 self._restart_policy = restart_policy
39 self._pod_priority = pod_priority
40 self._mount_path = mount_path
41 self._volume_name = volume_name
42 self._image_pull_policy = image_pull_policy
43 self._task_q = task_q
44 self._next_worker_id = itertools.count().__next__
45
46 # protects followed variables, which are accessed from event_cb.
47 self._lock = threading.Lock()
48 # worker id to (pod name, phase) mapping
49 # phase: None/Pending/Running/Succeeded/Failed/Unknown
50 # None: worker was just launched, haven't received event yet.
51 # Pending: worker pod not started yet
52 # Running: worker pod is running
53 # Succeeded: worker pod finishes all tasks and terminates with
54 # no issue.
55 # Failed: worker pod is killed for some reason
56 # Unknown: unknown
57 self._pods_phase = {}
58 # pod name to worker id mapping
59 self._pod_name_to_id = {}
60
61 self._relaunch_deleted_live_worker = True
62
63 self._k8s_client = k8s.Client(
64 event_callback=self._event_cb, **kwargs
65 )
66
67 def set_relaunch_deleted_live_worker(self, val):
68 self._relaunch_deleted_live_worker = bool(val)
69
70 def _start_worker(self, worker_id):
71 self._logger.info("Starting worker: %d" % worker_id)
72 with self._lock:
73 pod = self._k8s_client.create_worker(
74 worker_id,
75 self._resource_requests,
76 self._resource_limits,
77 self._pod_priority,
78 self._mount_path,
79 self._volume_name,
80 self._image_pull_policy,
81 command=self._command,
82 args=self._args + ["--worker_id", str(worker_id)],
83 restart_policy=self._restart_policy,
84 )
85 name = pod.metadata.name
86 self._pod_name_to_id[name] = worker_id
87 self._pods_phase[worker_id] = (name, None)
88
89 def start_workers(self):
90 for i in range(self._num_worker):
91 self._start_worker(self._next_worker_id())
92
93 def _remove_worker(self, worker_id):
94 with self._lock:
95 if worker_id not in self._pods_phase:
96 self._logger.error("Unknown worker id: %s" % worker_id)
97 return
98
99 # TODO: change _k8s_client to accept pod name instead of worker id.
100 self._k8s_client.delete_worker(worker_id)
101
102 def stop_relaunch_and_remove_workers(self):
103 with self._lock:
104 self._relaunch_deleted_live_worker = False
105 for worker_id in self._pods_phase:
106 self._k8s_client.delete_worker(worker_id)
107
108 def get_counters(self):
109 with self._lock:
110 return Counter([v for _, v in self._pods_phase.values()])
111
112 def _event_cb(self, event):
113 evt_obj = event.get("object")
114 evt_type = event.get("type")
115 if not evt_obj or not evt_type:
116 self._logger.error("Event doesn't have object or type: %s" % event)
117 return
118
119 pod_name = evt_obj.metadata.name
120 phase = evt_obj.status.phase
121 self._logger.info("Got event %s, phase %s for pod: %s" % (evt_type, phase, pod_name))
122
123 relaunch = False
124 with self._lock:
125 worker_id = self._pod_name_to_id.get(pod_name)
126 if worker_id is None:
127 self._logger.error("Unknown pod name: %s" % pod_name)
128 return
129
130 self._pods_phase[worker_id] = (pod_name, phase)
131 if evt_type == "DELETED":
132 del self._pods_phase[worker_id]
133 del self._pod_name_to_id[pod_name]
134 self._task_q.recover_tasks(worker_id)
135
136 # If the pod being deleted was not "Succeeded", relaunch a worker.
137 relaunch = self._relaunch_deleted_live_worker and phase != "Succeeded"
138 if relaunch:
139 self._logger.info("Relaunching worker.")
140 self._start_worker(self._next_worker_id())
141
[end of elasticdl/python/elasticdl/master/k8s_worker_manager.py]
[start of elasticdl/python/elasticdl/master/k8s_client.py]
1 import logging
2 import os
3 import threading
4 import traceback
5
6 from kubernetes import client, config, watch
7
8
9 class Client(object):
10 def __init__(
11 self, *, worker_image, namespace, job_name, event_callback
12 ):
13 """
14 ElasticDL k8s client.
15
16 Args:
17 worker_image: Docker image path for ElasticDL workers.
18 namespace: k8s namespace for ElasticDL pods.
19 job_name: ElasticDL job name, should be unique in the namespace.
20 Used as worker pod name prefix and value for "elasticdl" label.
21 event_callback: If not None, an event watcher will be created and
22 events passed to the callback.
23 """
24 if os.getenv("KUBERNETES_SERVICE_HOST"):
25 # We are running inside k8s
26 config.load_incluster_config()
27 else:
28 # Use user's kube config
29 config.load_kube_config()
30
31 self._v1 = client.CoreV1Api()
32 self._logger = logging.getLogger(__name__)
33 self._image = worker_image
34 self._ns = namespace
35 self._job_name = job_name
36 self._event_cb = event_callback
37 if self._event_cb:
38 threading.Thread(
39 target=self._watch, name="event_watcher", daemon=True
40 ).start()
41
42 def _watch(self):
43 stream = watch.Watch().stream(
44 self._v1.list_namespaced_pod,
45 self._ns,
46 label_selector="elasticdl_job_name=" + self._job_name,
47 )
48 for event in stream:
49 try:
50 self._event_cb(event)
51 except Exception:
52 traceback.print_exc()
53
54 def get_pod_name(self, worker_id):
55 return "elasticdl-worker-" + self._job_name + "-" + str(worker_id)
56
57 def _create_worker_pod(self, worker_id, resource_requests, resource_limits, priority,
58 mount_path, volume_name, image_pull_policy, command, args, restart_policy):
59 # Worker container config
60 container = client.V1Container(
61 name=self.get_pod_name(worker_id),
62 image=self._image,
63 command=command,
64 resources=client.V1ResourceRequirements(
65 requests=resource_requests,
66 limits=resource_limits
67 ),
68 image_pull_policy=image_pull_policy,
69 args=args
70 )
71
72 # Pod
73 spec = client.V1PodSpec(
74 containers=[container],
75 restart_policy=restart_policy,
76 )
77
78 # Mount data path
79 if mount_path is not None and volume_name is not None:
80 volume = client.V1Volume(
81 name='data-volume',
82 persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
83 claim_name="fileserver-claim", read_only=False))
84 spec.volumes = [volume]
85 container.volume_mounts = [client.V1VolumeMount(name=volume_name, mount_path=mount_path)]
86
87 if priority is not None:
88 spec.priority_class_name = priority
89
90 pod = client.V1Pod(
91 spec=spec,
92 metadata=client.V1ObjectMeta(
93 name=self.get_pod_name(worker_id),
94 labels={
95 "app": "elasticdl",
96 "elasticdl_job_name": self._job_name
97 },
98 ),
99 )
100 return pod
101
102 def create_worker(self, worker_id, resource_requests, resource_limits, priority=None,
103 mount_path=None, volume_name=None, image_pull_policy=None,
104 command=None, args=None, restart_policy="OnFailure"):
105 self._logger.info("Creating worker: " + str(worker_id))
106 pod = self._create_worker_pod(
107 worker_id, resource_requests, resource_limits, priority,
108 mount_path, volume_name, image_pull_policy, command=command,
109 args=args, restart_policy=restart_policy)
110 return self._v1.create_namespaced_pod(self._ns, pod)
111
112 def delete_worker(self, worker_id):
113 self._logger.info("Deleting worker: " + str(worker_id))
114 self._v1.delete_namespaced_pod(
115 self.get_pod_name(worker_id),
116 self._ns,
117 body=client.V1DeleteOptions(grace_period_seconds=0),
118 )
119
[end of elasticdl/python/elasticdl/master/k8s_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/python/elasticdl/master/k8s_client.py b/elasticdl/python/elasticdl/master/k8s_client.py
--- a/elasticdl/python/elasticdl/master/k8s_client.py
+++ b/elasticdl/python/elasticdl/master/k8s_client.py
@@ -5,6 +5,8 @@
 
 from kubernetes import client, config, watch
 
+WORKER_POD_NAME_PREFIX = "elasticdl-worker-"
+
 
 class Client(object):
def __init__(
@@ -51,14 +53,14 @@
except Exception:
                 traceback.print_exc()
 
-    def get_pod_name(self, worker_id):
- return "elasticdl-worker-" + self._job_name + "-" + str(worker_id)
+ def get_worker_pod_name(self, worker_id):
+        return WORKER_POD_NAME_PREFIX + self._job_name + "-" + str(worker_id)
 
     def _create_worker_pod(self, worker_id, resource_requests, resource_limits, priority,
mount_path, volume_name, image_pull_policy, command, args, restart_policy):
# Worker container config
container = client.V1Container(
- name=self.get_pod_name(worker_id),
+ name=self.get_worker_pod_name(worker_id),
image=self._image,
command=command,
resources=client.V1ResourceRequirements(
@@ -90,7 +92,7 @@
pod = client.V1Pod(
spec=spec,
metadata=client.V1ObjectMeta(
- name=self.get_pod_name(worker_id),
+ name=self.get_worker_pod_name(worker_id),
labels={
"app": "elasticdl",
"elasticdl_job_name": self._job_name
@@ -112,7 +114,7 @@
def delete_worker(self, worker_id):
self._logger.info("Deleting worker: " + str(worker_id))
self._v1.delete_namespaced_pod(
- self.get_pod_name(worker_id),
+ self.get_worker_pod_name(worker_id),
self._ns,
body=client.V1DeleteOptions(grace_period_seconds=0),
)
diff --git a/elasticdl/python/elasticdl/master/k8s_worker_manager.py b/elasticdl/python/elasticdl/master/k8s_worker_manager.py
--- a/elasticdl/python/elasticdl/master/k8s_worker_manager.py
+++ b/elasticdl/python/elasticdl/master/k8s_worker_manager.py
@@ -123,8 +123,8 @@
relaunch = False
with self._lock:
worker_id = self._pod_name_to_id.get(pod_name)
- if worker_id is None:
- self._logger.error("Unknown pod name: %s" % pod_name)
+ if worker_id is None and k8s.WORKER_POD_NAME_PREFIX in pod_name:
+ self._logger.error("Unknown worker pod name: %s" % pod_name)
return
self._pods_phase[worker_id] = (pod_name, phase)
| {"golden_diff": "diff --git a/elasticdl/python/elasticdl/master/k8s_client.py b/elasticdl/python/elasticdl/master/k8s_client.py\n--- a/elasticdl/python/elasticdl/master/k8s_client.py\n+++ b/elasticdl/python/elasticdl/master/k8s_client.py\n@@ -5,6 +5,8 @@\n \n from kubernetes import client, config, watch\n \n+WORKER_POD_NAME_PREFIX = \"elasticdl-worker-\"\n+\n \n class Client(object):\n def __init__(\n@@ -51,14 +53,14 @@\n except Exception:\n traceback.print_exc()\n \n- def get_pod_name(self, worker_id):\n- return \"elasticdl-worker-\" + self._job_name + \"-\" + str(worker_id)\n+ def get_worker_pod_name(self, worker_id):\n+ return WORKER_POD_NAME_PREFIX + self._job_name + \"-\" + str(worker_id)\n \n def _create_worker_pod(self, worker_id, resource_requests, resource_limits, priority,\n mount_path, volume_name, image_pull_policy, command, args, restart_policy):\n # Worker container config\n container = client.V1Container(\n- name=self.get_pod_name(worker_id),\n+ name=self.get_worker_pod_name(worker_id),\n image=self._image,\n command=command,\n resources=client.V1ResourceRequirements(\n@@ -90,7 +92,7 @@\n pod = client.V1Pod(\n spec=spec,\n metadata=client.V1ObjectMeta(\n- name=self.get_pod_name(worker_id),\n+ name=self.get_worker_pod_name(worker_id),\n labels={\n \"app\": \"elasticdl\",\n \"elasticdl_job_name\": self._job_name\n@@ -112,7 +114,7 @@\n def delete_worker(self, worker_id):\n self._logger.info(\"Deleting worker: \" + str(worker_id))\n self._v1.delete_namespaced_pod(\n- self.get_pod_name(worker_id),\n+ self.get_worker_pod_name(worker_id),\n self._ns,\n body=client.V1DeleteOptions(grace_period_seconds=0),\n )\ndiff --git a/elasticdl/python/elasticdl/master/k8s_worker_manager.py b/elasticdl/python/elasticdl/master/k8s_worker_manager.py\n--- a/elasticdl/python/elasticdl/master/k8s_worker_manager.py\n+++ b/elasticdl/python/elasticdl/master/k8s_worker_manager.py\n@@ -123,8 +123,8 @@\n relaunch = False\n with self._lock:\n worker_id = self._pod_name_to_id.get(pod_name)\n- if worker_id is None:\n- self._logger.error(\"Unknown pod name: %s\" % pod_name)\n+ if worker_id is None and k8s.WORKER_POD_NAME_PREFIX in pod_name:\n+ self._logger.error(\"Unknown worker pod name: %s\" % pod_name)\n return\n \n self._pods_phase[worker_id] = (pod_name, phase)\n", "issue": "Event callback should not check the existence of master pod\nCurrently, there's error log when event callback checks the pod existence like this `Unknown pod name: elasticdl-master-test-mnist`. This is because `elasticdl-master-test-mnist` is the master pod that's created by users instead of being created by `WorkerManager`. 
We should fix this since it should only check pod existence for worker pod.\r\n\r\nComplete log:\r\n```\r\n2019-06-06 17:38:37,952 elasticdl.python.elasticdl.master.k8s_worker_manager INFO [k8s_worker_manager.py:121] Got event ADDED, phase Running for pod: elasticdl-master-test-mnist\r\n2019-06-06 17:38:37,953 elasticdl.python.elasticdl.master.k8s_worker_manager INFO [k8s_worker_manager.py:71] Starting worker: 1\r\n2019-06-06 17:38:37,953 elasticdl.python.elasticdl.master.k8s_worker_manager ERROR [k8s_worker_manager.py:127] Unknown pod name: elasticdl-master-test-mnist\r\n2019-06-06 17:38:37,954 elasticdl.python.elasticdl.master.k8s_client INFO [k8s_client.py:105] Creating worker: 1\r\n2019-06-06 17:38:37,955 elasticdl.python.elasticdl.master.k8s_worker_manager INFO [k8s_worker_manager.py:121] Got event ADDED, phase Pending for pod: elasticdl-worker-test-mnist-0\r\n```\nEvent callback should not check the existence of master pod\nCurrently, there's error log when event callback checks the pod existence like this `Unknown pod name: elasticdl-master-test-mnist`. This is because `elasticdl-master-test-mnist` is the master pod that's created by users instead of being created by `WorkerManager`. We should fix this since it should only check pod existence for worker pod.\r\n\r\nComplete log:\r\n```\r\n2019-06-06 17:38:37,952 elasticdl.python.elasticdl.master.k8s_worker_manager INFO [k8s_worker_manager.py:121] Got event ADDED, phase Running for pod: elasticdl-master-test-mnist\r\n2019-06-06 17:38:37,953 elasticdl.python.elasticdl.master.k8s_worker_manager INFO [k8s_worker_manager.py:71] Starting worker: 1\r\n2019-06-06 17:38:37,953 elasticdl.python.elasticdl.master.k8s_worker_manager ERROR [k8s_worker_manager.py:127] Unknown pod name: elasticdl-master-test-mnist\r\n2019-06-06 17:38:37,954 elasticdl.python.elasticdl.master.k8s_client INFO [k8s_client.py:105] Creating worker: 1\r\n2019-06-06 17:38:37,955 elasticdl.python.elasticdl.master.k8s_worker_manager INFO [k8s_worker_manager.py:121] Got event ADDED, phase Pending for pod: elasticdl-worker-test-mnist-0\r\n```\n", "before_files": [{"content": "import itertools\nimport logging\nimport threading\n\nfrom collections import Counter\nfrom elasticdl.python.elasticdl.master import k8s_client as k8s\n\n\nclass WorkerManager(object):\n def __init__(\n self,\n task_q,\n command,\n args,\n num_worker=1,\n cpu_request=\"1000m\",\n cpu_limit=\"1000m\",\n memory_request=\"4096Mi\",\n memory_limit=\"4096Mi\",\n pod_priority=None,\n mount_path=None,\n volume_name=None,\n image_pull_policy=None,\n restart_policy=\"OnFailure\",\n **kwargs):\n self._logger = logging.getLogger(__name__)\n self._command = command\n self._args = args\n self._num_worker = num_worker\n self._resource_requests = {\n \"cpu\": cpu_request,\n \"memory\": memory_request\n }\n self._resource_limits = {\n \"cpu\": cpu_limit,\n \"memory\": memory_limit\n }\n self._restart_policy = restart_policy\n self._pod_priority = pod_priority\n self._mount_path = mount_path\n self._volume_name = volume_name\n self._image_pull_policy = image_pull_policy\n self._task_q = task_q\n self._next_worker_id = itertools.count().__next__\n\n # protects followed variables, which are accessed from event_cb.\n self._lock = threading.Lock()\n # worker id to (pod name, phase) mapping\n # phase: None/Pending/Running/Succeeded/Failed/Unknown\n # None: worker was just launched, haven't received event yet.\n # Pending: worker pod not started yet\n # Running: worker pod is running\n # Succeeded: worker pod finishes all tasks 
and terminates with\n # no issue.\n # Failed: worker pod is killed for some reason\n # Unknown: unknown\n self._pods_phase = {}\n # pod name to worker id mapping\n self._pod_name_to_id = {}\n\n self._relaunch_deleted_live_worker = True\n\n self._k8s_client = k8s.Client(\n event_callback=self._event_cb, **kwargs\n )\n\n def set_relaunch_deleted_live_worker(self, val):\n self._relaunch_deleted_live_worker = bool(val)\n\n def _start_worker(self, worker_id):\n self._logger.info(\"Starting worker: %d\" % worker_id)\n with self._lock:\n pod = self._k8s_client.create_worker(\n worker_id,\n self._resource_requests,\n self._resource_limits,\n self._pod_priority,\n self._mount_path,\n self._volume_name,\n self._image_pull_policy,\n command=self._command,\n args=self._args + [\"--worker_id\", str(worker_id)],\n restart_policy=self._restart_policy,\n )\n name = pod.metadata.name\n self._pod_name_to_id[name] = worker_id\n self._pods_phase[worker_id] = (name, None)\n\n def start_workers(self):\n for i in range(self._num_worker):\n self._start_worker(self._next_worker_id())\n\n def _remove_worker(self, worker_id):\n with self._lock:\n if worker_id not in self._pods_phase:\n self._logger.error(\"Unknown worker id: %s\" % worker_id)\n return\n\n # TODO: change _k8s_client to accept pod name instead of worker id.\n self._k8s_client.delete_worker(worker_id)\n\n def stop_relaunch_and_remove_workers(self):\n with self._lock:\n self._relaunch_deleted_live_worker = False\n for worker_id in self._pods_phase:\n self._k8s_client.delete_worker(worker_id)\n\n def get_counters(self):\n with self._lock:\n return Counter([v for _, v in self._pods_phase.values()])\n\n def _event_cb(self, event):\n evt_obj = event.get(\"object\")\n evt_type = event.get(\"type\")\n if not evt_obj or not evt_type:\n self._logger.error(\"Event doesn't have object or type: %s\" % event)\n return\n\n pod_name = evt_obj.metadata.name\n phase = evt_obj.status.phase\n self._logger.info(\"Got event %s, phase %s for pod: %s\" % (evt_type, phase, pod_name))\n\n relaunch = False\n with self._lock:\n worker_id = self._pod_name_to_id.get(pod_name)\n if worker_id is None:\n self._logger.error(\"Unknown pod name: %s\" % pod_name)\n return\n\n self._pods_phase[worker_id] = (pod_name, phase)\n if evt_type == \"DELETED\":\n del self._pods_phase[worker_id]\n del self._pod_name_to_id[pod_name]\n self._task_q.recover_tasks(worker_id)\n\n # If the pod being deleted was not \"Succeeded\", relaunch a worker.\n relaunch = self._relaunch_deleted_live_worker and phase != \"Succeeded\"\n if relaunch:\n self._logger.info(\"Relaunching worker.\")\n self._start_worker(self._next_worker_id())\n", "path": "elasticdl/python/elasticdl/master/k8s_worker_manager.py"}, {"content": "import logging\nimport os\nimport threading\nimport traceback\n\nfrom kubernetes import client, config, watch\n\n\nclass Client(object):\n def __init__(\n self, *, worker_image, namespace, job_name, event_callback\n ):\n \"\"\"\n ElasticDL k8s client.\n\n Args:\n worker_image: Docker image path for ElasticDL workers.\n namespace: k8s namespace for ElasticDL pods.\n job_name: ElasticDL job name, should be unique in the namespace.\n Used as worker pod name prefix and value for \"elasticdl\" label.\n event_callback: If not None, an event watcher will be created and\n events passed to the callback.\n \"\"\"\n if os.getenv(\"KUBERNETES_SERVICE_HOST\"):\n # We are running inside k8s\n config.load_incluster_config()\n else:\n # Use user's kube config\n config.load_kube_config()\n\n self._v1 = 
client.CoreV1Api()\n self._logger = logging.getLogger(__name__)\n self._image = worker_image\n self._ns = namespace\n self._job_name = job_name\n self._event_cb = event_callback\n if self._event_cb:\n threading.Thread(\n target=self._watch, name=\"event_watcher\", daemon=True\n ).start()\n\n def _watch(self):\n stream = watch.Watch().stream(\n self._v1.list_namespaced_pod,\n self._ns,\n label_selector=\"elasticdl_job_name=\" + self._job_name,\n )\n for event in stream:\n try:\n self._event_cb(event)\n except Exception:\n traceback.print_exc()\n\n def get_pod_name(self, worker_id):\n return \"elasticdl-worker-\" + self._job_name + \"-\" + str(worker_id)\n\n def _create_worker_pod(self, worker_id, resource_requests, resource_limits, priority,\n mount_path, volume_name, image_pull_policy, command, args, restart_policy):\n # Worker container config\n container = client.V1Container(\n name=self.get_pod_name(worker_id),\n image=self._image,\n command=command,\n resources=client.V1ResourceRequirements(\n requests=resource_requests,\n limits=resource_limits\n ),\n image_pull_policy=image_pull_policy,\n args=args\n )\n\n # Pod\n spec = client.V1PodSpec(\n containers=[container],\n restart_policy=restart_policy,\n )\n\n # Mount data path\n if mount_path is not None and volume_name is not None:\n volume = client.V1Volume(\n name='data-volume',\n persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(\n claim_name=\"fileserver-claim\", read_only=False))\n spec.volumes = [volume]\n container.volume_mounts = [client.V1VolumeMount(name=volume_name, mount_path=mount_path)]\n\n if priority is not None:\n spec.priority_class_name = priority\n\n pod = client.V1Pod(\n spec=spec,\n metadata=client.V1ObjectMeta(\n name=self.get_pod_name(worker_id),\n labels={\n \"app\": \"elasticdl\",\n \"elasticdl_job_name\": self._job_name\n },\n ),\n )\n return pod\n\n def create_worker(self, worker_id, resource_requests, resource_limits, priority=None,\n mount_path=None, volume_name=None, image_pull_policy=None,\n command=None, args=None, restart_policy=\"OnFailure\"):\n self._logger.info(\"Creating worker: \" + str(worker_id))\n pod = self._create_worker_pod(\n worker_id, resource_requests, resource_limits, priority,\n mount_path, volume_name, image_pull_policy, command=command,\n args=args, restart_policy=restart_policy)\n return self._v1.create_namespaced_pod(self._ns, pod)\n\n def delete_worker(self, worker_id):\n self._logger.info(\"Deleting worker: \" + str(worker_id))\n self._v1.delete_namespaced_pod(\n self.get_pod_name(worker_id),\n self._ns,\n body=client.V1DeleteOptions(grace_period_seconds=0),\n )\n", "path": "elasticdl/python/elasticdl/master/k8s_client.py"}]} | 3,941 | 654 |
gh_patches_debug_40122 | rasdani/github-patches | git_diff | google__timesketch-1717 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow other OIDC providers for authentication
**Is your feature request related to a problem? Please describe.**
Currently authentication using OIDC is only available to Google identity federation
**Describe the solution you'd like**
To be able to use other OIDC providers for authentication
</issue>
<code>
[start of timesketch/lib/google_auth.py]
1 # Copyright 2018 Google Inc. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 Based on the example code from the public Google IAP documentation:
16 https://cloud.google.com/iap/docs/signed-headers-howto
17 """
18
19 from __future__ import unicode_literals
20
21 import time
22 import json
23 import hashlib
24 import os
25 import six
26
27 # six.moves is a dynamically-created namespace that doesn't actually
28 # exist and therefore pylint can't statically analyze it.
29 # pylint: disable-msg=import-error
30 from six.moves.urllib import parse as urlparse
31
32 import jwt
33 import requests
34
35 from flask import url_for
36 from flask import current_app
37 from flask import session
38
39 from timesketch.lib.definitions import HTTP_STATUS_CODE_OK
40
41
42 CSRF_KEY = 'google_oauth2_csrf_token'
43 AUTH_URI = 'https://accounts.google.com/o/oauth2/v2/auth'
44 DISCOVERY_URL = 'https://accounts.google.com/.well-known/openid-configuration'
45
46
47 class JwtValidationError(Exception):
48 """Raised when a JSON Web Token cannot be validated."""
49
50
51 class JwtKeyError(Exception):
52 """Raised when there is a problem with the public key used for signing."""
53
54
55 class JwtFetchError(Exception):
56 """Raised when there is a problem with the public key used for signing."""
57
58
59 class DiscoveryDocumentError(Exception):
60 """Raised when there is a problem with the discovery document."""
61
62
63 def _fetch_public_keys(url):
64 """Fetch public keys used for verifying signatures.
65
66 Args:
67 url: URL where keys can be fetched.
68
69 Raises:
70 JwTKeyError if keys cannot be fetched.
71
72 Returns:
73 HTTP response.
74 """
75 try:
76 resp = requests.get(url)
77 except requests.exceptions.RequestException as e:
78 raise JwtKeyError('Cannot fetch public keys: {}'.format(e)) from e
79 if resp.status_code != HTTP_STATUS_CODE_OK:
80 raise JwtKeyError(
81 'Cannot fetch public keys: {}'.format(resp.status_code))
82 return resp.json()
83
84
85 def _fetch_oauth2_discovery_document():
86 """Fetch Google OAuth2 discovery document.
87
88 Raises:
89 DiscoveryDocumentError if document cannot be fetched.
90
91 Returns:
92 HTTP response.
93 """
94 try:
95 resp = requests.get(DISCOVERY_URL)
96 except requests.exceptions.RequestException as e:
97 raise DiscoveryDocumentError(
98 'Cannot fetch discovery document: {}'.format(e)) from e
99 if resp.status_code != HTTP_STATUS_CODE_OK:
100 raise DiscoveryDocumentError(
101 'Cannot fetch discovery_document: {}'.format(resp.status_code))
102 return resp.json()
103
104
105 def _generate_random_token():
106 """Generate random string to use as CSRF and nonce tokens.
107
108 Returns:
109 Random string.
110 """
111 return hashlib.sha256(os.urandom(1024)).hexdigest()
112
113
114 def get_oauth2_authorize_url(hosted_domain=None):
115 """Generate an authorization URL for Google's OAuth2 service.
116
117 Args:
118 hosted_domain: Optional GSuite domain to limit access to.
119
120 Returns:
121 Authorization URL.
122 """
123 csrf_token = _generate_random_token()
124 nonce = _generate_random_token()
125 redirect_uri = url_for(
126 'user_views.google_openid_connect',
127 _scheme='https',
128 _external=True
129 )
130 scopes = ('openid', 'email', 'profile')
131
132 # Add the generated CSRF token to the client session for later validation.
133 session[CSRF_KEY] = csrf_token
134
135 # Generate authorization URL
136 params = dict(
137 client_id=current_app.config.get('GOOGLE_OIDC_CLIENT_ID'),
138 scope=' '.join(scopes),
139 response_type='code',
140 access_type='online', # Because we don't need a refresh token.
141 state=csrf_token,
142 nonce=nonce, # Enable replay attack protection attack.
143 redirect_uri=redirect_uri
144 )
145 if hosted_domain:
146 params['hd'] = hosted_domain
147
148 urlencoded_params = urlparse.urlencode(params)
149 google_authorization_url = '{}?{}'.format(AUTH_URI, urlencoded_params)
150 return google_authorization_url
151
152
153 def get_encoded_jwt_over_https(code):
154 """Fetch a JSON Web Token (JWT) using a authentication code.
155
156 Args:
157 code: Authentication code obtained from an OAuth2 flow.
158
159 Raises:
160 JwtFetchError if JWT cannot be fetched.
161
162 Returns:
163 Encoded JWT.
164 """
165
166 discovery_document = get_oauth2_discovery_document()
167 redirect_uri = url_for(
168 'user_views.google_openid_connect',
169 _scheme='https',
170 _external=True
171 )
172 post_data = {
173 'code': code,
174 'client_id': current_app.config.get('GOOGLE_OIDC_CLIENT_ID'),
175 'client_secret': current_app.config.get('GOOGLE_OIDC_CLIENT_SECRET'),
176 'redirect_uri': redirect_uri,
177 'grant_type': 'authorization_code'
178 }
179 token_url = discovery_document.get('token_endpoint')
180 try:
181 response = requests.post(token_url, data=post_data)
182 encoded_jwt = response.json().get('id_token')
183 except requests.exceptions.RequestException as e:
184 raise JwtFetchError(
185 'Cannot fetch JWT: {}'.format(e)) from e
186 if response.status_code != HTTP_STATUS_CODE_OK:
187 raise JwtFetchError(
188 'Cannot fetch JWT: {}'.format(response.status_code))
189
190 if not encoded_jwt:
191 raise JwtFetchError('Cannot fetch JWT: Missing id_token in response')
192
193 return encoded_jwt
194
195
196 def decode_jwt(encoded_jwt, public_key, algorithm, expected_audience):
197 """Decode a JSON Web Token (JWT).
198
199 Args:
200 encoded_jwt: The contents of the X-Goog-IAP-JWT-Assertion header.
201 public_key: Key to verify signature of the JWT.
202 algorithm: Algorithm used for the key. E.g. ES256, RS256
203 expected_audience: Expected audience in the JWT.
204
205 Returns:
206 Decoded JWT as a dict object.
207
208 Raises:
209 JwtValidationError: if the JWT token cannot be decoded.
210 """
211 try:
212 decoded_jwt = jwt.decode(
213 jwt=encoded_jwt, key=public_key, algorithms=[algorithm],
214 audience=expected_audience)
215 return decoded_jwt
216 except (jwt.exceptions.InvalidTokenError,
217 jwt.exceptions.InvalidKeyError) as e:
218 raise JwtValidationError('JWT validation error: {}'.format(e)) from e
219
220 return None
221
222
223 def validate_jwt(decoded_jwt, expected_issuer, expected_domain=None):
224 """Decode and validate a JSON Web token (JWT).
225
226 Cloud IAP:
227 https://cloud.google.com/iap/docs/signed-headers-howto
228
229 Google OpenID Connect:
230 https://developers.google.com/identity/protocols/OpenIDConnect
231
232 Args:
233 decoded_jwt: A dict object containing the decoded JWT token.
234 expected_issuer: Expected issuer of the JWT.
235 expected_domain: Expected GSuite domain in the JWT (optional).
236
237 Raises:
238 JwtValidationError: If unable to validate the JWT.
239 """
240 # Make sure the token is not created in the future or has expired.
241 try:
242 now = int(time.time())
243 issued_at = decoded_jwt['iat']
244 if isinstance(issued_at, six.string_types):
245 issued_at = int(issued_at, 10)
246
247 expires_at = decoded_jwt['exp']
248 if isinstance(expires_at, six.string_types):
249 expires_at = int(expires_at, 10)
250
251 if issued_at > now:
252 raise JwtValidationError('Token was issued in the future')
253 if expires_at < now:
254 raise JwtValidationError('Token has expired')
255 except KeyError as e:
256 raise JwtValidationError('Missing timestamp: {}'.format(e)) from e
257
258 # Check that the issuer of the token is correct.
259 try:
260 issuer = decoded_jwt['iss']
261 if issuer != expected_issuer:
262 raise JwtValidationError('Wrong issuer: {}'.format(issuer))
263 except KeyError as e:
264 raise JwtValidationError('Missing issuer') from e
265
266 if 'email' not in decoded_jwt:
267 raise JwtValidationError('Missing email field in token')
268
269 if expected_domain:
270 try:
271 domain = decoded_jwt['hd']
272 if not domain == expected_domain:
273 raise JwtValidationError('Wrong domain: {}'.format(domain))
274 except KeyError as e:
275 raise JwtValidationError('Missing domain: {}'.format(e)) from e
276
277
278 def get_public_key_for_jwt(encoded_jwt, url):
279 """Get public key for JWT in order to verify the signature.
280
281 The keys get cached in order to limit the amount of network round trips.
282
283 Args:
284 encoded_jwt: Base64 encoded JWT.
285 url: URL where keys can be fetched.
286
287 Raises:
288 JwTKeyError if keys cannot be fetched.
289
290 Returns:
291 Key as string.
292 """
293 # Get the Key ID from the JWT header.
294 key_id = jwt.get_unverified_header(encoded_jwt).get('kid')
295 if not key_id:
296 raise JwtKeyError('Missing key ID field in token header')
297 key_cache = get_public_key_for_jwt.key_cache
298 key = key_cache.get(key_id)
299 if key:
300 return key
301
302 # Re-fetch the key file.
303 keys_json = _fetch_public_keys(url)
304 if 'keys' in keys_json:
305 _new_keys_dict = {}
306 for key_dict in keys_json['keys']:
307 public_key = jwt.algorithms.RSAAlgorithm.from_jwk(
308 json.dumps(key_dict))
309 _new_keys_dict[key_dict['kid']] = public_key
310 key_cache = _new_keys_dict
311 else:
312 key_cache = keys_json
313 get_public_key_for_jwt.key_cache = key_cache
314 key = key_cache.get(key_id)
315 if not key:
316 raise JwtKeyError('IAP public key {!r} not found'.format(key_id))
317
318 return key
319
320
321 def get_oauth2_discovery_document():
322 """Get Google OAuth2 discovery document.
323
324 The document is cached in order to limit the amount of network round trips
325 and is set to expire in 12 hours from when it was fetched.
326
327 Returns:
328 Discovery document as a dictionary.
329 """
330 now = int(time.time())
331 cache = get_oauth2_discovery_document.cache
332 discovery_document = cache.get('current')
333 if discovery_document:
334 # Check if the document has expired.
335 created_at = discovery_document['created_at']
336 expires_at = created_at + 12*60*60 # 12 hours in seconds
337 if now < expires_at:
338 return discovery_document['document']
339
340 # Re-fetch the discovery document.
341 new_discovery_document = _fetch_oauth2_discovery_document()
342 cache = {
343 'current': {
344 'created_at': now,
345 'document': new_discovery_document
346 }
347 }
348 get_oauth2_discovery_document.cache = cache
349 return new_discovery_document
350
351
352 # Used to cache public keys.
353 get_public_key_for_jwt.key_cache = {}
354
355 # Used to cache discovery documents.
356 get_oauth2_discovery_document.cache = {}
357
[end of timesketch/lib/google_auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/timesketch/lib/google_auth.py b/timesketch/lib/google_auth.py
--- a/timesketch/lib/google_auth.py
+++ b/timesketch/lib/google_auth.py
@@ -38,12 +38,10 @@
from timesketch.lib.definitions import HTTP_STATUS_CODE_OK
-
CSRF_KEY = 'google_oauth2_csrf_token'
AUTH_URI = 'https://accounts.google.com/o/oauth2/v2/auth'
DISCOVERY_URL = 'https://accounts.google.com/.well-known/openid-configuration'
-
class JwtValidationError(Exception):
"""Raised when a JSON Web Token cannot be validated."""
@@ -91,8 +89,10 @@
Returns:
HTTP response.
"""
+ discovery_url = current_app.config.get(
+ 'GOOGLE_OIDC_DISCOVERY_URL', DISCOVERY_URL)
try:
- resp = requests.get(DISCOVERY_URL)
+ resp = requests.get(discovery_url)
except requests.exceptions.RequestException as e:
raise DiscoveryDocumentError(
'Cannot fetch discovery document: {}'.format(e)) from e
@@ -120,6 +120,7 @@
Returns:
Authorization URL.
"""
+ auth_uri = current_app.config.get('GOOGLE_OIDC_AUTH_URL', AUTH_URI)
csrf_token = _generate_random_token()
nonce = _generate_random_token()
redirect_uri = url_for(
@@ -146,7 +147,7 @@
params['hd'] = hosted_domain
urlencoded_params = urlparse.urlencode(params)
- google_authorization_url = '{}?{}'.format(AUTH_URI, urlencoded_params)
+ google_authorization_url = '{}?{}'.format(auth_uri, urlencoded_params)
return google_authorization_url
@@ -199,7 +200,9 @@
Args:
encoded_jwt: The contents of the X-Goog-IAP-JWT-Assertion header.
public_key: Key to verify signature of the JWT.
- algorithm: Algorithm used for the key. E.g. ES256, RS256
+ algorithm: Algorithm used for the key. E.g. ES256, RS256. If the
+ GOOGLE_OIDC_ALGORITHM is set in the config, it will overwrite
+ the algorithm used here.
expected_audience: Expected audience in the JWT.
Returns:
@@ -208,9 +211,12 @@
Raises:
JwtValidationError: if the JWT token cannot be decoded.
"""
+ chosen_algorithm = current_app.config.get(
+ 'GOOGLE_OIDC_ALGORITHM', algorithm)
try:
decoded_jwt = jwt.decode(
- jwt=encoded_jwt, key=public_key, algorithms=[algorithm],
+ jwt=encoded_jwt, key=public_key,
+ algorithms=[chosen_algorithm],
audience=expected_audience)
return decoded_jwt
except (jwt.exceptions.InvalidTokenError,
| {"golden_diff": "diff --git a/timesketch/lib/google_auth.py b/timesketch/lib/google_auth.py\n--- a/timesketch/lib/google_auth.py\n+++ b/timesketch/lib/google_auth.py\n@@ -38,12 +38,10 @@\n \n from timesketch.lib.definitions import HTTP_STATUS_CODE_OK\n \n-\n CSRF_KEY = 'google_oauth2_csrf_token'\n AUTH_URI = 'https://accounts.google.com/o/oauth2/v2/auth'\n DISCOVERY_URL = 'https://accounts.google.com/.well-known/openid-configuration'\n \n-\n class JwtValidationError(Exception):\n \"\"\"Raised when a JSON Web Token cannot be validated.\"\"\"\n \n@@ -91,8 +89,10 @@\n Returns:\n HTTP response.\n \"\"\"\n+ discovery_url = current_app.config.get(\n+ 'GOOGLE_OIDC_DISCOVERY_URL', DISCOVERY_URL)\n try:\n- resp = requests.get(DISCOVERY_URL)\n+ resp = requests.get(discovery_url)\n except requests.exceptions.RequestException as e:\n raise DiscoveryDocumentError(\n 'Cannot fetch discovery document: {}'.format(e)) from e\n@@ -120,6 +120,7 @@\n Returns:\n Authorization URL.\n \"\"\"\n+ auth_uri = current_app.config.get('GOOGLE_OIDC_AUTH_URL', AUTH_URI)\n csrf_token = _generate_random_token()\n nonce = _generate_random_token()\n redirect_uri = url_for(\n@@ -146,7 +147,7 @@\n params['hd'] = hosted_domain\n \n urlencoded_params = urlparse.urlencode(params)\n- google_authorization_url = '{}?{}'.format(AUTH_URI, urlencoded_params)\n+ google_authorization_url = '{}?{}'.format(auth_uri, urlencoded_params)\n return google_authorization_url\n \n \n@@ -199,7 +200,9 @@\n Args:\n encoded_jwt: The contents of the X-Goog-IAP-JWT-Assertion header.\n public_key: Key to verify signature of the JWT.\n- algorithm: Algorithm used for the key. E.g. ES256, RS256\n+ algorithm: Algorithm used for the key. E.g. ES256, RS256. If the\n+ GOOGLE_OIDC_ALGORITHM is set in the config, it will overwrite\n+ the algorithm used here.\n expected_audience: Expected audience in the JWT.\n \n Returns:\n@@ -208,9 +211,12 @@\n Raises:\n JwtValidationError: if the JWT token cannot be decoded.\n \"\"\"\n+ chosen_algorithm = current_app.config.get(\n+ 'GOOGLE_OIDC_ALGORITHM', algorithm)\n try:\n decoded_jwt = jwt.decode(\n- jwt=encoded_jwt, key=public_key, algorithms=[algorithm],\n+ jwt=encoded_jwt, key=public_key,\n+ algorithms=[chosen_algorithm],\n audience=expected_audience)\n return decoded_jwt\n except (jwt.exceptions.InvalidTokenError,\n", "issue": "Allow other OIDC providers for authentication\n**Is your feature request related to a problem? Please describe.**\r\nCurrently authentication using OIDC is only available to Google identity federation\r\n\r\n**Describe the solution you'd like**\r\nTo be able to use other OIDC providers for authentication \r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Google Inc. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nBased on the example code from the public Google IAP documentation:\nhttps://cloud.google.com/iap/docs/signed-headers-howto\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport time\nimport json\nimport hashlib\nimport os\nimport six\n\n# six.moves is a dynamically-created namespace that doesn't actually\n# exist and therefore pylint can't statically analyze it.\n# pylint: disable-msg=import-error\nfrom six.moves.urllib import parse as urlparse\n\nimport jwt\nimport requests\n\nfrom flask import url_for\nfrom flask import current_app\nfrom flask import session\n\nfrom timesketch.lib.definitions import HTTP_STATUS_CODE_OK\n\n\nCSRF_KEY = 'google_oauth2_csrf_token'\nAUTH_URI = 'https://accounts.google.com/o/oauth2/v2/auth'\nDISCOVERY_URL = 'https://accounts.google.com/.well-known/openid-configuration'\n\n\nclass JwtValidationError(Exception):\n \"\"\"Raised when a JSON Web Token cannot be validated.\"\"\"\n\n\nclass JwtKeyError(Exception):\n \"\"\"Raised when there is a problem with the public key used for signing.\"\"\"\n\n\nclass JwtFetchError(Exception):\n \"\"\"Raised when there is a problem with the public key used for signing.\"\"\"\n\n\nclass DiscoveryDocumentError(Exception):\n \"\"\"Raised when there is a problem with the discovery document.\"\"\"\n\n\ndef _fetch_public_keys(url):\n \"\"\"Fetch public keys used for verifying signatures.\n\n Args:\n url: URL where keys can be fetched.\n\n Raises:\n JwTKeyError if keys cannot be fetched.\n\n Returns:\n HTTP response.\n \"\"\"\n try:\n resp = requests.get(url)\n except requests.exceptions.RequestException as e:\n raise JwtKeyError('Cannot fetch public keys: {}'.format(e)) from e\n if resp.status_code != HTTP_STATUS_CODE_OK:\n raise JwtKeyError(\n 'Cannot fetch public keys: {}'.format(resp.status_code))\n return resp.json()\n\n\ndef _fetch_oauth2_discovery_document():\n \"\"\"Fetch Google OAuth2 discovery document.\n\n Raises:\n DiscoveryDocumentError if document cannot be fetched.\n\n Returns:\n HTTP response.\n \"\"\"\n try:\n resp = requests.get(DISCOVERY_URL)\n except requests.exceptions.RequestException as e:\n raise DiscoveryDocumentError(\n 'Cannot fetch discovery document: {}'.format(e)) from e\n if resp.status_code != HTTP_STATUS_CODE_OK:\n raise DiscoveryDocumentError(\n 'Cannot fetch discovery_document: {}'.format(resp.status_code))\n return resp.json()\n\n\ndef _generate_random_token():\n \"\"\"Generate random string to use as CSRF and nonce tokens.\n\n Returns:\n Random string.\n \"\"\"\n return hashlib.sha256(os.urandom(1024)).hexdigest()\n\n\ndef get_oauth2_authorize_url(hosted_domain=None):\n \"\"\"Generate an authorization URL for Google's OAuth2 service.\n\n Args:\n hosted_domain: Optional GSuite domain to limit access to.\n\n Returns:\n Authorization URL.\n \"\"\"\n csrf_token = _generate_random_token()\n nonce = _generate_random_token()\n redirect_uri = url_for(\n 'user_views.google_openid_connect',\n _scheme='https',\n 
_external=True\n )\n scopes = ('openid', 'email', 'profile')\n\n # Add the generated CSRF token to the client session for later validation.\n session[CSRF_KEY] = csrf_token\n\n # Generate authorization URL\n params = dict(\n client_id=current_app.config.get('GOOGLE_OIDC_CLIENT_ID'),\n scope=' '.join(scopes),\n response_type='code',\n access_type='online', # Because we don't need a refresh token.\n state=csrf_token,\n nonce=nonce, # Enable replay attack protection attack.\n redirect_uri=redirect_uri\n )\n if hosted_domain:\n params['hd'] = hosted_domain\n\n urlencoded_params = urlparse.urlencode(params)\n google_authorization_url = '{}?{}'.format(AUTH_URI, urlencoded_params)\n return google_authorization_url\n\n\ndef get_encoded_jwt_over_https(code):\n \"\"\"Fetch a JSON Web Token (JWT) using a authentication code.\n\n Args:\n code: Authentication code obtained from an OAuth2 flow.\n\n Raises:\n JwtFetchError if JWT cannot be fetched.\n\n Returns:\n Encoded JWT.\n \"\"\"\n\n discovery_document = get_oauth2_discovery_document()\n redirect_uri = url_for(\n 'user_views.google_openid_connect',\n _scheme='https',\n _external=True\n )\n post_data = {\n 'code': code,\n 'client_id': current_app.config.get('GOOGLE_OIDC_CLIENT_ID'),\n 'client_secret': current_app.config.get('GOOGLE_OIDC_CLIENT_SECRET'),\n 'redirect_uri': redirect_uri,\n 'grant_type': 'authorization_code'\n }\n token_url = discovery_document.get('token_endpoint')\n try:\n response = requests.post(token_url, data=post_data)\n encoded_jwt = response.json().get('id_token')\n except requests.exceptions.RequestException as e:\n raise JwtFetchError(\n 'Cannot fetch JWT: {}'.format(e)) from e\n if response.status_code != HTTP_STATUS_CODE_OK:\n raise JwtFetchError(\n 'Cannot fetch JWT: {}'.format(response.status_code))\n\n if not encoded_jwt:\n raise JwtFetchError('Cannot fetch JWT: Missing id_token in response')\n\n return encoded_jwt\n\n\ndef decode_jwt(encoded_jwt, public_key, algorithm, expected_audience):\n \"\"\"Decode a JSON Web Token (JWT).\n\n Args:\n encoded_jwt: The contents of the X-Goog-IAP-JWT-Assertion header.\n public_key: Key to verify signature of the JWT.\n algorithm: Algorithm used for the key. E.g. 
ES256, RS256\n expected_audience: Expected audience in the JWT.\n\n Returns:\n Decoded JWT as a dict object.\n\n Raises:\n JwtValidationError: if the JWT token cannot be decoded.\n \"\"\"\n try:\n decoded_jwt = jwt.decode(\n jwt=encoded_jwt, key=public_key, algorithms=[algorithm],\n audience=expected_audience)\n return decoded_jwt\n except (jwt.exceptions.InvalidTokenError,\n jwt.exceptions.InvalidKeyError) as e:\n raise JwtValidationError('JWT validation error: {}'.format(e)) from e\n\n return None\n\n\ndef validate_jwt(decoded_jwt, expected_issuer, expected_domain=None):\n \"\"\"Decode and validate a JSON Web token (JWT).\n\n Cloud IAP:\n https://cloud.google.com/iap/docs/signed-headers-howto\n\n Google OpenID Connect:\n https://developers.google.com/identity/protocols/OpenIDConnect\n\n Args:\n decoded_jwt: A dict object containing the decoded JWT token.\n expected_issuer: Expected issuer of the JWT.\n expected_domain: Expected GSuite domain in the JWT (optional).\n\n Raises:\n JwtValidationError: If unable to validate the JWT.\n \"\"\"\n # Make sure the token is not created in the future or has expired.\n try:\n now = int(time.time())\n issued_at = decoded_jwt['iat']\n if isinstance(issued_at, six.string_types):\n issued_at = int(issued_at, 10)\n\n expires_at = decoded_jwt['exp']\n if isinstance(expires_at, six.string_types):\n expires_at = int(expires_at, 10)\n\n if issued_at > now:\n raise JwtValidationError('Token was issued in the future')\n if expires_at < now:\n raise JwtValidationError('Token has expired')\n except KeyError as e:\n raise JwtValidationError('Missing timestamp: {}'.format(e)) from e\n\n # Check that the issuer of the token is correct.\n try:\n issuer = decoded_jwt['iss']\n if issuer != expected_issuer:\n raise JwtValidationError('Wrong issuer: {}'.format(issuer))\n except KeyError as e:\n raise JwtValidationError('Missing issuer') from e\n\n if 'email' not in decoded_jwt:\n raise JwtValidationError('Missing email field in token')\n\n if expected_domain:\n try:\n domain = decoded_jwt['hd']\n if not domain == expected_domain:\n raise JwtValidationError('Wrong domain: {}'.format(domain))\n except KeyError as e:\n raise JwtValidationError('Missing domain: {}'.format(e)) from e\n\n\ndef get_public_key_for_jwt(encoded_jwt, url):\n \"\"\"Get public key for JWT in order to verify the signature.\n\n The keys get cached in order to limit the amount of network round trips.\n\n Args:\n encoded_jwt: Base64 encoded JWT.\n url: URL where keys can be fetched.\n\n Raises:\n JwTKeyError if keys cannot be fetched.\n\n Returns:\n Key as string.\n \"\"\"\n # Get the Key ID from the JWT header.\n key_id = jwt.get_unverified_header(encoded_jwt).get('kid')\n if not key_id:\n raise JwtKeyError('Missing key ID field in token header')\n key_cache = get_public_key_for_jwt.key_cache\n key = key_cache.get(key_id)\n if key:\n return key\n\n # Re-fetch the key file.\n keys_json = _fetch_public_keys(url)\n if 'keys' in keys_json:\n _new_keys_dict = {}\n for key_dict in keys_json['keys']:\n public_key = jwt.algorithms.RSAAlgorithm.from_jwk(\n json.dumps(key_dict))\n _new_keys_dict[key_dict['kid']] = public_key\n key_cache = _new_keys_dict\n else:\n key_cache = keys_json\n get_public_key_for_jwt.key_cache = key_cache\n key = key_cache.get(key_id)\n if not key:\n raise JwtKeyError('IAP public key {!r} not found'.format(key_id))\n\n return key\n\n\ndef get_oauth2_discovery_document():\n \"\"\"Get Google OAuth2 discovery document.\n\n The document is cached in order to limit the amount of network round 
trips\n and is set to expire in 12 hours from when it was fetched.\n\n Returns:\n Discovery document as a dictionary.\n \"\"\"\n now = int(time.time())\n cache = get_oauth2_discovery_document.cache\n discovery_document = cache.get('current')\n if discovery_document:\n # Check if the document has expired.\n created_at = discovery_document['created_at']\n expires_at = created_at + 12*60*60 # 12 hours in seconds\n if now < expires_at:\n return discovery_document['document']\n\n # Re-fetch the discovery document.\n new_discovery_document = _fetch_oauth2_discovery_document()\n cache = {\n 'current': {\n 'created_at': now,\n 'document': new_discovery_document\n }\n }\n get_oauth2_discovery_document.cache = cache\n return new_discovery_document\n\n\n# Used to cache public keys.\nget_public_key_for_jwt.key_cache = {}\n\n# Used to cache discovery documents.\nget_oauth2_discovery_document.cache = {}\n", "path": "timesketch/lib/google_auth.py"}]} | 4,046 | 641 |
gh_patches_debug_36393 | rasdani/github-patches | git_diff | ethereum__web3.py-1020 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rename middleware_stack to middleware_onion
### How can it be fixed?
see #1020
</issue>
<code>
[start of web3/__init__.py]
1 import pkg_resources
2 import sys
3
4 if sys.version_info < (3, 5):
5 raise EnvironmentError("Python 3.5 or above is required")
6
7 from eth_account import Account # noqa: E402
8 from web3.main import Web3 # noqa: E402
9 from web3.providers.rpc import ( # noqa: E402
10 HTTPProvider,
11 )
12 from web3.providers.eth_tester import ( # noqa: E402
13 EthereumTesterProvider,
14 )
15 from web3.providers.tester import ( # noqa: E402
16 TestRPCProvider,
17 )
18 from web3.providers.ipc import ( # noqa: E402
19 IPCProvider,
20 )
21 from web3.providers.websocket import ( # noqa: E402
22 WebsocketProvider,
23 )
24
25 __version__ = pkg_resources.get_distribution("web3").version
26
27 __all__ = [
28 "__version__",
29 "Web3",
30 "HTTPProvider",
31 "IPCProvider",
32 "WebsocketProvider",
33 "TestRPCProvider",
34 "EthereumTesterProvider",
35 "Account",
36 ]
37
[end of web3/__init__.py]
[start of web3/main.py]
1 from eth_utils import (
2 apply_to_return_value,
3 add_0x_prefix,
4 from_wei,
5 is_address,
6 is_checksum_address,
7 keccak,
8 remove_0x_prefix,
9 to_checksum_address,
10 to_wei,
11 )
12
13 from ens import ENS
14
15 from web3.admin import Admin
16 from web3.eth import Eth
17 from web3.iban import Iban
18 from web3.miner import Miner
19 from web3.net import Net
20 from web3.parity import Parity
21 from web3.personal import Personal
22 from web3.testing import Testing
23 from web3.txpool import TxPool
24 from web3.version import Version
25
26 from web3.providers.eth_tester import (
27 EthereumTesterProvider,
28 )
29 from web3.providers.ipc import (
30 IPCProvider,
31 )
32 from web3.providers.rpc import (
33 HTTPProvider,
34 )
35 from web3.providers.tester import (
36 TestRPCProvider,
37 )
38 from web3.providers.websocket import (
39 WebsocketProvider
40 )
41
42 from web3.manager import (
43 RequestManager,
44 )
45
46 from web3.utils.abi import (
47 map_abi_data,
48 )
49 from hexbytes import (
50 HexBytes,
51 )
52 from web3.utils.decorators import (
53 combomethod,
54 )
55 from web3.utils.empty import empty
56 from web3.utils.encoding import (
57 hex_encode_abi_type,
58 to_bytes,
59 to_int,
60 to_hex,
61 to_text,
62 )
63 from web3.utils.normalizers import (
64 abi_ens_resolver,
65 )
66
67
68 def get_default_modules():
69 return {
70 "eth": Eth,
71 "net": Net,
72 "personal": Personal,
73 "version": Version,
74 "txpool": TxPool,
75 "miner": Miner,
76 "admin": Admin,
77 "parity": Parity,
78 "testing": Testing,
79 }
80
81
82 class Web3:
83 # Providers
84 HTTPProvider = HTTPProvider
85 IPCProvider = IPCProvider
86 TestRPCProvider = TestRPCProvider
87 EthereumTesterProvider = EthereumTesterProvider
88 WebsocketProvider = WebsocketProvider
89
90 # Managers
91 RequestManager = RequestManager
92
93 # Iban
94 Iban = Iban
95
96 # Encoding and Decoding
97 toBytes = staticmethod(to_bytes)
98 toInt = staticmethod(to_int)
99 toHex = staticmethod(to_hex)
100 toText = staticmethod(to_text)
101
102 # Currency Utility
103 toWei = staticmethod(to_wei)
104 fromWei = staticmethod(from_wei)
105
106 # Address Utility
107 isAddress = staticmethod(is_address)
108 isChecksumAddress = staticmethod(is_checksum_address)
109 toChecksumAddress = staticmethod(to_checksum_address)
110
111 def __init__(self, providers=empty, middlewares=None, modules=None, ens=empty):
112 self.manager = RequestManager(self, providers, middlewares)
113
114 if modules is None:
115 modules = get_default_modules()
116
117 for module_name, module_class in modules.items():
118 module_class.attach(self, module_name)
119
120 self.ens = ens
121
122 @property
123 def middleware_stack(self):
124 return self.manager.middleware_stack
125
126 @property
127 def providers(self):
128 return self.manager.providers
129
130 @providers.setter
131 def providers(self, providers):
132 self.manager.providers = providers
133
134 @staticmethod
135 @apply_to_return_value(HexBytes)
136 def sha3(primitive=None, text=None, hexstr=None):
137 if isinstance(primitive, (bytes, int, type(None))):
138 input_bytes = to_bytes(primitive, hexstr=hexstr, text=text)
139 return keccak(input_bytes)
140
141 raise TypeError(
142 "You called sha3 with first arg %r and keywords %r. You must call it with one of "
143 "these approaches: sha3(text='txt'), sha3(hexstr='0x747874'), "
144 "sha3(b'\\x74\\x78\\x74'), or sha3(0x747874)." % (
145 primitive,
146 {'text': text, 'hexstr': hexstr}
147 )
148 )
149
150 @combomethod
151 def soliditySha3(cls, abi_types, values):
152 """
153 Executes sha3 (keccak256) exactly as Solidity does.
154 Takes list of abi_types as inputs -- `[uint24, int8[], bool]`
155 and list of corresponding values -- `[20, [-1, 5, 0], True]`
156 """
157 if len(abi_types) != len(values):
158 raise ValueError(
159 "Length mismatch between provided abi types and values. Got "
160 "{0} types and {1} values.".format(len(abi_types), len(values))
161 )
162
163 if isinstance(cls, type):
164 w3 = None
165 else:
166 w3 = cls
167 normalized_values = map_abi_data([abi_ens_resolver(w3)], abi_types, values)
168
169 hex_string = add_0x_prefix(''.join(
170 remove_0x_prefix(hex_encode_abi_type(abi_type, value))
171 for abi_type, value
172 in zip(abi_types, normalized_values)
173 ))
174 return cls.sha3(hexstr=hex_string)
175
176 def isConnected(self):
177 for provider in self.providers:
178 if provider.isConnected():
179 return True
180 else:
181 return False
182
183 @property
184 def ens(self):
185 if self._ens is empty:
186 return ENS.fromWeb3(self)
187 else:
188 return self._ens
189
190 @ens.setter
191 def ens(self, new_ens):
192 self._ens = new_ens
193
[end of web3/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/web3/__init__.py b/web3/__init__.py
--- a/web3/__init__.py
+++ b/web3/__init__.py
@@ -1,8 +1,17 @@
import pkg_resources
import sys
+import warnings
+
+if (3, 5) <= sys.version_info < (3, 6):
+ warnings.warn(
+ "Support for Python 3.5 will be removed in web3.py v5",
+ category=DeprecationWarning,
+        stacklevel=2)
 
 if sys.version_info < (3, 5):
- raise EnvironmentError("Python 3.5 or above is required")
+ raise EnvironmentError(
+ "Python 3.5 or above is required. "
+ "Note that support for Python 3.5 will be remove in web3.py v5")
from eth_account import Account # noqa: E402
from web3.main import Web3 # noqa: E402
diff --git a/web3/main.py b/web3/main.py
--- a/web3/main.py
+++ b/web3/main.py
@@ -4,7 +4,7 @@
from_wei,
is_address,
is_checksum_address,
- keccak,
+ keccak as eth_utils_keccak,
remove_0x_prefix,
to_checksum_address,
to_wei,
@@ -51,6 +51,7 @@
)
from web3.utils.decorators import (
combomethod,
+ deprecated_for,
)
from web3.utils.empty import empty
from web3.utils.encoding import (
@@ -132,16 +133,22 @@
self.manager.providers = providers
@staticmethod
+ @deprecated_for("This method has been renamed to keccak")
@apply_to_return_value(HexBytes)
def sha3(primitive=None, text=None, hexstr=None):
+ return Web3.keccak(primitive, text, hexstr)
+
+ @staticmethod
+ @apply_to_return_value(HexBytes)
+ def keccak(primitive=None, text=None, hexstr=None):
if isinstance(primitive, (bytes, int, type(None))):
input_bytes = to_bytes(primitive, hexstr=hexstr, text=text)
- return keccak(input_bytes)
+ return eth_utils_keccak(input_bytes)
raise TypeError(
- "You called sha3 with first arg %r and keywords %r. You must call it with one of "
- "these approaches: sha3(text='txt'), sha3(hexstr='0x747874'), "
- "sha3(b'\\x74\\x78\\x74'), or sha3(0x747874)." % (
+ "You called keccak with first arg %r and keywords %r. You must call it with one of "
+ "these approaches: keccak(text='txt'), keccak(hexstr='0x747874'), "
+ "keccak(b'\\x74\\x78\\x74'), or keccak(0x747874)." % (
primitive,
{'text': text, 'hexstr': hexstr}
)
| {"golden_diff": "diff --git a/web3/__init__.py b/web3/__init__.py\n--- a/web3/__init__.py\n+++ b/web3/__init__.py\n@@ -1,8 +1,17 @@\n import pkg_resources\n import sys\n+import warnings\n+\n+if (3, 5) <= sys.version_info < (3, 6):\n+ warnings.warn(\n+ \"Support for Python 3.5 will be removed in web3.py v5\",\n+ category=DeprecationWarning,\n+ stacklevel=2)\n \n if sys.version_info < (3, 5):\n- raise EnvironmentError(\"Python 3.5 or above is required\")\n+ raise EnvironmentError(\n+ \"Python 3.5 or above is required. \"\n+ \"Note that support for Python 3.5 will be remove in web3.py v5\")\n \n from eth_account import Account # noqa: E402\n from web3.main import Web3 # noqa: E402\ndiff --git a/web3/main.py b/web3/main.py\n--- a/web3/main.py\n+++ b/web3/main.py\n@@ -4,7 +4,7 @@\n from_wei,\n is_address,\n is_checksum_address,\n- keccak,\n+ keccak as eth_utils_keccak,\n remove_0x_prefix,\n to_checksum_address,\n to_wei,\n@@ -51,6 +51,7 @@\n )\n from web3.utils.decorators import (\n combomethod,\n+ deprecated_for,\n )\n from web3.utils.empty import empty\n from web3.utils.encoding import (\n@@ -132,16 +133,22 @@\n self.manager.providers = providers\n \n @staticmethod\n+ @deprecated_for(\"This method has been renamed to keccak\")\n @apply_to_return_value(HexBytes)\n def sha3(primitive=None, text=None, hexstr=None):\n+ return Web3.keccak(primitive, text, hexstr)\n+\n+ @staticmethod\n+ @apply_to_return_value(HexBytes)\n+ def keccak(primitive=None, text=None, hexstr=None):\n if isinstance(primitive, (bytes, int, type(None))):\n input_bytes = to_bytes(primitive, hexstr=hexstr, text=text)\n- return keccak(input_bytes)\n+ return eth_utils_keccak(input_bytes)\n \n raise TypeError(\n- \"You called sha3 with first arg %r and keywords %r. You must call it with one of \"\n- \"these approaches: sha3(text='txt'), sha3(hexstr='0x747874'), \"\n- \"sha3(b'\\\\x74\\\\x78\\\\x74'), or sha3(0x747874).\" % (\n+ \"You called keccak with first arg %r and keywords %r. 
You must call it with one of \"\n+ \"these approaches: keccak(text='txt'), keccak(hexstr='0x747874'), \"\n+ \"keccak(b'\\\\x74\\\\x78\\\\x74'), or keccak(0x747874).\" % (\n primitive,\n {'text': text, 'hexstr': hexstr}\n )\n", "issue": "Rename middleware_stack to middleware_onion\n### How can it be fixed?\r\n\r\nsee #1020 \r\n\n", "before_files": [{"content": "import pkg_resources\nimport sys\n\nif sys.version_info < (3, 5):\n raise EnvironmentError(\"Python 3.5 or above is required\")\n\nfrom eth_account import Account # noqa: E402\nfrom web3.main import Web3 # noqa: E402\nfrom web3.providers.rpc import ( # noqa: E402\n HTTPProvider,\n)\nfrom web3.providers.eth_tester import ( # noqa: E402\n EthereumTesterProvider,\n)\nfrom web3.providers.tester import ( # noqa: E402\n TestRPCProvider,\n)\nfrom web3.providers.ipc import ( # noqa: E402\n IPCProvider,\n)\nfrom web3.providers.websocket import ( # noqa: E402\n WebsocketProvider,\n)\n\n__version__ = pkg_resources.get_distribution(\"web3\").version\n\n__all__ = [\n \"__version__\",\n \"Web3\",\n \"HTTPProvider\",\n \"IPCProvider\",\n \"WebsocketProvider\",\n \"TestRPCProvider\",\n \"EthereumTesterProvider\",\n \"Account\",\n]\n", "path": "web3/__init__.py"}, {"content": "from eth_utils import (\n apply_to_return_value,\n add_0x_prefix,\n from_wei,\n is_address,\n is_checksum_address,\n keccak,\n remove_0x_prefix,\n to_checksum_address,\n to_wei,\n)\n\nfrom ens import ENS\n\nfrom web3.admin import Admin\nfrom web3.eth import Eth\nfrom web3.iban import Iban\nfrom web3.miner import Miner\nfrom web3.net import Net\nfrom web3.parity import Parity\nfrom web3.personal import Personal\nfrom web3.testing import Testing\nfrom web3.txpool import TxPool\nfrom web3.version import Version\n\nfrom web3.providers.eth_tester import (\n EthereumTesterProvider,\n)\nfrom web3.providers.ipc import (\n IPCProvider,\n)\nfrom web3.providers.rpc import (\n HTTPProvider,\n)\nfrom web3.providers.tester import (\n TestRPCProvider,\n)\nfrom web3.providers.websocket import (\n WebsocketProvider\n)\n\nfrom web3.manager import (\n RequestManager,\n)\n\nfrom web3.utils.abi import (\n map_abi_data,\n)\nfrom hexbytes import (\n HexBytes,\n)\nfrom web3.utils.decorators import (\n combomethod,\n)\nfrom web3.utils.empty import empty\nfrom web3.utils.encoding import (\n hex_encode_abi_type,\n to_bytes,\n to_int,\n to_hex,\n to_text,\n)\nfrom web3.utils.normalizers import (\n abi_ens_resolver,\n)\n\n\ndef get_default_modules():\n return {\n \"eth\": Eth,\n \"net\": Net,\n \"personal\": Personal,\n \"version\": Version,\n \"txpool\": TxPool,\n \"miner\": Miner,\n \"admin\": Admin,\n \"parity\": Parity,\n \"testing\": Testing,\n }\n\n\nclass Web3:\n # Providers\n HTTPProvider = HTTPProvider\n IPCProvider = IPCProvider\n TestRPCProvider = TestRPCProvider\n EthereumTesterProvider = EthereumTesterProvider\n WebsocketProvider = WebsocketProvider\n\n # Managers\n RequestManager = RequestManager\n\n # Iban\n Iban = Iban\n\n # Encoding and Decoding\n toBytes = staticmethod(to_bytes)\n toInt = staticmethod(to_int)\n toHex = staticmethod(to_hex)\n toText = staticmethod(to_text)\n\n # Currency Utility\n toWei = staticmethod(to_wei)\n fromWei = staticmethod(from_wei)\n\n # Address Utility\n isAddress = staticmethod(is_address)\n isChecksumAddress = staticmethod(is_checksum_address)\n toChecksumAddress = staticmethod(to_checksum_address)\n\n def __init__(self, providers=empty, middlewares=None, modules=None, ens=empty):\n self.manager = RequestManager(self, providers, middlewares)\n\n if modules 
is None:\n modules = get_default_modules()\n\n for module_name, module_class in modules.items():\n module_class.attach(self, module_name)\n\n self.ens = ens\n\n @property\n def middleware_stack(self):\n return self.manager.middleware_stack\n\n @property\n def providers(self):\n return self.manager.providers\n\n @providers.setter\n def providers(self, providers):\n self.manager.providers = providers\n\n @staticmethod\n @apply_to_return_value(HexBytes)\n def sha3(primitive=None, text=None, hexstr=None):\n if isinstance(primitive, (bytes, int, type(None))):\n input_bytes = to_bytes(primitive, hexstr=hexstr, text=text)\n return keccak(input_bytes)\n\n raise TypeError(\n \"You called sha3 with first arg %r and keywords %r. You must call it with one of \"\n \"these approaches: sha3(text='txt'), sha3(hexstr='0x747874'), \"\n \"sha3(b'\\\\x74\\\\x78\\\\x74'), or sha3(0x747874).\" % (\n primitive,\n {'text': text, 'hexstr': hexstr}\n )\n )\n\n @combomethod\n def soliditySha3(cls, abi_types, values):\n \"\"\"\n Executes sha3 (keccak256) exactly as Solidity does.\n Takes list of abi_types as inputs -- `[uint24, int8[], bool]`\n and list of corresponding values -- `[20, [-1, 5, 0], True]`\n \"\"\"\n if len(abi_types) != len(values):\n raise ValueError(\n \"Length mismatch between provided abi types and values. Got \"\n \"{0} types and {1} values.\".format(len(abi_types), len(values))\n )\n\n if isinstance(cls, type):\n w3 = None\n else:\n w3 = cls\n normalized_values = map_abi_data([abi_ens_resolver(w3)], abi_types, values)\n\n hex_string = add_0x_prefix(''.join(\n remove_0x_prefix(hex_encode_abi_type(abi_type, value))\n for abi_type, value\n in zip(abi_types, normalized_values)\n ))\n return cls.sha3(hexstr=hex_string)\n\n def isConnected(self):\n for provider in self.providers:\n if provider.isConnected():\n return True\n else:\n return False\n\n @property\n def ens(self):\n if self._ens is empty:\n return ENS.fromWeb3(self)\n else:\n return self._ens\n\n @ens.setter\n def ens(self, new_ens):\n self._ens = new_ens\n", "path": "web3/main.py"}]} | 2,561 | 712 |
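Editorial note on the entry above: the web3.py patch renames `sha3` to `keccak` while keeping the old name as a deprecated alias. Below is a minimal, self-contained sketch of that rename-with-deprecation pattern; the `deprecated_for` decorator is reimplemented here purely for illustration and only approximates the one in `web3.utils.decorators`, and the `keccak` body is a placeholder rather than a real Keccak-256 hash.

```python
import functools
import warnings


def deprecated_for(replacement_msg):
    """Warn that the wrapped callable is deprecated, then call it normally."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated: %s" % (func.__name__, replacement_msg),
                category=DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator


class Hasher:
    @staticmethod
    def keccak(data):
        # Placeholder body; the real method delegates to eth_utils.keccak.
        raise NotImplementedError("stand-in for eth_utils.keccak")

    @staticmethod
    @deprecated_for("this method has been renamed to keccak")
    def sha3(data):
        # The old entry point delegates to the new one, so behaviour is unchanged.
        return Hasher.keccak(data)
```

Callers that still use `Hasher.sha3` keep working but see a `DeprecationWarning`, which is the same migration path the patch above gives `Web3.sha3`.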
gh_patches_debug_26070 | rasdani/github-patches | git_diff | getnikola__nikola-1391 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SHOW_UNTRANSLATED_POSTS=False leads to 404s in archives
Even though SHOW_UNTRANSLATED_POSTS=False, archives still display posts that do not exist, leading to 404s everywhere.
</issue>
<code>
[start of nikola/plugins/task/archive.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2014 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 import os
28
29 # for tearDown with _reload we cannot use 'import from' to access LocaleBorg
30 import nikola.utils
31 from nikola.plugin_categories import Task
32 from nikola.utils import config_changed
33
34
35 class Archive(Task):
36 """Render the post archives."""
37
38 name = "render_archive"
39
40 def set_site(self, site):
41 site.register_path_handler('archive', self.archive_path)
42 return super(Archive, self).set_site(site)
43
44 def _prepare_task(self, kw, name, lang, posts, items, template_name,
45 title, deps_translatable=None):
46 # name: used to build permalink and destination
47 # posts, items: posts or items; only one of them should be used,
48 # the other be None
49 # template_name: name of the template to use
50 # title: the (translated) title for the generated page
51 # deps_translatable: dependencies (None if not added)
52 assert posts is not None or items is not None
53
54 context = {}
55 context["lang"] = lang
56 context["title"] = title
57 context["permalink"] = self.site.link("archive", name, lang)
58 if posts is not None:
59 context["posts"] = posts
60 n = len(posts)
61 else:
62 context["items"] = items
63 n = len(items)
64 task = self.site.generic_post_list_renderer(
65 lang,
66 [],
67 os.path.join(kw['output_folder'], self.site.path("archive", name, lang)),
68 template_name,
69 kw['filters'],
70 context,
71 )
72
73 task_cfg = {1: task['uptodate'][0].config, 2: kw, 3: n}
74 if deps_translatable is not None:
75 task_cfg[4] = deps_translatable
76 task['uptodate'] = [config_changed(task_cfg)]
77 task['basename'] = self.name
78 return task
79
80 def _generate_posts_task(self, kw, name, lang, posts, title, deps_translatable=None):
81 posts = sorted(posts, key=lambda a: a.date)
82 posts.reverse()
83 yield self._prepare_task(kw, name, lang, posts, None, "list_post.tmpl", title, deps_translatable)
84
85 def gen_tasks(self):
86 kw = {
87 "messages": self.site.MESSAGES,
88 "translations": self.site.config['TRANSLATIONS'],
89 "output_folder": self.site.config['OUTPUT_FOLDER'],
90 "filters": self.site.config['FILTERS'],
91 "create_monthly_archive": self.site.config['CREATE_MONTHLY_ARCHIVE'],
92 "create_single_archive": self.site.config['CREATE_SINGLE_ARCHIVE'],
93 "create_full_archives": self.site.config['CREATE_FULL_ARCHIVES'],
94 "create_daily_archive": self.site.config['CREATE_DAILY_ARCHIVE'],
95 }
96 self.site.scan_posts()
97 yield self.group_task()
98 # TODO add next/prev links for years
99 if (kw['create_monthly_archive'] and kw['create_single_archive']) and not kw['create_full_archives']:
100 raise Exception('Cannot create monthly and single archives at the same time.')
101 for lang in kw["translations"]:
102 if kw['create_single_archive'] and not kw['create_full_archives']:
103 # if we are creating one single archive
104 archdata = {}
105 else:
106 # if we are not creating one single archive, start with all years
107 archdata = self.site.posts_per_year.copy()
108 if kw['create_single_archive'] or kw['create_full_archives']:
109 # if we are creating one single archive, or full archives
110 archdata[None] = self.site.posts # for create_single_archive
111
112 for year, posts in archdata.items():
113 # Add archive per year or total archive
114 if year:
115 title = kw["messages"][lang]["Posts for year %s"] % year
116 else:
117 title = kw["messages"][lang]["Archive"]
118 deps_translatable = {}
119 for k in self.site._GLOBAL_CONTEXT_TRANSLATABLE:
120 deps_translatable[k] = self.site.GLOBAL_CONTEXT[k](lang)
121 if not kw["create_monthly_archive"] or kw["create_full_archives"]:
122 yield self._generate_posts_task(kw, year, lang, posts, title, deps_translatable)
123 else:
124 months = set([(m.split('/')[1], self.site.link("archive", m, lang)) for m in self.site.posts_per_month.keys() if m.startswith(str(year))])
125 months = sorted(list(months))
126 months.reverse()
127 items = [[nikola.utils.LocaleBorg().get_month_name(int(month), lang), link] for month, link in months]
128 yield self._prepare_task(kw, year, lang, None, items, "list.tmpl", title, deps_translatable)
129
130 if not kw["create_monthly_archive"] and not kw["create_full_archives"] and not kw["create_daily_archive"]:
131 continue # Just to avoid nesting the other loop in this if
132 for yearmonth, posts in self.site.posts_per_month.items():
133 # Add archive per month
134 year, month = yearmonth.split('/')
135 if kw["create_monthly_archive"] or kw["create_full_archives"]:
136 title = kw["messages"][lang]["Posts for {month} {year}"].format(
137 year=year, month=nikola.utils.LocaleBorg().get_month_name(int(month), lang))
138 yield self._generate_posts_task(kw, yearmonth, lang, posts, title)
139
140 if not kw["create_full_archives"] and not kw["create_daily_archive"]:
141 continue # Just to avoid nesting the other loop in this if
142 # Add archive per day
143 days = dict()
144 for p in posts:
145 if p.date.day not in days:
146 days[p.date.day] = list()
147 days[p.date.day].append(p)
148 for day, posts in days.items():
149 title = kw["messages"][lang]["Posts for {month} {day}, {year}"].format(
150 year=year, month=nikola.utils.LocaleBorg().get_month_name(int(month), lang), day=day)
151 yield self._generate_posts_task(kw, yearmonth + '/{0:02d}'.format(day), lang, posts, title)
152
153 if not kw['create_single_archive'] and not kw['create_full_archives']:
154 # And an "all your years" page for yearly and monthly archives
155 years = list(self.site.posts_per_year.keys())
156 years.sort(reverse=True)
157 kw['years'] = years
158 for lang in kw["translations"]:
159 items = [(y, self.site.link("archive", y, lang)) for y in years]
160 yield self._prepare_task(kw, None, lang, None, items, "list.tmpl", kw["messages"][lang]["Archive"])
161
162 def archive_path(self, name, lang):
163 if name:
164 return [_f for _f in [self.site.config['TRANSLATIONS'][lang],
165 self.site.config['ARCHIVE_PATH'], name,
166 self.site.config['INDEX_FILE']] if _f]
167 else:
168 return [_f for _f in [self.site.config['TRANSLATIONS'][lang],
169 self.site.config['ARCHIVE_PATH'],
170 self.site.config['ARCHIVE_FILENAME']] if _f]
171
[end of nikola/plugins/task/archive.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/plugins/task/archive.py b/nikola/plugins/task/archive.py
--- a/nikola/plugins/task/archive.py
+++ b/nikola/plugins/task/archive.py
@@ -90,6 +90,7 @@
"filters": self.site.config['FILTERS'],
"create_monthly_archive": self.site.config['CREATE_MONTHLY_ARCHIVE'],
"create_single_archive": self.site.config['CREATE_SINGLE_ARCHIVE'],
+ "show_untranslated_posts": self.site.config['SHOW_UNTRANSLATED_POSTS'],
"create_full_archives": self.site.config['CREATE_FULL_ARCHIVES'],
"create_daily_archive": self.site.config['CREATE_DAILY_ARCHIVE'],
}
@@ -109,6 +110,11 @@
# if we are creating one single archive, or full archives
archdata[None] = self.site.posts # for create_single_archive
+ # Filter untranslated posts (Issue #1360)
+ if not kw["show_untranslated_posts"]:
+ for year, posts in archdata.items():
+ archdata[year] = [p for p in posts if lang in p.translated_to]
+
for year, posts in archdata.items():
# Add archive per year or total archive
if year:
| {"golden_diff": "diff --git a/nikola/plugins/task/archive.py b/nikola/plugins/task/archive.py\n--- a/nikola/plugins/task/archive.py\n+++ b/nikola/plugins/task/archive.py\n@@ -90,6 +90,7 @@\n \"filters\": self.site.config['FILTERS'],\n \"create_monthly_archive\": self.site.config['CREATE_MONTHLY_ARCHIVE'],\n \"create_single_archive\": self.site.config['CREATE_SINGLE_ARCHIVE'],\n+ \"show_untranslated_posts\": self.site.config['SHOW_UNTRANSLATED_POSTS'],\n \"create_full_archives\": self.site.config['CREATE_FULL_ARCHIVES'],\n \"create_daily_archive\": self.site.config['CREATE_DAILY_ARCHIVE'],\n }\n@@ -109,6 +110,11 @@\n # if we are creating one single archive, or full archives\n archdata[None] = self.site.posts # for create_single_archive\n \n+ # Filter untranslated posts (Issue #1360)\n+ if not kw[\"show_untranslated_posts\"]:\n+ for year, posts in archdata.items():\n+ archdata[year] = [p for p in posts if lang in p.translated_to]\n+\n for year, posts in archdata.items():\n # Add archive per year or total archive\n if year:\n", "issue": "SHOW_UNTRANSLATED_POSTS=False leads to 404s in archives\nEven though SHOW_UNTRANSLATED_POSTS=False, archives still display posts that do not exist, leading to 404s everywhere.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2014 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nimport os\n\n# for tearDown with _reload we cannot use 'import from' to access LocaleBorg\nimport nikola.utils\nfrom nikola.plugin_categories import Task\nfrom nikola.utils import config_changed\n\n\nclass Archive(Task):\n \"\"\"Render the post archives.\"\"\"\n\n name = \"render_archive\"\n\n def set_site(self, site):\n site.register_path_handler('archive', self.archive_path)\n return super(Archive, self).set_site(site)\n\n def _prepare_task(self, kw, name, lang, posts, items, template_name,\n title, deps_translatable=None):\n # name: used to build permalink and destination\n # posts, items: posts or items; only one of them should be used,\n # the other be None\n # template_name: name of the template to use\n # title: the (translated) title for the generated page\n # deps_translatable: dependencies (None if not added)\n assert posts is not None or items is not None\n\n context = {}\n context[\"lang\"] = lang\n context[\"title\"] = title\n context[\"permalink\"] = self.site.link(\"archive\", name, lang)\n if posts is not None:\n context[\"posts\"] = posts\n n = len(posts)\n else:\n context[\"items\"] = items\n n = len(items)\n task = self.site.generic_post_list_renderer(\n lang,\n [],\n os.path.join(kw['output_folder'], self.site.path(\"archive\", name, lang)),\n template_name,\n kw['filters'],\n context,\n )\n\n task_cfg = {1: task['uptodate'][0].config, 2: kw, 3: n}\n if deps_translatable is not None:\n task_cfg[4] = deps_translatable\n task['uptodate'] = [config_changed(task_cfg)]\n task['basename'] = self.name\n return task\n\n def _generate_posts_task(self, kw, name, lang, posts, title, deps_translatable=None):\n posts = sorted(posts, key=lambda a: a.date)\n posts.reverse()\n yield self._prepare_task(kw, name, lang, posts, None, \"list_post.tmpl\", title, deps_translatable)\n\n def gen_tasks(self):\n kw = {\n \"messages\": self.site.MESSAGES,\n \"translations\": self.site.config['TRANSLATIONS'],\n \"output_folder\": self.site.config['OUTPUT_FOLDER'],\n \"filters\": self.site.config['FILTERS'],\n \"create_monthly_archive\": self.site.config['CREATE_MONTHLY_ARCHIVE'],\n \"create_single_archive\": self.site.config['CREATE_SINGLE_ARCHIVE'],\n \"create_full_archives\": self.site.config['CREATE_FULL_ARCHIVES'],\n \"create_daily_archive\": self.site.config['CREATE_DAILY_ARCHIVE'],\n }\n self.site.scan_posts()\n yield self.group_task()\n # TODO add next/prev links for years\n if (kw['create_monthly_archive'] and kw['create_single_archive']) and not kw['create_full_archives']:\n raise Exception('Cannot create monthly and single archives at the same time.')\n for lang in kw[\"translations\"]:\n if kw['create_single_archive'] and not kw['create_full_archives']:\n # if we are creating one single archive\n archdata = {}\n else:\n # if we are not creating one single archive, start with all years\n archdata = self.site.posts_per_year.copy()\n if kw['create_single_archive'] or kw['create_full_archives']:\n # if we are creating one single archive, or full archives\n archdata[None] = self.site.posts # for create_single_archive\n\n for year, posts in archdata.items():\n # Add archive per year or total archive\n if year:\n title = kw[\"messages\"][lang][\"Posts for year %s\"] % year\n else:\n title = 
kw[\"messages\"][lang][\"Archive\"]\n deps_translatable = {}\n for k in self.site._GLOBAL_CONTEXT_TRANSLATABLE:\n deps_translatable[k] = self.site.GLOBAL_CONTEXT[k](lang)\n if not kw[\"create_monthly_archive\"] or kw[\"create_full_archives\"]:\n yield self._generate_posts_task(kw, year, lang, posts, title, deps_translatable)\n else:\n months = set([(m.split('/')[1], self.site.link(\"archive\", m, lang)) for m in self.site.posts_per_month.keys() if m.startswith(str(year))])\n months = sorted(list(months))\n months.reverse()\n items = [[nikola.utils.LocaleBorg().get_month_name(int(month), lang), link] for month, link in months]\n yield self._prepare_task(kw, year, lang, None, items, \"list.tmpl\", title, deps_translatable)\n\n if not kw[\"create_monthly_archive\"] and not kw[\"create_full_archives\"] and not kw[\"create_daily_archive\"]:\n continue # Just to avoid nesting the other loop in this if\n for yearmonth, posts in self.site.posts_per_month.items():\n # Add archive per month\n year, month = yearmonth.split('/')\n if kw[\"create_monthly_archive\"] or kw[\"create_full_archives\"]:\n title = kw[\"messages\"][lang][\"Posts for {month} {year}\"].format(\n year=year, month=nikola.utils.LocaleBorg().get_month_name(int(month), lang))\n yield self._generate_posts_task(kw, yearmonth, lang, posts, title)\n\n if not kw[\"create_full_archives\"] and not kw[\"create_daily_archive\"]:\n continue # Just to avoid nesting the other loop in this if\n # Add archive per day\n days = dict()\n for p in posts:\n if p.date.day not in days:\n days[p.date.day] = list()\n days[p.date.day].append(p)\n for day, posts in days.items():\n title = kw[\"messages\"][lang][\"Posts for {month} {day}, {year}\"].format(\n year=year, month=nikola.utils.LocaleBorg().get_month_name(int(month), lang), day=day)\n yield self._generate_posts_task(kw, yearmonth + '/{0:02d}'.format(day), lang, posts, title)\n\n if not kw['create_single_archive'] and not kw['create_full_archives']:\n # And an \"all your years\" page for yearly and monthly archives\n years = list(self.site.posts_per_year.keys())\n years.sort(reverse=True)\n kw['years'] = years\n for lang in kw[\"translations\"]:\n items = [(y, self.site.link(\"archive\", y, lang)) for y in years]\n yield self._prepare_task(kw, None, lang, None, items, \"list.tmpl\", kw[\"messages\"][lang][\"Archive\"])\n\n def archive_path(self, name, lang):\n if name:\n return [_f for _f in [self.site.config['TRANSLATIONS'][lang],\n self.site.config['ARCHIVE_PATH'], name,\n self.site.config['INDEX_FILE']] if _f]\n else:\n return [_f for _f in [self.site.config['TRANSLATIONS'][lang],\n self.site.config['ARCHIVE_PATH'],\n self.site.config['ARCHIVE_FILENAME']] if _f]\n", "path": "nikola/plugins/task/archive.py"}]} | 2,806 | 275 |
gh_patches_debug_33126 | rasdani/github-patches | git_diff | Flexget__Flexget-1474 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unicode issues with exec plugin
Same as https://github.com/Flexget/Flexget/issues/1269, but with exec plugin
It was working well recently, but now if the given path contains unicode (e. g. "ñ") characters it crashes.
Running flexget version 2.1.23 installed using pip, on Windows 8.1 x64 + Python 2.7.12
Relevant snippet of crash report (crash_report.2016.07.25.115814396000.log):
```
2016-07-25 11:58 CRITICAL task remove-folders BUG: Unhandled error in plugin exec: 'ascii' codec can't encode character u'\xf1' in position 124: ordinal not in range(128)
Traceback (most recent call last):
File "c:\python27\lib\site-packages\flexget\task.py", line 444, in __run_plugin
return method(*args, **kwargs)
File "c:\python27\lib\site-packages\flexget\event.py", line 23, in __call__
return self.func(*args, **kwargs)
File "c:\python27\lib\site-packages\flexget\plugins\output\exec.py", line 192, in phase_handler
self.execute(task, 'on_' + phase, config)
File "c:\python27\lib\site-packages\flexget\plugins\output\exec.py", line 163, in execute
if (self.execute_cmd(cmd, allow_background, config['encoding']) != 0 and
File "c:\python27\lib\site-packages\flexget\plugins\output\exec.py", line 109, in execute_cmd
stderr=subprocess.STDOUT, close_fds=False)
File "c:\python27\lib\subprocess.py", line 711, in __init__
errread, errwrite)
File "c:\python27\lib\subprocess.py", line 929, in _execute_child
args = '{} /c "{}"'.format (comspec, args)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in position 124: ordinal not in range(128)
```
</issue>
<code>
[start of flexget/plugins/output/exec.py]
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # pylint: disable=unused-import, redefined-builtin
3 from past.builtins import basestring
4
5 import logging
6 import subprocess
7
8 from flexget import plugin
9 from flexget.entry import Entry
10 from flexget.event import event
11 from flexget.config_schema import one_or_more
12 from flexget.utils.template import render_from_entry, render_from_task, RenderError
13 from flexget.utils.tools import io_encoding, native_str_to_text
14
15 log = logging.getLogger('exec')
16
17
18 class EscapingEntry(Entry):
19 """Helper class, same as a Entry, but returns all string value with quotes escaped."""
20
21 def __init__(self, entry):
22 super(EscapingEntry, self).__init__(entry)
23
24 def __getitem__(self, key):
25 value = super(EscapingEntry, self).__getitem__(key)
26 # TODO: May need to be different depending on OS
27 if isinstance(value, basestring):
28 value = value.replace('"', '\\"')
29 return value
30
31
32 class PluginExec(object):
33 """
34 Execute commands
35
36     Simple example, execute command for entries that reach output::
37
38 exec: echo 'found {{title}} at {{url}}' > file
39
40 Advanced Example::
41
42 exec:
43 on_start:
44 phase: echo "Started"
45 on_input:
46 for_entries: echo 'got {{title}}'
47 on_output:
48 for_accepted: echo 'accepted {{title}} - {{url}} > file
49
50 You can use all (available) entry fields in the command.
51 """
52
53 NAME = 'exec'
54 HANDLED_PHASES = ['start', 'input', 'filter', 'output', 'exit']
55
56 schema = {
57 'oneOf': [
58 one_or_more({'type': 'string'}),
59 {
60 'type': 'object',
61 'properties': {
62 'on_start': {'$ref': '#/definitions/phaseSettings'},
63 'on_input': {'$ref': '#/definitions/phaseSettings'},
64 'on_filter': {'$ref': '#/definitions/phaseSettings'},
65 'on_output': {'$ref': '#/definitions/phaseSettings'},
66 'on_exit': {'$ref': '#/definitions/phaseSettings'},
67 'fail_entries': {'type': 'boolean'},
68 'auto_escape': {'type': 'boolean'},
69 'encoding': {'type': 'string'},
70 'allow_background': {'type': 'boolean'}
71 },
72 'additionalProperties': False
73 }
74 ],
75 'definitions': {
76 'phaseSettings': {
77 'type': 'object',
78 'properties': {
79 'phase': one_or_more({'type': 'string'}),
80 'for_entries': one_or_more({'type': 'string'}),
81 'for_accepted': one_or_more({'type': 'string'}),
82 'for_rejected': one_or_more({'type': 'string'}),
83 'for_failed': one_or_more({'type': 'string'})
84 },
85 'additionalProperties': False
86 }
87 }
88 }
89
90 def prepare_config(self, config):
91 if isinstance(config, basestring):
92 config = [config]
93 if isinstance(config, list):
94 config = {'on_output': {'for_accepted': config}}
95 if not config.get('encoding'):
96 config['encoding'] = io_encoding
97 for phase_name in config:
98 if phase_name.startswith('on_'):
99 for items_name in config[phase_name]:
100 if isinstance(config[phase_name][items_name], basestring):
101 config[phase_name][items_name] = [config[phase_name][items_name]]
102
103 return config
104
105 def execute_cmd(self, cmd, allow_background, encoding):
106 log.verbose('Executing: %s' % cmd)
107 # if PY2: cmd = cmd.encode(encoding) ?
108 p = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
109 stderr=subprocess.STDOUT, close_fds=False)
110 if not allow_background:
111 (r, w) = (p.stdout, p.stdin)
112 response = native_str_to_text(r.read(), encoding=encoding, errors='replace')
113 r.close()
114 w.close()
115 if response:
116 log.info('Stdout: %s' % response)
117 return p.wait()
118
119 def execute(self, task, phase_name, config):
120 config = self.prepare_config(config)
121 if phase_name not in config:
122 log.debug('phase %s not configured' % phase_name)
123 return
124
125 name_map = {'for_entries': task.entries, 'for_accepted': task.accepted,
126 'for_rejected': task.rejected, 'for_failed': task.failed}
127
128 allow_background = config.get('allow_background')
129 for operation, entries in name_map.items():
130 if operation not in config[phase_name]:
131 continue
132
133 log.debug('running phase_name: %s operation: %s entries: %s' % (phase_name, operation, len(entries)))
134
135 for entry in entries:
136 for cmd in config[phase_name][operation]:
137 entrydict = EscapingEntry(entry) if config.get('auto_escape') else entry
138 # Do string replacement from entry, but make sure quotes get escaped
139 try:
140 cmd = render_from_entry(cmd, entrydict)
141 except RenderError as e:
142 log.error('Could not set exec command for %s: %s' % (entry['title'], e))
143 # fail the entry if configured to do so
144 if config.get('fail_entries'):
145 entry.fail('Entry `%s` does not have required fields for string replacement.' %
146 entry['title'])
147 continue
148
149 log.debug('phase_name: %s operation: %s cmd: %s' % (phase_name, operation, cmd))
150 if task.options.test:
151 log.info('Would execute: %s' % cmd)
152 else:
153 # Make sure the command can be encoded into appropriate encoding, don't actually encode yet,
154 # so logging continues to work.
155 try:
156 cmd.encode(config['encoding'])
157 except UnicodeEncodeError:
158 log.error('Unable to encode cmd `%s` to %s' % (cmd, config['encoding']))
159 if config.get('fail_entries'):
160 entry.fail('cmd `%s` could not be encoded to %s.' % (cmd, config['encoding']))
161 continue
162 # Run the command, fail entries with non-zero return code if configured to
163 if (self.execute_cmd(cmd, allow_background, config['encoding']) != 0 and
164 config.get('fail_entries')):
165 entry.fail('exec return code was non-zero')
166
167 # phase keyword in this
168 if 'phase' in config[phase_name]:
169 for cmd in config[phase_name]['phase']:
170 try:
171 cmd = render_from_task(cmd, task)
172 except RenderError as e:
173 log.error('Error rendering `%s`: %s' % (cmd, e))
174 else:
175 log.debug('phase cmd: %s' % cmd)
176 if task.options.test:
177 log.info('Would execute: %s' % cmd)
178 else:
179 self.execute_cmd(cmd, allow_background, config['encoding'])
180
181 def __getattr__(self, item):
182 """Creates methods to handle task phases."""
183 for phase in self.HANDLED_PHASES:
184 if item == plugin.phase_methods[phase]:
185 # A phase method we handle has been requested
186 break
187 else:
188 # We don't handle this phase
189 raise AttributeError(item)
190
191 def phase_handler(task, config):
192 self.execute(task, 'on_' + phase, config)
193
194 # Make sure we run after other plugins so exec can use their output
195 phase_handler.priority = 100
196 return phase_handler
197
198
199 @event('plugin.register')
200 def register_plugin():
201 plugin.register(PluginExec, 'exec', api_ver=2)
202
[end of flexget/plugins/output/exec.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flexget/plugins/output/exec.py b/flexget/plugins/output/exec.py
--- a/flexget/plugins/output/exec.py
+++ b/flexget/plugins/output/exec.py
@@ -1,6 +1,7 @@
from __future__ import unicode_literals, division, absolute_import
from builtins import * # pylint: disable=unused-import, redefined-builtin
from past.builtins import basestring
+from future.utils import text_to_native_str
import logging
import subprocess
@@ -10,7 +11,7 @@
from flexget.event import event
from flexget.config_schema import one_or_more
from flexget.utils.template import render_from_entry, render_from_task, RenderError
-from flexget.utils.tools import io_encoding, native_str_to_text
+from flexget.utils.tools import io_encoding
log = logging.getLogger('exec')
@@ -103,17 +104,16 @@
return config
def execute_cmd(self, cmd, allow_background, encoding):
- log.verbose('Executing: %s' % cmd)
- # if PY2: cmd = cmd.encode(encoding) ?
- p = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT, close_fds=False)
+ log.verbose('Executing: %s', cmd)
+ p = subprocess.Popen(text_to_native_str(cmd, encoding=io_encoding), shell=True, stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE, stderr=subprocess.STDOUT, close_fds=False)
if not allow_background:
- (r, w) = (p.stdout, p.stdin)
- response = native_str_to_text(r.read(), encoding=encoding, errors='replace')
+ r, w = (p.stdout, p.stdin)
+ response = r.read().decode(io_encoding)
r.close()
w.close()
if response:
- log.info('Stdout: %s' % response)
+ log.info('Stdout: %s', response.rstrip()) # rstrip to get rid of newlines
return p.wait()
def execute(self, task, phase_name, config):
| {"golden_diff": "diff --git a/flexget/plugins/output/exec.py b/flexget/plugins/output/exec.py\n--- a/flexget/plugins/output/exec.py\n+++ b/flexget/plugins/output/exec.py\n@@ -1,6 +1,7 @@\n from __future__ import unicode_literals, division, absolute_import\n from builtins import * # pylint: disable=unused-import, redefined-builtin\n from past.builtins import basestring\n+from future.utils import text_to_native_str\n \n import logging\n import subprocess\n@@ -10,7 +11,7 @@\n from flexget.event import event\n from flexget.config_schema import one_or_more\n from flexget.utils.template import render_from_entry, render_from_task, RenderError\n-from flexget.utils.tools import io_encoding, native_str_to_text\n+from flexget.utils.tools import io_encoding\n \n log = logging.getLogger('exec')\n \n@@ -103,17 +104,16 @@\n return config\n \n def execute_cmd(self, cmd, allow_background, encoding):\n- log.verbose('Executing: %s' % cmd)\n- # if PY2: cmd = cmd.encode(encoding) ?\n- p = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,\n- stderr=subprocess.STDOUT, close_fds=False)\n+ log.verbose('Executing: %s', cmd)\n+ p = subprocess.Popen(text_to_native_str(cmd, encoding=io_encoding), shell=True, stdin=subprocess.PIPE,\n+ stdout=subprocess.PIPE, stderr=subprocess.STDOUT, close_fds=False)\n if not allow_background:\n- (r, w) = (p.stdout, p.stdin)\n- response = native_str_to_text(r.read(), encoding=encoding, errors='replace')\n+ r, w = (p.stdout, p.stdin)\n+ response = r.read().decode(io_encoding)\n r.close()\n w.close()\n if response:\n- log.info('Stdout: %s' % response)\n+ log.info('Stdout: %s', response.rstrip()) # rstrip to get rid of newlines\n return p.wait()\n \n def execute(self, task, phase_name, config):\n", "issue": "Unicode issues with exec plugin\nSame as https://github.com/Flexget/Flexget/issues/1269, but with exec plugin\n\nIt was working well recently, but now if the given path contains unicode (e. g. 
\"\u00f1\") characters it crashes.\n\nRunning flexget version 2.1.23 installed using pip, on Windows 8.1 x64 + Python 2.7.12\n\nRelevant snippet of crash report (crash_report.2016.07.25.115814396000.log): \n\n```\n2016-07-25 11:58 CRITICAL task remove-folders BUG: Unhandled error in plugin exec: 'ascii' codec can't encode character u'\\xf1' in position 124: ordinal not in range(128)\nTraceback (most recent call last):\n File \"c:\\python27\\lib\\site-packages\\flexget\\task.py\", line 444, in __run_plugin\n return method(*args, **kwargs)\n File \"c:\\python27\\lib\\site-packages\\flexget\\event.py\", line 23, in __call__\n return self.func(*args, **kwargs)\n File \"c:\\python27\\lib\\site-packages\\flexget\\plugins\\output\\exec.py\", line 192, in phase_handler\n self.execute(task, 'on_' + phase, config)\n File \"c:\\python27\\lib\\site-packages\\flexget\\plugins\\output\\exec.py\", line 163, in execute\n if (self.execute_cmd(cmd, allow_background, config['encoding']) != 0 and\n File \"c:\\python27\\lib\\site-packages\\flexget\\plugins\\output\\exec.py\", line 109, in execute_cmd\n stderr=subprocess.STDOUT, close_fds=False)\n File \"c:\\python27\\lib\\subprocess.py\", line 711, in __init__\n errread, errwrite)\n File \"c:\\python27\\lib\\subprocess.py\", line 929, in _execute_child\n args = '{} /c \"{}\"'.format (comspec, args)\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\xf1' in position 124: ordinal not in range(128)\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # pylint: disable=unused-import, redefined-builtin\nfrom past.builtins import basestring\n\nimport logging\nimport subprocess\n\nfrom flexget import plugin\nfrom flexget.entry import Entry\nfrom flexget.event import event\nfrom flexget.config_schema import one_or_more\nfrom flexget.utils.template import render_from_entry, render_from_task, RenderError\nfrom flexget.utils.tools import io_encoding, native_str_to_text\n\nlog = logging.getLogger('exec')\n\n\nclass EscapingEntry(Entry):\n \"\"\"Helper class, same as a Entry, but returns all string value with quotes escaped.\"\"\"\n\n def __init__(self, entry):\n super(EscapingEntry, self).__init__(entry)\n\n def __getitem__(self, key):\n value = super(EscapingEntry, self).__getitem__(key)\n # TODO: May need to be different depending on OS\n if isinstance(value, basestring):\n value = value.replace('\"', '\\\\\"')\n return value\n\n\nclass PluginExec(object):\n \"\"\"\n Execute commands\n\n Simple example, xecute command for entries that reach output::\n\n exec: echo 'found {{title}} at {{url}}' > file\n\n Advanced Example::\n\n exec:\n on_start:\n phase: echo \"Started\"\n on_input:\n for_entries: echo 'got {{title}}'\n on_output:\n for_accepted: echo 'accepted {{title}} - {{url}} > file\n\n You can use all (available) entry fields in the command.\n \"\"\"\n\n NAME = 'exec'\n HANDLED_PHASES = ['start', 'input', 'filter', 'output', 'exit']\n\n schema = {\n 'oneOf': [\n one_or_more({'type': 'string'}),\n {\n 'type': 'object',\n 'properties': {\n 'on_start': {'$ref': '#/definitions/phaseSettings'},\n 'on_input': {'$ref': '#/definitions/phaseSettings'},\n 'on_filter': {'$ref': '#/definitions/phaseSettings'},\n 'on_output': {'$ref': '#/definitions/phaseSettings'},\n 'on_exit': {'$ref': '#/definitions/phaseSettings'},\n 'fail_entries': {'type': 'boolean'},\n 'auto_escape': {'type': 'boolean'},\n 'encoding': {'type': 'string'},\n 'allow_background': {'type': 'boolean'}\n },\n 
'additionalProperties': False\n }\n ],\n 'definitions': {\n 'phaseSettings': {\n 'type': 'object',\n 'properties': {\n 'phase': one_or_more({'type': 'string'}),\n 'for_entries': one_or_more({'type': 'string'}),\n 'for_accepted': one_or_more({'type': 'string'}),\n 'for_rejected': one_or_more({'type': 'string'}),\n 'for_failed': one_or_more({'type': 'string'})\n },\n 'additionalProperties': False\n }\n }\n }\n\n def prepare_config(self, config):\n if isinstance(config, basestring):\n config = [config]\n if isinstance(config, list):\n config = {'on_output': {'for_accepted': config}}\n if not config.get('encoding'):\n config['encoding'] = io_encoding\n for phase_name in config:\n if phase_name.startswith('on_'):\n for items_name in config[phase_name]:\n if isinstance(config[phase_name][items_name], basestring):\n config[phase_name][items_name] = [config[phase_name][items_name]]\n\n return config\n\n def execute_cmd(self, cmd, allow_background, encoding):\n log.verbose('Executing: %s' % cmd)\n # if PY2: cmd = cmd.encode(encoding) ?\n p = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT, close_fds=False)\n if not allow_background:\n (r, w) = (p.stdout, p.stdin)\n response = native_str_to_text(r.read(), encoding=encoding, errors='replace')\n r.close()\n w.close()\n if response:\n log.info('Stdout: %s' % response)\n return p.wait()\n\n def execute(self, task, phase_name, config):\n config = self.prepare_config(config)\n if phase_name not in config:\n log.debug('phase %s not configured' % phase_name)\n return\n\n name_map = {'for_entries': task.entries, 'for_accepted': task.accepted,\n 'for_rejected': task.rejected, 'for_failed': task.failed}\n\n allow_background = config.get('allow_background')\n for operation, entries in name_map.items():\n if operation not in config[phase_name]:\n continue\n\n log.debug('running phase_name: %s operation: %s entries: %s' % (phase_name, operation, len(entries)))\n\n for entry in entries:\n for cmd in config[phase_name][operation]:\n entrydict = EscapingEntry(entry) if config.get('auto_escape') else entry\n # Do string replacement from entry, but make sure quotes get escaped\n try:\n cmd = render_from_entry(cmd, entrydict)\n except RenderError as e:\n log.error('Could not set exec command for %s: %s' % (entry['title'], e))\n # fail the entry if configured to do so\n if config.get('fail_entries'):\n entry.fail('Entry `%s` does not have required fields for string replacement.' %\n entry['title'])\n continue\n\n log.debug('phase_name: %s operation: %s cmd: %s' % (phase_name, operation, cmd))\n if task.options.test:\n log.info('Would execute: %s' % cmd)\n else:\n # Make sure the command can be encoded into appropriate encoding, don't actually encode yet,\n # so logging continues to work.\n try:\n cmd.encode(config['encoding'])\n except UnicodeEncodeError:\n log.error('Unable to encode cmd `%s` to %s' % (cmd, config['encoding']))\n if config.get('fail_entries'):\n entry.fail('cmd `%s` could not be encoded to %s.' 
% (cmd, config['encoding']))\n continue\n # Run the command, fail entries with non-zero return code if configured to\n if (self.execute_cmd(cmd, allow_background, config['encoding']) != 0 and\n config.get('fail_entries')):\n entry.fail('exec return code was non-zero')\n\n # phase keyword in this\n if 'phase' in config[phase_name]:\n for cmd in config[phase_name]['phase']:\n try:\n cmd = render_from_task(cmd, task)\n except RenderError as e:\n log.error('Error rendering `%s`: %s' % (cmd, e))\n else:\n log.debug('phase cmd: %s' % cmd)\n if task.options.test:\n log.info('Would execute: %s' % cmd)\n else:\n self.execute_cmd(cmd, allow_background, config['encoding'])\n\n def __getattr__(self, item):\n \"\"\"Creates methods to handle task phases.\"\"\"\n for phase in self.HANDLED_PHASES:\n if item == plugin.phase_methods[phase]:\n # A phase method we handle has been requested\n break\n else:\n # We don't handle this phase\n raise AttributeError(item)\n\n def phase_handler(task, config):\n self.execute(task, 'on_' + phase, config)\n\n # Make sure we run after other plugins so exec can use their output\n phase_handler.priority = 100\n return phase_handler\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(PluginExec, 'exec', api_ver=2)\n", "path": "flexget/plugins/output/exec.py"}]} | 3,247 | 461 |
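Editorial note on the entry above: the traceback is Python 2's `subprocess` module coercing a `unicode` command line with the ASCII codec. The essence of the fix is to hand `Popen` a native string encoded with the detected IO encoding, which is what the recorded diff does via `text_to_native_str`. A rough sketch of that idea, with the conversion helper written inline instead of imported from `future.utils`:

```python
import subprocess
import sys


def to_native_cmd(cmd, encoding):
    """Encode a unicode command to bytes on Python 2; pass through on Python 3."""
    if sys.version_info[0] == 2 and isinstance(cmd, unicode):  # noqa: F821
        return cmd.encode(encoding)
    return cmd


def run_cmd(cmd, encoding="utf-8"):
    proc = subprocess.Popen(
        to_native_cmd(cmd, encoding),
        shell=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
    output = proc.stdout.read().decode(encoding, errors="replace")
    proc.stdout.close()
    return proc.wait(), output
```

Which encoding to use is a judgement call; the patch shown above settles on the detected IO encoding rather than the user-configurable one, which is what matters for paths containing characters such as "ñ".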
gh_patches_debug_50420 | rasdani/github-patches | git_diff | litestar-org__litestar-2330 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
</issue>
<code>
[start of litestar/openapi/spec/enums.py]
1 from enum import Enum
2
3 __all__ = ("OpenAPIFormat", "OpenAPIType")
4
5
6 class OpenAPIFormat(str, Enum):
7 """Formats extracted from: https://datatracker.ietf.org/doc/html/draft-bhutton-json-schema-validation-00#page-13"""
8
9 DATE = "date"
10 DATE_TIME = "date-time"
11 TIME = "time"
12 DURATION = "duration"
13 URL = "url"
14 EMAIL = "email"
15 IDN_EMAIL = "idn-email"
16 HOST_NAME = "hostname"
17 IDN_HOST_NAME = "idn-hostname"
18 IPV4 = "ipv4"
19 IPV6 = "ipv6"
20 URI = "uri"
21 URI_REFERENCE = "uri-reference"
22 URI_TEMPLATE = "uri-template"
23 JSON_POINTER = "json-pointer"
24 RELATIVE_JSON_POINTER = "relative-json-pointer"
25 IRI = "iri-reference"
26 IRI_REFERENCE = "iri-reference" # noqa: PIE796
27 UUID = "uuid"
28 REGEX = "regex"
29
30
31 class OpenAPIType(str, Enum):
32 """An OopenAPI type."""
33
34 ARRAY = "array"
35 BOOLEAN = "boolean"
36 INTEGER = "integer"
37 NULL = "null"
38 NUMBER = "number"
39 OBJECT = "object"
40 STRING = "string"
41
[end of litestar/openapi/spec/enums.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/litestar/openapi/spec/enums.py b/litestar/openapi/spec/enums.py
--- a/litestar/openapi/spec/enums.py
+++ b/litestar/openapi/spec/enums.py
@@ -26,6 +26,7 @@
IRI_REFERENCE = "iri-reference" # noqa: PIE796
UUID = "uuid"
REGEX = "regex"
+ BINARY = "binary"
class OpenAPIType(str, Enum):
| {"golden_diff": "diff --git a/litestar/openapi/spec/enums.py b/litestar/openapi/spec/enums.py\n--- a/litestar/openapi/spec/enums.py\n+++ b/litestar/openapi/spec/enums.py\n@@ -26,6 +26,7 @@\n IRI_REFERENCE = \"iri-reference\" # noqa: PIE796\n UUID = \"uuid\"\n REGEX = \"regex\"\n+ BINARY = \"binary\"\n \n \n class OpenAPIType(str, Enum):\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from enum import Enum\n\n__all__ = (\"OpenAPIFormat\", \"OpenAPIType\")\n\n\nclass OpenAPIFormat(str, Enum):\n \"\"\"Formats extracted from: https://datatracker.ietf.org/doc/html/draft-bhutton-json-schema-validation-00#page-13\"\"\"\n\n DATE = \"date\"\n DATE_TIME = \"date-time\"\n TIME = \"time\"\n DURATION = \"duration\"\n URL = \"url\"\n EMAIL = \"email\"\n IDN_EMAIL = \"idn-email\"\n HOST_NAME = \"hostname\"\n IDN_HOST_NAME = \"idn-hostname\"\n IPV4 = \"ipv4\"\n IPV6 = \"ipv6\"\n URI = \"uri\"\n URI_REFERENCE = \"uri-reference\"\n URI_TEMPLATE = \"uri-template\"\n JSON_POINTER = \"json-pointer\"\n RELATIVE_JSON_POINTER = \"relative-json-pointer\"\n IRI = \"iri-reference\"\n IRI_REFERENCE = \"iri-reference\" # noqa: PIE796\n UUID = \"uuid\"\n REGEX = \"regex\"\n\n\nclass OpenAPIType(str, Enum):\n \"\"\"An OopenAPI type.\"\"\"\n\n ARRAY = \"array\"\n BOOLEAN = \"boolean\"\n INTEGER = \"integer\"\n NULL = \"null\"\n NUMBER = \"number\"\n OBJECT = \"object\"\n STRING = \"string\"\n", "path": "litestar/openapi/spec/enums.py"}]} | 1,070 | 108 |
gh_patches_debug_11843 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-14716 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot download certain Cartoon Network videos?
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.10.29*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.10.29**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['http://www.cartoonnetwork.com/video/regularshow/fre
e-cake-episode.html', '-v']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2017.10.29
[debug] Python version 3.4.4 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-62162-gec8789a, ffprobe 3.2.4, rtmpdump 2.4
[debug] Proxy map: {}
[CartoonNetwork] free-cake: Downloading webpage
[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading XML
[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading f4m manif
est
WARNING: Unable to download f4m manifest: HTTP Error 403: Forbidden
[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading XML
ERROR: UNKNOWN
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\bu
ild\youtube_dl\YoutubeDL.py", line 784, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\bu
ild\youtube_dl\extractor\common.py", line 434, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\bu
ild\youtube_dl\extractor\cartoonnetwork.py", line 41, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\bu
ild\youtube_dl\extractor\turner.py", line 84, in _extract_cvp_info
youtube_dl.utils.ExtractorError: UNKNOWN
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://www.cartoonnetwork.com/video/regularshow/free-cake-episode.html
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
I'm trying to download a particular video and for some reason I can't. Other Cartoon Network videos work just fine, but this series(?) doesn't seem to work. I'm not sure why some work, but some don't. I'm probably missing something... Help please?
</issue>
<code>
[start of youtube_dl/extractor/cartoonnetwork.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import re
5
6 from .turner import TurnerBaseIE
7
8
9 class CartoonNetworkIE(TurnerBaseIE):
10 _VALID_URL = r'https?://(?:www\.)?cartoonnetwork\.com/video/(?:[^/]+/)+(?P<id>[^/?#]+)-(?:clip|episode)\.html'
11 _TEST = {
12 'url': 'http://www.cartoonnetwork.com/video/teen-titans-go/starfire-the-cat-lady-clip.html',
13 'info_dict': {
14 'id': '8a250ab04ed07e6c014ef3f1e2f9016c',
15 'ext': 'mp4',
16 'title': 'Starfire the Cat Lady',
17 'description': 'Robin decides to become a cat so that Starfire will finally love him.',
18 },
19 'params': {
20 # m3u8 download
21 'skip_download': True,
22 },
23 }
24
25 def _real_extract(self, url):
26 display_id = self._match_id(url)
27 webpage = self._download_webpage(url, display_id)
28 id_type, video_id = re.search(r"_cnglobal\.cvp(Video|Title)Id\s*=\s*'([^']+)';", webpage).groups()
29 query = ('id' if id_type == 'Video' else 'titleId') + '=' + video_id
30 return self._extract_cvp_info(
31 'http://www.cartoonnetwork.com/video-seo-svc/episodeservices/getCvpPlaylist?networkName=CN2&' + query, video_id, {
32 'secure': {
33 'media_src': 'http://androidhls-secure.cdn.turner.com/toon/big',
34 'tokenizer_src': 'http://www.cartoonnetwork.com/cntv/mvpd/processors/services/token_ipadAdobe.do',
35 },
36 }, {
37 'url': url,
38 'site_name': 'CartoonNetwork',
39 'auth_required': self._search_regex(
40 r'_cnglobal\.cvpFullOrPreviewAuth\s*=\s*(true|false);',
41 webpage, 'auth required', default='false') == 'true',
42 })
43
[end of youtube_dl/extractor/cartoonnetwork.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/youtube_dl/extractor/cartoonnetwork.py b/youtube_dl/extractor/cartoonnetwork.py
--- a/youtube_dl/extractor/cartoonnetwork.py
+++ b/youtube_dl/extractor/cartoonnetwork.py
@@ -31,7 +31,7 @@
'http://www.cartoonnetwork.com/video-seo-svc/episodeservices/getCvpPlaylist?networkName=CN2&' + query, video_id, {
'secure': {
'media_src': 'http://androidhls-secure.cdn.turner.com/toon/big',
- 'tokenizer_src': 'http://www.cartoonnetwork.com/cntv/mvpd/processors/services/token_ipadAdobe.do',
+ 'tokenizer_src': 'https://token.vgtf.net/token/token_mobile',
},
}, {
'url': url,
| {"golden_diff": "diff --git a/youtube_dl/extractor/cartoonnetwork.py b/youtube_dl/extractor/cartoonnetwork.py\n--- a/youtube_dl/extractor/cartoonnetwork.py\n+++ b/youtube_dl/extractor/cartoonnetwork.py\n@@ -31,7 +31,7 @@\n 'http://www.cartoonnetwork.com/video-seo-svc/episodeservices/getCvpPlaylist?networkName=CN2&' + query, video_id, {\n 'secure': {\n 'media_src': 'http://androidhls-secure.cdn.turner.com/toon/big',\n- 'tokenizer_src': 'http://www.cartoonnetwork.com/cntv/mvpd/processors/services/token_ipadAdobe.do',\n+ 'tokenizer_src': 'https://token.vgtf.net/token/token_mobile',\n },\n }, {\n 'url': url,\n", "issue": "Cannot download certain Cartoon Network videos?\n## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.10.29*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.10.29**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n\r\n### What is the purpose of your *issue*?\r\n- [x] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['http://www.cartoonnetwork.com/video/regularshow/fre\r\ne-cake-episode.html', '-v']\r\n[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252\r\n[debug] youtube-dl version 2017.10.29\r\n[debug] Python version 3.4.4 - Windows-7-6.1.7601-SP1\r\n[debug] exe versions: ffmpeg N-62162-gec8789a, ffprobe 3.2.4, rtmpdump 2.4\r\n[debug] Proxy map: {}\r\n[CartoonNetwork] free-cake: Downloading webpage\r\n[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading XML\r\n[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading f4m manif\r\nest\r\nWARNING: Unable to download f4m manifest: HTTP Error 403: Forbidden\r\n[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading XML\r\nERROR: UNKNOWN\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\rg3\\tmpk63cqkyt\\bu\r\nild\\youtube_dl\\YoutubeDL.py\", line 784, in extract_info\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\rg3\\tmpk63cqkyt\\bu\r\nild\\youtube_dl\\extractor\\common.py\", line 434, in extract\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\rg3\\tmpk63cqkyt\\bu\r\nild\\youtube_dl\\extractor\\cartoonnetwork.py\", line 41, in _real_extract\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\rg3\\tmpk63cqkyt\\bu\r\nild\\youtube_dl\\extractor\\turner.py\", line 84, in _extract_cvp_info\r\nyoutube_dl.utils.ExtractorError: UNKNOWN\r\n```\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n- Single video: http://www.cartoonnetwork.com/video/regularshow/free-cake-episode.html\r\n\r\nNote that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nI'm trying to download a particular video and for some reason I can't. Other Cartoon Network videos work just fine, but this series(?) doesn't seem to work. I'm not sure why some work, but some don't. I'm probably missing something... 
Help please?\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .turner import TurnerBaseIE\n\n\nclass CartoonNetworkIE(TurnerBaseIE):\n _VALID_URL = r'https?://(?:www\\.)?cartoonnetwork\\.com/video/(?:[^/]+/)+(?P<id>[^/?#]+)-(?:clip|episode)\\.html'\n _TEST = {\n 'url': 'http://www.cartoonnetwork.com/video/teen-titans-go/starfire-the-cat-lady-clip.html',\n 'info_dict': {\n 'id': '8a250ab04ed07e6c014ef3f1e2f9016c',\n 'ext': 'mp4',\n 'title': 'Starfire the Cat Lady',\n 'description': 'Robin decides to become a cat so that Starfire will finally love him.',\n },\n 'params': {\n # m3u8 download\n 'skip_download': True,\n },\n }\n\n def _real_extract(self, url):\n display_id = self._match_id(url)\n webpage = self._download_webpage(url, display_id)\n id_type, video_id = re.search(r\"_cnglobal\\.cvp(Video|Title)Id\\s*=\\s*'([^']+)';\", webpage).groups()\n query = ('id' if id_type == 'Video' else 'titleId') + '=' + video_id\n return self._extract_cvp_info(\n 'http://www.cartoonnetwork.com/video-seo-svc/episodeservices/getCvpPlaylist?networkName=CN2&' + query, video_id, {\n 'secure': {\n 'media_src': 'http://androidhls-secure.cdn.turner.com/toon/big',\n 'tokenizer_src': 'http://www.cartoonnetwork.com/cntv/mvpd/processors/services/token_ipadAdobe.do',\n },\n }, {\n 'url': url,\n 'site_name': 'CartoonNetwork',\n 'auth_required': self._search_regex(\n r'_cnglobal\\.cvpFullOrPreviewAuth\\s*=\\s*(true|false);',\n webpage, 'auth required', default='false') == 'true',\n })\n", "path": "youtube_dl/extractor/cartoonnetwork.py"}]} | 2,427 | 186 |
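
The CartoonNetwork fix above only swaps the `tokenizer_src` endpoint passed to `_extract_cvp_info`; the id-scraping step in `_real_extract` stays the same. A minimal, self-contained sketch of that scraping step (the HTML fragment is invented for illustration; only the id value comes from the extractor's test case):

```python
import re

# Hypothetical page fragment; the real page embeds a similar _cnglobal assignment.
webpage = "<script>_cnglobal.cvpVideoId = '8a250ab04ed07e6c014ef3f1e2f9016c';</script>"

# Same regex as in CartoonNetworkIE._real_extract above.
id_type, video_id = re.search(
    r"_cnglobal\.cvp(Video|Title)Id\s*=\s*'([^']+)';", webpage).groups()

# Mirrors the query-building step that feeds _extract_cvp_info.
query = ('id' if id_type == 'Video' else 'titleId') + '=' + video_id
print(query)  # id=8a250ab04ed07e6c014ef3f1e2f9016c
```
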
gh_patches_debug_26780 | rasdani/github-patches | git_diff | ansible-collections__community.vmware-2034 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use toolsVersionStatus2 instead of toolsVersionStatus
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
As I saw in the Python module, you are looking for toolsVersionStatus instead of toolsVersionStatus2
##### ISSUE TYPE
- Bug Report
- https://developer.vmware.com/apis/1355/
- toolsVersionStatus is deprecated
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_guest_tools_info_module.html#ansible-collections-community-vmware-vmware-guest-tools-info-module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
Every
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_guest_tools_info_module.html#ansible-collections-community-vmware-vmware-guest-tools-info-module
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
</issue>
<code>
[start of plugins/modules/vmware_guest_tools_info.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright: (c) 2019, Ansible Project
5 # Copyright: (c) 2019, VMware, Inc. All Rights Reserved.
6 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
7 # SPDX-License-Identifier: GPL-3.0-or-later
8
9 from __future__ import absolute_import, division, print_function
10 __metaclass__ = type
11
12
13 DOCUMENTATION = r'''
14 ---
15 module: vmware_guest_tools_info
16 short_description: Gather info about VMware tools installed in VM
17 description:
18 - Gather information about the VMware tools installed in virtual machine.
19 author:
20 - Diane Wang (@Tomorrow9) <[email protected]>
21 options:
22 name:
23 description:
24 - Name of the VM to get VMware tools info.
25 - This is required if O(uuid) or O(moid) is not supplied.
26 type: str
27 name_match:
28 description:
29 - If multiple VMs matching the name, use the first or last found.
30 default: 'first'
31 choices: ['first', 'last']
32 type: str
33 uuid:
34 description:
35 - UUID of the instance to manage if known, this is VMware's unique identifier.
36 - This is required if O(name) or O(moid) is not supplied.
37 type: str
38 use_instance_uuid:
39 description:
40 - Whether to use the VMware instance UUID rather than the BIOS UUID.
41 default: false
42 type: bool
43 moid:
44 description:
45 - Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.
46 - This is required if O(name) or O(uuid) is not supplied.
47 type: str
48 folder:
49 description:
50 - Destination folder, absolute or relative path to find an existing guest.
51 - This is required if name is supplied.
52 - The folder should include the datacenter. ESXi server's datacenter is ha-datacenter.
53 - 'Examples:'
54 - ' folder: /ha-datacenter/vm'
55 - ' folder: ha-datacenter/vm'
56 - ' folder: /datacenter1/vm'
57 - ' folder: datacenter1/vm'
58 - ' folder: /datacenter1/vm/folder1'
59 - ' folder: datacenter1/vm/folder1'
60 - ' folder: /folder1/datacenter1/vm'
61 - ' folder: folder1/datacenter1/vm'
62 - ' folder: /folder1/datacenter1/vm/folder2'
63 type: str
64 datacenter:
65 description:
66 - The datacenter name to which virtual machine belongs to.
67 type: str
68 extends_documentation_fragment:
69 - community.vmware.vmware.documentation
70
71 '''
72
73 EXAMPLES = r'''
74 - name: Gather VMware tools info installed in VM specified by uuid
75 community.vmware.vmware_guest_tools_info:
76 hostname: "{{ vcenter_hostname }}"
77 username: "{{ vcenter_username }}"
78 password: "{{ vcenter_password }}"
79 datacenter: "{{ datacenter_name }}"
80 uuid: 421e4592-c069-924d-ce20-7e7533fab926
81 delegate_to: localhost
82 register: vmtools_info
83
84 - name: Gather VMware tools info installed in VM specified by name
85 community.vmware.vmware_guest_tools_info:
86 hostname: "{{ vcenter_hostname }}"
87 username: "{{ vcenter_username }}"
88 password: "{{ vcenter_password }}"
89 datacenter: "{{ datacenter_name }}"
90 name: "{{ vm_name }}"
91 delegate_to: localhost
92 register: vmtools_info
93 '''
94
95 RETURN = r'''
96 vmtools_info:
97 description: metadata about the VMware tools installed in virtual machine
98 returned: always
99 type: dict
100 sample: {
101 "vm_uuid": null,
102 "vm_moid": null,
103 "vm_use_instance_uuid": false,
104 "vm_guest_fullname": "Microsoft Windows 10 (64-bit)",
105 "vm_guest_hostname": "test",
106 "vm_guest_id": "windows9_64Guest",
107 "vm_hw_version": "vmx-14",
108 "vm_ipaddress": "10.10.10.10",
109 "vm_name": "test_vm",
110 "vm_tools_install_status": "toolsOk",
111 "vm_tools_install_type": "guestToolsTypeMSI",
112 "vm_tools_last_install_count": 0,
113 "vm_tools_running_status": "guestToolsRunning",
114 "vm_tools_upgrade_policy": "manual",
115 "vm_tools_version": 10341,
116 "vm_tools_version_status": "guestToolsCurrent"
117 }
118 '''
119
120
121 from ansible.module_utils.basic import AnsibleModule
122 from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec
123
124
125 class PyVmomiHelper(PyVmomi):
126 def __init__(self, module):
127 super(PyVmomiHelper, self).__init__(module)
128 self.name = self.params['name']
129 self.uuid = self.params['uuid']
130 self.moid = self.params['moid']
131 self.use_instance_uuid = self.params['use_instance_uuid']
132
133 def gather_vmtools_info(self):
134 vmtools_info = dict(
135 vm_name=self.name,
136 vm_uuid=self.uuid,
137 vm_moid=self.moid,
138 vm_use_instance_uuid=self.use_instance_uuid,
139 vm_hw_version=self.current_vm_obj.config.version,
140 vm_guest_id=self.current_vm_obj.summary.guest.guestId,
141 vm_guest_fullname=self.current_vm_obj.summary.guest.guestFullName,
142 vm_guest_hostname=self.current_vm_obj.summary.guest.hostName,
143 vm_ipaddress=self.current_vm_obj.summary.guest.ipAddress,
144 vm_tools_running_status=self.current_vm_obj.summary.guest.toolsRunningStatus,
145 vm_tools_install_status=self.current_vm_obj.summary.guest.toolsStatus,
146 vm_tools_version_status=self.current_vm_obj.summary.guest.toolsVersionStatus,
147 vm_tools_install_type=self.current_vm_obj.config.tools.toolsInstallType,
148 vm_tools_version=self.current_vm_obj.config.tools.toolsVersion,
149 vm_tools_upgrade_policy=self.current_vm_obj.config.tools.toolsUpgradePolicy,
150 vm_tools_last_install_count=self.current_vm_obj.config.tools.lastInstallInfo.counter,
151 )
152
153 return {'changed': False, 'failed': False, 'vmtools_info': vmtools_info}
154
155
156 def main():
157 argument_spec = vmware_argument_spec()
158 argument_spec.update(
159 name=dict(type='str'),
160 uuid=dict(type='str'),
161 moid=dict(type='str'),
162 use_instance_uuid=dict(type='bool', default=False),
163 name_match=dict(
164 choices=['first', 'last'],
165 default='first',
166 type='str'
167 ),
168 folder=dict(type='str'),
169 datacenter=dict(type='str'),
170 )
171
172 module = AnsibleModule(
173 argument_spec=argument_spec,
174 required_one_of=[
175 ['name', 'uuid', 'moid']
176 ],
177 mutually_exclusive=[
178 ['name', 'uuid', 'moid']
179 ],
180 supports_check_mode=True,
181 )
182
183 pyv = PyVmomiHelper(module)
184 vm = pyv.get_vm()
185 if not vm:
186 vm_id = (module.params.get('uuid') or module.params.get('name') or module.params.get('moid'))
187 module.fail_json(msg='Unable to find the specified virtual machine using: %s' % vm_id)
188 results = pyv.gather_vmtools_info()
189 module.exit_json(**results)
190
191
192 if __name__ == '__main__':
193 main()
194
[end of plugins/modules/vmware_guest_tools_info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/modules/vmware_guest_tools_info.py b/plugins/modules/vmware_guest_tools_info.py
--- a/plugins/modules/vmware_guest_tools_info.py
+++ b/plugins/modules/vmware_guest_tools_info.py
@@ -143,13 +143,18 @@
vm_ipaddress=self.current_vm_obj.summary.guest.ipAddress,
vm_tools_running_status=self.current_vm_obj.summary.guest.toolsRunningStatus,
vm_tools_install_status=self.current_vm_obj.summary.guest.toolsStatus,
- vm_tools_version_status=self.current_vm_obj.summary.guest.toolsVersionStatus,
+ vm_tools_version_status=self.current_vm_obj.summary.guest.toolsVersionStatus2,
vm_tools_install_type=self.current_vm_obj.config.tools.toolsInstallType,
vm_tools_version=self.current_vm_obj.config.tools.toolsVersion,
vm_tools_upgrade_policy=self.current_vm_obj.config.tools.toolsUpgradePolicy,
vm_tools_last_install_count=self.current_vm_obj.config.tools.lastInstallInfo.counter,
)
+ self.module.deprecate(
+ msg="The API providing vm_tools_install_status has been deprecated by VMware; use vm_tools_running_status / vm_tools_version_status instead",
+ version="5.0.0",
+ collection_name="community.vmware"
+ )
return {'changed': False, 'failed': False, 'vmtools_info': vmtools_info}
| {"golden_diff": "diff --git a/plugins/modules/vmware_guest_tools_info.py b/plugins/modules/vmware_guest_tools_info.py\n--- a/plugins/modules/vmware_guest_tools_info.py\n+++ b/plugins/modules/vmware_guest_tools_info.py\n@@ -143,13 +143,18 @@\n vm_ipaddress=self.current_vm_obj.summary.guest.ipAddress,\n vm_tools_running_status=self.current_vm_obj.summary.guest.toolsRunningStatus,\n vm_tools_install_status=self.current_vm_obj.summary.guest.toolsStatus,\n- vm_tools_version_status=self.current_vm_obj.summary.guest.toolsVersionStatus,\n+ vm_tools_version_status=self.current_vm_obj.summary.guest.toolsVersionStatus2,\n vm_tools_install_type=self.current_vm_obj.config.tools.toolsInstallType,\n vm_tools_version=self.current_vm_obj.config.tools.toolsVersion,\n vm_tools_upgrade_policy=self.current_vm_obj.config.tools.toolsUpgradePolicy,\n vm_tools_last_install_count=self.current_vm_obj.config.tools.lastInstallInfo.counter,\n )\n \n+ self.module.deprecate(\n+ msg=\"The API providing vm_tools_install_status has been deprecated by VMware; use vm_tools_running_status / vm_tools_version_status instead\",\n+ version=\"5.0.0\",\n+ collection_name=\"community.vmware\"\n+ )\n return {'changed': False, 'failed': False, 'vmtools_info': vmtools_info}\n", "issue": "Use toolsVersionStatus2 instead of toolsVersionStatus\n<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Also test if the latest release and devel branch are affected too -->\r\n<!--- Complete *all* sections as described, this form is processed automatically -->\r\n\r\n##### SUMMARY\r\nAs i saw in the python module you are looking f\u00fcr toolsVersionStatus instead of toolsVersionStatus2\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n- https://developer.vmware.com/apis/1355/\r\n- toolsVersionStatus is depreacated\r\n\r\n##### COMPONENT NAME\r\n<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->\r\nhttps://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_guest_tools_info_module.html#ansible-collections-community-vmware-vmware-guest-tools-info-module\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \"ansible --version\" between quotes -->\r\n```paste below\r\nEvery\r\n```\r\n\r\n##### COLLECTION VERSION\r\n<!--- Paste verbatim output from \"ansible-galaxy collection list <namespace>.<collection>\" between the quotes\r\nfor example: ansible-galaxy collection list community.general\r\n-->\r\n```paste below\r\nhttps://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_guest_tools_info_module.html#ansible-collections-community-vmware-vmware-guest-tools-info-module\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- Paste verbatim output from \"ansible-config dump --only-changed\" between quotes -->\r\n```paste below\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n\r\n```\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- Describe what you expected to happen when running the steps above -->\r\n\r\n\r\n##### ACTUAL RESULTS\r\n<!--- Describe what actually happened. 
If possible run with extra verbosity (-vvvv) -->\r\n\r\n<!--- Paste verbatim command output between quotes -->\r\n```paste below\r\n\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2019, Ansible Project\n# Copyright: (c) 2019, VMware, Inc. All Rights Reserved.\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: vmware_guest_tools_info\nshort_description: Gather info about VMware tools installed in VM\ndescription:\n - Gather information about the VMware tools installed in virtual machine.\nauthor:\n - Diane Wang (@Tomorrow9) <[email protected]>\noptions:\n name:\n description:\n - Name of the VM to get VMware tools info.\n - This is required if O(uuid) or O(moid) is not supplied.\n type: str\n name_match:\n description:\n - If multiple VMs matching the name, use the first or last found.\n default: 'first'\n choices: ['first', 'last']\n type: str\n uuid:\n description:\n - UUID of the instance to manage if known, this is VMware's unique identifier.\n - This is required if O(name) or O(moid) is not supplied.\n type: str\n use_instance_uuid:\n description:\n - Whether to use the VMware instance UUID rather than the BIOS UUID.\n default: false\n type: bool\n moid:\n description:\n - Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.\n - This is required if O(name) or O(uuid) is not supplied.\n type: str\n folder:\n description:\n - Destination folder, absolute or relative path to find an existing guest.\n - This is required if name is supplied.\n - The folder should include the datacenter. 
ESXi server's datacenter is ha-datacenter.\n - 'Examples:'\n - ' folder: /ha-datacenter/vm'\n - ' folder: ha-datacenter/vm'\n - ' folder: /datacenter1/vm'\n - ' folder: datacenter1/vm'\n - ' folder: /datacenter1/vm/folder1'\n - ' folder: datacenter1/vm/folder1'\n - ' folder: /folder1/datacenter1/vm'\n - ' folder: folder1/datacenter1/vm'\n - ' folder: /folder1/datacenter1/vm/folder2'\n type: str\n datacenter:\n description:\n - The datacenter name to which virtual machine belongs to.\n type: str\nextends_documentation_fragment:\n- community.vmware.vmware.documentation\n\n'''\n\nEXAMPLES = r'''\n- name: Gather VMware tools info installed in VM specified by uuid\n community.vmware.vmware_guest_tools_info:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n datacenter: \"{{ datacenter_name }}\"\n uuid: 421e4592-c069-924d-ce20-7e7533fab926\n delegate_to: localhost\n register: vmtools_info\n\n- name: Gather VMware tools info installed in VM specified by name\n community.vmware.vmware_guest_tools_info:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n datacenter: \"{{ datacenter_name }}\"\n name: \"{{ vm_name }}\"\n delegate_to: localhost\n register: vmtools_info\n'''\n\nRETURN = r'''\nvmtools_info:\n description: metadata about the VMware tools installed in virtual machine\n returned: always\n type: dict\n sample: {\n \"vm_uuid\": null,\n \"vm_moid\": null,\n \"vm_use_instance_uuid\": false,\n \"vm_guest_fullname\": \"Microsoft Windows 10 (64-bit)\",\n \"vm_guest_hostname\": \"test\",\n \"vm_guest_id\": \"windows9_64Guest\",\n \"vm_hw_version\": \"vmx-14\",\n \"vm_ipaddress\": \"10.10.10.10\",\n \"vm_name\": \"test_vm\",\n \"vm_tools_install_status\": \"toolsOk\",\n \"vm_tools_install_type\": \"guestToolsTypeMSI\",\n \"vm_tools_last_install_count\": 0,\n \"vm_tools_running_status\": \"guestToolsRunning\",\n \"vm_tools_upgrade_policy\": \"manual\",\n \"vm_tools_version\": 10341,\n \"vm_tools_version_status\": \"guestToolsCurrent\"\n }\n'''\n\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec\n\n\nclass PyVmomiHelper(PyVmomi):\n def __init__(self, module):\n super(PyVmomiHelper, self).__init__(module)\n self.name = self.params['name']\n self.uuid = self.params['uuid']\n self.moid = self.params['moid']\n self.use_instance_uuid = self.params['use_instance_uuid']\n\n def gather_vmtools_info(self):\n vmtools_info = dict(\n vm_name=self.name,\n vm_uuid=self.uuid,\n vm_moid=self.moid,\n vm_use_instance_uuid=self.use_instance_uuid,\n vm_hw_version=self.current_vm_obj.config.version,\n vm_guest_id=self.current_vm_obj.summary.guest.guestId,\n vm_guest_fullname=self.current_vm_obj.summary.guest.guestFullName,\n vm_guest_hostname=self.current_vm_obj.summary.guest.hostName,\n vm_ipaddress=self.current_vm_obj.summary.guest.ipAddress,\n vm_tools_running_status=self.current_vm_obj.summary.guest.toolsRunningStatus,\n vm_tools_install_status=self.current_vm_obj.summary.guest.toolsStatus,\n vm_tools_version_status=self.current_vm_obj.summary.guest.toolsVersionStatus,\n vm_tools_install_type=self.current_vm_obj.config.tools.toolsInstallType,\n vm_tools_version=self.current_vm_obj.config.tools.toolsVersion,\n vm_tools_upgrade_policy=self.current_vm_obj.config.tools.toolsUpgradePolicy,\n vm_tools_last_install_count=self.current_vm_obj.config.tools.lastInstallInfo.counter,\n 
)\n\n return {'changed': False, 'failed': False, 'vmtools_info': vmtools_info}\n\n\ndef main():\n argument_spec = vmware_argument_spec()\n argument_spec.update(\n name=dict(type='str'),\n uuid=dict(type='str'),\n moid=dict(type='str'),\n use_instance_uuid=dict(type='bool', default=False),\n name_match=dict(\n choices=['first', 'last'],\n default='first',\n type='str'\n ),\n folder=dict(type='str'),\n datacenter=dict(type='str'),\n )\n\n module = AnsibleModule(\n argument_spec=argument_spec,\n required_one_of=[\n ['name', 'uuid', 'moid']\n ],\n mutually_exclusive=[\n ['name', 'uuid', 'moid']\n ],\n supports_check_mode=True,\n )\n\n pyv = PyVmomiHelper(module)\n vm = pyv.get_vm()\n if not vm:\n vm_id = (module.params.get('uuid') or module.params.get('name') or module.params.get('moid'))\n module.fail_json(msg='Unable to find the specified virtual machine using: %s' % vm_id)\n results = pyv.gather_vmtools_info()\n module.exit_json(**results)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/vmware_guest_tools_info.py"}]} | 3,154 | 287 |
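
The vmware patch above reads `toolsVersionStatus2` from the guest summary and marks the old value as deprecated via `module.deprecate`. A minimal sketch of the same preference outside the module, assuming a pyVmomi `VirtualMachine` object `vm` is already available (illustrative only, not part of the collection):

```python
# Prefer the newer vSphere field and fall back to the deprecated one
# only if an old API version omits it.
def tools_version_status(vm):
    guest = vm.summary.guest
    return getattr(guest, 'toolsVersionStatus2', None) or guest.toolsVersionStatus
```
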
gh_patches_debug_64580 | rasdani/github-patches | git_diff | kubeflow__pipelines-6691 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[sdk] dependency conflict with tensorflow 2.6.0 and seldon-core
### Environment
* KFP version: 1.7
* KFP SDK version: 1.8.3
* All dependencies version:
```
[~]$ pip list | grep kfp
kfp 1.8.3
kfp-pipeline-spec 0.1.11
kfp-server-api 1.7.0
```
kfp==1.8.3 collides with tensorflow==2.6 because it requires
https://github.com/kubeflow/pipelines/blob/220d79df66e31bbd93c409fb361e0463bde4aeac/sdk/python/setup.py#L56
while tensorflow needs
```
Warning!!! Possibly conflicting dependencies found:
* tensorflow==2.6.0
- typing-extensions [required: ~=3.7.4, installed: 3.10.0.2]
```
https://github.com/tensorflow/tensorflow/blob/421fba8888bb8f8724bc2e35ca2fdcde16e1bfe5/tensorflow/tools/pip_package/setup.py#L90
Is `'typing-extensions>=3.7.4,<4;python_version<"3.9"'` not enough?
The same goes for seldon-core==1.11.* and the click and absl-py packages:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
kfp 1.8.2 requires absl-py<=0.11,>=0.9, but you have absl-py 0.13.0 which is incompatible.
kfp 1.8.2 requires click<8,>=7.1.1, but you have click 8.0.1 which is incompatible.
kfp 1.8.2 requires typing-extensions<4,>=3.10.0.2, but you have typing-extensions 3.7.4.3 which is incompatible.
```
</issue>
<code>
[start of sdk/python/setup.py]
1 # Copyright 2018 The Kubeflow Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import re
17
18 from setuptools import setup
19
20 NAME = 'kfp'
21 #VERSION = .... Change the version in kfp/__init__.py
22
23 # NOTICE, after any updates to the following, ./requirements.in should be updated
24 # accordingly.
25 REQUIRES = [
26 'absl-py>=0.9,<=0.11',
27 'PyYAML>=5.3,<6',
28 # `Blob.from_string` was introduced in google-cloud-storage 1.20.0
29 # https://github.com/googleapis/python-storage/blob/master/CHANGELOG.md#1200
30 'google-cloud-storage>=1.20.0,<2',
31 'kubernetes>=8.0.0,<19',
32 # google-api-python-client v2 doesn't work for private dicovery by default:
33 # https://github.com/googleapis/google-api-python-client/issues/1225#issuecomment-791058235
34 'google-api-python-client>=1.7.8,<2',
35 'google-auth>=1.6.1,<2',
36 'requests-toolbelt>=0.8.0,<1',
37 'cloudpickle>=1.3.0,<2',
38 # Update the upper version whenever a new major version of the
39 # kfp-server-api package is released.
40 # Update the lower version when kfp sdk depends on new apis/fields in
41 # kfp-server-api.
42 # Note, please also update ./requirements.in
43 'kfp-server-api>=1.1.2,<2.0.0',
44 'jsonschema>=3.0.1,<4',
45 'tabulate>=0.8.6,<1',
46 'click>=7.1.1,<8',
47 'Deprecated>=1.2.7,<2',
48 'strip-hints>=0.1.8,<1',
49 'docstring-parser>=0.7.3,<1',
50 'kfp-pipeline-spec>=0.1.10,<0.2.0',
51 'fire>=0.3.1,<1',
52 'protobuf>=3.13.0,<4',
53 'uritemplate>=3.0.1,<4',
54 'pydantic>=1.8.2,<2',
55 # Standard library backports
56 'dataclasses;python_version<"3.7"',
57 'typing-extensions>=3.7.4,<4;python_version<"3.9"',
58 ]
59
60 TESTS_REQUIRE = [
61 'frozendict',
62 ]
63
64
65 def find_version(*file_path_parts):
66 here = os.path.abspath(os.path.dirname(__file__))
67 with open(os.path.join(here, *file_path_parts), 'r') as fp:
68 version_file_text = fp.read()
69
70 version_match = re.search(
71 r"^__version__ = ['\"]([^'\"]*)['\"]",
72 version_file_text,
73 re.M,
74 )
75 if version_match:
76 return version_match.group(1)
77
78 raise RuntimeError('Unable to find version string.')
79
80
81 setup(
82 name=NAME,
83 version=find_version('kfp', '__init__.py'),
84 description='KubeFlow Pipelines SDK',
85 author='The Kubeflow Authors',
86 url="https://github.com/kubeflow/pipelines",
87 project_urls={
88 "Documentation": "https://kubeflow-pipelines.readthedocs.io/en/stable/",
89 "Bug Tracker": "https://github.com/kubeflow/pipelines/issues",
90 "Source": "https://github.com/kubeflow/pipelines/tree/master/sdk",
91 "Changelog": "https://github.com/kubeflow/pipelines/blob/master/sdk/RELEASE.md",
92 },
93 install_requires=REQUIRES,
94 tests_require=TESTS_REQUIRE,
95 packages=[
96 'kfp',
97 'kfp.auth',
98 'kfp.cli',
99 'kfp.cli.diagnose_me',
100 'kfp.compiler',
101 'kfp.components',
102 'kfp.components.structures',
103 'kfp.containers',
104 'kfp.dsl',
105 'kfp.dsl.extensions',
106 'kfp.notebook',
107 'kfp.v2',
108 'kfp.v2.compiler',
109 'kfp.v2.components',
110 'kfp.v2.components.types',
111 'kfp.v2.components.experimental',
112 'kfp.v2.dsl',
113 'kfp.v2.google.client',
114 'kfp.v2.google.experimental',
115 ],
116 classifiers=[
117 'Intended Audience :: Developers',
118 'Intended Audience :: Education',
119 'Intended Audience :: Science/Research',
120 'License :: OSI Approved :: Apache Software License',
121 'Programming Language :: Python :: 3',
122 'Programming Language :: Python :: 3.6',
123 'Programming Language :: Python :: 3.7',
124 'Programming Language :: Python :: 3.8',
125 'Programming Language :: Python :: 3.9',
126 'Topic :: Scientific/Engineering',
127 'Topic :: Scientific/Engineering :: Artificial Intelligence',
128 'Topic :: Software Development',
129 'Topic :: Software Development :: Libraries',
130 'Topic :: Software Development :: Libraries :: Python Modules',
131 ],
132 python_requires='>=3.6.1',
133 include_package_data=True,
134 entry_points={
135 'console_scripts': [
136 'dsl-compile = kfp.compiler.main:main',
137 'dsl-compile-v2 = kfp.v2.compiler.main:main',
138 'kfp=kfp.__main__:main'
139 ]
140 })
141
[end of sdk/python/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -43,7 +43,7 @@
'kfp-server-api>=1.1.2,<2.0.0',
'jsonschema>=3.0.1,<4',
'tabulate>=0.8.6,<1',
- 'click>=7.1.1,<8',
+ 'click>=7.1.2,<9',
'Deprecated>=1.2.7,<2',
'strip-hints>=0.1.8,<1',
'docstring-parser>=0.7.3,<1',
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -43,7 +43,7 @@\n 'kfp-server-api>=1.1.2,<2.0.0',\n 'jsonschema>=3.0.1,<4',\n 'tabulate>=0.8.6,<1',\n- 'click>=7.1.1,<8',\n+ 'click>=7.1.2,<9',\n 'Deprecated>=1.2.7,<2',\n 'strip-hints>=0.1.8,<1',\n 'docstring-parser>=0.7.3,<1',\n", "issue": "[sdk] dependency conflict with tensorflow 2.6.0 and seldon-core\n### Environment\r\n\r\n* KFP version: 1.7\r\n* KFP SDK version: 1.8.3\r\n\r\n\r\n* All dependencies version:\r\n```\r\n[~]$ pip list | grep kfp\r\nkfp 1.8.3\r\nkfp-pipeline-spec 0.1.11\r\nkfp-server-api 1.7.0\r\n```\r\nkfp==1.8.3 collides with tensorflow==2.6 because it requires \r\n\r\nhttps://github.com/kubeflow/pipelines/blob/220d79df66e31bbd93c409fb361e0463bde4aeac/sdk/python/setup.py#L56\r\n\r\nwhile tensorflow needs\r\n```\r\nWarning!!! Possibly conflicting dependencies found:\r\n* tensorflow==2.6.0\r\n - typing-extensions [required: ~=3.7.4, installed: 3.10.0.2]\r\n```\r\n\r\nhttps://github.com/tensorflow/tensorflow/blob/421fba8888bb8f8724bc2e35ca2fdcde16e1bfe5/tensorflow/tools/pip_package/setup.py#L90\r\n\r\nis `'typing-extensions>=3.7.4,<4;python_version<\"3.9\"'` not enough?\r\n\r\nThe same goes for seldon-core==1.11.* and package click and absl-py\r\n\r\n```\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\nkfp 1.8.2 requires absl-py<=0.11,>=0.9, but you have absl-py 0.13.0 which is incompatible.\r\nkfp 1.8.2 requires click<8,>=7.1.1, but you have click 8.0.1 which is incompatible.\r\nkfp 1.8.2 requires typing-extensions<4,>=3.10.0.2, but you have typing-extensions 3.7.4.3 which is incompatible.\r\n```\n", "before_files": [{"content": "# Copyright 2018 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\n\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... 
Change the version in kfp/__init__.py\n\n# NOTICE, after any updates to the following, ./requirements.in should be updated\n# accordingly.\nREQUIRES = [\n 'absl-py>=0.9,<=0.11',\n 'PyYAML>=5.3,<6',\n # `Blob.from_string` was introduced in google-cloud-storage 1.20.0\n # https://github.com/googleapis/python-storage/blob/master/CHANGELOG.md#1200\n 'google-cloud-storage>=1.20.0,<2',\n 'kubernetes>=8.0.0,<19',\n # google-api-python-client v2 doesn't work for private dicovery by default:\n # https://github.com/googleapis/google-api-python-client/issues/1225#issuecomment-791058235\n 'google-api-python-client>=1.7.8,<2',\n 'google-auth>=1.6.1,<2',\n 'requests-toolbelt>=0.8.0,<1',\n 'cloudpickle>=1.3.0,<2',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=1.1.2,<2.0.0',\n 'jsonschema>=3.0.1,<4',\n 'tabulate>=0.8.6,<1',\n 'click>=7.1.1,<8',\n 'Deprecated>=1.2.7,<2',\n 'strip-hints>=0.1.8,<1',\n 'docstring-parser>=0.7.3,<1',\n 'kfp-pipeline-spec>=0.1.10,<0.2.0',\n 'fire>=0.3.1,<1',\n 'protobuf>=3.13.0,<4',\n 'uritemplate>=3.0.1,<4',\n 'pydantic>=1.8.2,<2',\n # Standard library backports\n 'dataclasses;python_version<\"3.7\"',\n 'typing-extensions>=3.7.4,<4;python_version<\"3.9\"',\n]\n\nTESTS_REQUIRE = [\n 'frozendict',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='The Kubeflow Authors',\n url=\"https://github.com/kubeflow/pipelines\",\n project_urls={\n \"Documentation\": \"https://kubeflow-pipelines.readthedocs.io/en/stable/\",\n \"Bug Tracker\": \"https://github.com/kubeflow/pipelines/issues\",\n \"Source\": \"https://github.com/kubeflow/pipelines/tree/master/sdk\",\n \"Changelog\": \"https://github.com/kubeflow/pipelines/blob/master/sdk/RELEASE.md\",\n },\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.auth',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n 'kfp.v2',\n 'kfp.v2.compiler',\n 'kfp.v2.components',\n 'kfp.v2.components.types',\n 'kfp.v2.components.experimental',\n 'kfp.v2.dsl',\n 'kfp.v2.google.client',\n 'kfp.v2.google.experimental',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.6.1',\n 
include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'dsl-compile-v2 = kfp.v2.compiler.main:main',\n 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}]} | 2,649 | 151 |
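
The kfp patch above only relaxes the click pin from `>=7.1.1,<8` to `>=7.1.2,<9`. A small sketch, using the `packaging` library, of why that is enough to clear the pip resolver error quoted in the issue (the version numbers are the ones reported there):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

old_pin = SpecifierSet(">=7.1.1,<8")   # click requirement in kfp 1.8.x
new_pin = SpecifierSet(">=7.1.2,<9")   # relaxed requirement after the patch
installed = Version("8.0.1")           # click version pulled in by seldon-core 1.11

print(installed in old_pin)  # False -> pip reports a conflict
print(installed in new_pin)  # True  -> click 8.x satisfies the relaxed pin
```
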
gh_patches_debug_19349 | rasdani/github-patches | git_diff | fossasia__open-event-server-4248 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Microlocations : GET requests return ERROR 500
**I'm submitting a ...** (check one with "x")
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in our Gitter channel at https://gitter.im/fossasia/open-event-orga-server
Endpoint
```
GET v1/events/<event_id>/microlocations
```
Response
```
{
"errors":[
{
"detail":"Unknown error",
"source":{
"pointer":""
},
"status":500,
"title":"Unknown error"
}
],
"jsonapi":{
"version":"1.0"
}
}
```
Example URL
```
https://open-event-api.herokuapp.com/v1/events/173/microlocations
```
</issue>
<code>
[start of app/api/microlocations.py]
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from marshmallow_jsonapi.flask import Schema, Relationship
3 from marshmallow_jsonapi import fields
4
5 from app.api.bootstrap import api
6 from app.api.helpers.utilities import dasherize
7 from app.models import db
8 from app.models.microlocation import Microlocation
9 from app.models.session import Session
10 from app.api.helpers.db import safe_query
11 from app.api.helpers.utilities import require_relationship
12 from app.api.helpers.permission_manager import has_access
13 from app.api.helpers.exceptions import ForbiddenException
14 from app.api.helpers.query import event_query
15
16
17 class MicrolocationSchema(Schema):
18 """
19 Api schema for Microlocation Model
20 """
21
22 class Meta:
23 """
24 Meta class for Microlocation Api Schema
25 """
26 type_ = 'microlocation'
27 self_view = 'v1.microlocation_detail'
28 self_view_kwargs = {'id': '<id>'}
29 self_view_many = 'v1.session_list'
30 inflect = dasherize
31
32 id = fields.Str(dump_only=True)
33 name = fields.Str(required=True)
34 latitude = fields.Float(validate=lambda n: -90 <= n <= 90, allow_none=True)
35 longitude = fields.Float(validate=lambda n: -180 <= n <= 180, allow_none=True)
36 floor = fields.Integer(allow_none=True)
37 room = fields.Str(allow_none=True)
38 sessions = Relationship(attribute='session',
39 self_view='v1.microlocation_session',
40 self_view_kwargs={'id': '<id>'},
41 related_view='v1.session_list',
42 related_view_kwargs={'microlocation_id': '<id>'},
43 schema='SessionSchema',
44 type_='session')
45 event = Relationship(attribute='event',
46 self_view='v1.microlocation_event',
47 self_view_kwargs={'id': '<id>'},
48 related_view='v1.event_detail',
49 related_view_kwargs={'microlocation_id': '<id>'},
50 schema='EventSchema',
51 type_='event')
52
53
54 class MicrolocationListPost(ResourceList):
55 """
56 List and create microlocations
57 """
58 def before_post(self, args, kwargs, data):
59 require_relationship(['event'], data)
60 if not has_access('is_coorganizer', event_id=data['event']):
61 raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')
62
63 methods = ['POST', ]
64 schema = MicrolocationSchema
65 data_layer = {'session': db.session,
66 'model': Microlocation}
67
68
69 class MicrolocationList(ResourceList):
70 """
71 List Microlocations
72 """
73 def query(self, view_kwargs):
74 query_ = self.session.query(Microlocation)
75 query_ = event_query(self, query_, view_kwargs)
76 if view_kwargs.get('session_id'):
77 session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
78 query_ = query_.join(Session).filter(Session.id == session.id)
79 return query_
80
81 view_kwargs = True
82 methods = ['GET']
83 schema = MicrolocationSchema
84 data_layer = {'session': db.session,
85 'model': Microlocation,
86 'methods': {
87 'query': query
88 }}
89
90
91 class MicrolocationDetail(ResourceDetail):
92 """
93 Microlocation detail by id
94 """
95
96 def before_get_object(self, view_kwargs):
97
98 if view_kwargs.get('session_id') is not None:
99 sessions = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
100 if sessions.event_id is not None:
101 view_kwargs['id'] = sessions.event_id
102 else:
103 view_kwargs['id'] = None
104
105 decorators = (api.has_permission('is_coorganizer', methods="PATCH,DELETE", fetch="event_id", fetch_as="event_id",
106 model=Microlocation),)
107 schema = MicrolocationSchema
108 data_layer = {'session': db.session,
109 'model': Microlocation,
110 'methods': {'before_get_object': before_get_object}}
111
112
113 class MicrolocationRelationshipRequired(ResourceRelationship):
114 """
115 Microlocation Relationship for required entities
116 """
117 decorators = (api.has_permission('is_coorganizer', methods="PATCH", fetch="event_id", fetch_as="event_id",
118 model=Microlocation),)
119 methods = ['GET', 'PATCH']
120 schema = MicrolocationSchema
121 data_layer = {'session': db.session,
122 'model': Microlocation}
123
124
125 class MicrolocationRelationshipOptional(ResourceRelationship):
126 """
127 Microlocation Relationship
128 """
129 decorators = (api.has_permission('is_coorganizer', methods="PATCH,DELETE", fetch="event_id", fetch_as="event_id",
130 model=Microlocation),)
131 schema = MicrolocationSchema
132 data_layer = {'session': db.session,
133 'model': Microlocation}
134
[end of app/api/microlocations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/microlocations.py b/app/api/microlocations.py
--- a/app/api/microlocations.py
+++ b/app/api/microlocations.py
@@ -26,7 +26,7 @@
type_ = 'microlocation'
self_view = 'v1.microlocation_detail'
self_view_kwargs = {'id': '<id>'}
- self_view_many = 'v1.session_list'
+ self_view_many = 'v1.microlocation_list_post'
inflect = dasherize
id = fields.Str(dump_only=True)
@@ -36,6 +36,7 @@
floor = fields.Integer(allow_none=True)
room = fields.Str(allow_none=True)
sessions = Relationship(attribute='session',
+ many=True,
self_view='v1.microlocation_session',
self_view_kwargs={'id': '<id>'},
related_view='v1.session_list',
| {"golden_diff": "diff --git a/app/api/microlocations.py b/app/api/microlocations.py\n--- a/app/api/microlocations.py\n+++ b/app/api/microlocations.py\n@@ -26,7 +26,7 @@\n type_ = 'microlocation'\n self_view = 'v1.microlocation_detail'\n self_view_kwargs = {'id': '<id>'}\n- self_view_many = 'v1.session_list'\n+ self_view_many = 'v1.microlocation_list_post'\n inflect = dasherize\n \n id = fields.Str(dump_only=True)\n@@ -36,6 +36,7 @@\n floor = fields.Integer(allow_none=True)\n room = fields.Str(allow_none=True)\n sessions = Relationship(attribute='session',\n+ many=True,\n self_view='v1.microlocation_session',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.session_list',\n", "issue": "Microlocations : GET requests return ERROR 500\n**I'm submitting a ...** (check one with \"x\")\r\n- [x] bug report\r\n- [ ] feature request\r\n- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server\r\n\r\nEndpoint \r\n```\r\nGET v1/events/<event_id>/microlocations \r\n```\r\n\r\nResponse\r\n```\r\n{\r\n \"errors\":[\r\n {\r\n \"detail\":\"Unknown error\",\r\n \"source\":{\r\n \"pointer\":\"\"\r\n },\r\n \"status\":500,\r\n \"title\":\"Unknown error\"\r\n }\r\n ],\r\n \"jsonapi\":{\r\n \"version\":\"1.0\"\r\n }\r\n}\r\n```\r\n\r\nExample URL\r\n```\r\nhttps://open-event-api.herokuapp.com/v1/events/173/microlocations\r\n```\r\n\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.utilities import dasherize\nfrom app.models import db\nfrom app.models.microlocation import Microlocation\nfrom app.models.session import Session\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.query import event_query\n\n\nclass MicrolocationSchema(Schema):\n \"\"\"\n Api schema for Microlocation Model\n \"\"\"\n\n class Meta:\n \"\"\"\n Meta class for Microlocation Api Schema\n \"\"\"\n type_ = 'microlocation'\n self_view = 'v1.microlocation_detail'\n self_view_kwargs = {'id': '<id>'}\n self_view_many = 'v1.session_list'\n inflect = dasherize\n\n id = fields.Str(dump_only=True)\n name = fields.Str(required=True)\n latitude = fields.Float(validate=lambda n: -90 <= n <= 90, allow_none=True)\n longitude = fields.Float(validate=lambda n: -180 <= n <= 180, allow_none=True)\n floor = fields.Integer(allow_none=True)\n room = fields.Str(allow_none=True)\n sessions = Relationship(attribute='session',\n self_view='v1.microlocation_session',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.session_list',\n related_view_kwargs={'microlocation_id': '<id>'},\n schema='SessionSchema',\n type_='session')\n event = Relationship(attribute='event',\n self_view='v1.microlocation_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'microlocation_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass MicrolocationListPost(ResourceList):\n \"\"\"\n List and create microlocations\n \"\"\"\n def before_post(self, args, kwargs, data):\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise 
ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n methods = ['POST', ]\n schema = MicrolocationSchema\n data_layer = {'session': db.session,\n 'model': Microlocation}\n\n\nclass MicrolocationList(ResourceList):\n \"\"\"\n List Microlocations\n \"\"\"\n def query(self, view_kwargs):\n query_ = self.session.query(Microlocation)\n query_ = event_query(self, query_, view_kwargs)\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n query_ = query_.join(Session).filter(Session.id == session.id)\n return query_\n\n view_kwargs = True\n methods = ['GET']\n schema = MicrolocationSchema\n data_layer = {'session': db.session,\n 'model': Microlocation,\n 'methods': {\n 'query': query\n }}\n\n\nclass MicrolocationDetail(ResourceDetail):\n \"\"\"\n Microlocation detail by id\n \"\"\"\n\n def before_get_object(self, view_kwargs):\n\n if view_kwargs.get('session_id') is not None:\n sessions = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n if sessions.event_id is not None:\n view_kwargs['id'] = sessions.event_id\n else:\n view_kwargs['id'] = None\n\n decorators = (api.has_permission('is_coorganizer', methods=\"PATCH,DELETE\", fetch=\"event_id\", fetch_as=\"event_id\",\n model=Microlocation),)\n schema = MicrolocationSchema\n data_layer = {'session': db.session,\n 'model': Microlocation,\n 'methods': {'before_get_object': before_get_object}}\n\n\nclass MicrolocationRelationshipRequired(ResourceRelationship):\n \"\"\"\n Microlocation Relationship for required entities\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', methods=\"PATCH\", fetch=\"event_id\", fetch_as=\"event_id\",\n model=Microlocation),)\n methods = ['GET', 'PATCH']\n schema = MicrolocationSchema\n data_layer = {'session': db.session,\n 'model': Microlocation}\n\n\nclass MicrolocationRelationshipOptional(ResourceRelationship):\n \"\"\"\n Microlocation Relationship\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', methods=\"PATCH,DELETE\", fetch=\"event_id\", fetch_as=\"event_id\",\n model=Microlocation),)\n schema = MicrolocationSchema\n data_layer = {'session': db.session,\n 'model': Microlocation}\n", "path": "app/api/microlocations.py"}]} | 2,045 | 198 |
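
The 500 in the microlocations record above is resolved by declaring the to-many `sessions` link with `many=True` (and by pointing `self_view_many` at a microlocation view). A stripped-down sketch of the corrected declaration, assuming marshmallow-jsonapi's Flask integration as used in the schema; it is illustrative rather than a drop-in replacement:

```python
from marshmallow_jsonapi.flask import Schema, Relationship
from marshmallow_jsonapi import fields

class MicrolocationSketchSchema(Schema):
    class Meta:
        type_ = 'microlocation'
        self_view = 'v1.microlocation_detail'
        self_view_kwargs = {'id': '<id>'}

    id = fields.Str(dump_only=True)
    sessions = Relationship(
        attribute='session',
        many=True,                       # the one-line change the golden diff adds
        related_view='v1.session_list',
        related_view_kwargs={'microlocation_id': '<id>'},
        schema='SessionSchema',
        type_='session',
    )
```
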
gh_patches_debug_8814 | rasdani/github-patches | git_diff | CTPUG__wafer-243 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tickets should be decoded on Python 3
As seen from the recent Quicket hook POSTs:
TypeError at /tickets/quicket_hook/
the JSON object must be str, not 'bytes'
</issue>
<code>
[start of wafer/tickets/views.py]
1 import json
2 import logging
3
4 from django.conf import settings
5 from django.contrib.auth import get_user_model
6 from django.core.exceptions import PermissionDenied, ValidationError
7 from django.core.urlresolvers import reverse
8 from django.http import HttpResponse, Http404
9 from django.views.decorators.csrf import csrf_exempt
10 from django.views.decorators.http import require_POST
11 from django.views.generic.edit import FormView
12
13 from wafer.tickets.models import Ticket, TicketType
14 from wafer.tickets.forms import TicketForm
15
16 log = logging.getLogger(__name__)
17
18
19 class ClaimView(FormView):
20 template_name = 'wafer.tickets/claim.html'
21 form_class = TicketForm
22
23 def get_context_data(self, **kwargs):
24 context = super(ClaimView, self).get_context_data(**kwargs)
25 context['can_claim'] = self.can_claim()
26 return context
27
28 def can_claim(self):
29 if settings.WAFER_REGISTRATION_MODE != 'ticket':
30 raise Http404('Ticket-based registration is not in use')
31 if not settings.WAFER_REGISTRATION_OPEN:
32 return False
33 return not self.request.user.userprofile.is_registered()
34
35 def form_valid(self, form):
36 if not self.can_claim():
37 raise ValidationError('User may not claim a ticket')
38 ticket = Ticket.objects.get(barcode=form.cleaned_data['barcode'])
39 ticket.user = self.request.user
40 ticket.save()
41 return super(ClaimView, self).form_valid(form)
42
43 def get_success_url(self):
44 return reverse(
45 'wafer_user_profile', args=(self.request.user.username,))
46
47
48 @csrf_exempt
49 @require_POST
50 def quicket_hook(request):
51 '''
52 Quicket.co.za can POST something like this when tickets are bought:
53 {
54 "reference": "REF00123456",
55 "event_id": 123,
56 "event_name": "My Big Event",
57 "amount": 0.00,
58 "email": "[email protected]",
59 "action": "checkout_started",
60 // Options are "checkout_started","checkout_cancelled","eft_pending",
61 // "checkout_completed"
62 "tickets": [
63 {
64 "id": 122,
65 "attendee_name": "",
66 "attendee_email": "",
67 "ticket_type": "Free Ticket",
68 "price": 0.00,
69 "barcode": 12345,
70 },
71 ...
72 ],
73 }
74 '''
75 if request.GET.get('secret') != settings.WAFER_TICKETS_SECRET:
76 raise PermissionDenied('Incorrect secret')
77
78 payload = json.load(request)
79 for ticket in payload['tickets']:
80 import_ticket(ticket['barcode'], ticket['ticket_type'],
81 ticket['attendee_email'])
82
83 return HttpResponse("Noted\n", content_type='text/plain')
84
85
86 def import_ticket(ticket_barcode, ticket_type, email):
87 if Ticket.objects.filter(barcode=ticket_barcode).exists():
88 log.debug('Ticket already registered: %s', ticket_barcode)
89 return
90
91 # truncate long ticket type names to length allowed by database
92 ticket_type = ticket_type[:TicketType.MAX_NAME_LENGTH]
93 type_, created = TicketType.objects.get_or_create(name=ticket_type)
94
95 UserModel = get_user_model()
96
97 try:
98 user = UserModel.objects.get(email=email, ticket=None)
99 except UserModel.DoesNotExist:
100 user = None
101 except UserModel.MultipleObjectsReturned:
102 # We're can't uniquely identify the user to associate this ticket
103 # with, so leave it for them to figure out via the 'claim ticket'
104 # interface
105 user = None
106
107 ticket = Ticket.objects.create(
108 barcode=ticket_barcode,
109 email=email,
110 type=type_,
111 user=user,
112 )
113 ticket.save()
114
115 if user:
116 log.info('Ticket registered: %s and linked to user', ticket)
117 else:
118 log.info('Ticket registered: %s. Unclaimed', ticket)
119
[end of wafer/tickets/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wafer/tickets/views.py b/wafer/tickets/views.py
--- a/wafer/tickets/views.py
+++ b/wafer/tickets/views.py
@@ -75,7 +75,8 @@
if request.GET.get('secret') != settings.WAFER_TICKETS_SECRET:
raise PermissionDenied('Incorrect secret')
- payload = json.load(request)
+ # This is required for python 3, and in theory fine on python 2
+ payload = json.loads(request.body.decode('utf8'))
for ticket in payload['tickets']:
import_ticket(ticket['barcode'], ticket['ticket_type'],
ticket['attendee_email'])
| {"golden_diff": "diff --git a/wafer/tickets/views.py b/wafer/tickets/views.py\n--- a/wafer/tickets/views.py\n+++ b/wafer/tickets/views.py\n@@ -75,7 +75,8 @@\n if request.GET.get('secret') != settings.WAFER_TICKETS_SECRET:\n raise PermissionDenied('Incorrect secret')\n \n- payload = json.load(request)\n+ # This is required for python 3, and in theory fine on python 2\n+ payload = json.loads(request.body.decode('utf8'))\n for ticket in payload['tickets']:\n import_ticket(ticket['barcode'], ticket['ticket_type'],\n ticket['attendee_email'])\n", "issue": "tickets should be decoded on python 3\nAs seen from the recent quicket hook posts\n\nTypeError at /tickets/quicket_hook/\nthe JSON object must be str, not 'bytes'\n\n", "before_files": [{"content": "import json\nimport logging\n\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import PermissionDenied, ValidationError\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponse, Http404\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_POST\nfrom django.views.generic.edit import FormView\n\nfrom wafer.tickets.models import Ticket, TicketType\nfrom wafer.tickets.forms import TicketForm\n\nlog = logging.getLogger(__name__)\n\n\nclass ClaimView(FormView):\n template_name = 'wafer.tickets/claim.html'\n form_class = TicketForm\n\n def get_context_data(self, **kwargs):\n context = super(ClaimView, self).get_context_data(**kwargs)\n context['can_claim'] = self.can_claim()\n return context\n\n def can_claim(self):\n if settings.WAFER_REGISTRATION_MODE != 'ticket':\n raise Http404('Ticket-based registration is not in use')\n if not settings.WAFER_REGISTRATION_OPEN:\n return False\n return not self.request.user.userprofile.is_registered()\n\n def form_valid(self, form):\n if not self.can_claim():\n raise ValidationError('User may not claim a ticket')\n ticket = Ticket.objects.get(barcode=form.cleaned_data['barcode'])\n ticket.user = self.request.user\n ticket.save()\n return super(ClaimView, self).form_valid(form)\n\n def get_success_url(self):\n return reverse(\n 'wafer_user_profile', args=(self.request.user.username,))\n\n\n@csrf_exempt\n@require_POST\ndef quicket_hook(request):\n '''\n Quicket.co.za can POST something like this when tickets are bought:\n {\n \"reference\": \"REF00123456\",\n \"event_id\": 123,\n \"event_name\": \"My Big Event\",\n \"amount\": 0.00,\n \"email\": \"[email protected]\",\n \"action\": \"checkout_started\",\n // Options are \"checkout_started\",\"checkout_cancelled\",\"eft_pending\",\n // \"checkout_completed\"\n \"tickets\": [\n {\n \"id\": 122,\n \"attendee_name\": \"\",\n \"attendee_email\": \"\",\n \"ticket_type\": \"Free Ticket\",\n \"price\": 0.00,\n \"barcode\": 12345,\n },\n ...\n ],\n }\n '''\n if request.GET.get('secret') != settings.WAFER_TICKETS_SECRET:\n raise PermissionDenied('Incorrect secret')\n\n payload = json.load(request)\n for ticket in payload['tickets']:\n import_ticket(ticket['barcode'], ticket['ticket_type'],\n ticket['attendee_email'])\n\n return HttpResponse(\"Noted\\n\", content_type='text/plain')\n\n\ndef import_ticket(ticket_barcode, ticket_type, email):\n if Ticket.objects.filter(barcode=ticket_barcode).exists():\n log.debug('Ticket already registered: %s', ticket_barcode)\n return\n\n # truncate long ticket type names to length allowed by database\n ticket_type = ticket_type[:TicketType.MAX_NAME_LENGTH]\n type_, created = 
TicketType.objects.get_or_create(name=ticket_type)\n\n UserModel = get_user_model()\n\n try:\n user = UserModel.objects.get(email=email, ticket=None)\n except UserModel.DoesNotExist:\n user = None\n except UserModel.MultipleObjectsReturned:\n # We're can't uniquely identify the user to associate this ticket\n # with, so leave it for them to figure out via the 'claim ticket'\n # interface\n user = None\n\n ticket = Ticket.objects.create(\n barcode=ticket_barcode,\n email=email,\n type=type_,\n user=user,\n )\n ticket.save()\n\n if user:\n log.info('Ticket registered: %s and linked to user', ticket)\n else:\n log.info('Ticket registered: %s. Unclaimed', ticket)\n", "path": "wafer/tickets/views.py"}]} | 1,667 | 146 |
gh_patches_debug_3467 | rasdani/github-patches | git_diff | getmoto__moto-1739 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[SES] Does not properly verify mailbox with display name
https://tools.ietf.org/html/rfc2822.html#section-3.4 defines two forms of valid mailbox:
* `[email protected]`
* `"Foo Bar" <[email protected]>`
SES supports both of these forms. Per https://github.com/spulec/moto/blob/master/moto/ses/models.py#L55, only the first form is supported by moto.
</issue>
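A small sketch of how `email.utils.parseaddr` (already imported in `moto/ses/models.py`) reduces both mailbox forms to a bare address before the `@`-split; this mirrors the approach taken in the patch later in this record:

```
from email.utils import parseaddr

# The two RFC 2822 mailbox forms named above.
for source in ("[email protected]", '"Foo Bar" <[email protected]>'):
    display_name, address = parseaddr(source)
    # parseaddr strips any display name, so the bare address can then be
    # split on '@' to check the host against the verified domains.
    user, host = address.split('@', 1)
    print(display_name or "<none>", address, host)
```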
<code>
[start of moto/ses/models.py]
1 from __future__ import unicode_literals
2
3 import email
4 from email.utils import parseaddr
5
6 from moto.core import BaseBackend, BaseModel
7 from .exceptions import MessageRejectedError
8 from .utils import get_random_message_id
9
10
11 RECIPIENT_LIMIT = 50
12
13
14 class Message(BaseModel):
15
16 def __init__(self, message_id, source, subject, body, destinations):
17 self.id = message_id
18 self.source = source
19 self.subject = subject
20 self.body = body
21 self.destinations = destinations
22
23
24 class RawMessage(BaseModel):
25
26 def __init__(self, message_id, source, destinations, raw_data):
27 self.id = message_id
28 self.source = source
29 self.destinations = destinations
30 self.raw_data = raw_data
31
32
33 class SESQuota(BaseModel):
34
35 def __init__(self, sent):
36 self.sent = sent
37
38 @property
39 def sent_past_24(self):
40 return self.sent
41
42
43 class SESBackend(BaseBackend):
44
45 def __init__(self):
46 self.addresses = []
47 self.email_addresses = []
48 self.domains = []
49 self.sent_messages = []
50 self.sent_message_count = 0
51
52 def _is_verified_address(self, address):
53 if address in self.addresses:
54 return True
55 user, host = address.split('@', 1)
56 return host in self.domains
57
58 def verify_email_identity(self, address):
59 self.addresses.append(address)
60
61 def verify_email_address(self, address):
62 self.email_addresses.append(address)
63
64 def verify_domain(self, domain):
65 self.domains.append(domain)
66
67 def list_identities(self):
68 return self.domains + self.addresses
69
70 def list_verified_email_addresses(self):
71 return self.email_addresses
72
73 def delete_identity(self, identity):
74 if '@' in identity:
75 self.addresses.remove(identity)
76 else:
77 self.domains.remove(identity)
78
79 def send_email(self, source, subject, body, destinations):
80 recipient_count = sum(map(len, destinations.values()))
81 if recipient_count > RECIPIENT_LIMIT:
82 raise MessageRejectedError('Too many recipients.')
83 if not self._is_verified_address(source):
84 raise MessageRejectedError(
85 "Email address not verified %s" % source
86 )
87
88 message_id = get_random_message_id()
89 message = Message(message_id, source, subject, body, destinations)
90 self.sent_messages.append(message)
91 self.sent_message_count += recipient_count
92 return message
93
94 def send_raw_email(self, source, destinations, raw_data):
95 if source is not None:
96 _, source_email_address = parseaddr(source)
97 if source_email_address not in self.addresses:
98 raise MessageRejectedError(
99 "Did not have authority to send from email %s" % source_email_address
100 )
101
102 recipient_count = len(destinations)
103 message = email.message_from_string(raw_data)
104 if source is None:
105 if message['from'] is None:
106 raise MessageRejectedError(
107 "Source not specified"
108 )
109
110 _, source_email_address = parseaddr(message['from'])
111 if source_email_address not in self.addresses:
112 raise MessageRejectedError(
113 "Did not have authority to send from email %s" % source_email_address
114 )
115
116 for header in 'TO', 'CC', 'BCC':
117 recipient_count += sum(
118 d.strip() and 1 or 0
119 for d in message.get(header, '').split(',')
120 )
121 if recipient_count > RECIPIENT_LIMIT:
122 raise MessageRejectedError('Too many recipients.')
123
124 self.sent_message_count += recipient_count
125 message_id = get_random_message_id()
126 message = RawMessage(message_id, source, destinations, raw_data)
127 self.sent_messages.append(message)
128 return message
129
130 def get_send_quota(self):
131 return SESQuota(self.sent_message_count)
132
133
134 ses_backend = SESBackend()
135
[end of moto/ses/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/moto/ses/models.py b/moto/ses/models.py
--- a/moto/ses/models.py
+++ b/moto/ses/models.py
@@ -49,7 +49,8 @@
self.sent_messages = []
self.sent_message_count = 0
- def _is_verified_address(self, address):
+ def _is_verified_address(self, source):
+ _, address = parseaddr(source)
if address in self.addresses:
return True
user, host = address.split('@', 1)
| {"golden_diff": "diff --git a/moto/ses/models.py b/moto/ses/models.py\n--- a/moto/ses/models.py\n+++ b/moto/ses/models.py\n@@ -49,7 +49,8 @@\n self.sent_messages = []\n self.sent_message_count = 0\n \n- def _is_verified_address(self, address):\n+ def _is_verified_address(self, source):\n+ _, address = parseaddr(source)\n if address in self.addresses:\n return True\n user, host = address.split('@', 1)\n", "issue": "[SES] Does not properly verify mailbox with display name\nhttps://tools.ietf.org/html/rfc2822.html#section-3.4 defines two forms of valid mailbox:\r\n\r\n* `[email protected]`\r\n* `\"Foo Bar\" <[email protected]>`\r\n\r\nSES supports both of these forms. Per https://github.com/spulec/moto/blob/master/moto/ses/models.py#L55, only the first form is supported by moto.\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport email\nfrom email.utils import parseaddr\n\nfrom moto.core import BaseBackend, BaseModel\nfrom .exceptions import MessageRejectedError\nfrom .utils import get_random_message_id\n\n\nRECIPIENT_LIMIT = 50\n\n\nclass Message(BaseModel):\n\n def __init__(self, message_id, source, subject, body, destinations):\n self.id = message_id\n self.source = source\n self.subject = subject\n self.body = body\n self.destinations = destinations\n\n\nclass RawMessage(BaseModel):\n\n def __init__(self, message_id, source, destinations, raw_data):\n self.id = message_id\n self.source = source\n self.destinations = destinations\n self.raw_data = raw_data\n\n\nclass SESQuota(BaseModel):\n\n def __init__(self, sent):\n self.sent = sent\n\n @property\n def sent_past_24(self):\n return self.sent\n\n\nclass SESBackend(BaseBackend):\n\n def __init__(self):\n self.addresses = []\n self.email_addresses = []\n self.domains = []\n self.sent_messages = []\n self.sent_message_count = 0\n\n def _is_verified_address(self, address):\n if address in self.addresses:\n return True\n user, host = address.split('@', 1)\n return host in self.domains\n\n def verify_email_identity(self, address):\n self.addresses.append(address)\n\n def verify_email_address(self, address):\n self.email_addresses.append(address)\n\n def verify_domain(self, domain):\n self.domains.append(domain)\n\n def list_identities(self):\n return self.domains + self.addresses\n\n def list_verified_email_addresses(self):\n return self.email_addresses\n\n def delete_identity(self, identity):\n if '@' in identity:\n self.addresses.remove(identity)\n else:\n self.domains.remove(identity)\n\n def send_email(self, source, subject, body, destinations):\n recipient_count = sum(map(len, destinations.values()))\n if recipient_count > RECIPIENT_LIMIT:\n raise MessageRejectedError('Too many recipients.')\n if not self._is_verified_address(source):\n raise MessageRejectedError(\n \"Email address not verified %s\" % source\n )\n\n message_id = get_random_message_id()\n message = Message(message_id, source, subject, body, destinations)\n self.sent_messages.append(message)\n self.sent_message_count += recipient_count\n return message\n\n def send_raw_email(self, source, destinations, raw_data):\n if source is not None:\n _, source_email_address = parseaddr(source)\n if source_email_address not in self.addresses:\n raise MessageRejectedError(\n \"Did not have authority to send from email %s\" % source_email_address\n )\n\n recipient_count = len(destinations)\n message = email.message_from_string(raw_data)\n if source is None:\n if message['from'] is None:\n raise MessageRejectedError(\n \"Source not specified\"\n )\n\n _, 
source_email_address = parseaddr(message['from'])\n if source_email_address not in self.addresses:\n raise MessageRejectedError(\n \"Did not have authority to send from email %s\" % source_email_address\n )\n\n for header in 'TO', 'CC', 'BCC':\n recipient_count += sum(\n d.strip() and 1 or 0\n for d in message.get(header, '').split(',')\n )\n if recipient_count > RECIPIENT_LIMIT:\n raise MessageRejectedError('Too many recipients.')\n\n self.sent_message_count += recipient_count\n message_id = get_random_message_id()\n message = RawMessage(message_id, source, destinations, raw_data)\n self.sent_messages.append(message)\n return message\n\n def get_send_quota(self):\n return SESQuota(self.sent_message_count)\n\n\nses_backend = SESBackend()\n", "path": "moto/ses/models.py"}]} | 1,774 | 118 |
gh_patches_debug_31477 | rasdani/github-patches | git_diff | talonhub__community-179 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Windows switching could use fuzzier matching
Right now when you want to switch window focus you have to say a word that matches a generated list of running applications, and this list has an override that lets you more easily say the names of complex application names, for instance `term` is `iTerm2` or whatever.
I propose we change the way the list works so that actually we just use `focus <user.text>` for the command, and then `switcher_focus()` does the lookup for whatever text it gets inside the running application list. Doing it this way allows us to leverage any additional vocabulary that's been added to `vocabulary.py` without necessarily having to hardcode it again inside of `switcher.py`.
In addition I propose that we introduce slightly fuzzier matching. For example, my `vocabulary.py` has a rule for "key pass" which becomes "keepass", and this is helpful when I am just talking to other people in chat or whatever. But the actual name of the running process is `KeePassXC`. I don't necessarily want to always be typing that exact word into chat with other people when I say `key pass` though. I suspect this type of thing will be increasingly common the more people are using it with different applications.
As it stands, if I wanted to match the app with the current switcher I'd have to put a "key pass":"KeePassXC" override in `switcher.py`, instead of just matching the result I already have in `vocabulary.py`.
In addition to using `vocabulary.py`, I propose we check for a partial match using `startswith()`, so then I would be able to say something like "focus key pass" and it would actually come through as `focus keepass`, which would in turn match against `keepassxc`.
We can of course keep the overrides for certain things that are a bit harder to say in `switcher.py`, for instance I think people aren't going to have `term` becoming `iTerm2` in `vocabulary.py`.
This can mostly all be done with a simple `switcher_focus()` change like:
```
running = ctx.lists["self.running"]
wanted_app = None
for running_name in running.keys():
if running_name == name or running_name.lower().startswith(name):
wanted_app = running[running_name]
break
if wanted_app is None:
return
for app in ui.apps():
if app.name == wanted_app and not app.background:
#os.system("i3-msg '[class=\"(?)%s\"] focus'" % app.name)
app.focus()
break
```
Not sending a PR for now as I figured I'd run it past others first.
One possible change could be that it does two passes: the first pass matches on explicit name matches so you don't accidentally match against something that has a more explicit match, and then if there are no explicit matches, it tries a second pass with fuzzy matching.
</issue>
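The closing idea — explicit matches first, fuzzy matching only as a fallback — could look roughly like the sketch below. The `lookup_app` helper and the sample `running` dict are hypothetical; the diff later in this record folds both checks into a single pass instead:

```
def lookup_app(name, running):
    # First pass: explicit matches only, so an exact entry always wins.
    for running_name, app_name in running.items():
        if running_name == name:
            return app_name
    # Second pass: fall back to fuzzier prefix matching.
    for running_name, app_name in running.items():
        if running_name.lower().startswith(name):
            return app_name
    return None


# Hypothetical data: "key pass" has already been turned into "keepass"
# by vocabulary.py before it reaches the switcher.
running = {"keepassxc": "KeePassXC", "term": "iTerm2"}
print(lookup_app("keepass", running))  # -> "KeePassXC" via the prefix pass
print(lookup_app("term", running))     # -> "iTerm2" via the exact pass
```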
<code>
[start of code/switcher.py]
1 from talon import app, Module, Context, actions, ui, imgui
2 from talon.voice import Capture
3 import re
4 import time
5 import os
6
7 # Construct at startup a list of overides for application names (similar to how homophone list is managed)
8 # ie for a given talon recognition word set `one note`, recognized this in these switcher functions as `ONENOTE`
9 # the list is a comma seperated `<Recognized Words>, <Overide>`
10 # TODO: Consider put list csv's (homophones.csv, app_name_overrides.csv) files together in a seperate directory,`knausj_talon/lists`
11 cwd = os.path.dirname(os.path.realpath(__file__))
12 overrides_file = os.path.join(
13 cwd, "app_names", f"app_name_overrides.{app.platform}.csv"
14 )
15 overrides = {}
16 with open(overrides_file, "r") as f:
17 for line in f:
18 line = line.rstrip()
19 line = line.split(",")
20 overrides[line[0].lower()] = line[1].strip()
21
22 print(f"knausj_talon.switcher------------ app name overrides:{overrides}")
23
24 app_cache = {}
25
26
27 mod = Module()
28 mod.list("running", desc="all running applications")
29 mod.list("launch", desc="all launchable applications")
30
31
32 @mod.capture
33 def running_applications(m) -> str:
34 "Returns a single application name"
35
36
37 @mod.capture
38 def launch_applications(m) -> Capture:
39 "Returns a single application name"
40
41
42 ctx = Context()
43
44
45 @ctx.capture(rule="{self.running}")
46 def running_applications(m):
47 return m.running
48
49
50 @ctx.capture(rule="{self.launch}")
51 def launch_applications(m):
52 return m.launch
53
54
55 def split_camel(word):
56 return re.findall(r"[0-9A-Z]*[a-z]+(?=[A-Z]|$)", word)
57
58
59 def get_words(name):
60 words = re.findall(r"[0-9A-Za-z]+", name)
61 out = []
62 for word in words:
63 out += split_camel(word)
64 return out
65
66
67 @mod.action_class
68 class Actions:
69 def switcher_focus(name: str):
70 """Focus a new application by name"""
71 for app in ui.apps():
72 # print(f"--------- app.name:{app.name} app.bundler:{app.bundle}")
73 if name in app.name and not app.background:
74 app.focus()
75 break
76
77 def switcher_launch(path: str):
78 """Launch a new application by path"""
79 ui.launch(path=path)
80
81 def switcher_list_running():
82 """Lists all running applications"""
83 gui.show()
84
85 def switcher_hide_running():
86 """Hides list of running applications"""
87 gui.hide()
88
89
90 @imgui.open(software=False)
91 def gui(gui: imgui.GUI):
92 gui.text("Names of running applications")
93 gui.line()
94 for line in ctx.lists["self.running"]:
95 gui.text(line)
96
97
98 def update_lists():
99 running = {}
100 launch = {}
101
102 for cur_app in ui.apps(background=False):
103 name = cur_app.name
104 if name.endswith(".exe"):
105 name = name.rsplit(".", 1)[0]
106 words = get_words(name)
107 for word in words:
108 if word and not word in running:
109 running[word.lower()] = cur_app.name
110 running[name.lower()] = cur_app.name
111 for override in overrides:
112 running[override] = overrides[override]
113
114 if app.platform == "mac":
115 for base in "/Applications", "/Applications/Utilities":
116 for name in os.listdir(base):
117 path = os.path.join(base, name)
118 name = name.rsplit(".", 1)[0].lower()
119 launch[name] = path
120 words = name.split(" ")
121 for word in words:
122 if word and word not in launch:
123 if len(name) > 6 and len(word) < 3:
124 continue
125 launch[word] = path
126
127 lists = {
128 "self.running": running,
129 "self.launch": launch,
130 }
131
132 # batch update lists
133 ctx.lists.update(lists)
134
135
136 def ui_event(event, arg):
137 if event in ("app_activate", "app_launch", "app_close", "win_open", "win_close"):
138 # print(f'------------------ event:{event} arg:{arg}')
139 update_lists()
140
141
142 ui.register("", ui_event)
143 update_lists()
144
[end of code/switcher.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/code/switcher.py b/code/switcher.py
--- a/code/switcher.py
+++ b/code/switcher.py
@@ -1,8 +1,9 @@
-from talon import app, Module, Context, actions, ui, imgui
-from talon.voice import Capture
+import os
import re
import time
-import os
+
+from talon import Context, Module, app, imgui, ui
+from talon.voice import Capture
# Construct at startup a list of overides for application names (similar to how homophone list is managed)
# ie for a given talon recognition word set `one note`, recognized this in these switcher functions as `ONENOTE`
@@ -68,10 +69,18 @@
class Actions:
def switcher_focus(name: str):
"""Focus a new application by name"""
- for app in ui.apps():
- # print(f"--------- app.name:{app.name} app.bundler:{app.bundle}")
- if name in app.name and not app.background:
- app.focus()
+ running = ctx.lists["self.running"]
+ wanted_app = None
+ for running_name in running.keys():
+ if running_name == name or running_name.lower().startswith(name):
+ wanted_app = running[running_name]
+ break
+ if wanted_app is None:
+ return
+
+ for cur_app in ui.apps():
+ if cur_app.name == wanted_app and not cur_app.background:
+ cur_app.focus()
break
def switcher_launch(path: str):
@@ -105,7 +114,7 @@
name = name.rsplit(".", 1)[0]
words = get_words(name)
for word in words:
- if word and not word in running:
+ if word and word not in running:
running[word.lower()] = cur_app.name
running[name.lower()] = cur_app.name
for override in overrides:
| {"golden_diff": "diff --git a/code/switcher.py b/code/switcher.py\n--- a/code/switcher.py\n+++ b/code/switcher.py\n@@ -1,8 +1,9 @@\n-from talon import app, Module, Context, actions, ui, imgui\n-from talon.voice import Capture\n+import os\n import re\n import time\n-import os\n+\n+from talon import Context, Module, app, imgui, ui\n+from talon.voice import Capture\n \n # Construct at startup a list of overides for application names (similar to how homophone list is managed)\n # ie for a given talon recognition word set `one note`, recognized this in these switcher functions as `ONENOTE`\n@@ -68,10 +69,18 @@\n class Actions:\n def switcher_focus(name: str):\n \"\"\"Focus a new application by name\"\"\"\n- for app in ui.apps():\n- # print(f\"--------- app.name:{app.name} app.bundler:{app.bundle}\")\n- if name in app.name and not app.background:\n- app.focus()\n+ running = ctx.lists[\"self.running\"]\n+ wanted_app = None\n+ for running_name in running.keys():\n+ if running_name == name or running_name.lower().startswith(name):\n+ wanted_app = running[running_name]\n+ break\n+ if wanted_app is None:\n+ return\n+\n+ for cur_app in ui.apps():\n+ if cur_app.name == wanted_app and not cur_app.background:\n+ cur_app.focus()\n break\n \n def switcher_launch(path: str):\n@@ -105,7 +114,7 @@\n name = name.rsplit(\".\", 1)[0]\n words = get_words(name)\n for word in words:\n- if word and not word in running:\n+ if word and word not in running:\n running[word.lower()] = cur_app.name\n running[name.lower()] = cur_app.name\n for override in overrides:\n", "issue": "Windows switching could use fuzzier matching\nRight now when you want to switch window focus you have to say a word the matches a generated list of running applications, and this list has an override that lets you more easily say the names of complex application names, for instance `term` is `iTerm2` or whatever.\r\n\r\nI propose we change the way the list works so that actually we just use `focus <user.text>` for the command, and then `switcher_focus()` does the lookup for whatever text it gets inside the running application list. Doing it this way allows us to leverage any additional vocabulary that's been added to `vocabulary.py` without necessarily having to hardcode it again inside of `switcher.py`. \r\n\r\nin addition I propose that we introduce slightly fuzzer matching. IFor example, my `vocabulary.py` as a rule for \"key pass\" which becomes \"keepass\", and this is helpful when I am just talking to other people in chat or whatever. But the actual name of the running process is `KeePassXC`. I don't necessarily want to always be typing that exact word into chat with other people when I say `key pass` though. I suspect this type of thing will be increasingly common the more people are using it with different applications \r\n\r\nAs it stands if I wanted to match the app with current switcher I'd have to put \"key pass\":\"KeePassXC\" override in `switcher.py`, instead of just matching the result I already have in `vocabulary.py`. \r\n\r\nIn addition to using `vocabulary.py`, I propose we check for a partial match using `startswith()`, so then I would be able to say something like \"focus key pass\" and it would actually come through as `focus keepass`, which would in turn match against `keepassxc`. 
\r\n\r\nWe can of course keep the overrides for certain things that are a bit harder to say in `switcher.py`, for instance I think people aren't going to have `term` becoming `iTerm2` in `vocabulary.py`.\r\n\r\nThis can mostly all be done with a simple `switcher_focus()` change like:\r\n\r\n```\r\n running = ctx.lists[\"self.running\"]\r\n wanted_app = None\r\n for running_name in running.keys():\r\n if running_name == name or running_name.lower().startswith(name):\r\n wanted_app = running[running_name]\r\n break\r\n if wanted_app is None:\r\n return\r\n\r\n for app in ui.apps():\r\n if app.name == wanted_app and not app.background:\r\n #os.system(\"i3-msg '[class=\\\"(?)%s\\\"] focus'\" % app.name)\r\n app.focus()\r\n break\r\n```\r\n\r\nNot sending a PR for now as I figured I'd run it past others first.\r\n\r\nOne possible change could be that it does two passes, the first pass matches on explicit name matches so you don't accidentally match against something that has a more explicit match, and then if there are no explicit matches, it tries a second pass with fuzzy matching.\n", "before_files": [{"content": "from talon import app, Module, Context, actions, ui, imgui\nfrom talon.voice import Capture\nimport re\nimport time\nimport os\n\n# Construct at startup a list of overides for application names (similar to how homophone list is managed)\n# ie for a given talon recognition word set `one note`, recognized this in these switcher functions as `ONENOTE`\n# the list is a comma seperated `<Recognized Words>, <Overide>`\n# TODO: Consider put list csv's (homophones.csv, app_name_overrides.csv) files together in a seperate directory,`knausj_talon/lists`\ncwd = os.path.dirname(os.path.realpath(__file__))\noverrides_file = os.path.join(\n cwd, \"app_names\", f\"app_name_overrides.{app.platform}.csv\"\n)\noverrides = {}\nwith open(overrides_file, \"r\") as f:\n for line in f:\n line = line.rstrip()\n line = line.split(\",\")\n overrides[line[0].lower()] = line[1].strip()\n\nprint(f\"knausj_talon.switcher------------ app name overrides:{overrides}\")\n\napp_cache = {}\n\n\nmod = Module()\nmod.list(\"running\", desc=\"all running applications\")\nmod.list(\"launch\", desc=\"all launchable applications\")\n\n\[email protected]\ndef running_applications(m) -> str:\n \"Returns a single application name\"\n\n\[email protected]\ndef launch_applications(m) -> Capture:\n \"Returns a single application name\"\n\n\nctx = Context()\n\n\[email protected](rule=\"{self.running}\")\ndef running_applications(m):\n return m.running\n\n\[email protected](rule=\"{self.launch}\")\ndef launch_applications(m):\n return m.launch\n\n\ndef split_camel(word):\n return re.findall(r\"[0-9A-Z]*[a-z]+(?=[A-Z]|$)\", word)\n\n\ndef get_words(name):\n words = re.findall(r\"[0-9A-Za-z]+\", name)\n out = []\n for word in words:\n out += split_camel(word)\n return out\n\n\[email protected]_class\nclass Actions:\n def switcher_focus(name: str):\n \"\"\"Focus a new application by name\"\"\"\n for app in ui.apps():\n # print(f\"--------- app.name:{app.name} app.bundler:{app.bundle}\")\n if name in app.name and not app.background:\n app.focus()\n break\n\n def switcher_launch(path: str):\n \"\"\"Launch a new application by path\"\"\"\n ui.launch(path=path)\n\n def switcher_list_running():\n \"\"\"Lists all running applications\"\"\"\n gui.show()\n\n def switcher_hide_running():\n \"\"\"Hides list of running applications\"\"\"\n gui.hide()\n\n\[email protected](software=False)\ndef gui(gui: imgui.GUI):\n gui.text(\"Names of running 
applications\")\n gui.line()\n for line in ctx.lists[\"self.running\"]:\n gui.text(line)\n\n\ndef update_lists():\n running = {}\n launch = {}\n\n for cur_app in ui.apps(background=False):\n name = cur_app.name\n if name.endswith(\".exe\"):\n name = name.rsplit(\".\", 1)[0]\n words = get_words(name)\n for word in words:\n if word and not word in running:\n running[word.lower()] = cur_app.name\n running[name.lower()] = cur_app.name\n for override in overrides:\n running[override] = overrides[override]\n\n if app.platform == \"mac\":\n for base in \"/Applications\", \"/Applications/Utilities\":\n for name in os.listdir(base):\n path = os.path.join(base, name)\n name = name.rsplit(\".\", 1)[0].lower()\n launch[name] = path\n words = name.split(\" \")\n for word in words:\n if word and word not in launch:\n if len(name) > 6 and len(word) < 3:\n continue\n launch[word] = path\n\n lists = {\n \"self.running\": running,\n \"self.launch\": launch,\n }\n\n # batch update lists\n ctx.lists.update(lists)\n\n\ndef ui_event(event, arg):\n if event in (\"app_activate\", \"app_launch\", \"app_close\", \"win_open\", \"win_close\"):\n # print(f'------------------ event:{event} arg:{arg}')\n update_lists()\n\n\nui.register(\"\", ui_event)\nupdate_lists()\n", "path": "code/switcher.py"}]} | 2,428 | 431 |
gh_patches_debug_22587 | rasdani/github-patches | git_diff | buildbot__buildbot-3623 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Running with --nodaemon and PID files
This ticket is a migrated Trac ticket [3608](http://trac.buildbot.net/ticket/3608)
People contributed to the original ticket: @unknown_contributor
Ticket created on: `Sep 12 2016`
Ticket last modified on: `Sep 12 2016`
---
Both the master and the worker still use PID files while running with --nodaemon. I've had issues with workers running with --nodaemon not starting because of an old, stale PID file, corresponding to an actual process belonging to another user.
This can be thought to be inconsistent, and harms the writing of really simple non-forking configuration files to manage the process through an external service manager (systemd, supervisord etc.).
It seems that the cousin ``twistd`` executable has a --pidfile argument that can be used to mean "no pid file". This is explained notably in https://twistedmatrix.com/documents/current/core/howto/systemd.html.
We could expose that, or we could decide that --nodaemon implies no pid file.
I believe this is true in Eight and Nine, and that a fix could be easily backported
(marking this as minor, because it's easy to circumvent by putting appropriate rms in unit files or other confs)
---
</issue>
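Concretely, twistd treats an empty `--pidfile=` value as "do not write a pid file", so the non-daemon launch argv only needs one extra entry on non-Windows platforms. A minimal sketch, mirroring the change shown in the diff at the end of this record:

```
import sys

from twisted.python.runtime import platformType

# argv handed to twistd when daemonizing is disabled; an empty --pidfile=
# value tells twistd not to write a pid file at all.
argv = ["twistd",
        "--no_save",
        "--nodaemon",
        "--logfile=twistd.log",
        "--python=buildbot.tac"]
if platformType != "win32":
    # windows doesn't use the pidfile option.
    argv.extend(["--pidfile="])
sys.argv = argv
```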
<code>
[start of worker/buildbot_worker/scripts/start.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19
20 import os
21 import sys
22 import time
23
24 from buildbot_worker.scripts import base
25 from buildbot_worker.util import rewrap
26
27
28 class Follower(object):
29
30 def follow(self):
31 from twisted.internet import reactor
32 from buildbot_worker.scripts.logwatcher import LogWatcher
33 self.rc = 0
34 print("Following twistd.log until startup finished..")
35 lw = LogWatcher("twistd.log")
36 d = lw.start()
37 d.addCallbacks(self._success, self._failure)
38 reactor.run()
39 return self.rc
40
41 def _success(self, processtype):
42 from twisted.internet import reactor
43 print("The %s appears to have (re)started correctly." % processtype)
44 self.rc = 0
45 reactor.stop()
46
47 def _failure(self, why):
48 from twisted.internet import reactor
49 from buildbot_worker.scripts.logwatcher import WorkerTimeoutError
50 if why.check(WorkerTimeoutError):
51 print(rewrap("""\
52 The worker took more than 10 seconds to start and/or connect
53 to the buildmaster, so we were unable to confirm that it
54 started and connected correctly.
55 Please 'tail twistd.log' and look for a line that says
56 'message from master: attached' to verify correct startup.
57 If you see a bunch of messages like 'will retry in 6 seconds',
58 your worker might not have the correct hostname or portnumber
59 for the buildmaster, or the buildmaster might not be running.
60 If you see messages like
61 'Failure: twisted.cred.error.UnauthorizedLogin'
62 then your worker might be using the wrong botname or password.
63 Please correct these problems and then restart the worker.
64 """))
65 else:
66 print(rewrap("""\
67 Unable to confirm that the worker started correctly.
68 You may need to stop it, fix the config file, and restart.
69 """))
70 print(why)
71 self.rc = 1
72 reactor.stop()
73
74
75 def startCommand(config):
76 basedir = config['basedir']
77 if not base.isWorkerDir(basedir):
78 return 1
79
80 return startWorker(basedir, config['quiet'], config['nodaemon'])
81
82
83 def startWorker(basedir, quiet, nodaemon):
84 """
85 Start worker process.
86
87 Fork and start twisted application described in basedir buildbot.tac file.
88 Print it's log messages to stdout for a while and try to figure out if
89 start was successful.
90
91 If quiet or nodaemon parameters are True, or we are running on a win32
92 system, will not fork and log will not be printed to stdout.
93
94 @param basedir: worker's basedir path
95 @param quiet: don't display startup log messages
96 @param nodaemon: don't daemonize (stay in foreground)
97 @return: 0 if worker was successfully started,
98 1 if we are not sure that worker started successfully
99 """
100
101 os.chdir(basedir)
102 if quiet or nodaemon:
103 return launch(nodaemon)
104
105 # we probably can't do this os.fork under windows
106 from twisted.python.runtime import platformType
107 if platformType == "win32":
108 return launch(nodaemon)
109
110 # fork a child to launch the daemon, while the parent process tails the
111 # logfile
112 if os.fork():
113 # this is the parent
114 rc = Follower().follow()
115 return rc
116 # this is the child: give the logfile-watching parent a chance to start
117 # watching it before we start the daemon
118 time.sleep(0.2)
119 launch(nodaemon)
120
121
122 def launch(nodaemon):
123 sys.path.insert(0, os.path.abspath(os.getcwd()))
124
125 # see if we can launch the application without actually having to
126 # spawn twistd, since spawning processes correctly is a real hassle
127 # on windows.
128 from twisted.python.runtime import platformType
129 argv = ["twistd",
130 "--no_save",
131 "--logfile=twistd.log", # windows doesn't use the same default
132 "--python=buildbot.tac"]
133 if nodaemon:
134 argv.extend(['--nodaemon'])
135 sys.argv = argv
136
137 # this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use
138 # _twistw.run . Twisted-2.5.0 and later use twistd.run, even for
139 # windows.
140 from twisted import __version__
141 major, minor, ignored = __version__.split(".", 2)
142 major = int(major)
143 minor = int(minor)
144 if (platformType == "win32" and (major == 2 and minor < 5)):
145 from twisted.scripts import _twistw
146 run = _twistw.run
147 else:
148 from twisted.scripts import twistd
149 run = twistd.run
150 run()
151
[end of worker/buildbot_worker/scripts/start.py]
[start of master/buildbot/scripts/start.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19
20 import os
21 import sys
22
23 from twisted.internet import protocol
24 from twisted.internet import reactor
25 from twisted.python.runtime import platformType
26
27 from buildbot.scripts import base
28 from buildbot.scripts.logwatcher import BuildmasterStartupError
29 from buildbot.scripts.logwatcher import BuildmasterTimeoutError
30 from buildbot.scripts.logwatcher import LogWatcher
31 from buildbot.scripts.logwatcher import ReconfigError
32 from buildbot.util import rewrap
33
34
35 class Follower:
36
37 def follow(self, basedir):
38 self.rc = 0
39 print("Following twistd.log until startup finished..")
40 lw = LogWatcher(os.path.join(basedir, "twistd.log"))
41 d = lw.start()
42 d.addCallbacks(self._success, self._failure)
43 reactor.run()
44 return self.rc
45
46 def _success(self, _):
47 print("The buildmaster appears to have (re)started correctly.")
48 self.rc = 0
49 reactor.stop()
50
51 def _failure(self, why):
52 if why.check(BuildmasterTimeoutError):
53 print(rewrap("""\
54 The buildmaster took more than 10 seconds to start, so we were
55 unable to confirm that it started correctly.
56 Please 'tail twistd.log' and look for a line that says
57 'BuildMaster is running' to verify correct startup.
58 """))
59 elif why.check(ReconfigError):
60 print(rewrap("""\
61 The buildmaster appears to have encountered an error in the
62 master.cfg config file during startup.
63 Please inspect and fix master.cfg, then restart the
64 buildmaster.
65 """))
66 elif why.check(BuildmasterStartupError):
67 print(rewrap("""\
68 The buildmaster startup failed. Please see 'twistd.log' for
69 possible reason.
70 """))
71 else:
72 print(rewrap("""\
73 Unable to confirm that the buildmaster started correctly.
74 You may need to stop it, fix the config file, and restart.
75 """))
76 print(why)
77 self.rc = 1
78 reactor.stop()
79
80
81 def launchNoDaemon(config):
82 os.chdir(config['basedir'])
83 sys.path.insert(0, os.path.abspath(config['basedir']))
84
85 argv = ["twistd",
86 "--no_save",
87 '--nodaemon',
88 "--logfile=twistd.log", # windows doesn't use the same default
89 "--python=buildbot.tac"]
90 sys.argv = argv
91
92 # this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use
93 # _twistw.run . Twisted-2.5.0 and later use twistd.run, even for
94 # windows.
95 from twisted.scripts import twistd
96 twistd.run()
97
98
99 def launch(config):
100 os.chdir(config['basedir'])
101 sys.path.insert(0, os.path.abspath(config['basedir']))
102
103 # see if we can launch the application without actually having to
104 # spawn twistd, since spawning processes correctly is a real hassle
105 # on windows.
106 argv = [sys.executable,
107 "-c",
108 # this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use
109 # _twistw.run . Twisted-2.5.0 and later use twistd.run, even for
110 # windows.
111 "from twisted.scripts import twistd; twistd.run()",
112 "--no_save",
113 "--logfile=twistd.log", # windows doesn't use the same default
114 "--python=buildbot.tac"]
115
116 # ProcessProtocol just ignores all output
117 proc = reactor.spawnProcess(
118 protocol.ProcessProtocol(), sys.executable, argv, env=os.environ)
119
120 if platformType == "win32":
121 with open("twistd.pid", "w") as pidfile:
122 pidfile.write("{0}".format(proc.pid))
123
124
125 def start(config):
126 if not base.isBuildmasterDir(config['basedir']):
127 return 1
128
129 if config['nodaemon']:
130 launchNoDaemon(config)
131 return 0
132
133 launch(config)
134
135 # We don't have tail on windows
136 if platformType == "win32" or config['quiet']:
137 return 0
138
139 # this is the parent
140 rc = Follower().follow(config['basedir'])
141 return rc
142
[end of master/buildbot/scripts/start.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/master/buildbot/scripts/start.py b/master/buildbot/scripts/start.py
--- a/master/buildbot/scripts/start.py
+++ b/master/buildbot/scripts/start.py
@@ -84,9 +84,14 @@
argv = ["twistd",
"--no_save",
- '--nodaemon',
+ "--nodaemon",
"--logfile=twistd.log", # windows doesn't use the same default
"--python=buildbot.tac"]
+
+ if platformType != 'win32':
+ # windows doesn't use pidfile option.
+ argv.extend(["--pidfile="])
+
sys.argv = argv
# this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use
diff --git a/worker/buildbot_worker/scripts/start.py b/worker/buildbot_worker/scripts/start.py
--- a/worker/buildbot_worker/scripts/start.py
+++ b/worker/buildbot_worker/scripts/start.py
@@ -131,7 +131,11 @@
"--logfile=twistd.log", # windows doesn't use the same default
"--python=buildbot.tac"]
if nodaemon:
- argv.extend(['--nodaemon'])
+ argv.extend(["--nodaemon"])
+ if platformType != 'win32':
+ # windows doesn't use pidfile option.
+ argv.extend(["--pidfile="])
+
sys.argv = argv
# this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use
| {"golden_diff": "diff --git a/master/buildbot/scripts/start.py b/master/buildbot/scripts/start.py\n--- a/master/buildbot/scripts/start.py\n+++ b/master/buildbot/scripts/start.py\n@@ -84,9 +84,14 @@\n \n argv = [\"twistd\",\n \"--no_save\",\n- '--nodaemon',\n+ \"--nodaemon\",\n \"--logfile=twistd.log\", # windows doesn't use the same default\n \"--python=buildbot.tac\"]\n+\n+ if platformType != 'win32':\n+ # windows doesn't use pidfile option.\n+ argv.extend([\"--pidfile=\"])\n+\n sys.argv = argv\n \n # this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use\ndiff --git a/worker/buildbot_worker/scripts/start.py b/worker/buildbot_worker/scripts/start.py\n--- a/worker/buildbot_worker/scripts/start.py\n+++ b/worker/buildbot_worker/scripts/start.py\n@@ -131,7 +131,11 @@\n \"--logfile=twistd.log\", # windows doesn't use the same default\n \"--python=buildbot.tac\"]\n if nodaemon:\n- argv.extend(['--nodaemon'])\n+ argv.extend([\"--nodaemon\"])\n+ if platformType != 'win32':\n+ # windows doesn't use pidfile option.\n+ argv.extend([\"--pidfile=\"])\n+\n sys.argv = argv\n \n # this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use\n", "issue": "Running with --nodaemon and PID files\nThis ticket is a migrated Trac ticket [3608](http://trac.buildbot.net/ticket/3608)\n\nPeople contributed to the original ticket: @unknown_contributor\nTicket created on: `Sep 12 2016`\nTicket last modified on: `Sep 12 2016`\n\n---\n\nBoth the master and the worker still use PID files while running with --nodaemon. I've had issues with workers running with --nodaemon not starting because of an old, stale PID file, corresponding to an actual process belonging to another user.\n\nThis can be thought to be inconsistent, and harms the writing ofreally simple non forking configuration files to manage the process through an external service manager (systemd, supervisord etc.).\n\nIt seems that the cousin ``twistd`` executable has a --pidfile argument, that can be used to mean \"no pid file\". This is explained notably in https://twistedmatrix.com/documents/current/core/howto/systemd.html.\n\nWe could expose that, or we could decide that --nodaemon implies no pid file.\n\nI believe this is true in Eight and Nine, and that a fix could be easily backported\n(marking this as minor, because it's easy to circumvent by putting appropriate rms in unit files or other confs)\n\n\n---\n\n\n\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport sys\nimport time\n\nfrom buildbot_worker.scripts import base\nfrom buildbot_worker.util import rewrap\n\n\nclass Follower(object):\n\n def follow(self):\n from twisted.internet import reactor\n from buildbot_worker.scripts.logwatcher import LogWatcher\n self.rc = 0\n print(\"Following twistd.log until startup finished..\")\n lw = LogWatcher(\"twistd.log\")\n d = lw.start()\n d.addCallbacks(self._success, self._failure)\n reactor.run()\n return self.rc\n\n def _success(self, processtype):\n from twisted.internet import reactor\n print(\"The %s appears to have (re)started correctly.\" % processtype)\n self.rc = 0\n reactor.stop()\n\n def _failure(self, why):\n from twisted.internet import reactor\n from buildbot_worker.scripts.logwatcher import WorkerTimeoutError\n if why.check(WorkerTimeoutError):\n print(rewrap(\"\"\"\\\n The worker took more than 10 seconds to start and/or connect\n to the buildmaster, so we were unable to confirm that it\n started and connected correctly.\n Please 'tail twistd.log' and look for a line that says\n 'message from master: attached' to verify correct startup.\n If you see a bunch of messages like 'will retry in 6 seconds',\n your worker might not have the correct hostname or portnumber\n for the buildmaster, or the buildmaster might not be running.\n If you see messages like\n 'Failure: twisted.cred.error.UnauthorizedLogin'\n then your worker might be using the wrong botname or password.\n Please correct these problems and then restart the worker.\n \"\"\"))\n else:\n print(rewrap(\"\"\"\\\n Unable to confirm that the worker started correctly.\n You may need to stop it, fix the config file, and restart.\n \"\"\"))\n print(why)\n self.rc = 1\n reactor.stop()\n\n\ndef startCommand(config):\n basedir = config['basedir']\n if not base.isWorkerDir(basedir):\n return 1\n\n return startWorker(basedir, config['quiet'], config['nodaemon'])\n\n\ndef startWorker(basedir, quiet, nodaemon):\n \"\"\"\n Start worker process.\n\n Fork and start twisted application described in basedir buildbot.tac file.\n Print it's log messages to stdout for a while and try to figure out if\n start was successful.\n\n If quiet or nodaemon parameters are True, or we are running on a win32\n system, will not fork and log will not be printed to stdout.\n\n @param basedir: worker's basedir path\n @param quiet: don't display startup log messages\n @param nodaemon: don't daemonize (stay in foreground)\n @return: 0 if worker was successfully started,\n 1 if we are not sure that worker started successfully\n \"\"\"\n\n os.chdir(basedir)\n if quiet or nodaemon:\n return launch(nodaemon)\n\n # we probably can't do this os.fork under windows\n from twisted.python.runtime import platformType\n if platformType == \"win32\":\n return launch(nodaemon)\n\n # fork a child to launch the daemon, while the parent process tails the\n # logfile\n if os.fork():\n # this is the parent\n rc = Follower().follow()\n return rc\n # this is the child: give the logfile-watching parent a chance to start\n # watching it before we start the daemon\n time.sleep(0.2)\n 
launch(nodaemon)\n\n\ndef launch(nodaemon):\n sys.path.insert(0, os.path.abspath(os.getcwd()))\n\n # see if we can launch the application without actually having to\n # spawn twistd, since spawning processes correctly is a real hassle\n # on windows.\n from twisted.python.runtime import platformType\n argv = [\"twistd\",\n \"--no_save\",\n \"--logfile=twistd.log\", # windows doesn't use the same default\n \"--python=buildbot.tac\"]\n if nodaemon:\n argv.extend(['--nodaemon'])\n sys.argv = argv\n\n # this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use\n # _twistw.run . Twisted-2.5.0 and later use twistd.run, even for\n # windows.\n from twisted import __version__\n major, minor, ignored = __version__.split(\".\", 2)\n major = int(major)\n minor = int(minor)\n if (platformType == \"win32\" and (major == 2 and minor < 5)):\n from twisted.scripts import _twistw\n run = _twistw.run\n else:\n from twisted.scripts import twistd\n run = twistd.run\n run()\n", "path": "worker/buildbot_worker/scripts/start.py"}, {"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport sys\n\nfrom twisted.internet import protocol\nfrom twisted.internet import reactor\nfrom twisted.python.runtime import platformType\n\nfrom buildbot.scripts import base\nfrom buildbot.scripts.logwatcher import BuildmasterStartupError\nfrom buildbot.scripts.logwatcher import BuildmasterTimeoutError\nfrom buildbot.scripts.logwatcher import LogWatcher\nfrom buildbot.scripts.logwatcher import ReconfigError\nfrom buildbot.util import rewrap\n\n\nclass Follower:\n\n def follow(self, basedir):\n self.rc = 0\n print(\"Following twistd.log until startup finished..\")\n lw = LogWatcher(os.path.join(basedir, \"twistd.log\"))\n d = lw.start()\n d.addCallbacks(self._success, self._failure)\n reactor.run()\n return self.rc\n\n def _success(self, _):\n print(\"The buildmaster appears to have (re)started correctly.\")\n self.rc = 0\n reactor.stop()\n\n def _failure(self, why):\n if why.check(BuildmasterTimeoutError):\n print(rewrap(\"\"\"\\\n The buildmaster took more than 10 seconds to start, so we were\n unable to confirm that it started correctly.\n Please 'tail twistd.log' and look for a line that says\n 'BuildMaster is running' to verify correct startup.\n \"\"\"))\n elif why.check(ReconfigError):\n print(rewrap(\"\"\"\\\n The buildmaster appears to have encountered an error in the\n master.cfg config file during startup.\n Please inspect and fix master.cfg, then restart the\n buildmaster.\n \"\"\"))\n elif why.check(BuildmasterStartupError):\n print(rewrap(\"\"\"\\\n The buildmaster startup failed. 
Please see 'twistd.log' for\n possible reason.\n \"\"\"))\n else:\n print(rewrap(\"\"\"\\\n Unable to confirm that the buildmaster started correctly.\n You may need to stop it, fix the config file, and restart.\n \"\"\"))\n print(why)\n self.rc = 1\n reactor.stop()\n\n\ndef launchNoDaemon(config):\n os.chdir(config['basedir'])\n sys.path.insert(0, os.path.abspath(config['basedir']))\n\n argv = [\"twistd\",\n \"--no_save\",\n '--nodaemon',\n \"--logfile=twistd.log\", # windows doesn't use the same default\n \"--python=buildbot.tac\"]\n sys.argv = argv\n\n # this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use\n # _twistw.run . Twisted-2.5.0 and later use twistd.run, even for\n # windows.\n from twisted.scripts import twistd\n twistd.run()\n\n\ndef launch(config):\n os.chdir(config['basedir'])\n sys.path.insert(0, os.path.abspath(config['basedir']))\n\n # see if we can launch the application without actually having to\n # spawn twistd, since spawning processes correctly is a real hassle\n # on windows.\n argv = [sys.executable,\n \"-c\",\n # this is copied from bin/twistd. twisted-2.0.0 through 2.4.0 use\n # _twistw.run . Twisted-2.5.0 and later use twistd.run, even for\n # windows.\n \"from twisted.scripts import twistd; twistd.run()\",\n \"--no_save\",\n \"--logfile=twistd.log\", # windows doesn't use the same default\n \"--python=buildbot.tac\"]\n\n # ProcessProtocol just ignores all output\n proc = reactor.spawnProcess(\n protocol.ProcessProtocol(), sys.executable, argv, env=os.environ)\n\n if platformType == \"win32\":\n with open(\"twistd.pid\", \"w\") as pidfile:\n pidfile.write(\"{0}\".format(proc.pid))\n\n\ndef start(config):\n if not base.isBuildmasterDir(config['basedir']):\n return 1\n\n if config['nodaemon']:\n launchNoDaemon(config)\n return 0\n\n launch(config)\n\n # We don't have tail on windows\n if platformType == \"win32\" or config['quiet']:\n return 0\n\n # this is the parent\n rc = Follower().follow(config['basedir'])\n return rc\n", "path": "master/buildbot/scripts/start.py"}]} | 3,940 | 353 |
gh_patches_debug_31263 | rasdani/github-patches | git_diff | svthalia__concrexit-1749 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FieldError: Cannot resolve keyword 'pizza_event' into field. Choices are: food_event, food_event_id, id, memb...
Sentry Issue: [CONCREXIT-6G](https://sentry.io/organizations/thalia/issues/2468308255/?referrer=github_integration)
```
FieldError: Cannot resolve keyword 'pizza_event' into field. Choices are: food_event, food_event_id, id, member, member_id, name, payment, payment_id, product, product_id
(9 additional frame(s) were not displayed)
...
File "django/db/models/sql/query.py", line 1391, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "django/db/models/sql/query.py", line 1410, in _add_q
child_clause, needed_inner = self.build_filter(
File "django/db/models/sql/query.py", line 1284, in build_filter
lookups, parts, reffed_expression = self.solve_lookup_type(arg)
File "django/db/models/sql/query.py", line 1110, in solve_lookup_type
_, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta())
File "django/db/models/sql/query.py", line 1537, in names_to_path
raise FieldError("Cannot resolve keyword '%s' into field. "
```
</issue>
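The traceback above is the usual symptom of a model field rename that missed a few call sites: `FoodOrder` now exposes `food_event` (it appears in the error's "Choices are" list), while the code below still filters on the old `pizza_event` keyword. A minimal sketch of the broken versus working lookup — the import path is hypothetical and a configured Django project is assumed:

```python
# Sketch only: hypothetical import path; mirrors the rename implied by the
# FieldError's "Choices are" list (pizza_event -> food_event).
from pizzas.models import FoodEvent, FoodOrder


def current_order(member):
    event = FoodEvent.current()
    # Broken: `pizza_event` is no longer a field on FoodOrder, so the ORM
    # raises FieldError while resolving the keyword, before any SQL runs.
    #     FoodOrder.objects.get(pizza_event=event, member=member)
    # Working: use the renamed relation listed as a valid choice.
    return FoodOrder.objects.get(food_event=event, member=member)
```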
<code>
[start of website/pizzas/services.py]
1 from events.services import is_organiser
2 from .models import Product, FoodOrder, FoodEvent
3
4
5 def gen_stats_pizza_orders():
6 """Generate statistics about number of orders per product.
7
8 :return: Dict with key, value being resp. name, order count of a product.
9 """
10 total = {}
11
12 for product in Product.objects.all():
13 total.update(
14 {product.name: FoodOrder.objects.filter(product=product).count(),}
15 )
16
17 return {
18 i[0]: i[1]
19 for i in sorted(total.items(), key=lambda x: x[1], reverse=True)[:5]
20 if i[1] > 0
21 }
22
23
24 def gen_stats_current_pizza_orders():
25 """Generate statistics about number of orders per product of the active pizza event.
26
27 :return: Dict with key, value being resp. name, order count of a product.
28 """
29 total = {}
30
31 current_pizza_event = FoodEvent.current()
32 if not current_pizza_event:
33 return None
34
35 for product in Product.objects.filter():
36 total.update(
37 {
38 product.name: FoodOrder.objects.filter(
39 product=product, pizza_event=current_pizza_event,
40 ).count(),
41 }
42 )
43
44 return {
45 i[0]: i[1]
46 for i in sorted(total.items(), key=lambda x: x[1], reverse=True)[:5]
47 if i[1] > 0
48 }
49
50
51 def can_change_order(member, food_event):
52 """Determine if a certain member can edit orders of an event.
53
54 :param member: Member who wants to change and order
55 :param food_event: The event for which we want to change an order
56 :return: True if we can change an order else False
57 """
58 return (
59 food_event
60 and member.has_perm("pizzas.change_foodorder")
61 and is_organiser(member, food_event.event)
62 )
63
[end of website/pizzas/services.py]
[start of website/pizzas/views.py]
1 """Views provided by the pizzas package."""
2 from django.contrib import messages
3 from django.contrib.auth.decorators import login_required
4 from django.http import Http404
5 from django.shortcuts import get_object_or_404, render, redirect
6 from django.utils.translation import gettext_lazy as _
7 from django.views.decorators.http import require_http_methods
8
9 from .models import FoodOrder, FoodEvent, Product
10
11
12 @login_required
13 def index(request):
14 """Overview of user order for a pizza event."""
15 products = Product.available_products.order_by("name")
16 if not request.user.has_perm("pizzas.order_restricted_products"):
17 products = products.exclude(restricted=True)
18 event = FoodEvent.current()
19 try:
20 obj = FoodOrder.objects.get(pizza_event=event, member=request.member)
21 except FoodOrder.DoesNotExist:
22 obj = None
23 context = {"event": event, "products": products, "order": obj}
24 return render(request, "pizzas/index.html", context)
25
26
27 @require_http_methods(["POST"])
28 def cancel_order(request):
29 """View that cancels a user's order."""
30 if "order" in request.POST:
31 try:
32 order = get_object_or_404(FoodOrder, pk=int(request.POST["order"]))
33 if not order.can_be_changed:
34 messages.error(request, _("You can no longer cancel."))
35 elif order.member == request.member:
36 order.delete()
37 messages.success(request, _("Your order has been cancelled."))
38 except Http404:
39 messages.error(request, _("Your order could not be found."))
40 return redirect("pizzas:index")
41
42
43 @login_required
44 def place_order(request):
45 """View that shows the detail of the current order."""
46 event = FoodEvent.current()
47 if not event:
48 return redirect("pizzas:index")
49
50 try:
51 obj = FoodOrder.objects.get(pizza_event=event, member=request.member)
52 current_order_locked = not obj.can_be_changed
53 except FoodOrder.DoesNotExist:
54 obj = None
55 current_order_locked = False
56
57 if "product" in request.POST and not current_order_locked:
58 productset = Product.available_products.all()
59 if not request.user.has_perm("pizzas.order_restricted_products"):
60 productset = productset.exclude(restricted=True)
61 try:
62 product = productset.get(pk=int(request.POST["product"]))
63 except Product.DoesNotExist as e:
64 raise Http404("Pizza does not exist") from e
65 if not obj:
66 obj = FoodOrder(pizza_event=event, member=request.member)
67 obj.product = product
68 obj.save()
69 return redirect("pizzas:index")
70
[end of website/pizzas/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/pizzas/services.py b/website/pizzas/services.py
--- a/website/pizzas/services.py
+++ b/website/pizzas/services.py
@@ -36,7 +36,7 @@
total.update(
{
product.name: FoodOrder.objects.filter(
- product=product, pizza_event=current_pizza_event,
+ product=product, food_event=current_pizza_event,
).count(),
}
)
diff --git a/website/pizzas/views.py b/website/pizzas/views.py
--- a/website/pizzas/views.py
+++ b/website/pizzas/views.py
@@ -17,7 +17,7 @@
products = products.exclude(restricted=True)
event = FoodEvent.current()
try:
- obj = FoodOrder.objects.get(pizza_event=event, member=request.member)
+ obj = FoodOrder.objects.get(food_event=event, member=request.member)
except FoodOrder.DoesNotExist:
obj = None
context = {"event": event, "products": products, "order": obj}
@@ -48,7 +48,7 @@
return redirect("pizzas:index")
try:
- obj = FoodOrder.objects.get(pizza_event=event, member=request.member)
+ obj = FoodOrder.objects.get(food_event=event, member=request.member)
current_order_locked = not obj.can_be_changed
except FoodOrder.DoesNotExist:
obj = None
@@ -63,7 +63,7 @@
except Product.DoesNotExist as e:
raise Http404("Pizza does not exist") from e
if not obj:
- obj = FoodOrder(pizza_event=event, member=request.member)
+ obj = FoodOrder(food_event=event, member=request.member)
obj.product = product
obj.save()
return redirect("pizzas:index")
| {"golden_diff": "diff --git a/website/pizzas/services.py b/website/pizzas/services.py\n--- a/website/pizzas/services.py\n+++ b/website/pizzas/services.py\n@@ -36,7 +36,7 @@\n total.update(\n {\n product.name: FoodOrder.objects.filter(\n- product=product, pizza_event=current_pizza_event,\n+ product=product, food_event=current_pizza_event,\n ).count(),\n }\n )\ndiff --git a/website/pizzas/views.py b/website/pizzas/views.py\n--- a/website/pizzas/views.py\n+++ b/website/pizzas/views.py\n@@ -17,7 +17,7 @@\n products = products.exclude(restricted=True)\n event = FoodEvent.current()\n try:\n- obj = FoodOrder.objects.get(pizza_event=event, member=request.member)\n+ obj = FoodOrder.objects.get(food_event=event, member=request.member)\n except FoodOrder.DoesNotExist:\n obj = None\n context = {\"event\": event, \"products\": products, \"order\": obj}\n@@ -48,7 +48,7 @@\n return redirect(\"pizzas:index\")\n \n try:\n- obj = FoodOrder.objects.get(pizza_event=event, member=request.member)\n+ obj = FoodOrder.objects.get(food_event=event, member=request.member)\n current_order_locked = not obj.can_be_changed\n except FoodOrder.DoesNotExist:\n obj = None\n@@ -63,7 +63,7 @@\n except Product.DoesNotExist as e:\n raise Http404(\"Pizza does not exist\") from e\n if not obj:\n- obj = FoodOrder(pizza_event=event, member=request.member)\n+ obj = FoodOrder(food_event=event, member=request.member)\n obj.product = product\n obj.save()\n return redirect(\"pizzas:index\")\n", "issue": "FieldError: Cannot resolve keyword 'pizza_event' into field. Choices are: food_event, food_event_id, id, memb...\nSentry Issue: [CONCREXIT-6G](https://sentry.io/organizations/thalia/issues/2468308255/?referrer=github_integration)\n\n```\nFieldError: Cannot resolve keyword 'pizza_event' into field. Choices are: food_event, food_event_id, id, member, member_id, name, payment, payment_id, product, product_id\n(9 additional frame(s) were not displayed)\n...\n File \"django/db/models/sql/query.py\", line 1391, in add_q\n clause, _ = self._add_q(q_object, self.used_aliases)\n File \"django/db/models/sql/query.py\", line 1410, in _add_q\n child_clause, needed_inner = self.build_filter(\n File \"django/db/models/sql/query.py\", line 1284, in build_filter\n lookups, parts, reffed_expression = self.solve_lookup_type(arg)\n File \"django/db/models/sql/query.py\", line 1110, in solve_lookup_type\n _, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta())\n File \"django/db/models/sql/query.py\", line 1537, in names_to_path\n raise FieldError(\"Cannot resolve keyword '%s' into field. \"\n```\n", "before_files": [{"content": "from events.services import is_organiser\nfrom .models import Product, FoodOrder, FoodEvent\n\n\ndef gen_stats_pizza_orders():\n \"\"\"Generate statistics about number of orders per product.\n\n :return: Dict with key, value being resp. name, order count of a product.\n \"\"\"\n total = {}\n\n for product in Product.objects.all():\n total.update(\n {product.name: FoodOrder.objects.filter(product=product).count(),}\n )\n\n return {\n i[0]: i[1]\n for i in sorted(total.items(), key=lambda x: x[1], reverse=True)[:5]\n if i[1] > 0\n }\n\n\ndef gen_stats_current_pizza_orders():\n \"\"\"Generate statistics about number of orders per product of the active pizza event.\n\n :return: Dict with key, value being resp. 
name, order count of a product.\n \"\"\"\n total = {}\n\n current_pizza_event = FoodEvent.current()\n if not current_pizza_event:\n return None\n\n for product in Product.objects.filter():\n total.update(\n {\n product.name: FoodOrder.objects.filter(\n product=product, pizza_event=current_pizza_event,\n ).count(),\n }\n )\n\n return {\n i[0]: i[1]\n for i in sorted(total.items(), key=lambda x: x[1], reverse=True)[:5]\n if i[1] > 0\n }\n\n\ndef can_change_order(member, food_event):\n \"\"\"Determine if a certain member can edit orders of an event.\n\n :param member: Member who wants to change and order\n :param food_event: The event for which we want to change an order\n :return: True if we can change an order else False\n \"\"\"\n return (\n food_event\n and member.has_perm(\"pizzas.change_foodorder\")\n and is_organiser(member, food_event.event)\n )\n", "path": "website/pizzas/services.py"}, {"content": "\"\"\"Views provided by the pizzas package.\"\"\"\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, render, redirect\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.decorators.http import require_http_methods\n\nfrom .models import FoodOrder, FoodEvent, Product\n\n\n@login_required\ndef index(request):\n \"\"\"Overview of user order for a pizza event.\"\"\"\n products = Product.available_products.order_by(\"name\")\n if not request.user.has_perm(\"pizzas.order_restricted_products\"):\n products = products.exclude(restricted=True)\n event = FoodEvent.current()\n try:\n obj = FoodOrder.objects.get(pizza_event=event, member=request.member)\n except FoodOrder.DoesNotExist:\n obj = None\n context = {\"event\": event, \"products\": products, \"order\": obj}\n return render(request, \"pizzas/index.html\", context)\n\n\n@require_http_methods([\"POST\"])\ndef cancel_order(request):\n \"\"\"View that cancels a user's order.\"\"\"\n if \"order\" in request.POST:\n try:\n order = get_object_or_404(FoodOrder, pk=int(request.POST[\"order\"]))\n if not order.can_be_changed:\n messages.error(request, _(\"You can no longer cancel.\"))\n elif order.member == request.member:\n order.delete()\n messages.success(request, _(\"Your order has been cancelled.\"))\n except Http404:\n messages.error(request, _(\"Your order could not be found.\"))\n return redirect(\"pizzas:index\")\n\n\n@login_required\ndef place_order(request):\n \"\"\"View that shows the detail of the current order.\"\"\"\n event = FoodEvent.current()\n if not event:\n return redirect(\"pizzas:index\")\n\n try:\n obj = FoodOrder.objects.get(pizza_event=event, member=request.member)\n current_order_locked = not obj.can_be_changed\n except FoodOrder.DoesNotExist:\n obj = None\n current_order_locked = False\n\n if \"product\" in request.POST and not current_order_locked:\n productset = Product.available_products.all()\n if not request.user.has_perm(\"pizzas.order_restricted_products\"):\n productset = productset.exclude(restricted=True)\n try:\n product = productset.get(pk=int(request.POST[\"product\"]))\n except Product.DoesNotExist as e:\n raise Http404(\"Pizza does not exist\") from e\n if not obj:\n obj = FoodOrder(pizza_event=event, member=request.member)\n obj.product = product\n obj.save()\n return redirect(\"pizzas:index\")\n", "path": "website/pizzas/views.py"}]} | 2,084 | 391 |
gh_patches_debug_1786 | rasdani/github-patches | git_diff | mozilla__telemetry-analysis-service-413 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImportError: No module named 'atmo.clusters.jobs'
```
app@a898b116953a:~$ ./manage.py update_clusters
Traceback (most recent call last):
File "./manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.5/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.5/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.5/site-packages/django/core/management/__init__.py", line 195, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/usr/local/lib/python3.5/site-packages/django/core/management/__init__.py", line 39, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/usr/local/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 673, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/app/atmo/clusters/management/commands/update_clusters.py", line 6, in <module>
from ...jobs import update_clusters
ImportError: No module named 'atmo.clusters.jobs'
```
</issue>
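The import that fails in the traceback lives in the management command shown below: the relative `from ...jobs import update_clusters` resolves to `atmo.clusters.jobs`, a module that no longer exists in the package. Before patching, it can help to confirm which `atmo.clusters` submodule actually defines `update_clusters`; the following diagnostic sketch assumes nothing beyond the `atmo.clusters` package named in the traceback and should be run in a configured Django shell:

```python
# Diagnostic sketch: list atmo.clusters submodules and report which one
# defines update_clusters. Importing submodules may pull in Django app code,
# so run this inside a configured Django shell.
import importlib
import pkgutil

import atmo.clusters

for module_info in pkgutil.iter_modules(atmo.clusters.__path__):
    name = f"atmo.clusters.{module_info.name}"
    try:
        module = importlib.import_module(name)
    except ImportError as exc:
        print(f"{name}: import failed ({exc})")
        continue
    if hasattr(module, "update_clusters"):
        print(f"update_clusters found in {name}")
```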
<code>
[start of atmo/clusters/management/commands/update_clusters.py]
1 # This Source Code Form is subject to the terms of the Mozilla Public
2 # License, v. 2.0. If a copy of the MPL was not distributed with this
3 # file, you can obtain one at http://mozilla.org/MPL/2.0/.
4 from django.core.management.base import BaseCommand
5
6 from ...jobs import update_clusters
7
8
9 class Command(BaseCommand):
10 help = 'Go through active clusters and update their status'
11
12 def handle(self, *args, **options):
13 self.stdout.write('Updating cluster info...', ending='')
14 update_clusters()
15 self.stdout.write('done.')
16
[end of atmo/clusters/management/commands/update_clusters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/atmo/clusters/management/commands/update_clusters.py b/atmo/clusters/management/commands/update_clusters.py
--- a/atmo/clusters/management/commands/update_clusters.py
+++ b/atmo/clusters/management/commands/update_clusters.py
@@ -3,7 +3,7 @@
# file, you can obtain one at http://mozilla.org/MPL/2.0/.
from django.core.management.base import BaseCommand
-from ...jobs import update_clusters
+from ...tasks import update_clusters
class Command(BaseCommand):
| {"golden_diff": "diff --git a/atmo/clusters/management/commands/update_clusters.py b/atmo/clusters/management/commands/update_clusters.py\n--- a/atmo/clusters/management/commands/update_clusters.py\n+++ b/atmo/clusters/management/commands/update_clusters.py\n@@ -3,7 +3,7 @@\n # file, you can obtain one at http://mozilla.org/MPL/2.0/.\n from django.core.management.base import BaseCommand\n \n-from ...jobs import update_clusters\n+from ...tasks import update_clusters\n \n \n class Command(BaseCommand):\n", "issue": "ImportError: No module named 'atmo.clusters.jobs'\n```\r\napp@a898b116953a:~$ ./manage.py update_clusters\r\nTraceback (most recent call last):\r\n File \"./manage.py\", line 11, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/local/lib/python3.5/site-packages/django/core/management/__init__.py\", line 353, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/local/lib/python3.5/site-packages/django/core/management/__init__.py\", line 345, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/local/lib/python3.5/site-packages/django/core/management/__init__.py\", line 195, in fetch_command\r\n klass = load_command_class(app_name, subcommand)\r\n File \"/usr/local/lib/python3.5/site-packages/django/core/management/__init__.py\", line 39, in load_command_class\r\n module = import_module('%s.management.commands.%s' % (app_name, name))\r\n File \"/usr/local/lib/python3.5/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 986, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 969, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 958, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 673, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 673, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\r\n File \"/app/atmo/clusters/management/commands/update_clusters.py\", line 6, in <module>\r\n from ...jobs import update_clusters\r\nImportError: No module named 'atmo.clusters.jobs'\r\n```\n", "before_files": [{"content": "# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this\n# file, you can obtain one at http://mozilla.org/MPL/2.0/.\nfrom django.core.management.base import BaseCommand\n\nfrom ...jobs import update_clusters\n\n\nclass Command(BaseCommand):\n help = 'Go through active clusters and update their status'\n\n def handle(self, *args, **options):\n self.stdout.write('Updating cluster info...', ending='')\n update_clusters()\n self.stdout.write('done.')\n", "path": "atmo/clusters/management/commands/update_clusters.py"}]} | 1,162 | 114 |
gh_patches_debug_1821 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1713 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG - hooks not working on Windows 10 when the user account name contains non-ASCII characters
When the user account name contains non-ASCII characters such as 'š', the Python executable ends up in a path such as C:\Users\john.š\\.cache\pre-commit\repo\py_env-python3.8\Scripts\python.exe, and committing to the git repository produces the following message:
An unexpected error has occurred: AssertionError: BUG: expected environment for python to be healthy() immediately after install, please open an issue describing your environment.
PS: the function os.path.isfile() in parse_shebang.normexe() returns False, even though the executable exists there and is a file.
</issue>
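The health check below reads `pyvenv.cfg` out of the virtualenv, and on Windows that file contains the interpreter's absolute path — including the non-ASCII account name. `open()` without an explicit encoding falls back to the locale's legacy code page there, so the read can raise `UnicodeDecodeError` or silently mangle the path, which is one way the environment can look unhealthy even though the executable exists. A small sketch of the difference; the cfg contents are illustrative, not taken from a real environment:

```python
# Sketch: why an explicit encoding matters when pyvenv.cfg contains
# non-ASCII path components. The cfg body below is illustrative only.
import os
import tempfile

cfg_body = "home = C:\\Users\\john.š\\AppData\\Local\\Programs\\Python\\Python38\n"

path = os.path.join(tempfile.mkdtemp(), "pyvenv.cfg")
with open(path, "w", encoding="UTF-8") as f:
    f.write(cfg_body)

# Locale-dependent read: under a legacy Windows code page (e.g. cp1250/cp1252)
# this can raise UnicodeDecodeError or turn 'š' into different characters.
try:
    with open(path) as f:
        print(f.read())
except UnicodeDecodeError as exc:
    print("locale decode failed:", exc)

# Explicit UTF-8 read round-trips the path exactly.
with open(path, encoding="UTF-8") as f:
    print(f.read())
```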
<code>
[start of pre_commit/languages/python.py]
1 import contextlib
2 import functools
3 import os
4 import sys
5 from typing import Dict
6 from typing import Generator
7 from typing import Optional
8 from typing import Sequence
9 from typing import Tuple
10
11 import pre_commit.constants as C
12 from pre_commit.envcontext import envcontext
13 from pre_commit.envcontext import PatchesT
14 from pre_commit.envcontext import UNSET
15 from pre_commit.envcontext import Var
16 from pre_commit.hook import Hook
17 from pre_commit.languages import helpers
18 from pre_commit.parse_shebang import find_executable
19 from pre_commit.prefix import Prefix
20 from pre_commit.util import CalledProcessError
21 from pre_commit.util import clean_path_on_failure
22 from pre_commit.util import cmd_output
23 from pre_commit.util import cmd_output_b
24
25 ENVIRONMENT_DIR = 'py_env'
26
27
28 @functools.lru_cache(maxsize=None)
29 def _version_info(exe: str) -> str:
30 prog = 'import sys;print(".".join(str(p) for p in sys.version_info))'
31 try:
32 return cmd_output(exe, '-S', '-c', prog)[1].strip()
33 except CalledProcessError:
34 return f'<<error retrieving version from {exe}>>'
35
36
37 def _read_pyvenv_cfg(filename: str) -> Dict[str, str]:
38 ret = {}
39 with open(filename) as f:
40 for line in f:
41 try:
42 k, v = line.split('=')
43 except ValueError: # blank line / comment / etc.
44 continue
45 else:
46 ret[k.strip()] = v.strip()
47 return ret
48
49
50 def bin_dir(venv: str) -> str:
51 """On windows there's a different directory for the virtualenv"""
52 bin_part = 'Scripts' if os.name == 'nt' else 'bin'
53 return os.path.join(venv, bin_part)
54
55
56 def get_env_patch(venv: str) -> PatchesT:
57 return (
58 ('PIP_DISABLE_PIP_VERSION_CHECK', '1'),
59 ('PYTHONHOME', UNSET),
60 ('VIRTUAL_ENV', venv),
61 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
62 )
63
64
65 def _find_by_py_launcher(
66 version: str,
67 ) -> Optional[str]: # pragma: no cover (windows only)
68 if version.startswith('python'):
69 num = version[len('python'):]
70 cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')
71 env = dict(os.environ, PYTHONIOENCODING='UTF-8')
72 try:
73 return cmd_output(*cmd, env=env)[1].strip()
74 except CalledProcessError:
75 pass
76 return None
77
78
79 def _find_by_sys_executable() -> Optional[str]:
80 def _norm(path: str) -> Optional[str]:
81 _, exe = os.path.split(path.lower())
82 exe, _, _ = exe.partition('.exe')
83 if exe not in {'python', 'pythonw'} and find_executable(exe):
84 return exe
85 return None
86
87 # On linux, I see these common sys.executables:
88 #
89 # system `python`: /usr/bin/python -> python2.7
90 # system `python2`: /usr/bin/python2 -> python2.7
91 # virtualenv v: v/bin/python (will not return from this loop)
92 # virtualenv v -ppython2: v/bin/python -> python2
93 # virtualenv v -ppython2.7: v/bin/python -> python2.7
94 # virtualenv v -ppypy: v/bin/python -> v/bin/pypy
95 for path in (sys.executable, os.path.realpath(sys.executable)):
96 exe = _norm(path)
97 if exe:
98 return exe
99 return None
100
101
102 @functools.lru_cache(maxsize=1)
103 def get_default_version() -> str: # pragma: no cover (platform dependent)
104 # First attempt from `sys.executable` (or the realpath)
105 exe = _find_by_sys_executable()
106 if exe:
107 return exe
108
109 # Next try the `pythonX.X` executable
110 exe = f'python{sys.version_info[0]}.{sys.version_info[1]}'
111 if find_executable(exe):
112 return exe
113
114 if _find_by_py_launcher(exe):
115 return exe
116
117 # We tried!
118 return C.DEFAULT
119
120
121 def _sys_executable_matches(version: str) -> bool:
122 if version == 'python':
123 return True
124 elif not version.startswith('python'):
125 return False
126
127 try:
128 info = tuple(int(p) for p in version[len('python'):].split('.'))
129 except ValueError:
130 return False
131
132 return sys.version_info[:len(info)] == info
133
134
135 def norm_version(version: str) -> Optional[str]:
136 if version == C.DEFAULT: # use virtualenv's default
137 return None
138 elif _sys_executable_matches(version): # virtualenv defaults to our exe
139 return None
140
141 if os.name == 'nt': # pragma: no cover (windows)
142 version_exec = _find_by_py_launcher(version)
143 if version_exec:
144 return version_exec
145
146 # Try looking up by name
147 version_exec = find_executable(version)
148 if version_exec and version_exec != version:
149 return version_exec
150
151 # Otherwise assume it is a path
152 return os.path.expanduser(version)
153
154
155 @contextlib.contextmanager
156 def in_env(
157 prefix: Prefix,
158 language_version: str,
159 ) -> Generator[None, None, None]:
160 directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)
161 envdir = prefix.path(directory)
162 with envcontext(get_env_patch(envdir)):
163 yield
164
165
166 def healthy(prefix: Prefix, language_version: str) -> bool:
167 directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)
168 envdir = prefix.path(directory)
169 pyvenv_cfg = os.path.join(envdir, 'pyvenv.cfg')
170
171 # created with "old" virtualenv
172 if not os.path.exists(pyvenv_cfg):
173 return False
174
175 exe_name = 'python.exe' if sys.platform == 'win32' else 'python'
176 py_exe = prefix.path(bin_dir(envdir), exe_name)
177 cfg = _read_pyvenv_cfg(pyvenv_cfg)
178
179 return (
180 'version_info' in cfg and
181 # always use uncached lookup here in case we replaced an unhealthy env
182 _version_info.__wrapped__(py_exe) == cfg['version_info'] and (
183 'base-executable' not in cfg or
184 _version_info(cfg['base-executable']) == cfg['version_info']
185 )
186 )
187
188
189 def install_environment(
190 prefix: Prefix,
191 version: str,
192 additional_dependencies: Sequence[str],
193 ) -> None:
194 envdir = prefix.path(helpers.environment_dir(ENVIRONMENT_DIR, version))
195 venv_cmd = [sys.executable, '-mvirtualenv', envdir]
196 python = norm_version(version)
197 if python is not None:
198 venv_cmd.extend(('-p', python))
199 install_cmd = ('python', '-mpip', 'install', '.', *additional_dependencies)
200
201 with clean_path_on_failure(envdir):
202 cmd_output_b(*venv_cmd, cwd='/')
203 with in_env(prefix, version):
204 helpers.run_setup_cmd(prefix, install_cmd)
205
206
207 def run_hook(
208 hook: Hook,
209 file_args: Sequence[str],
210 color: bool,
211 ) -> Tuple[int, bytes]:
212 with in_env(hook.prefix, hook.language_version):
213 return helpers.run_xargs(hook, hook.cmd, file_args, color=color)
214
[end of pre_commit/languages/python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -36,7 +36,7 @@
def _read_pyvenv_cfg(filename: str) -> Dict[str, str]:
ret = {}
- with open(filename) as f:
+ with open(filename, encoding='UTF-8') as f:
for line in f:
try:
k, v = line.split('=')
| {"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -36,7 +36,7 @@\n \n def _read_pyvenv_cfg(filename: str) -> Dict[str, str]:\n ret = {}\n- with open(filename) as f:\n+ with open(filename, encoding='UTF-8') as f:\n for line in f:\n try:\n k, v = line.split('=')\n", "issue": "BUG - hooks not working on windows 10, when user account name contains non-ascii characters\nWhen user account name contains non-ascii characters such as '\u0161', such that python executable ends up for example in C:\\Users\\john.\u0161\\\\.cache\\pre-commit\\repo\\py_env-python3.8\\Scripts\\python.exe, when committing to the git repository following message appears:\r\n\r\nAn unexpected error has occurred: AssertionError: BUG: expected environment for python to be healthy() immediately after install, please open an issue describing your environment.\r\n\r\nPS: fucntion os.path.isfile() in parse_shebang.normexe() returns False, even though the executable exists there and is a file.\n", "before_files": [{"content": "import contextlib\nimport functools\nimport os\nimport sys\nfrom typing import Dict\nfrom typing import Generator\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.parse_shebang import find_executable\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'py_env'\n\n\[email protected]_cache(maxsize=None)\ndef _version_info(exe: str) -> str:\n prog = 'import sys;print(\".\".join(str(p) for p in sys.version_info))'\n try:\n return cmd_output(exe, '-S', '-c', prog)[1].strip()\n except CalledProcessError:\n return f'<<error retrieving version from {exe}>>'\n\n\ndef _read_pyvenv_cfg(filename: str) -> Dict[str, str]:\n ret = {}\n with open(filename) as f:\n for line in f:\n try:\n k, v = line.split('=')\n except ValueError: # blank line / comment / etc.\n continue\n else:\n ret[k.strip()] = v.strip()\n return ret\n\n\ndef bin_dir(venv: str) -> str:\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n bin_part = 'Scripts' if os.name == 'nt' else 'bin'\n return os.path.join(venv, bin_part)\n\n\ndef get_env_patch(venv: str) -> PatchesT:\n return (\n ('PIP_DISABLE_PIP_VERSION_CHECK', '1'),\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', venv),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\ndef _find_by_py_launcher(\n version: str,\n) -> Optional[str]: # pragma: no cover (windows only)\n if version.startswith('python'):\n num = version[len('python'):]\n cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')\n env = dict(os.environ, PYTHONIOENCODING='UTF-8')\n try:\n return cmd_output(*cmd, env=env)[1].strip()\n except CalledProcessError:\n pass\n return None\n\n\ndef _find_by_sys_executable() -> Optional[str]:\n def _norm(path: str) -> Optional[str]:\n _, exe = os.path.split(path.lower())\n exe, _, _ = exe.partition('.exe')\n if exe not in {'python', 'pythonw'} and find_executable(exe):\n return exe\n return None\n\n # On linux, I see these 
common sys.executables:\n #\n # system `python`: /usr/bin/python -> python2.7\n # system `python2`: /usr/bin/python2 -> python2.7\n # virtualenv v: v/bin/python (will not return from this loop)\n # virtualenv v -ppython2: v/bin/python -> python2\n # virtualenv v -ppython2.7: v/bin/python -> python2.7\n # virtualenv v -ppypy: v/bin/python -> v/bin/pypy\n for path in (sys.executable, os.path.realpath(sys.executable)):\n exe = _norm(path)\n if exe:\n return exe\n return None\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str: # pragma: no cover (platform dependent)\n # First attempt from `sys.executable` (or the realpath)\n exe = _find_by_sys_executable()\n if exe:\n return exe\n\n # Next try the `pythonX.X` executable\n exe = f'python{sys.version_info[0]}.{sys.version_info[1]}'\n if find_executable(exe):\n return exe\n\n if _find_by_py_launcher(exe):\n return exe\n\n # We tried!\n return C.DEFAULT\n\n\ndef _sys_executable_matches(version: str) -> bool:\n if version == 'python':\n return True\n elif not version.startswith('python'):\n return False\n\n try:\n info = tuple(int(p) for p in version[len('python'):].split('.'))\n except ValueError:\n return False\n\n return sys.version_info[:len(info)] == info\n\n\ndef norm_version(version: str) -> Optional[str]:\n if version == C.DEFAULT: # use virtualenv's default\n return None\n elif _sys_executable_matches(version): # virtualenv defaults to our exe\n return None\n\n if os.name == 'nt': # pragma: no cover (windows)\n version_exec = _find_by_py_launcher(version)\n if version_exec:\n return version_exec\n\n # Try looking up by name\n version_exec = find_executable(version)\n if version_exec and version_exec != version:\n return version_exec\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\[email protected]\ndef in_env(\n prefix: Prefix,\n language_version: str,\n) -> Generator[None, None, None]:\n directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)\n envdir = prefix.path(directory)\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef healthy(prefix: Prefix, language_version: str) -> bool:\n directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)\n envdir = prefix.path(directory)\n pyvenv_cfg = os.path.join(envdir, 'pyvenv.cfg')\n\n # created with \"old\" virtualenv\n if not os.path.exists(pyvenv_cfg):\n return False\n\n exe_name = 'python.exe' if sys.platform == 'win32' else 'python'\n py_exe = prefix.path(bin_dir(envdir), exe_name)\n cfg = _read_pyvenv_cfg(pyvenv_cfg)\n\n return (\n 'version_info' in cfg and\n # always use uncached lookup here in case we replaced an unhealthy env\n _version_info.__wrapped__(py_exe) == cfg['version_info'] and (\n 'base-executable' not in cfg or\n _version_info(cfg['base-executable']) == cfg['version_info']\n )\n )\n\n\ndef install_environment(\n prefix: Prefix,\n version: str,\n additional_dependencies: Sequence[str],\n) -> None:\n envdir = prefix.path(helpers.environment_dir(ENVIRONMENT_DIR, version))\n venv_cmd = [sys.executable, '-mvirtualenv', envdir]\n python = norm_version(version)\n if python is not None:\n venv_cmd.extend(('-p', python))\n install_cmd = ('python', '-mpip', 'install', '.', *additional_dependencies)\n\n with clean_path_on_failure(envdir):\n cmd_output_b(*venv_cmd, cwd='/')\n with in_env(prefix, version):\n helpers.run_setup_cmd(prefix, install_cmd)\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]:\n with in_env(hook.prefix, hook.language_version):\n 
return helpers.run_xargs(hook, hook.cmd, file_args, color=color)\n", "path": "pre_commit/languages/python.py"}]} | 2,869 | 112 |
gh_patches_debug_39196 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-3705 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
testing 4295: 403 error for poll export
**URL:** https://meinberlin-dev.liqd.net/dashboard/modules/umfrage-24/poll/export/
**user:** initiator, moderator, group member
**expected behaviour:** download export
**behaviour:** 403 error
**important screensize:**
**device & browser:** big sur, firefox
**Comment/Question:**
Screenshot?
</issue>
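Both export views below require `a4projects.change_project`, but with django-rules the object-level rule is evaluated against whatever `get_permission_object()` returns. The comment export view points that at the module's project; the poll export view does not override it, so the rule is checked against a different object (typically `None`) and initiators, moderators and group members fall through to 403. A condensed sketch of the pattern — `self.module` is assumed to be supplied by the dashboard/export base classes, as in the real views:

```python
# Condensed sketch (assumes django-rules and an object-level rule registered
# for "a4projects.change_project"; `self.module` comes from the export view's
# base classes, as in the code below).
from django.views.generic import View
from rules.contrib.views import PermissionRequiredMixin


class BrokenPollExportView(PermissionRequiredMixin, View):
    permission_required = "a4projects.change_project"
    # No get_permission_object(): the rule sees no project object, so users
    # who do have project rights are still rejected with 403.


class FixedPollExportView(PermissionRequiredMixin, View):
    permission_required = "a4projects.change_project"

    def get_permission_object(self):
        # Evaluate the rule against the module's project, mirroring what
        # PollCommentExportView already does.
        return self.module.project
```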
<code>
[start of meinberlin/apps/polls/exports.py]
1 from django.utils.translation import ugettext as _
2 from rules.contrib.views import PermissionRequiredMixin
3
4 from adhocracy4.comments.models import Comment
5 from adhocracy4.exports import mixins
6 from adhocracy4.exports import views as export_views
7 from adhocracy4.polls import models as poll_models
8 from meinberlin.apps.users.models import User
9
10
11 class PollCommentExportView(
12 PermissionRequiredMixin,
13 mixins.ExportModelFieldsMixin,
14 mixins.UserGeneratedContentExportMixin,
15 mixins.ItemExportWithLinkMixin,
16 mixins.CommentExportWithRepliesToMixin,
17 export_views.BaseItemExportView
18 ):
19
20 model = Comment
21
22 fields = ['id', 'comment', 'created']
23 permission_required = 'a4projects.change_project'
24
25 def get_permission_object(self):
26 return self.module.project
27
28 def get_queryset(self):
29 comments = (
30 Comment.objects.filter(poll__module=self.module) |
31 Comment.objects.filter(parent_comment__poll__module=self.module)
32 )
33 return comments
34
35 def get_virtual_fields(self, virtual):
36 virtual.setdefault('id', _('ID'))
37 virtual.setdefault('comment', _('Comment'))
38 virtual.setdefault('created', _('Created'))
39 return super().get_virtual_fields(virtual)
40
41 @property
42 def raise_exception(self):
43 return self.request.user.is_authenticated
44
45
46 class PollExportView(
47 PermissionRequiredMixin,
48 export_views.BaseItemExportView
49 ):
50
51 permission_required = 'a4projects.change_project'
52
53 def get_queryset(self):
54 creators_vote = poll_models.Vote.objects.filter(
55 choice__question__poll=self.poll).values_list('creator', flat=True)
56 creators_answer = poll_models.Answer.objects.filter(
57 question__poll=self.poll).values_list('creator', flat=True)
58 creator_ids = list(set(creators_vote).union(set(creators_answer)))
59 return User.objects.filter(pk__in=creator_ids)
60
61 @property
62 def poll(self):
63 return poll_models.Poll.objects.get(module=self.module)
64
65 @property
66 def single_choice_questions(self):
67 return self.poll.questions.filter(
68 multiple_choice=False,
69 is_open=False).order_by('id')
70
71 @property
72 def multiple_choice_questions(self):
73 return self.poll.questions.filter(multiple_choice=True).order_by('id')
74
75 @property
76 def open_questions(self):
77 return self.poll.questions.filter(is_open=True).order_by('id')
78
79 def get_virtual_fields(self, virtual):
80 virtual = super().get_virtual_fields(virtual)
81 virtual = self.get_virtual_fields_choice_questions(
82 virtual, self.single_choice_questions)
83 virtual = self.get_virtual_fields_choice_questions(
84 virtual, self.multiple_choice_questions)
85 virtual = self.get_virtual_fields_open_questions(
86 virtual, self.open_questions)
87
88 return virtual
89
90 def get_virtual_fields_choice_questions(self, virtual, choice_questions):
91 for question in choice_questions.all():
92 for choice in question.choices.all():
93 identifier = 'Q' + str(question.pk) + '_A' + str(choice.pk)
94 virtual[(choice, False)] = identifier
95 if choice.is_other_choice:
96 identifier_answer = identifier + '_text'
97 virtual[(choice, True)] = identifier_answer
98
99 return virtual
100
101 def get_virtual_fields_open_questions(self, virtual, open_questions):
102 for question in open_questions.all():
103 identifier = 'Q' + str(question.pk)
104 virtual[(question, False)] = identifier
105 identifier_answer = identifier + '_text'
106 virtual[(question, True)] = identifier_answer
107
108 return virtual
109
110 def get_field_data(self, user, field):
111 field_object, is_text_field = field
112
113 if type(field_object) == poll_models.Choice:
114 votes_qs = poll_models.Vote.objects.filter(
115 choice=field_object,
116 creator=user)
117 if not is_text_field:
118 value = int(votes_qs.exists())
119 else:
120 vote = votes_qs.first()
121 if vote:
122 value = poll_models.OtherVote.objects.get(vote=vote).answer
123 else:
124 value = ''
125 else: # field_object is question
126 answers_qs = poll_models.Answer.objects.filter(
127 question=field_object,
128 creator=user)
129 if not is_text_field:
130 value = int(answers_qs.exists())
131 else:
132 answer = answers_qs.first()
133 if answer:
134 value = answer.answer
135 else:
136 value = ''
137
138 return value
139
[end of meinberlin/apps/polls/exports.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/polls/exports.py b/meinberlin/apps/polls/exports.py
--- a/meinberlin/apps/polls/exports.py
+++ b/meinberlin/apps/polls/exports.py
@@ -50,6 +50,9 @@
permission_required = 'a4projects.change_project'
+ def get_permission_object(self):
+ return self.module.project
+
def get_queryset(self):
creators_vote = poll_models.Vote.objects.filter(
choice__question__poll=self.poll).values_list('creator', flat=True)
@@ -63,47 +66,37 @@
return poll_models.Poll.objects.get(module=self.module)
@property
- def single_choice_questions(self):
- return self.poll.questions.filter(
- multiple_choice=False,
- is_open=False).order_by('id')
-
- @property
- def multiple_choice_questions(self):
- return self.poll.questions.filter(multiple_choice=True).order_by('id')
-
- @property
- def open_questions(self):
- return self.poll.questions.filter(is_open=True).order_by('id')
+ def questions(self):
+ return self.poll.questions.all()
def get_virtual_fields(self, virtual):
virtual = super().get_virtual_fields(virtual)
- virtual = self.get_virtual_fields_choice_questions(
- virtual, self.single_choice_questions)
- virtual = self.get_virtual_fields_choice_questions(
- virtual, self.multiple_choice_questions)
- virtual = self.get_virtual_fields_open_questions(
- virtual, self.open_questions)
+
+ for question in self.questions:
+ if question.is_open:
+ virtual = \
+ self.get_virtual_field_open_question(virtual, question)
+ else:
+ virtual = \
+ self.get_virtual_field_choice_question(virtual, question)
return virtual
- def get_virtual_fields_choice_questions(self, virtual, choice_questions):
- for question in choice_questions.all():
- for choice in question.choices.all():
- identifier = 'Q' + str(question.pk) + '_A' + str(choice.pk)
- virtual[(choice, False)] = identifier
- if choice.is_other_choice:
- identifier_answer = identifier + '_text'
- virtual[(choice, True)] = identifier_answer
+ def get_virtual_field_choice_question(self, virtual, choice_question):
+ for choice in choice_question.choices.all():
+ identifier = 'Q' + str(choice_question.pk) + '_A' + str(choice.pk)
+ virtual[(choice, False)] = identifier
+ if choice.is_other_choice:
+ identifier_answer = identifier + '_text'
+ virtual[(choice, True)] = identifier_answer
return virtual
- def get_virtual_fields_open_questions(self, virtual, open_questions):
- for question in open_questions.all():
- identifier = 'Q' + str(question.pk)
- virtual[(question, False)] = identifier
- identifier_answer = identifier + '_text'
- virtual[(question, True)] = identifier_answer
+ def get_virtual_field_open_question(self, virtual, open_question):
+ identifier = 'Q' + str(open_question.pk)
+ virtual[(open_question, False)] = identifier
+ identifier_answer = identifier + '_text'
+ virtual[(open_question, True)] = identifier_answer
return virtual
| {"golden_diff": "diff --git a/meinberlin/apps/polls/exports.py b/meinberlin/apps/polls/exports.py\n--- a/meinberlin/apps/polls/exports.py\n+++ b/meinberlin/apps/polls/exports.py\n@@ -50,6 +50,9 @@\n \n permission_required = 'a4projects.change_project'\n \n+ def get_permission_object(self):\n+ return self.module.project\n+\n def get_queryset(self):\n creators_vote = poll_models.Vote.objects.filter(\n choice__question__poll=self.poll).values_list('creator', flat=True)\n@@ -63,47 +66,37 @@\n return poll_models.Poll.objects.get(module=self.module)\n \n @property\n- def single_choice_questions(self):\n- return self.poll.questions.filter(\n- multiple_choice=False,\n- is_open=False).order_by('id')\n-\n- @property\n- def multiple_choice_questions(self):\n- return self.poll.questions.filter(multiple_choice=True).order_by('id')\n-\n- @property\n- def open_questions(self):\n- return self.poll.questions.filter(is_open=True).order_by('id')\n+ def questions(self):\n+ return self.poll.questions.all()\n \n def get_virtual_fields(self, virtual):\n virtual = super().get_virtual_fields(virtual)\n- virtual = self.get_virtual_fields_choice_questions(\n- virtual, self.single_choice_questions)\n- virtual = self.get_virtual_fields_choice_questions(\n- virtual, self.multiple_choice_questions)\n- virtual = self.get_virtual_fields_open_questions(\n- virtual, self.open_questions)\n+\n+ for question in self.questions:\n+ if question.is_open:\n+ virtual = \\\n+ self.get_virtual_field_open_question(virtual, question)\n+ else:\n+ virtual = \\\n+ self.get_virtual_field_choice_question(virtual, question)\n \n return virtual\n \n- def get_virtual_fields_choice_questions(self, virtual, choice_questions):\n- for question in choice_questions.all():\n- for choice in question.choices.all():\n- identifier = 'Q' + str(question.pk) + '_A' + str(choice.pk)\n- virtual[(choice, False)] = identifier\n- if choice.is_other_choice:\n- identifier_answer = identifier + '_text'\n- virtual[(choice, True)] = identifier_answer\n+ def get_virtual_field_choice_question(self, virtual, choice_question):\n+ for choice in choice_question.choices.all():\n+ identifier = 'Q' + str(choice_question.pk) + '_A' + str(choice.pk)\n+ virtual[(choice, False)] = identifier\n+ if choice.is_other_choice:\n+ identifier_answer = identifier + '_text'\n+ virtual[(choice, True)] = identifier_answer\n \n return virtual\n \n- def get_virtual_fields_open_questions(self, virtual, open_questions):\n- for question in open_questions.all():\n- identifier = 'Q' + str(question.pk)\n- virtual[(question, False)] = identifier\n- identifier_answer = identifier + '_text'\n- virtual[(question, True)] = identifier_answer\n+ def get_virtual_field_open_question(self, virtual, open_question):\n+ identifier = 'Q' + str(open_question.pk)\n+ virtual[(open_question, False)] = identifier\n+ identifier_answer = identifier + '_text'\n+ virtual[(open_question, True)] = identifier_answer\n \n return virtual\n", "issue": "testing 4295: 402 error for poll export\n**URL:** https://meinberlin-dev.liqd.net/dashboard/modules/umfrage-24/poll/export/\r\n**user:** initiator, moderator, group member\r\n**expected behaviour:** download export\r\n**behaviour:** 403 error\r\n**important screensize:**\r\n**device & browser:** big sur, firefox\r\n**Comment/Question:** \r\n\r\nScreenshot?\r\n\n", "before_files": [{"content": "from django.utils.translation import ugettext as _\nfrom rules.contrib.views import PermissionRequiredMixin\n\nfrom adhocracy4.comments.models import Comment\nfrom adhocracy4.exports 
import mixins\nfrom adhocracy4.exports import views as export_views\nfrom adhocracy4.polls import models as poll_models\nfrom meinberlin.apps.users.models import User\n\n\nclass PollCommentExportView(\n PermissionRequiredMixin,\n mixins.ExportModelFieldsMixin,\n mixins.UserGeneratedContentExportMixin,\n mixins.ItemExportWithLinkMixin,\n mixins.CommentExportWithRepliesToMixin,\n export_views.BaseItemExportView\n):\n\n model = Comment\n\n fields = ['id', 'comment', 'created']\n permission_required = 'a4projects.change_project'\n\n def get_permission_object(self):\n return self.module.project\n\n def get_queryset(self):\n comments = (\n Comment.objects.filter(poll__module=self.module) |\n Comment.objects.filter(parent_comment__poll__module=self.module)\n )\n return comments\n\n def get_virtual_fields(self, virtual):\n virtual.setdefault('id', _('ID'))\n virtual.setdefault('comment', _('Comment'))\n virtual.setdefault('created', _('Created'))\n return super().get_virtual_fields(virtual)\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated\n\n\nclass PollExportView(\n PermissionRequiredMixin,\n export_views.BaseItemExportView\n):\n\n permission_required = 'a4projects.change_project'\n\n def get_queryset(self):\n creators_vote = poll_models.Vote.objects.filter(\n choice__question__poll=self.poll).values_list('creator', flat=True)\n creators_answer = poll_models.Answer.objects.filter(\n question__poll=self.poll).values_list('creator', flat=True)\n creator_ids = list(set(creators_vote).union(set(creators_answer)))\n return User.objects.filter(pk__in=creator_ids)\n\n @property\n def poll(self):\n return poll_models.Poll.objects.get(module=self.module)\n\n @property\n def single_choice_questions(self):\n return self.poll.questions.filter(\n multiple_choice=False,\n is_open=False).order_by('id')\n\n @property\n def multiple_choice_questions(self):\n return self.poll.questions.filter(multiple_choice=True).order_by('id')\n\n @property\n def open_questions(self):\n return self.poll.questions.filter(is_open=True).order_by('id')\n\n def get_virtual_fields(self, virtual):\n virtual = super().get_virtual_fields(virtual)\n virtual = self.get_virtual_fields_choice_questions(\n virtual, self.single_choice_questions)\n virtual = self.get_virtual_fields_choice_questions(\n virtual, self.multiple_choice_questions)\n virtual = self.get_virtual_fields_open_questions(\n virtual, self.open_questions)\n\n return virtual\n\n def get_virtual_fields_choice_questions(self, virtual, choice_questions):\n for question in choice_questions.all():\n for choice in question.choices.all():\n identifier = 'Q' + str(question.pk) + '_A' + str(choice.pk)\n virtual[(choice, False)] = identifier\n if choice.is_other_choice:\n identifier_answer = identifier + '_text'\n virtual[(choice, True)] = identifier_answer\n\n return virtual\n\n def get_virtual_fields_open_questions(self, virtual, open_questions):\n for question in open_questions.all():\n identifier = 'Q' + str(question.pk)\n virtual[(question, False)] = identifier\n identifier_answer = identifier + '_text'\n virtual[(question, True)] = identifier_answer\n\n return virtual\n\n def get_field_data(self, user, field):\n field_object, is_text_field = field\n\n if type(field_object) == poll_models.Choice:\n votes_qs = poll_models.Vote.objects.filter(\n choice=field_object,\n creator=user)\n if not is_text_field:\n value = int(votes_qs.exists())\n else:\n vote = votes_qs.first()\n if vote:\n value = poll_models.OtherVote.objects.get(vote=vote).answer\n 
else:\n value = ''\n else: # field_object is question\n answers_qs = poll_models.Answer.objects.filter(\n question=field_object,\n creator=user)\n if not is_text_field:\n value = int(answers_qs.exists())\n else:\n answer = answers_qs.first()\n if answer:\n value = answer.answer\n else:\n value = ''\n\n return value\n", "path": "meinberlin/apps/polls/exports.py"}]} | 1,887 | 730 |
gh_patches_debug_11535 | rasdani/github-patches | git_diff | Qiskit__qiskit-5613 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Avoid dependency duplication
### What is the expected enhancement?
Currently, when you want to add or update a dependency, you need to do it in both the `requirements.txt` and `setup.py` files. That is error-prone.
It would be nice to avoid that duplication and make the change in only one of the files when a dependency is added or updated.
</issue>
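One common way to get a single source of truth is to have `setup.py` read `requirements.txt` instead of repeating the pin list in `REQUIREMENTS`. The sketch below shows that approach in isolation; it is only an illustration of the idea, not necessarily the change the maintainers ended up making:

```python
# Sketch of one possible approach (illustrative only): parse requirements.txt
# in setup.py so each dependency is declared in a single place.
import os

from setuptools import setup

HERE = os.path.abspath(os.path.dirname(__file__))

with open(os.path.join(HERE, "requirements.txt")) as requirements_file:
    REQUIREMENTS = [
        line.strip()
        for line in requirements_file
        if line.strip() and not line.strip().startswith("#")
    ]

setup(
    name="qiskit-terra",
    install_requires=REQUIREMENTS,
    # ... remaining metadata as in the existing setup.py ...
)
```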
<code>
[start of setup.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 "The Qiskit Terra setup file."
14
15 import os
16 import sys
17 from setuptools import setup, find_packages, Extension
18 try:
19 from Cython.Build import cythonize
20 except ImportError:
21 import subprocess
22 subprocess.call([sys.executable, '-m', 'pip', 'install', 'Cython>=0.27.1'])
23 from Cython.Build import cythonize
24
25 REQUIREMENTS = [
26 "contextvars>=2.4;python_version<'3.7'",
27 "jsonschema>=2.6",
28 "retworkx>=0.7.0",
29 "numpy>=1.17",
30 "ply>=3.10",
31 "psutil>=5",
32 "scipy>=1.4",
33 "sympy>=1.3",
34 "dill>=0.3",
35 "fastjsonschema>=2.10",
36 "python-constraint>=1.4",
37 "python-dateutil>=2.8.0",
38 ]
39
40 # Add Cython extensions here
41 CYTHON_EXTS = ['utils', 'swap_trial']
42 CYTHON_MODULE = 'qiskit.transpiler.passes.routing.cython.stochastic_swap'
43 CYTHON_SOURCE_DIR = 'qiskit/transpiler/passes/routing/cython/stochastic_swap'
44
45 INCLUDE_DIRS = []
46 # Extra link args
47 LINK_FLAGS = []
48 # If on Win and not in MSYS2 (i.e. Visual studio compile)
49 if (sys.platform == 'win32' and os.environ.get('MSYSTEM') is None):
50 COMPILER_FLAGS = ['/O2']
51 # Everything else
52 else:
53 COMPILER_FLAGS = ['-O2', '-funroll-loops', '-std=c++11']
54 if sys.platform == 'darwin':
55 # These are needed for compiling on OSX 10.14+
56 COMPILER_FLAGS.append('-mmacosx-version-min=10.9')
57 LINK_FLAGS.append('-mmacosx-version-min=10.9')
58
59
60 EXT_MODULES = []
61 # Add Cython Extensions
62 for ext in CYTHON_EXTS:
63 mod = Extension(CYTHON_MODULE + '.' + ext,
64 sources=[CYTHON_SOURCE_DIR + '/' + ext + '.pyx'],
65 include_dirs=INCLUDE_DIRS,
66 extra_compile_args=COMPILER_FLAGS,
67 extra_link_args=LINK_FLAGS,
68 language='c++')
69 EXT_MODULES.append(mod)
70
71 # Read long description from README.
72 README_PATH = os.path.join(os.path.abspath(os.path.dirname(__file__)),
73 'README.md')
74 with open(README_PATH) as readme_file:
75 README = readme_file.read()
76
77 setup(
78 name="qiskit-terra",
79 version="0.17.0",
80 description="Software for developing quantum computing programs",
81 long_description=README,
82 long_description_content_type='text/markdown',
83 url="https://github.com/Qiskit/qiskit-terra",
84 author="Qiskit Development Team",
85 author_email="[email protected]",
86 license="Apache 2.0",
87 classifiers=[
88 "Environment :: Console",
89 "License :: OSI Approved :: Apache Software License",
90 "Intended Audience :: Developers",
91 "Intended Audience :: Science/Research",
92 "Operating System :: Microsoft :: Windows",
93 "Operating System :: MacOS",
94 "Operating System :: POSIX :: Linux",
95 "Programming Language :: Python :: 3 :: Only",
96 "Programming Language :: Python :: 3.6",
97 "Programming Language :: Python :: 3.7",
98 "Programming Language :: Python :: 3.8",
99 "Programming Language :: Python :: 3.9",
100 "Topic :: Scientific/Engineering",
101 ],
102 keywords="qiskit sdk quantum",
103 packages=find_packages(exclude=['test*']),
104 install_requires=REQUIREMENTS,
105 setup_requires=['Cython>=0.27.1'],
106 include_package_data=True,
107 python_requires=">=3.6",
108 extras_require={
109 'visualization': ['matplotlib>=2.1', 'ipywidgets>=7.3.0',
110 'pydot', "pillow>=4.2.1", "pylatexenc>=1.4",
111 "seaborn>=0.9.0", "pygments>=2.4"],
112 'classical-function-compiler': ['tweedledum'],
113 'full-featured-simulators': ['qiskit-aer>=0.1'],
114 'crosstalk-pass': ['z3-solver>=4.7'],
115 },
116 project_urls={
117 "Bug Tracker": "https://github.com/Qiskit/qiskit-terra/issues",
118 "Documentation": "https://qiskit.org/documentation/",
119 "Source Code": "https://github.com/Qiskit/qiskit-terra",
120 },
121 ext_modules=cythonize(EXT_MODULES),
122 zip_safe=False
123 )
124
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -22,20 +22,8 @@
subprocess.call([sys.executable, '-m', 'pip', 'install', 'Cython>=0.27.1'])
from Cython.Build import cythonize
-REQUIREMENTS = [
- "contextvars>=2.4;python_version<'3.7'",
- "jsonschema>=2.6",
- "retworkx>=0.7.0",
- "numpy>=1.17",
- "ply>=3.10",
- "psutil>=5",
- "scipy>=1.4",
- "sympy>=1.3",
- "dill>=0.3",
- "fastjsonschema>=2.10",
- "python-constraint>=1.4",
- "python-dateutil>=2.8.0",
-]
+with open('requirements.txt') as f:
+ REQUIREMENTS = f.read().splitlines()
# Add Cython extensions here
CYTHON_EXTS = ['utils', 'swap_trial']
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -22,20 +22,8 @@\n subprocess.call([sys.executable, '-m', 'pip', 'install', 'Cython>=0.27.1'])\n from Cython.Build import cythonize\n \n-REQUIREMENTS = [\n- \"contextvars>=2.4;python_version<'3.7'\",\n- \"jsonschema>=2.6\",\n- \"retworkx>=0.7.0\",\n- \"numpy>=1.17\",\n- \"ply>=3.10\",\n- \"psutil>=5\",\n- \"scipy>=1.4\",\n- \"sympy>=1.3\",\n- \"dill>=0.3\",\n- \"fastjsonschema>=2.10\",\n- \"python-constraint>=1.4\",\n- \"python-dateutil>=2.8.0\",\n-]\n+with open('requirements.txt') as f:\n+ REQUIREMENTS = f.read().splitlines()\n \n # Add Cython extensions here\n CYTHON_EXTS = ['utils', 'swap_trial']\n", "issue": "Avoid dependencies duplicity\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues to confirm this idea does not exist. -->\r\n\r\n### What is the expected enhancement?\r\nCurrently, when you want to add or update a dependency, you need to do that in the `requirements.txt` and `setup.py` files. That is really error-prone.\r\n\r\nIt would be nice to avoid that situation and make changes only in one of the files when a dependency is added or updated.\r\n\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"The Qiskit Terra setup file.\"\n\nimport os\nimport sys\nfrom setuptools import setup, find_packages, Extension\ntry:\n from Cython.Build import cythonize\nexcept ImportError:\n import subprocess\n subprocess.call([sys.executable, '-m', 'pip', 'install', 'Cython>=0.27.1'])\n from Cython.Build import cythonize\n\nREQUIREMENTS = [\n \"contextvars>=2.4;python_version<'3.7'\",\n \"jsonschema>=2.6\",\n \"retworkx>=0.7.0\",\n \"numpy>=1.17\",\n \"ply>=3.10\",\n \"psutil>=5\",\n \"scipy>=1.4\",\n \"sympy>=1.3\",\n \"dill>=0.3\",\n \"fastjsonschema>=2.10\",\n \"python-constraint>=1.4\",\n \"python-dateutil>=2.8.0\",\n]\n\n# Add Cython extensions here\nCYTHON_EXTS = ['utils', 'swap_trial']\nCYTHON_MODULE = 'qiskit.transpiler.passes.routing.cython.stochastic_swap'\nCYTHON_SOURCE_DIR = 'qiskit/transpiler/passes/routing/cython/stochastic_swap'\n\nINCLUDE_DIRS = []\n# Extra link args\nLINK_FLAGS = []\n# If on Win and not in MSYS2 (i.e. Visual studio compile)\nif (sys.platform == 'win32' and os.environ.get('MSYSTEM') is None):\n COMPILER_FLAGS = ['/O2']\n# Everything else\nelse:\n COMPILER_FLAGS = ['-O2', '-funroll-loops', '-std=c++11']\n if sys.platform == 'darwin':\n # These are needed for compiling on OSX 10.14+\n COMPILER_FLAGS.append('-mmacosx-version-min=10.9')\n LINK_FLAGS.append('-mmacosx-version-min=10.9')\n\n\nEXT_MODULES = []\n# Add Cython Extensions\nfor ext in CYTHON_EXTS:\n mod = Extension(CYTHON_MODULE + '.' 
+ ext,\n sources=[CYTHON_SOURCE_DIR + '/' + ext + '.pyx'],\n include_dirs=INCLUDE_DIRS,\n extra_compile_args=COMPILER_FLAGS,\n extra_link_args=LINK_FLAGS,\n language='c++')\n EXT_MODULES.append(mod)\n\n# Read long description from README.\nREADME_PATH = os.path.join(os.path.abspath(os.path.dirname(__file__)),\n 'README.md')\nwith open(README_PATH) as readme_file:\n README = readme_file.read()\n\nsetup(\n name=\"qiskit-terra\",\n version=\"0.17.0\",\n description=\"Software for developing quantum computing programs\",\n long_description=README,\n long_description_content_type='text/markdown',\n url=\"https://github.com/Qiskit/qiskit-terra\",\n author=\"Qiskit Development Team\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n classifiers=[\n \"Environment :: Console\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering\",\n ],\n keywords=\"qiskit sdk quantum\",\n packages=find_packages(exclude=['test*']),\n install_requires=REQUIREMENTS,\n setup_requires=['Cython>=0.27.1'],\n include_package_data=True,\n python_requires=\">=3.6\",\n extras_require={\n 'visualization': ['matplotlib>=2.1', 'ipywidgets>=7.3.0',\n 'pydot', \"pillow>=4.2.1\", \"pylatexenc>=1.4\",\n \"seaborn>=0.9.0\", \"pygments>=2.4\"],\n 'classical-function-compiler': ['tweedledum'],\n 'full-featured-simulators': ['qiskit-aer>=0.1'],\n 'crosstalk-pass': ['z3-solver>=4.7'],\n },\n project_urls={\n \"Bug Tracker\": \"https://github.com/Qiskit/qiskit-terra/issues\",\n \"Documentation\": \"https://qiskit.org/documentation/\",\n \"Source Code\": \"https://github.com/Qiskit/qiskit-terra\",\n },\n ext_modules=cythonize(EXT_MODULES),\n zip_safe=False\n)\n", "path": "setup.py"}]} | 2,057 | 255 |
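The accepted diff above single-sources the dependency list by reading `requirements.txt` inside `setup.py`. A minimal standalone sketch of that pattern follows; the comment/blank-line filtering and the missing-file guard are illustrative additions, not part of the patch itself.

```python
# Sketch: declare dependencies once in requirements.txt and reuse them in setup.py.
# The filtering and existence check are conveniences beyond the patch above.
import os


def read_requirements(path="requirements.txt"):
    if not os.path.exists(path):
        return []  # e.g. when building from a tree without the file
    with open(path) as f:
        lines = f.read().splitlines()
    # Drop blank lines and comment lines so pip-style files stay valid here.
    return [line for line in lines if line and not line.startswith("#")]


REQUIREMENTS = read_requirements()
```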
gh_patches_debug_387 | rasdani/github-patches | git_diff | chainer__chainer-1568 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Inconsistency between cupy.dstack and numpy.dstack
```
In [10]: import cupy, numpy
In [11]: a = cupy.arange(24).reshape(2, 3, 4)
In [12]: numpy.dstack((a.get(),))
Out[12]:
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]])
In [13]: cupy.dstack((a,))
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-13-aa573685da21> in <module>()
----> 1 cupy.dstack((a,))
/home/delta/dev/chainer/cupy/manipulation/join.py in dstack(tup)
101
102 """
--> 103 return concatenate(cupy.atleast_3d(*tup), 2)
104
105
/home/delta/dev/chainer/cupy/manipulation/join.py in concatenate(tup, axis)
59 ndim = a.ndim
60 shape = list(a.shape)
---> 61 axis = _get_positive_axis(a.ndim, axis)
62 continue
63
/home/delta/dev/chainer/cupy/manipulation/join.py in _get_positive_axis(ndim, axis)
167 a += ndim
168 if a < 0 or a >= ndim:
--> 169 raise IndexError('axis {} out of bounds [0, {})'.format(axis, ndim))
170 return a
IndexError: axis 2 out of bounds [0, 2)
```
</issue>
<code>
[start of cupy/manipulation/join.py]
1 import numpy
2 import six
3
4 import cupy
5
6
7 def column_stack(tup):
8 """Stacks 1-D and 2-D arrays as columns into a 2-D array.
9
10 A 1-D array is first converted to a 2-D column array. Then, the 2-D arrays
11 are concatenated along the second axis.
12
13 Args:
14 tup (sequence of arrays): 1-D or 2-D arrays to be stacked.
15
16 Returns:
17 cupy.ndarray: A new 2-D array of stacked columns.
18
19 .. seealso:: :func:`numpy.column_stack`
20
21 """
22 if any(not isinstance(a, cupy.ndarray) for a in tup):
23 raise TypeError('Only cupy arrays can be column stacked')
24
25 lst = list(tup)
26 for i, a in enumerate(lst):
27 if a.ndim == 1:
28 a = a[:, cupy.newaxis]
29 lst[i] = a
30 elif a.ndim != 2:
31 raise ValueError(
32 'Only 1 or 2 dimensional arrays can be column stacked')
33
34 return concatenate(lst, axis=1)
35
36
37 def concatenate(tup, axis=0):
38 """Joins arrays along an axis.
39
40 Args:
41 tup (sequence of arrays): Arrays to be joined. All of these should have
42 same dimensionalities except the specified axis.
43 axis (int): The axis to join arrays along.
44
45 Returns:
46 cupy.ndarray: Joined array.
47
48 .. seealso:: :func:`numpy.concatenate`
49
50 """
51 ndim = None
52 shape = None
53 for a in tup:
54 if not isinstance(a, cupy.ndarray):
55 raise TypeError('Only cupy arrays can be concatenated')
56 if a.ndim == 0:
57 raise TypeError('zero-dimensional arrays cannot be concatenated')
58 if ndim is None:
59 ndim = a.ndim
60 shape = list(a.shape)
61 axis = _get_positive_axis(a.ndim, axis)
62 continue
63
64 if a.ndim != ndim:
65 raise ValueError(
66 'All arrays to concatenate must have the same ndim')
67 if any(i != axis and shape[i] != a.shape[i]
68 for i in six.moves.range(ndim)):
69 raise ValueError(
70 'All arrays must have same shape except the axis to '
71 'concatenate')
72 shape[axis] += a.shape[axis]
73
74 if ndim is None:
75 raise ValueError('Cannot concatenate from empty tuple')
76
77 dtype = numpy.find_common_type([a.dtype for a in tup], [])
78 ret = cupy.empty(shape, dtype=dtype)
79
80 skip = (slice(None),) * axis
81 i = 0
82 for a in tup:
83 aw = a.shape[axis]
84 ret[skip + (slice(i, i + aw),)] = a
85 i += aw
86
87 return ret
88
89
90 def dstack(tup):
91 """Stacks arrays along the third axis.
92
93 Args:
94 tup (sequence of arrays): Arrays to be stacked. Each array is converted
95 by :func:`cupy.atleast_3d` before stacking.
96
97 Returns:
98 cupy.ndarray: Stacked array.
99
100 .. seealso:: :func:`numpy.dstack`
101
102 """
103 return concatenate(cupy.atleast_3d(*tup), 2)
104
105
106 def hstack(tup):
107 """Stacks arrays horizontally.
108
109 If an input array has one dimension, then the array is treated as a
110 horizontal vector and stacked along the first axis. Otherwise, the array is
111 stacked along the second axis.
112
113 Args:
114 tup (sequence of arrays): Arrays to be stacked.
115
116 Returns:
117 cupy.ndarray: Stacked array.
118
119 .. seealso:: :func:`numpy.hstack`
120
121 """
122 arrs = [cupy.atleast_1d(a) for a in tup]
123 axis = 1
124 if arrs[0].ndim == 1:
125 axis = 0
126 return concatenate(arrs, axis)
127
128
129 def vstack(tup):
130 """Stacks arrays vertically.
131
132 If an input array has one dimension, then the array is treated as a
133 horizontal vector and stacked along the additional axis at the head.
134 Otherwise, the array is stacked along the first axis.
135
136 Args:
137 tup (sequence of arrays): Arrays to be stacked. Each array is converted
138 by :func:`cupy.atleast_2d` before stacking.
139
140 Returns:
141 cupy.ndarray: Stacked array.
142
143 .. seealso:: :func:`numpy.dstack`
144
145 """
146 return concatenate([cupy.atleast_2d(m) for m in tup], 0)
147
148
149 def stack(tup, axis=0):
150 """Stacks arrays along a new axis.
151
152 Args:
153 tup (sequence of arrays): Arrays to be stacked.
154 axis (int): Axis along which the arrays are stacked.
155
156 Returns:
157 cupy.ndarray: Stacked array.
158
159 .. seealso:: :func:`numpy.stack`
160 """
161 return concatenate([cupy.expand_dims(x, axis) for x in tup], axis)
162
163
164 def _get_positive_axis(ndim, axis):
165 a = axis
166 if a < 0:
167 a += ndim
168 if a < 0 or a >= ndim:
169 raise IndexError('axis {} out of bounds [0, {})'.format(axis, ndim))
170 return a
171
[end of cupy/manipulation/join.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/manipulation/join.py b/cupy/manipulation/join.py
--- a/cupy/manipulation/join.py
+++ b/cupy/manipulation/join.py
@@ -100,7 +100,7 @@
.. seealso:: :func:`numpy.dstack`
"""
- return concatenate(cupy.atleast_3d(*tup), 2)
+ return concatenate([cupy.atleast_3d(m) for m in tup], 2)
def hstack(tup):
| {"golden_diff": "diff --git a/cupy/manipulation/join.py b/cupy/manipulation/join.py\n--- a/cupy/manipulation/join.py\n+++ b/cupy/manipulation/join.py\n@@ -100,7 +100,7 @@\n .. seealso:: :func:`numpy.dstack`\n \n \"\"\"\n- return concatenate(cupy.atleast_3d(*tup), 2)\n+ return concatenate([cupy.atleast_3d(m) for m in tup], 2)\n \n \n def hstack(tup):\n", "issue": "Inconsistency between cupy.dstack and numpy.dstack\n```\nIn [10]: import cupy, numpy\nIn [11]: a = cupy.arange(24).reshape(2, 3, 4)\nIn [12]: numpy.dstack((a.get(),))\nOut[12]: \narray([[[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]],\n\n [[12, 13, 14, 15],\n [16, 17, 18, 19],\n [20, 21, 22, 23]]])\n\nIn [13]: cupy.dstack((a,))\n---------------------------------------------------------------------------\nIndexError Traceback (most recent call last)\n<ipython-input-13-aa573685da21> in <module>()\n----> 1 cupy.dstack((a,))\n\n/home/delta/dev/chainer/cupy/manipulation/join.py in dstack(tup)\n 101 \n 102 \"\"\"\n--> 103 return concatenate(cupy.atleast_3d(*tup), 2)\n 104 \n 105 \n\n/home/delta/dev/chainer/cupy/manipulation/join.py in concatenate(tup, axis)\n 59 ndim = a.ndim\n 60 shape = list(a.shape)\n---> 61 axis = _get_positive_axis(a.ndim, axis)\n 62 continue\n 63 \n\n/home/delta/dev/chainer/cupy/manipulation/join.py in _get_positive_axis(ndim, axis)\n 167 a += ndim\n 168 if a < 0 or a >= ndim:\n--> 169 raise IndexError('axis {} out of bounds [0, {})'.format(axis, ndim))\n 170 return a\n\nIndexError: axis 2 out of bounds [0, 2)\n```\n\n", "before_files": [{"content": "import numpy\nimport six\n\nimport cupy\n\n\ndef column_stack(tup):\n \"\"\"Stacks 1-D and 2-D arrays as columns into a 2-D array.\n\n A 1-D array is first converted to a 2-D column array. Then, the 2-D arrays\n are concatenated along the second axis.\n\n Args:\n tup (sequence of arrays): 1-D or 2-D arrays to be stacked.\n\n Returns:\n cupy.ndarray: A new 2-D array of stacked columns.\n\n .. seealso:: :func:`numpy.column_stack`\n\n \"\"\"\n if any(not isinstance(a, cupy.ndarray) for a in tup):\n raise TypeError('Only cupy arrays can be column stacked')\n\n lst = list(tup)\n for i, a in enumerate(lst):\n if a.ndim == 1:\n a = a[:, cupy.newaxis]\n lst[i] = a\n elif a.ndim != 2:\n raise ValueError(\n 'Only 1 or 2 dimensional arrays can be column stacked')\n\n return concatenate(lst, axis=1)\n\n\ndef concatenate(tup, axis=0):\n \"\"\"Joins arrays along an axis.\n\n Args:\n tup (sequence of arrays): Arrays to be joined. All of these should have\n same dimensionalities except the specified axis.\n axis (int): The axis to join arrays along.\n\n Returns:\n cupy.ndarray: Joined array.\n\n .. 
seealso:: :func:`numpy.concatenate`\n\n \"\"\"\n ndim = None\n shape = None\n for a in tup:\n if not isinstance(a, cupy.ndarray):\n raise TypeError('Only cupy arrays can be concatenated')\n if a.ndim == 0:\n raise TypeError('zero-dimensional arrays cannot be concatenated')\n if ndim is None:\n ndim = a.ndim\n shape = list(a.shape)\n axis = _get_positive_axis(a.ndim, axis)\n continue\n\n if a.ndim != ndim:\n raise ValueError(\n 'All arrays to concatenate must have the same ndim')\n if any(i != axis and shape[i] != a.shape[i]\n for i in six.moves.range(ndim)):\n raise ValueError(\n 'All arrays must have same shape except the axis to '\n 'concatenate')\n shape[axis] += a.shape[axis]\n\n if ndim is None:\n raise ValueError('Cannot concatenate from empty tuple')\n\n dtype = numpy.find_common_type([a.dtype for a in tup], [])\n ret = cupy.empty(shape, dtype=dtype)\n\n skip = (slice(None),) * axis\n i = 0\n for a in tup:\n aw = a.shape[axis]\n ret[skip + (slice(i, i + aw),)] = a\n i += aw\n\n return ret\n\n\ndef dstack(tup):\n \"\"\"Stacks arrays along the third axis.\n\n Args:\n tup (sequence of arrays): Arrays to be stacked. Each array is converted\n by :func:`cupy.atleast_3d` before stacking.\n\n Returns:\n cupy.ndarray: Stacked array.\n\n .. seealso:: :func:`numpy.dstack`\n\n \"\"\"\n return concatenate(cupy.atleast_3d(*tup), 2)\n\n\ndef hstack(tup):\n \"\"\"Stacks arrays horizontally.\n\n If an input array has one dimension, then the array is treated as a\n horizontal vector and stacked along the first axis. Otherwise, the array is\n stacked along the second axis.\n\n Args:\n tup (sequence of arrays): Arrays to be stacked.\n\n Returns:\n cupy.ndarray: Stacked array.\n\n .. seealso:: :func:`numpy.hstack`\n\n \"\"\"\n arrs = [cupy.atleast_1d(a) for a in tup]\n axis = 1\n if arrs[0].ndim == 1:\n axis = 0\n return concatenate(arrs, axis)\n\n\ndef vstack(tup):\n \"\"\"Stacks arrays vertically.\n\n If an input array has one dimension, then the array is treated as a\n horizontal vector and stacked along the additional axis at the head.\n Otherwise, the array is stacked along the first axis.\n\n Args:\n tup (sequence of arrays): Arrays to be stacked. Each array is converted\n by :func:`cupy.atleast_2d` before stacking.\n\n Returns:\n cupy.ndarray: Stacked array.\n\n .. seealso:: :func:`numpy.dstack`\n\n \"\"\"\n return concatenate([cupy.atleast_2d(m) for m in tup], 0)\n\n\ndef stack(tup, axis=0):\n \"\"\"Stacks arrays along a new axis.\n\n Args:\n tup (sequence of arrays): Arrays to be stacked.\n axis (int): Axis along which the arrays are stacked.\n\n Returns:\n cupy.ndarray: Stacked array.\n\n .. seealso:: :func:`numpy.stack`\n \"\"\"\n return concatenate([cupy.expand_dims(x, axis) for x in tup], axis)\n\n\ndef _get_positive_axis(ndim, axis):\n a = axis\n if a < 0:\n a += ndim\n if a < 0 or a >= ndim:\n raise IndexError('axis {} out of bounds [0, {})'.format(axis, ndim))\n return a\n", "path": "cupy/manipulation/join.py"}]} | 2,592 | 120 |
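The root cause the diff above addresses is that `atleast_3d(*tup)` returns a single array (not a list) when `tup` has one element, so `concatenate` then iterates over that array's first axis. A small NumPy illustration of the same behaviour (NumPy is used here only because it reproduces the semantics the fix relies on):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

single = np.atleast_3d(*(a,))               # one argument -> the array itself, not a list
as_list = [np.atleast_3d(m) for m in (a,)]  # always a list of arrays

print(type(single).__name__, type(as_list).__name__)  # ndarray, list
# np.concatenate(single, 2) would iterate over axis 0 and raise
# "axis 2 is out of bounds", mirroring the cupy traceback above.
print(np.concatenate(as_list, 2).shape)     # (2, 3, 4), same as np.dstack((a,))
```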
gh_patches_debug_40989 | rasdani/github-patches | git_diff | deepset-ai__haystack-1309 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TransformersSummarizer crashes if given long input
If the TransformersSummarizer is given an input that is longer than the model's max_seq_len, an error will be thrown. Instead, I think a warning message should be printed to console and the input text should be truncated so that the Node can still run.
</issue>
<code>
[start of haystack/summarizer/transformers.py]
1 import logging
2 from typing import List, Optional
3
4 from transformers import pipeline
5 from transformers.models.auto.modeling_auto import AutoModelForSeq2SeqLM
6
7 from haystack import Document
8 from haystack.summarizer.base import BaseSummarizer
9
10 logger = logging.getLogger(__name__)
11
12
13 class TransformersSummarizer(BaseSummarizer):
14 """
15 Transformer based model to summarize the documents using the HuggingFace's transformers framework
16
17 You can use any model that has been fine-tuned on a summarization task. For example:
18 '`bart-large-cnn`', '`t5-small`', '`t5-base`', '`t5-large`', '`t5-3b`', '`t5-11b`'.
19 See the up-to-date list of available models on
20 `huggingface.co/models <https://huggingface.co/models?filter=summarization>`__
21
22 **Example**
23
24 ```python
25 | docs = [Document(text="PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions."
26 | "The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by"
27 | "the shutoffs which were expected to last through at least midday tomorrow.")]
28 |
29 | # Summarize
30 | summary = summarizer.predict(
31 | documents=docs,
32 | generate_single_summary=True
33 | )
34 |
35 | # Show results (List of Documents, containing summary and original text)
36 | print(summary)
37 |
38 | [
39 | {
40 | "text": "California's largest electricity provider has turned off power to hundreds of thousands of customers.",
41 | ...
42 | "meta": {
43 | "context": "PGE stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. ..."
44 | },
45 | ...
46 | },
47 ```
48 """
49
50 def __init__(
51 self,
52 model_name_or_path: str = "google/pegasus-xsum",
53 model_version: Optional[str] = None,
54 tokenizer: Optional[str] = None,
55 max_length: int = 200,
56 min_length: int = 5,
57 use_gpu: int = 0,
58 clean_up_tokenization_spaces: bool = True,
59 separator_for_single_summary: str = " ",
60 generate_single_summary: bool = False,
61 ):
62 """
63 Load a Summarization model from Transformers.
64 See the up-to-date list of available models at
65 https://huggingface.co/models?filter=summarization
66
67 :param model_name_or_path: Directory of a saved model or the name of a public model e.g.
68 'facebook/rag-token-nq', 'facebook/rag-sequence-nq'.
69 See https://huggingface.co/models?filter=summarization for full list of available models.
70 :param model_version: The version of model to use from the HuggingFace model hub. Can be tag name, branch name, or commit hash.
71 :param tokenizer: Name of the tokenizer (usually the same as model)
72 :param max_length: Maximum length of summarized text
73 :param min_length: Minimum length of summarized text
74 :param use_gpu: If < 0, then use cpu. If >= 0, this is the ordinal of the gpu to use
75 :param clean_up_tokenization_spaces: Whether or not to clean up the potential extra spaces in the text output
76 :param separator_for_single_summary: If `generate_single_summary=True` in `predict()`, we need to join all docs
77 into a single text. This separator appears between those subsequent docs.
78 :param generate_single_summary: Whether to generate a single summary for all documents or one summary per document.
79 If set to "True", all docs will be joined to a single string that will then
80 be summarized.
81 Important: The summary will depend on the order of the supplied documents!
82 """
83
84 # save init parameters to enable export of component config as YAML
85 self.set_config(
86 model_name_or_path=model_name_or_path, model_version=model_version, tokenizer=tokenizer,
87 max_length=max_length, min_length=min_length, use_gpu=use_gpu,
88 clean_up_tokenization_spaces=clean_up_tokenization_spaces,
89 separator_for_single_summary=separator_for_single_summary, generate_single_summary=generate_single_summary,
90 )
91
92 # TODO AutoModelForSeq2SeqLM is only necessary with transformers==4.1.1, with newer versions use the pipeline directly
93 if tokenizer is None:
94 tokenizer = model_name_or_path
95 model = AutoModelForSeq2SeqLM.from_pretrained(pretrained_model_name_or_path=model_name_or_path, revision=model_version)
96 self.summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, device=use_gpu)
97 self.max_length = max_length
98 self.min_length = min_length
99 self.clean_up_tokenization_spaces = clean_up_tokenization_spaces
100 self.separator_for_single_summary = separator_for_single_summary
101 self.generate_single_summary = generate_single_summary
102
103 def predict(self, documents: List[Document], generate_single_summary: Optional[bool] = None) -> List[Document]:
104 """
105 Produce the summarization from the supplied documents.
106 These document can for example be retrieved via the Retriever.
107
108 :param documents: Related documents (e.g. coming from a retriever) that the answer shall be conditioned on.
109 :param generate_single_summary: Whether to generate a single summary for all documents or one summary per document.
110 If set to "True", all docs will be joined to a single string that will then
111 be summarized.
112 Important: The summary will depend on the order of the supplied documents!
113 :return: List of Documents, where Document.text contains the summarization and Document.meta["context"]
114 the original, not summarized text
115 """
116
117 if self.min_length > self.max_length:
118 raise AttributeError("min_length cannot be greater than max_length")
119
120 if len(documents) == 0:
121 raise AttributeError("Summarizer needs at least one document to produce a summary.")
122
123 if generate_single_summary is None:
124 generate_single_summary = self.generate_single_summary
125
126 contexts: List[str] = [doc.text for doc in documents]
127
128 if generate_single_summary:
129 # Documents order is very important to produce summary.
130 # Different order of same documents produce different summary.
131 contexts = [self.separator_for_single_summary.join(contexts)]
132
133 summaries = self.summarizer(
134 contexts,
135 min_length=self.min_length,
136 max_length=self.max_length,
137 return_text=True,
138 clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,
139 )
140
141 result: List[Document] = []
142
143 for context, summarized_answer in zip(contexts, summaries):
144 cur_doc = Document(text=summarized_answer['summary_text'], meta={"context": context})
145 result.append(cur_doc)
146
147 return result
148
[end of haystack/summarizer/transformers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/haystack/summarizer/transformers.py b/haystack/summarizer/transformers.py
--- a/haystack/summarizer/transformers.py
+++ b/haystack/summarizer/transformers.py
@@ -1,5 +1,5 @@
import logging
-from typing import List, Optional
+from typing import List, Optional, Set
from transformers import pipeline
from transformers.models.auto.modeling_auto import AutoModelForSeq2SeqLM
@@ -99,8 +99,10 @@
self.clean_up_tokenization_spaces = clean_up_tokenization_spaces
self.separator_for_single_summary = separator_for_single_summary
self.generate_single_summary = generate_single_summary
+ self.print_log: Set[str] = set()
- def predict(self, documents: List[Document], generate_single_summary: Optional[bool] = None) -> List[Document]:
+ def predict(self, documents: List[Document], generate_single_summary: Optional[bool] = None,
+ truncation: bool = True) -> List[Document]:
"""
Produce the summarization from the supplied documents.
These document can for example be retrieved via the Retriever.
@@ -110,6 +112,7 @@
If set to "True", all docs will be joined to a single string that will then
be summarized.
Important: The summary will depend on the order of the supplied documents!
+ :param truncation: Truncate to a maximum length accepted by the model
:return: List of Documents, where Document.text contains the summarization and Document.meta["context"]
the original, not summarized text
"""
@@ -130,12 +133,25 @@
# Different order of same documents produce different summary.
contexts = [self.separator_for_single_summary.join(contexts)]
+ encoded_input = self.summarizer.tokenizer(contexts, verbose=False)
+ for input_id in encoded_input['input_ids']:
+ tokens_count: int = len(input_id)
+ if tokens_count > self.summarizer.tokenizer.model_max_length:
+ truncation_warning = "One or more of your input document texts is longer than the specified " \
+ f"maximum sequence length for this summarizer model. "\
+ f"Generating summary from first {self.summarizer.tokenizer.model_max_length}"\
+ f" tokens."
+ if truncation_warning not in self.print_log:
+ logger.warning(truncation_warning)
+ self.print_log.add(truncation_warning)
+
summaries = self.summarizer(
contexts,
min_length=self.min_length,
max_length=self.max_length,
return_text=True,
clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,
+ truncation=True,
)
result: List[Document] = []
| {"golden_diff": "diff --git a/haystack/summarizer/transformers.py b/haystack/summarizer/transformers.py\n--- a/haystack/summarizer/transformers.py\n+++ b/haystack/summarizer/transformers.py\n@@ -1,5 +1,5 @@\n import logging\n-from typing import List, Optional\n+from typing import List, Optional, Set\n \n from transformers import pipeline\n from transformers.models.auto.modeling_auto import AutoModelForSeq2SeqLM\n@@ -99,8 +99,10 @@\n self.clean_up_tokenization_spaces = clean_up_tokenization_spaces\n self.separator_for_single_summary = separator_for_single_summary\n self.generate_single_summary = generate_single_summary\n+ self.print_log: Set[str] = set()\n \n- def predict(self, documents: List[Document], generate_single_summary: Optional[bool] = None) -> List[Document]:\n+ def predict(self, documents: List[Document], generate_single_summary: Optional[bool] = None,\n+ truncation: bool = True) -> List[Document]:\n \"\"\"\n Produce the summarization from the supplied documents.\n These document can for example be retrieved via the Retriever.\n@@ -110,6 +112,7 @@\n If set to \"True\", all docs will be joined to a single string that will then\n be summarized.\n Important: The summary will depend on the order of the supplied documents!\n+ :param truncation: Truncate to a maximum length accepted by the model\n :return: List of Documents, where Document.text contains the summarization and Document.meta[\"context\"]\n the original, not summarized text\n \"\"\"\n@@ -130,12 +133,25 @@\n # Different order of same documents produce different summary.\n contexts = [self.separator_for_single_summary.join(contexts)]\n \n+ encoded_input = self.summarizer.tokenizer(contexts, verbose=False)\n+ for input_id in encoded_input['input_ids']:\n+ tokens_count: int = len(input_id)\n+ if tokens_count > self.summarizer.tokenizer.model_max_length:\n+ truncation_warning = \"One or more of your input document texts is longer than the specified \" \\\n+ f\"maximum sequence length for this summarizer model. \"\\\n+ f\"Generating summary from first {self.summarizer.tokenizer.model_max_length}\"\\\n+ f\" tokens.\"\n+ if truncation_warning not in self.print_log:\n+ logger.warning(truncation_warning)\n+ self.print_log.add(truncation_warning)\n+\n summaries = self.summarizer(\n contexts,\n min_length=self.min_length,\n max_length=self.max_length,\n return_text=True,\n clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,\n+ truncation=True,\n )\n \n result: List[Document] = []\n", "issue": "TransformersSummarizer crashes if given long input\nIf the TransformersSummarizer is given an input that is longer than the model's max_seq_len, an error will be thrown. Instead, I think a warning message should be printed to console and the input text should be truncated so that the Node can still run.\nTransformersSummarizer crashes if given long input\nIf the TransformersSummarizer is given an input that is longer than the model's max_seq_len, an error will be thrown. 
Instead, I think a warning message should be printed to console and the input text should be truncated so that the Node can still run.\n", "before_files": [{"content": "import logging\nfrom typing import List, Optional\n\nfrom transformers import pipeline\nfrom transformers.models.auto.modeling_auto import AutoModelForSeq2SeqLM\n\nfrom haystack import Document\nfrom haystack.summarizer.base import BaseSummarizer\n\nlogger = logging.getLogger(__name__)\n\n\nclass TransformersSummarizer(BaseSummarizer):\n \"\"\"\n Transformer based model to summarize the documents using the HuggingFace's transformers framework\n\n You can use any model that has been fine-tuned on a summarization task. For example:\n '`bart-large-cnn`', '`t5-small`', '`t5-base`', '`t5-large`', '`t5-3b`', '`t5-11b`'.\n See the up-to-date list of available models on\n `huggingface.co/models <https://huggingface.co/models?filter=summarization>`__\n\n **Example**\n\n ```python\n | docs = [Document(text=\"PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions.\"\n | \"The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by\"\n | \"the shutoffs which were expected to last through at least midday tomorrow.\")]\n |\n | # Summarize\n | summary = summarizer.predict(\n | documents=docs,\n | generate_single_summary=True\n | )\n |\n | # Show results (List of Documents, containing summary and original text)\n | print(summary)\n |\n | [\n | {\n | \"text\": \"California's largest electricity provider has turned off power to hundreds of thousands of customers.\",\n | ...\n | \"meta\": {\n | \"context\": \"PGE stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. ...\"\n | },\n | ...\n | },\n ```\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: str = \"google/pegasus-xsum\",\n model_version: Optional[str] = None,\n tokenizer: Optional[str] = None,\n max_length: int = 200,\n min_length: int = 5,\n use_gpu: int = 0,\n clean_up_tokenization_spaces: bool = True,\n separator_for_single_summary: str = \" \",\n generate_single_summary: bool = False,\n ):\n \"\"\"\n Load a Summarization model from Transformers.\n See the up-to-date list of available models at\n https://huggingface.co/models?filter=summarization\n\n :param model_name_or_path: Directory of a saved model or the name of a public model e.g.\n 'facebook/rag-token-nq', 'facebook/rag-sequence-nq'.\n See https://huggingface.co/models?filter=summarization for full list of available models.\n :param model_version: The version of model to use from the HuggingFace model hub. Can be tag name, branch name, or commit hash.\n :param tokenizer: Name of the tokenizer (usually the same as model)\n :param max_length: Maximum length of summarized text\n :param min_length: Minimum length of summarized text\n :param use_gpu: If < 0, then use cpu. If >= 0, this is the ordinal of the gpu to use\n :param clean_up_tokenization_spaces: Whether or not to clean up the potential extra spaces in the text output\n :param separator_for_single_summary: If `generate_single_summary=True` in `predict()`, we need to join all docs\n into a single text. 
This separator appears between those subsequent docs.\n :param generate_single_summary: Whether to generate a single summary for all documents or one summary per document.\n If set to \"True\", all docs will be joined to a single string that will then\n be summarized.\n Important: The summary will depend on the order of the supplied documents!\n \"\"\"\n\n # save init parameters to enable export of component config as YAML\n self.set_config(\n model_name_or_path=model_name_or_path, model_version=model_version, tokenizer=tokenizer,\n max_length=max_length, min_length=min_length, use_gpu=use_gpu,\n clean_up_tokenization_spaces=clean_up_tokenization_spaces,\n separator_for_single_summary=separator_for_single_summary, generate_single_summary=generate_single_summary,\n )\n\n # TODO AutoModelForSeq2SeqLM is only necessary with transformers==4.1.1, with newer versions use the pipeline directly\n if tokenizer is None:\n tokenizer = model_name_or_path\n model = AutoModelForSeq2SeqLM.from_pretrained(pretrained_model_name_or_path=model_name_or_path, revision=model_version)\n self.summarizer = pipeline(\"summarization\", model=model, tokenizer=tokenizer, device=use_gpu)\n self.max_length = max_length\n self.min_length = min_length\n self.clean_up_tokenization_spaces = clean_up_tokenization_spaces\n self.separator_for_single_summary = separator_for_single_summary\n self.generate_single_summary = generate_single_summary\n\n def predict(self, documents: List[Document], generate_single_summary: Optional[bool] = None) -> List[Document]:\n \"\"\"\n Produce the summarization from the supplied documents.\n These document can for example be retrieved via the Retriever.\n\n :param documents: Related documents (e.g. coming from a retriever) that the answer shall be conditioned on.\n :param generate_single_summary: Whether to generate a single summary for all documents or one summary per document.\n If set to \"True\", all docs will be joined to a single string that will then\n be summarized.\n Important: The summary will depend on the order of the supplied documents!\n :return: List of Documents, where Document.text contains the summarization and Document.meta[\"context\"]\n the original, not summarized text\n \"\"\"\n\n if self.min_length > self.max_length:\n raise AttributeError(\"min_length cannot be greater than max_length\")\n\n if len(documents) == 0:\n raise AttributeError(\"Summarizer needs at least one document to produce a summary.\")\n\n if generate_single_summary is None:\n generate_single_summary = self.generate_single_summary\n\n contexts: List[str] = [doc.text for doc in documents]\n\n if generate_single_summary:\n # Documents order is very important to produce summary.\n # Different order of same documents produce different summary.\n contexts = [self.separator_for_single_summary.join(contexts)]\n\n summaries = self.summarizer(\n contexts,\n min_length=self.min_length,\n max_length=self.max_length,\n return_text=True,\n clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,\n )\n\n result: List[Document] = []\n\n for context, summarized_answer in zip(contexts, summaries):\n cur_doc = Document(text=summarized_answer['summary_text'], meta={\"context\": context})\n result.append(cur_doc)\n\n return result\n", "path": "haystack/summarizer/transformers.py"}]} | 2,511 | 615 |
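The diff above checks tokenized input length against the model limit, warns once per distinct message, and passes `truncation=True` to the pipeline. A small sketch of the warn-once bookkeeping in isolation (the tokenizer interface and the `model_max_length` value are assumptions, following the Hugging Face convention referenced in the patch):

```python
import logging

logger = logging.getLogger(__name__)
_printed_warnings = set()  # mirrors the patch's self.print_log


def warn_if_too_long(token_counts, model_max_length=1024):
    for count in token_counts:
        if count > model_max_length:
            msg = (
                f"Input is longer than the model's maximum sequence length; "
                f"generating the summary from the first {model_max_length} tokens."
            )
            if msg not in _printed_warnings:  # emit each distinct warning only once
                logger.warning(msg)
                _printed_warnings.add(msg)


warn_if_too_long([1500, 300])  # warns once, even on repeated calls with long inputs
```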
gh_patches_debug_3482 | rasdani/github-patches | git_diff | docker__docker-py-1528 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot wait blocking generator output from events()
Since upgrading to `docker` 2.2.0, calling `events()` through the same API client times out, and I get a `UnixHTTPConnectionPool(host='localhost', port=None): Read timed out.` exception.
In my use case a default timeout is reasonable for `containers()` and the other client APIs, but `events()` returns a blocking stream, so it should have a separate timeout setting for users.
</issue>
<code>
[start of docker/api/daemon.py]
1 import os
2 import warnings
3 from datetime import datetime
4
5 from .. import auth, utils
6 from ..constants import INSECURE_REGISTRY_DEPRECATION_WARNING
7
8
9 class DaemonApiMixin(object):
10 @utils.minimum_version('1.25')
11 def df(self):
12 """
13 Get data usage information.
14
15 Returns:
16 (dict): A dictionary representing different resource categories
17 and their respective data usage.
18
19 Raises:
20 :py:class:`docker.errors.APIError`
21 If the server returns an error.
22 """
23 url = self._url('/system/df')
24 return self._result(self._get(url), True)
25
26 def events(self, since=None, until=None, filters=None, decode=None):
27 """
28 Get real-time events from the server. Similar to the ``docker events``
29 command.
30
31 Args:
32 since (UTC datetime or int): Get events from this point
33 until (UTC datetime or int): Get events until this point
34 filters (dict): Filter the events by event time, container or image
35 decode (bool): If set to true, stream will be decoded into dicts on
36 the fly. False by default.
37
38 Returns:
39 (generator): A blocking generator you can iterate over to retrieve
40 events as they happen.
41
42 Raises:
43 :py:class:`docker.errors.APIError`
44 If the server returns an error.
45
46 Example:
47
48 >>> for event in client.events()
49 ... print event
50 {u'from': u'image/with:tag',
51 u'id': u'container-id',
52 u'status': u'start',
53 u'time': 1423339459}
54 ...
55 """
56
57 if isinstance(since, datetime):
58 since = utils.datetime_to_timestamp(since)
59
60 if isinstance(until, datetime):
61 until = utils.datetime_to_timestamp(until)
62
63 if filters:
64 filters = utils.convert_filters(filters)
65
66 params = {
67 'since': since,
68 'until': until,
69 'filters': filters
70 }
71
72 return self._stream_helper(
73 self._get(self._url('/events'), params=params, stream=True),
74 decode=decode
75 )
76
77 def info(self):
78 """
79 Display system-wide information. Identical to the ``docker info``
80 command.
81
82 Returns:
83 (dict): The info as a dict
84
85 Raises:
86 :py:class:`docker.errors.APIError`
87 If the server returns an error.
88 """
89 return self._result(self._get(self._url("/info")), True)
90
91 def login(self, username, password=None, email=None, registry=None,
92 reauth=False, insecure_registry=False, dockercfg_path=None):
93 """
94 Authenticate with a registry. Similar to the ``docker login`` command.
95
96 Args:
97 username (str): The registry username
98 password (str): The plaintext password
99 email (str): The email for the registry account
100 registry (str): URL to the registry. E.g.
101 ``https://index.docker.io/v1/``
102 reauth (bool): Whether refresh existing authentication on the
103 Docker server.
104 dockercfg_path (str): Use a custom path for the ``.dockercfg`` file
105 (default ``$HOME/.dockercfg``)
106
107 Returns:
108 (dict): The response from the login request
109
110 Raises:
111 :py:class:`docker.errors.APIError`
112 If the server returns an error.
113 """
114 if insecure_registry:
115 warnings.warn(
116 INSECURE_REGISTRY_DEPRECATION_WARNING.format('login()'),
117 DeprecationWarning
118 )
119
120 # If we don't have any auth data so far, try reloading the config file
121 # one more time in case anything showed up in there.
122 # If dockercfg_path is passed check to see if the config file exists,
123 # if so load that config.
124 if dockercfg_path and os.path.exists(dockercfg_path):
125 self._auth_configs = auth.load_config(dockercfg_path)
126 elif not self._auth_configs:
127 self._auth_configs = auth.load_config()
128
129 authcfg = auth.resolve_authconfig(self._auth_configs, registry)
130 # If we found an existing auth config for this registry and username
131 # combination, we can return it immediately unless reauth is requested.
132 if authcfg and authcfg.get('username', None) == username \
133 and not reauth:
134 return authcfg
135
136 req_data = {
137 'username': username,
138 'password': password,
139 'email': email,
140 'serveraddress': registry,
141 }
142
143 response = self._post_json(self._url('/auth'), data=req_data)
144 if response.status_code == 200:
145 self._auth_configs[registry or auth.INDEX_NAME] = req_data
146 return self._result(response, json=True)
147
148 def ping(self):
149 """
150 Checks the server is responsive. An exception will be raised if it
151 isn't responding.
152
153 Returns:
154 (bool) The response from the server.
155
156 Raises:
157 :py:class:`docker.errors.APIError`
158 If the server returns an error.
159 """
160 return self._result(self._get(self._url('/_ping'))) == 'OK'
161
162 def version(self, api_version=True):
163 """
164 Returns version information from the server. Similar to the ``docker
165 version`` command.
166
167 Returns:
168 (dict): The server version information
169
170 Raises:
171 :py:class:`docker.errors.APIError`
172 If the server returns an error.
173 """
174 url = self._url("/version", versioned_api=api_version)
175 return self._result(self._get(url), json=True)
176
[end of docker/api/daemon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/api/daemon.py b/docker/api/daemon.py
--- a/docker/api/daemon.py
+++ b/docker/api/daemon.py
@@ -68,9 +68,10 @@
'until': until,
'filters': filters
}
+ url = self._url('/events')
return self._stream_helper(
- self._get(self._url('/events'), params=params, stream=True),
+ self._get(url, params=params, stream=True, timeout=None),
decode=decode
)
| {"golden_diff": "diff --git a/docker/api/daemon.py b/docker/api/daemon.py\n--- a/docker/api/daemon.py\n+++ b/docker/api/daemon.py\n@@ -68,9 +68,10 @@\n 'until': until,\n 'filters': filters\n }\n+ url = self._url('/events')\n \n return self._stream_helper(\n- self._get(self._url('/events'), params=params, stream=True),\n+ self._get(url, params=params, stream=True, timeout=None),\n decode=decode\n )\n", "issue": "Cannot wait blocking generator output from events()\nSince upgrade to `docker 2.2.0`. \r\n\r\nAPI 'events()' using same API client time out. \r\nSo I got `UnixHTTPConnectionPool(host='localhost', port=None): Read timed out.` exception message.\r\n\r\nBut in my use case, `containers()` or other client APIs are reasonable to set an default timeout.\r\nBut `events()` should have another timeout setting for users.\n", "before_files": [{"content": "import os\nimport warnings\nfrom datetime import datetime\n\nfrom .. import auth, utils\nfrom ..constants import INSECURE_REGISTRY_DEPRECATION_WARNING\n\n\nclass DaemonApiMixin(object):\n @utils.minimum_version('1.25')\n def df(self):\n \"\"\"\n Get data usage information.\n\n Returns:\n (dict): A dictionary representing different resource categories\n and their respective data usage.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n url = self._url('/system/df')\n return self._result(self._get(url), True)\n\n def events(self, since=None, until=None, filters=None, decode=None):\n \"\"\"\n Get real-time events from the server. Similar to the ``docker events``\n command.\n\n Args:\n since (UTC datetime or int): Get events from this point\n until (UTC datetime or int): Get events until this point\n filters (dict): Filter the events by event time, container or image\n decode (bool): If set to true, stream will be decoded into dicts on\n the fly. False by default.\n\n Returns:\n (generator): A blocking generator you can iterate over to retrieve\n events as they happen.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> for event in client.events()\n ... print event\n {u'from': u'image/with:tag',\n u'id': u'container-id',\n u'status': u'start',\n u'time': 1423339459}\n ...\n \"\"\"\n\n if isinstance(since, datetime):\n since = utils.datetime_to_timestamp(since)\n\n if isinstance(until, datetime):\n until = utils.datetime_to_timestamp(until)\n\n if filters:\n filters = utils.convert_filters(filters)\n\n params = {\n 'since': since,\n 'until': until,\n 'filters': filters\n }\n\n return self._stream_helper(\n self._get(self._url('/events'), params=params, stream=True),\n decode=decode\n )\n\n def info(self):\n \"\"\"\n Display system-wide information. Identical to the ``docker info``\n command.\n\n Returns:\n (dict): The info as a dict\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self._result(self._get(self._url(\"/info\")), True)\n\n def login(self, username, password=None, email=None, registry=None,\n reauth=False, insecure_registry=False, dockercfg_path=None):\n \"\"\"\n Authenticate with a registry. Similar to the ``docker login`` command.\n\n Args:\n username (str): The registry username\n password (str): The plaintext password\n email (str): The email for the registry account\n registry (str): URL to the registry. 
E.g.\n ``https://index.docker.io/v1/``\n reauth (bool): Whether refresh existing authentication on the\n Docker server.\n dockercfg_path (str): Use a custom path for the ``.dockercfg`` file\n (default ``$HOME/.dockercfg``)\n\n Returns:\n (dict): The response from the login request\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n if insecure_registry:\n warnings.warn(\n INSECURE_REGISTRY_DEPRECATION_WARNING.format('login()'),\n DeprecationWarning\n )\n\n # If we don't have any auth data so far, try reloading the config file\n # one more time in case anything showed up in there.\n # If dockercfg_path is passed check to see if the config file exists,\n # if so load that config.\n if dockercfg_path and os.path.exists(dockercfg_path):\n self._auth_configs = auth.load_config(dockercfg_path)\n elif not self._auth_configs:\n self._auth_configs = auth.load_config()\n\n authcfg = auth.resolve_authconfig(self._auth_configs, registry)\n # If we found an existing auth config for this registry and username\n # combination, we can return it immediately unless reauth is requested.\n if authcfg and authcfg.get('username', None) == username \\\n and not reauth:\n return authcfg\n\n req_data = {\n 'username': username,\n 'password': password,\n 'email': email,\n 'serveraddress': registry,\n }\n\n response = self._post_json(self._url('/auth'), data=req_data)\n if response.status_code == 200:\n self._auth_configs[registry or auth.INDEX_NAME] = req_data\n return self._result(response, json=True)\n\n def ping(self):\n \"\"\"\n Checks the server is responsive. An exception will be raised if it\n isn't responding.\n\n Returns:\n (bool) The response from the server.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self._result(self._get(self._url('/_ping'))) == 'OK'\n\n def version(self, api_version=True):\n \"\"\"\n Returns version information from the server. Similar to the ``docker\n version`` command.\n\n Returns:\n (dict): The server version information\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n url = self._url(\"/version\", versioned_api=api_version)\n return self._result(self._get(url), json=True)\n", "path": "docker/api/daemon.py"}]} | 2,274 | 117 |
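The one-line fix above passes `timeout=None` for the streaming `/events` request so the read can block indefinitely, while other calls keep the client's default timeout. A sketch of that per-call override using `requests` directly (the Docker client wraps a `requests` session, but the wrapper internals are not shown here and are assumed):

```python
import requests

DEFAULT_TIMEOUT = 60  # seconds, applied to ordinary, non-streaming calls
session = requests.Session()


def api_get(url, stream=False, **kwargs):
    # Streaming endpoints such as /events may legitimately stay silent for a
    # long time, so they get no read timeout; everything else keeps the default.
    timeout = None if stream else DEFAULT_TIMEOUT
    return session.get(url, stream=stream, timeout=timeout, **kwargs)
```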
gh_patches_debug_52919 | rasdani/github-patches | git_diff | great-expectations__great_expectations-3469 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
</issue>
<code>
[start of great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py]
1 import logging
2 from functools import reduce
3
4 from great_expectations.execution_engine import (
5 PandasExecutionEngine,
6 SparkDFExecutionEngine,
7 SqlAlchemyExecutionEngine,
8 )
9 from great_expectations.expectations.metrics.import_manager import F, sa
10 from great_expectations.expectations.metrics.map_metric_provider import (
11 MulticolumnMapMetricProvider,
12 multicolumn_condition_partial,
13 )
14
15 logger = logging.getLogger(__name__)
16
17
18 class SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):
19 condition_metric_name = "select_column_values.unique.within_record"
20 condition_domain_keys = (
21 "batch_id",
22 "table",
23 "column_list",
24 "row_condition",
25 "condition_parser",
26 "ignore_row_if",
27 )
28
29 @multicolumn_condition_partial(engine=PandasExecutionEngine)
30 def _pandas(cls, column_list, **kwargs):
31 num_columns = len(column_list.columns)
32 row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns
33 return row_wise_cond
34
35 @multicolumn_condition_partial(engine=SqlAlchemyExecutionEngine)
36 def _sqlalchemy(cls, column_list, **kwargs):
37 """
38 The present approach relies on an inefficient query condition construction implementation, whose computational
39 cost is O(num_columns^2). However, until a more efficient implementation compatible with SQLAlchemy is
40 available, this is the only feasible mechanism under the current architecture, where map metric providers must
41 return a condition. Nevertheless, SQL query length limit is 1GB (sufficient for most practical scenarios).
42 """
43 num_columns = len(column_list)
44
45 # An arbitrary "num_columns" value used for issuing an explanatory message as a warning.
46 if num_columns > 100:
47 logger.warning(
48 f"""Batch data with {num_columns} columns is detected. Computing the "{cls.condition_metric_name}" \
49 metric for wide tables using SQLAlchemy leads to long WHERE clauses for the underlying database engine to process.
50 """
51 )
52
53 conditions = sa.or_(
54 *(
55 sa.or_(
56 column_list[idx_src] == column_list[idx_dest],
57 sa.and_(
58 column_list[idx_src] == None, column_list[idx_dest] == None
59 ),
60 )
61 for idx_src in range(num_columns - 1)
62 for idx_dest in range(idx_src + 1, num_columns)
63 )
64 )
65 row_wise_cond = sa.not_(sa.or_(conditions))
66 return row_wise_cond
67
68 @multicolumn_condition_partial(engine=SparkDFExecutionEngine)
69 def _spark(cls, column_list, **kwargs):
70 column_names = column_list.columns
71 num_columns = len(column_names)
72
73 conditions = []
74 for idx_src in range(num_columns - 1):
75 for idx_dest in range(idx_src + 1, num_columns):
76 conditions.append(
77 F.col(column_names[idx_src]).eqNullSafe(
78 F.col(column_names[idx_dest])
79 )
80 )
81
82 row_wise_cond = ~reduce(lambda a, b: a | b, conditions)
83 return row_wise_cond
84
[end of great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py
--- a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py
+++ b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py
@@ -62,7 +62,7 @@
for idx_dest in range(idx_src + 1, num_columns)
)
)
- row_wise_cond = sa.not_(sa.or_(conditions))
+ row_wise_cond = sa.not_(conditions)
return row_wise_cond
@multicolumn_condition_partial(engine=SparkDFExecutionEngine)
| {"golden_diff": "diff --git a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py\n--- a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py\n+++ b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py\n@@ -62,7 +62,7 @@\n for idx_dest in range(idx_src + 1, num_columns)\n )\n )\n- row_wise_cond = sa.not_(sa.or_(conditions))\n+ row_wise_cond = sa.not_(conditions)\n return row_wise_cond\n \n @multicolumn_condition_partial(engine=SparkDFExecutionEngine)\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import logging\nfrom functools import reduce\n\nfrom great_expectations.execution_engine import (\n PandasExecutionEngine,\n SparkDFExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.metrics.import_manager import F, sa\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n MulticolumnMapMetricProvider,\n multicolumn_condition_partial,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):\n condition_metric_name = \"select_column_values.unique.within_record\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_list\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n\n @multicolumn_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_list, **kwargs):\n num_columns = len(column_list.columns)\n row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns\n return row_wise_cond\n\n @multicolumn_condition_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column_list, **kwargs):\n \"\"\"\n The present approach relies on an inefficient query condition construction implementation, whose computational\n cost is O(num_columns^2). However, until a more efficient implementation compatible with SQLAlchemy is\n available, this is the only feasible mechanism under the current architecture, where map metric providers must\n return a condition. Nevertheless, SQL query length limit is 1GB (sufficient for most practical scenarios).\n \"\"\"\n num_columns = len(column_list)\n\n # An arbitrary \"num_columns\" value used for issuing an explanatory message as a warning.\n if num_columns > 100:\n logger.warning(\n f\"\"\"Batch data with {num_columns} columns is detected. 
Computing the \"{cls.condition_metric_name}\" \\\nmetric for wide tables using SQLAlchemy leads to long WHERE clauses for the underlying database engine to process.\n\"\"\"\n )\n\n conditions = sa.or_(\n *(\n sa.or_(\n column_list[idx_src] == column_list[idx_dest],\n sa.and_(\n column_list[idx_src] == None, column_list[idx_dest] == None\n ),\n )\n for idx_src in range(num_columns - 1)\n for idx_dest in range(idx_src + 1, num_columns)\n )\n )\n row_wise_cond = sa.not_(sa.or_(conditions))\n return row_wise_cond\n\n @multicolumn_condition_partial(engine=SparkDFExecutionEngine)\n def _spark(cls, column_list, **kwargs):\n column_names = column_list.columns\n num_columns = len(column_names)\n\n conditions = []\n for idx_src in range(num_columns - 1):\n for idx_dest in range(idx_src + 1, num_columns):\n conditions.append(\n F.col(column_names[idx_src]).eqNullSafe(\n F.col(column_names[idx_dest])\n )\n )\n\n row_wise_cond = ~reduce(lambda a, b: a | b, conditions)\n return row_wise_cond\n", "path": "great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py"}]} | 1,414 | 179 |
gh_patches_debug_43188 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-2357 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Axes zoom area does not resize in 0.12.4
### Short description
When a plot is first generated with a given size, say `width` and `height` in pixels, the entire axes areas on the left and bottom of the plot are zoomable. When the plot is resized so that it becomes larger (e.g. the window is grabbed along an edge or corner and expanded), only the top `height` portion of the y-axis or the left `width` portion of the x-axis retains the ability to zoom the plot using the scroll wheel. The axes area outside that region (i.e. the lower portion of the y-axis or the right portion of the x-axis) is not zoomable. If hovering over the plot itself, not on an axis, there is no issue with zooming. Reverting to 0.12.3 fixes the issue.
I was able to reproduce this in both custom plots and many of the pyqtgraph.example scripts.
### Tested environment(s)
* PyQtGraph version: 0.12.4
* Qt Python binding: PySide2 5.15.2.1 Qt 5.15.2
* Python version: 3.7
* NumPy version: 1.21.6
* Operating system: Windows 10 Enterprise 21H2
* Installation method: pip
</issue>
<code>
[start of pyqtgraph/graphicsItems/GraphicsWidget.py]
1 from ..Qt import QtGui, QtWidgets
2 from .GraphicsItem import GraphicsItem
3
4 __all__ = ['GraphicsWidget']
5
6 class GraphicsWidget(GraphicsItem, QtWidgets.QGraphicsWidget):
7
8 _qtBaseClass = QtWidgets.QGraphicsWidget
9 def __init__(self, *args, **kargs):
10 """
11 **Bases:** :class:`GraphicsItem <pyqtgraph.GraphicsItem>`, :class:`QtWidgets.QGraphicsWidget`
12
13 Extends QGraphicsWidget with several helpful methods and workarounds for PyQt bugs.
14 Most of the extra functionality is inherited from :class:`GraphicsItem <pyqtgraph.GraphicsItem>`.
15 """
16 QtWidgets.QGraphicsWidget.__init__(self, *args, **kargs)
17 GraphicsItem.__init__(self)
18
19 # cache bouding rect and geometry
20 self._boundingRectCache = self._previousGeometry = None
21 self._painterPathCache = None
22
23 ## done by GraphicsItem init
24 #GraphicsScene.registerObject(self) ## workaround for pyqt bug in graphicsscene.items()
25
26 # Removed due to https://bugreports.qt-project.org/browse/PYSIDE-86
27 #def itemChange(self, change, value):
28 ## BEWARE: Calling QGraphicsWidget.itemChange can lead to crashing!
29 ##ret = QtWidgets.QGraphicsWidget.itemChange(self, change, value) ## segv occurs here
30 ## The default behavior is just to return the value argument, so we'll do that
31 ## without calling the original method.
32 #ret = value
33 #if change in [self.ItemParentHasChanged, self.ItemSceneHasChanged]:
34 #self._updateView()
35 #return ret
36
37 def setFixedHeight(self, h):
38 self.setMaximumHeight(h)
39 self.setMinimumHeight(h)
40
41 def setFixedWidth(self, h):
42 self.setMaximumWidth(h)
43 self.setMinimumWidth(h)
44
45 def height(self):
46 return self.geometry().height()
47
48 def width(self):
49 return self.geometry().width()
50
51 def boundingRect(self):
52 geometry = self.geometry()
53 if geometry != self._previousGeometry:
54 self._painterPathCache = None
55
56 br = self.mapRectFromParent(geometry).normalized()
57 self._boundingRectCache = br
58 self._previousGeometry = geometry
59 else:
60 br = self._boundingRectCache
61
62 return br
63
64 def shape(self): ## No idea why this is necessary, but rotated items do not receive clicks otherwise.
65 p = self._painterPathCache
66 if p is None:
67 self._painterPathCache = p = QtGui.QPainterPath()
68 p.addRect(self.boundingRect())
69
70 return p
71
[end of pyqtgraph/graphicsItems/GraphicsWidget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyqtgraph/graphicsItems/GraphicsWidget.py b/pyqtgraph/graphicsItems/GraphicsWidget.py
--- a/pyqtgraph/graphicsItems/GraphicsWidget.py
+++ b/pyqtgraph/graphicsItems/GraphicsWidget.py
@@ -3,36 +3,43 @@
__all__ = ['GraphicsWidget']
+
class GraphicsWidget(GraphicsItem, QtWidgets.QGraphicsWidget):
_qtBaseClass = QtWidgets.QGraphicsWidget
- def __init__(self, *args, **kargs):
+
+ def __init__(self, *args, **kwargs):
"""
**Bases:** :class:`GraphicsItem <pyqtgraph.GraphicsItem>`, :class:`QtWidgets.QGraphicsWidget`
Extends QGraphicsWidget with several helpful methods and workarounds for PyQt bugs.
Most of the extra functionality is inherited from :class:`GraphicsItem <pyqtgraph.GraphicsItem>`.
"""
- QtWidgets.QGraphicsWidget.__init__(self, *args, **kargs)
+ QtWidgets.QGraphicsWidget.__init__(self, *args, **kwargs)
GraphicsItem.__init__(self)
- # cache bouding rect and geometry
+ # cache bounding rect and geometry
self._boundingRectCache = self._previousGeometry = None
self._painterPathCache = None
-
- ## done by GraphicsItem init
- #GraphicsScene.registerObject(self) ## workaround for pyqt bug in graphicsscene.items()
+ self.geometryChanged.connect(self._resetCachedProperties)
+
+ # done by GraphicsItem init
+ # GraphicsScene.registerObject(self) # workaround for pyqt bug in GraphicsScene.items()
# Removed due to https://bugreports.qt-project.org/browse/PYSIDE-86
- #def itemChange(self, change, value):
- ## BEWARE: Calling QGraphicsWidget.itemChange can lead to crashing!
- ##ret = QtWidgets.QGraphicsWidget.itemChange(self, change, value) ## segv occurs here
- ## The default behavior is just to return the value argument, so we'll do that
- ## without calling the original method.
- #ret = value
- #if change in [self.ItemParentHasChanged, self.ItemSceneHasChanged]:
- #self._updateView()
- #return ret
+ # def itemChange(self, change, value):
+ # # BEWARE: Calling QGraphicsWidget.itemChange can lead to crashing!
+ # # ret = QtWidgets.QGraphicsWidget.itemChange(self, change, value) # segv occurs here
+ # # The default behavior is just to return the value argument, so we'll do that
+ # # without calling the original method.
+ # ret = value
+ # if change in [self.ItemParentHasChanged, self.ItemSceneHasChanged]:
+ # self._updateView()
+ # return ret
+
+ def _resetCachedProperties(self):
+ self._boundingRectCache = self._previousGeometry = None
+ self._painterPathCache = None
def setFixedHeight(self, h):
self.setMaximumHeight(h)
@@ -41,10 +48,10 @@
def setFixedWidth(self, h):
self.setMaximumWidth(h)
self.setMinimumWidth(h)
-
+
def height(self):
return self.geometry().height()
-
+
def width(self):
return self.geometry().width()
@@ -52,19 +59,16 @@
geometry = self.geometry()
if geometry != self._previousGeometry:
self._painterPathCache = None
-
br = self.mapRectFromParent(geometry).normalized()
self._boundingRectCache = br
self._previousGeometry = geometry
else:
br = self._boundingRectCache
-
return br
- def shape(self): ## No idea why this is necessary, but rotated items do not receive clicks otherwise.
+ def shape(self):
p = self._painterPathCache
if p is None:
self._painterPathCache = p = QtGui.QPainterPath()
p.addRect(self.boundingRect())
-
return p
| {"golden_diff": "diff --git a/pyqtgraph/graphicsItems/GraphicsWidget.py b/pyqtgraph/graphicsItems/GraphicsWidget.py\n--- a/pyqtgraph/graphicsItems/GraphicsWidget.py\n+++ b/pyqtgraph/graphicsItems/GraphicsWidget.py\n@@ -3,36 +3,43 @@\n \n __all__ = ['GraphicsWidget']\n \n+\n class GraphicsWidget(GraphicsItem, QtWidgets.QGraphicsWidget):\n \n _qtBaseClass = QtWidgets.QGraphicsWidget\n- def __init__(self, *args, **kargs):\n+\n+ def __init__(self, *args, **kwargs):\n \"\"\"\n **Bases:** :class:`GraphicsItem <pyqtgraph.GraphicsItem>`, :class:`QtWidgets.QGraphicsWidget`\n \n Extends QGraphicsWidget with several helpful methods and workarounds for PyQt bugs. \n Most of the extra functionality is inherited from :class:`GraphicsItem <pyqtgraph.GraphicsItem>`.\n \"\"\"\n- QtWidgets.QGraphicsWidget.__init__(self, *args, **kargs)\n+ QtWidgets.QGraphicsWidget.__init__(self, *args, **kwargs)\n GraphicsItem.__init__(self)\n \n- # cache bouding rect and geometry\n+ # cache bounding rect and geometry\n self._boundingRectCache = self._previousGeometry = None\n self._painterPathCache = None\n- \n- ## done by GraphicsItem init\n- #GraphicsScene.registerObject(self) ## workaround for pyqt bug in graphicsscene.items()\n+ self.geometryChanged.connect(self._resetCachedProperties)\n+\n+ # done by GraphicsItem init\n+ # GraphicsScene.registerObject(self) # workaround for pyqt bug in GraphicsScene.items()\n \n # Removed due to https://bugreports.qt-project.org/browse/PYSIDE-86\n- #def itemChange(self, change, value):\n- ## BEWARE: Calling QGraphicsWidget.itemChange can lead to crashing!\n- ##ret = QtWidgets.QGraphicsWidget.itemChange(self, change, value) ## segv occurs here\n- ## The default behavior is just to return the value argument, so we'll do that\n- ## without calling the original method.\n- #ret = value\n- #if change in [self.ItemParentHasChanged, self.ItemSceneHasChanged]:\n- #self._updateView()\n- #return ret\n+ # def itemChange(self, change, value):\n+ # # BEWARE: Calling QGraphicsWidget.itemChange can lead to crashing!\n+ # # ret = QtWidgets.QGraphicsWidget.itemChange(self, change, value) # segv occurs here\n+ # # The default behavior is just to return the value argument, so we'll do that\n+ # # without calling the original method.\n+ # ret = value\n+ # if change in [self.ItemParentHasChanged, self.ItemSceneHasChanged]:\n+ # self._updateView()\n+ # return ret\n+\n+ def _resetCachedProperties(self):\n+ self._boundingRectCache = self._previousGeometry = None\n+ self._painterPathCache = None\n \n def setFixedHeight(self, h):\n self.setMaximumHeight(h)\n@@ -41,10 +48,10 @@\n def setFixedWidth(self, h):\n self.setMaximumWidth(h)\n self.setMinimumWidth(h)\n- \n+\n def height(self):\n return self.geometry().height()\n- \n+\n def width(self):\n return self.geometry().width()\n \n@@ -52,19 +59,16 @@\n geometry = self.geometry()\n if geometry != self._previousGeometry:\n self._painterPathCache = None\n- \n br = self.mapRectFromParent(geometry).normalized()\n self._boundingRectCache = br\n self._previousGeometry = geometry\n else:\n br = self._boundingRectCache\n-\n return br\n \n- def shape(self): ## No idea why this is necessary, but rotated items do not receive clicks otherwise.\n+ def shape(self):\n p = self._painterPathCache\n if p is None:\n self._painterPathCache = p = QtGui.QPainterPath()\n p.addRect(self.boundingRect())\n-\n return p\n", "issue": "Axes zoom area does not resize in 0.12.4\n### Short description\r\nWhen a plot is first generated with a given size, say `width ` and `height` in pixels, the 
entire axes areas on the left and bottom of the plot are zoomable. When the plot size is resized such that the plot is larger (e.g. the window is grabbed along an edge or corner and expanded), only the top `height` portion of the y-axes or the left `width` portion of the x-axis retain the ability to zoom the plot using the scroll wheel. The axes area outside (i.e. the lower portion of the y-axis or the right portion of the x-axis) are not zoomable. If hovering over the plot, not on an axes, there is no issue with zooming. Reverting to 0.12.3 fixes issue.\r\n\r\nI was able to reproduce this in both custom plots and many of the pyqtgraph.example scripts.\r\n\r\n### Tested environment(s)\r\n\r\n * PyQtGraph version: 0.12.4\r\n * Qt Python binding: PySide2 5.15.2.1 Qt 5.15.2\r\n * Python version: 3.7\r\n * NumPy version: 1.21.6\r\n * Operating system: Windows 10 Enterprise 21H2\r\n * Installation method: pip\r\n\n", "before_files": [{"content": "from ..Qt import QtGui, QtWidgets\nfrom .GraphicsItem import GraphicsItem\n\n__all__ = ['GraphicsWidget']\n\nclass GraphicsWidget(GraphicsItem, QtWidgets.QGraphicsWidget):\n \n _qtBaseClass = QtWidgets.QGraphicsWidget\n def __init__(self, *args, **kargs):\n \"\"\"\n **Bases:** :class:`GraphicsItem <pyqtgraph.GraphicsItem>`, :class:`QtWidgets.QGraphicsWidget`\n \n Extends QGraphicsWidget with several helpful methods and workarounds for PyQt bugs. \n Most of the extra functionality is inherited from :class:`GraphicsItem <pyqtgraph.GraphicsItem>`.\n \"\"\"\n QtWidgets.QGraphicsWidget.__init__(self, *args, **kargs)\n GraphicsItem.__init__(self)\n\n # cache bouding rect and geometry\n self._boundingRectCache = self._previousGeometry = None\n self._painterPathCache = None\n \n ## done by GraphicsItem init\n #GraphicsScene.registerObject(self) ## workaround for pyqt bug in graphicsscene.items()\n\n # Removed due to https://bugreports.qt-project.org/browse/PYSIDE-86\n #def itemChange(self, change, value):\n ## BEWARE: Calling QGraphicsWidget.itemChange can lead to crashing!\n ##ret = QtWidgets.QGraphicsWidget.itemChange(self, change, value) ## segv occurs here\n ## The default behavior is just to return the value argument, so we'll do that\n ## without calling the original method.\n #ret = value\n #if change in [self.ItemParentHasChanged, self.ItemSceneHasChanged]:\n #self._updateView()\n #return ret\n\n def setFixedHeight(self, h):\n self.setMaximumHeight(h)\n self.setMinimumHeight(h)\n\n def setFixedWidth(self, h):\n self.setMaximumWidth(h)\n self.setMinimumWidth(h)\n \n def height(self):\n return self.geometry().height()\n \n def width(self):\n return self.geometry().width()\n\n def boundingRect(self):\n geometry = self.geometry()\n if geometry != self._previousGeometry:\n self._painterPathCache = None\n \n br = self.mapRectFromParent(geometry).normalized()\n self._boundingRectCache = br\n self._previousGeometry = geometry\n else:\n br = self._boundingRectCache\n\n return br\n\n def shape(self): ## No idea why this is necessary, but rotated items do not receive clicks otherwise.\n p = self._painterPathCache\n if p is None:\n self._painterPathCache = p = QtGui.QPainterPath()\n p.addRect(self.boundingRect())\n\n return p\n", "path": "pyqtgraph/graphicsItems/GraphicsWidget.py"}]} | 1,541 | 914 |
gh_patches_debug_9524 | rasdani/github-patches | git_diff | ansible-collections__community.general-458 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rhsm_repository is really slow
##### SUMMARY
Using rhsm_repository is really slow, making it annoying to use a playbook that uses it.
(copied from https://github.com/ansible/ansible/issues/69722)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
rhsm_repository
##### ANSIBLE VERSION
```
ansible 2.9.9
config file = /root/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```
[defaults]
stdout_callback = yaml
force_handlers = True
```
##### OS / ENVIRONMENT
Ansible machine:
Target machine: RHEL 8
##### STEPS TO REPRODUCE
```yaml
- name: enable Red Hat CodeReady repo
rhsm_repository:
name: codeready-builder-for-rhel-8-x86_64-rpms
when: ansible_facts['distribution'] == "RedHat" and ansible_facts['distribution_major_version'] == "8"
```
##### EXPECTED RESULTS
The above task should complete in a short time, at least when the repository is already enabled.
##### ACTUAL RESULTS
You have time to get coffee whilst your playbook runs.
</issue>
<code>
[start of plugins/modules/packaging/os/rhsm_repository.py]
1 #!/usr/bin/python
2
3 # Copyright: (c) 2017, Giovanni Sciortino (@giovannisciortino)
4 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
5
6 from __future__ import absolute_import, division, print_function
7 __metaclass__ = type
8
9 DOCUMENTATION = '''
10 ---
11 module: rhsm_repository
12 short_description: Manage RHSM repositories using the subscription-manager command
13 description:
14 - Manage (Enable/Disable) RHSM repositories to the Red Hat Subscription
15 Management entitlement platform using the C(subscription-manager) command.
16 author: Giovanni Sciortino (@giovannisciortino)
17 notes:
18 - In order to manage RHSM repositories the system must be already registered
19 to RHSM manually or using the Ansible C(redhat_subscription) module.
20
21 requirements:
22 - subscription-manager
23 options:
24 state:
25 description:
26 - If state is equal to present or disabled, indicates the desired
27 repository state.
28 choices: [present, enabled, absent, disabled]
29 required: True
30 default: "present"
31 name:
32 description:
33 - The ID of repositories to enable.
34 - To operate on several repositories this can accept a comma separated
35 list or a YAML list.
36 required: True
37 purge:
38 description:
39 - Disable all currently enabled repositories that are not not specified in C(name).
40 Only set this to C(True) if passing in a list of repositories to the C(name) field.
41 Using this with C(loop) will most likely not have the desired result.
42 type: bool
43 default: False
44 '''
45
46 EXAMPLES = '''
47 - name: Enable a RHSM repository
48 rhsm_repository:
49 name: rhel-7-server-rpms
50
51 - name: Disable all RHSM repositories
52 rhsm_repository:
53 name: '*'
54 state: disabled
55
56 - name: Enable all repositories starting with rhel-6-server
57 rhsm_repository:
58 name: rhel-6-server*
59 state: enabled
60
61 - name: Disable all repositories except rhel-7-server-rpms
62 rhsm_repository:
63 name: rhel-7-server-rpms
64 purge: True
65 '''
66
67 RETURN = '''
68 repositories:
69 description:
70 - The list of RHSM repositories with their states.
71 - When this module is used to change the repository states, this list contains the updated states after the changes.
72 returned: success
73 type: list
74 '''
75
76 import re
77 import os
78 from fnmatch import fnmatch
79 from copy import deepcopy
80 from ansible.module_utils.basic import AnsibleModule
81
82
83 def run_subscription_manager(module, arguments):
84 # Execute subscription-manager with arguments and manage common errors
85 rhsm_bin = module.get_bin_path('subscription-manager')
86 if not rhsm_bin:
87 module.fail_json(msg='The executable file subscription-manager was not found in PATH')
88
89 lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
90 rc, out, err = module.run_command("%s %s" % (rhsm_bin, " ".join(arguments)), environ_update=lang_env)
91
92 if rc == 1 and (err == 'The password you typed is invalid.\nPlease try again.\n' or os.getuid() != 0):
93 module.fail_json(msg='The executable file subscription-manager must be run using root privileges')
94 elif rc == 0 and out == 'This system has no repositories available through subscriptions.\n':
95 module.fail_json(msg='This system has no repositories available through subscriptions')
96 elif rc == 1:
97 module.fail_json(msg='subscription-manager failed with the following error: %s' % err)
98 else:
99 return rc, out, err
100
101
102 def get_repository_list(module, list_parameter):
103 # Generate RHSM repository list and return a list of dict
104 if list_parameter == 'list_enabled':
105 rhsm_arguments = ['repos', '--list-enabled']
106 elif list_parameter == 'list_disabled':
107 rhsm_arguments = ['repos', '--list-disabled']
108 elif list_parameter == 'list':
109 rhsm_arguments = ['repos', '--list']
110 rc, out, err = run_subscription_manager(module, rhsm_arguments)
111
112 skip_lines = [
113 '+----------------------------------------------------------+',
114 ' Available Repositories in /etc/yum.repos.d/redhat.repo'
115 ]
116 repo_id_re = re.compile(r'Repo ID:\s+(.*)')
117 repo_name_re = re.compile(r'Repo Name:\s+(.*)')
118 repo_url_re = re.compile(r'Repo URL:\s+(.*)')
119 repo_enabled_re = re.compile(r'Enabled:\s+(.*)')
120
121 repo_id = ''
122 repo_name = ''
123 repo_url = ''
124 repo_enabled = ''
125
126 repo_result = []
127 for line in out.splitlines():
128 if line == '' or line in skip_lines:
129 continue
130
131 repo_id_match = repo_id_re.match(line)
132 if repo_id_match:
133 repo_id = repo_id_match.group(1)
134 continue
135
136 repo_name_match = repo_name_re.match(line)
137 if repo_name_match:
138 repo_name = repo_name_match.group(1)
139 continue
140
141 repo_url_match = repo_url_re.match(line)
142 if repo_url_match:
143 repo_url = repo_url_match.group(1)
144 continue
145
146 repo_enabled_match = repo_enabled_re.match(line)
147 if repo_enabled_match:
148 repo_enabled = repo_enabled_match.group(1)
149
150 repo = {
151 "id": repo_id,
152 "name": repo_name,
153 "url": repo_url,
154 "enabled": True if repo_enabled == '1' else False
155 }
156
157 repo_result.append(repo)
158
159 return repo_result
160
161
162 def repository_modify(module, state, name, purge=False):
163 name = set(name)
164 current_repo_list = get_repository_list(module, 'list')
165 updated_repo_list = deepcopy(current_repo_list)
166 matched_existing_repo = {}
167 for repoid in name:
168 matched_existing_repo[repoid] = []
169 for idx, repo in enumerate(current_repo_list):
170 if fnmatch(repo['id'], repoid):
171 matched_existing_repo[repoid].append(repo)
172 # Update current_repo_list to return it as result variable
173 updated_repo_list[idx]['enabled'] = True if state == 'enabled' else False
174
175 changed = False
176 results = []
177 diff_before = ""
178 diff_after = ""
179 rhsm_arguments = ['repos']
180
181 for repoid in matched_existing_repo:
182 if len(matched_existing_repo[repoid]) == 0:
183 results.append("%s is not a valid repository ID" % repoid)
184 module.fail_json(results=results, msg="%s is not a valid repository ID" % repoid)
185 for repo in matched_existing_repo[repoid]:
186 if state in ['disabled', 'absent']:
187 if repo['enabled']:
188 changed = True
189 diff_before += "Repository '%s' is enabled for this system\n" % repo['id']
190 diff_after += "Repository '%s' is disabled for this system\n" % repo['id']
191 results.append("Repository '%s' is disabled for this system" % repo['id'])
192 rhsm_arguments += ['--disable', repo['id']]
193 elif state in ['enabled', 'present']:
194 if not repo['enabled']:
195 changed = True
196 diff_before += "Repository '%s' is disabled for this system\n" % repo['id']
197 diff_after += "Repository '%s' is enabled for this system\n" % repo['id']
198 results.append("Repository '%s' is enabled for this system" % repo['id'])
199 rhsm_arguments += ['--enable', repo['id']]
200
201 # Disable all enabled repos on the system that are not in the task and not
202 # marked as disabled by the task
203 if purge:
204 enabled_repo_ids = set(repo['id'] for repo in updated_repo_list if repo['enabled'])
205 matched_repoids_set = set(matched_existing_repo.keys())
206 difference = enabled_repo_ids.difference(matched_repoids_set)
207 if len(difference) > 0:
208 for repoid in difference:
209 changed = True
210 diff_before.join("Repository '{repoid}'' is enabled for this system\n".format(repoid=repoid))
211 diff_after.join("Repository '{repoid}' is disabled for this system\n".format(repoid=repoid))
212 results.append("Repository '{repoid}' is disabled for this system".format(repoid=repoid))
213 rhsm_arguments.extend(['--disable', repoid])
214
215 diff = {'before': diff_before,
216 'after': diff_after,
217 'before_header': "RHSM repositories",
218 'after_header': "RHSM repositories"}
219
220 if not module.check_mode:
221 rc, out, err = run_subscription_manager(module, rhsm_arguments)
222 results = out.splitlines()
223 module.exit_json(results=results, changed=changed, repositories=updated_repo_list, diff=diff)
224
225
226 def main():
227 module = AnsibleModule(
228 argument_spec=dict(
229 name=dict(type='list', required=True),
230 state=dict(choices=['enabled', 'disabled', 'present', 'absent'], default='enabled'),
231 purge=dict(type='bool', default=False),
232 ),
233 supports_check_mode=True,
234 )
235 name = module.params['name']
236 state = module.params['state']
237 purge = module.params['purge']
238
239 repository_modify(module, state, name, purge)
240
241
242 if __name__ == '__main__':
243 main()
244
[end of plugins/modules/packaging/os/rhsm_repository.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/modules/packaging/os/rhsm_repository.py b/plugins/modules/packaging/os/rhsm_repository.py
--- a/plugins/modules/packaging/os/rhsm_repository.py
+++ b/plugins/modules/packaging/os/rhsm_repository.py
@@ -217,7 +217,7 @@
'before_header': "RHSM repositories",
'after_header': "RHSM repositories"}
- if not module.check_mode:
+ if not module.check_mode and changed:
rc, out, err = run_subscription_manager(module, rhsm_arguments)
results = out.splitlines()
module.exit_json(results=results, changed=changed, repositories=updated_repo_list, diff=diff)
| {"golden_diff": "diff --git a/plugins/modules/packaging/os/rhsm_repository.py b/plugins/modules/packaging/os/rhsm_repository.py\n--- a/plugins/modules/packaging/os/rhsm_repository.py\n+++ b/plugins/modules/packaging/os/rhsm_repository.py\n@@ -217,7 +217,7 @@\n 'before_header': \"RHSM repositories\",\n 'after_header': \"RHSM repositories\"}\n \n- if not module.check_mode:\n+ if not module.check_mode and changed:\n rc, out, err = run_subscription_manager(module, rhsm_arguments)\n results = out.splitlines()\n module.exit_json(results=results, changed=changed, repositories=updated_repo_list, diff=diff)\n", "issue": "rhsm_repository is really slow\n##### SUMMARY\r\nUsing rhsm_repository is really slow, making it annoying to use a playback that uses it.\r\n(copied from https://github.com/ansible/ansible/issues/69722)\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\nrhsm_repository\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.9.9\r\n config file = /root/ansible/ansible.cfg\r\n configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]\r\n```\r\n\r\n##### CONFIGURATION\r\n```\r\n[defaults]\r\nstdout_callback = yaml\r\nforce_handlers = True\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nAnsible machine:\r\nTarget machine: RHEL 8\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n```yaml\r\n- name: enable Red Hat CodeReady repo\r\n rhsm_repository:\r\n name: codeready-builder-for-rhel-8-x86_64-rpms\r\n when: ansible_facts['distribution'] == \"RedHat\" and ansible_facts['distribution_major_version'] == \"8\"\r\n```\r\n\r\n##### EXPECTED RESULTS\r\nThe above task should complete in a short time, at least when the repository is already enabled.\r\n\r\n\r\n##### ACTUAL RESULTS\r\nYou have time to get coffee whilst your playbook runs.\n", "before_files": [{"content": "#!/usr/bin/python\n\n# Copyright: (c) 2017, Giovanni Sciortino (@giovannisciortino)\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\nDOCUMENTATION = '''\n---\nmodule: rhsm_repository\nshort_description: Manage RHSM repositories using the subscription-manager command\ndescription:\n - Manage (Enable/Disable) RHSM repositories to the Red Hat Subscription\n Management entitlement platform using the C(subscription-manager) command.\nauthor: Giovanni Sciortino (@giovannisciortino)\nnotes:\n - In order to manage RHSM repositories the system must be already registered\n to RHSM manually or using the Ansible C(redhat_subscription) module.\n\nrequirements:\n - subscription-manager\noptions:\n state:\n description:\n - If state is equal to present or disabled, indicates the desired\n repository state.\n choices: [present, enabled, absent, disabled]\n required: True\n default: \"present\"\n name:\n description:\n - The ID of repositories to enable.\n - To operate on several repositories this can accept a comma separated\n list or a YAML list.\n required: True\n purge:\n description:\n - Disable all currently enabled repositories that are not not specified in C(name).\n Only set this to C(True) if passing in a list of repositories to the C(name) field.\n Using this with C(loop) will most likely not have the desired result.\n type: bool\n 
default: False\n'''\n\nEXAMPLES = '''\n- name: Enable a RHSM repository\n rhsm_repository:\n name: rhel-7-server-rpms\n\n- name: Disable all RHSM repositories\n rhsm_repository:\n name: '*'\n state: disabled\n\n- name: Enable all repositories starting with rhel-6-server\n rhsm_repository:\n name: rhel-6-server*\n state: enabled\n\n- name: Disable all repositories except rhel-7-server-rpms\n rhsm_repository:\n name: rhel-7-server-rpms\n purge: True\n'''\n\nRETURN = '''\nrepositories:\n description:\n - The list of RHSM repositories with their states.\n - When this module is used to change the repository states, this list contains the updated states after the changes.\n returned: success\n type: list\n'''\n\nimport re\nimport os\nfrom fnmatch import fnmatch\nfrom copy import deepcopy\nfrom ansible.module_utils.basic import AnsibleModule\n\n\ndef run_subscription_manager(module, arguments):\n # Execute subscription-manager with arguments and manage common errors\n rhsm_bin = module.get_bin_path('subscription-manager')\n if not rhsm_bin:\n module.fail_json(msg='The executable file subscription-manager was not found in PATH')\n\n lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')\n rc, out, err = module.run_command(\"%s %s\" % (rhsm_bin, \" \".join(arguments)), environ_update=lang_env)\n\n if rc == 1 and (err == 'The password you typed is invalid.\\nPlease try again.\\n' or os.getuid() != 0):\n module.fail_json(msg='The executable file subscription-manager must be run using root privileges')\n elif rc == 0 and out == 'This system has no repositories available through subscriptions.\\n':\n module.fail_json(msg='This system has no repositories available through subscriptions')\n elif rc == 1:\n module.fail_json(msg='subscription-manager failed with the following error: %s' % err)\n else:\n return rc, out, err\n\n\ndef get_repository_list(module, list_parameter):\n # Generate RHSM repository list and return a list of dict\n if list_parameter == 'list_enabled':\n rhsm_arguments = ['repos', '--list-enabled']\n elif list_parameter == 'list_disabled':\n rhsm_arguments = ['repos', '--list-disabled']\n elif list_parameter == 'list':\n rhsm_arguments = ['repos', '--list']\n rc, out, err = run_subscription_manager(module, rhsm_arguments)\n\n skip_lines = [\n '+----------------------------------------------------------+',\n ' Available Repositories in /etc/yum.repos.d/redhat.repo'\n ]\n repo_id_re = re.compile(r'Repo ID:\\s+(.*)')\n repo_name_re = re.compile(r'Repo Name:\\s+(.*)')\n repo_url_re = re.compile(r'Repo URL:\\s+(.*)')\n repo_enabled_re = re.compile(r'Enabled:\\s+(.*)')\n\n repo_id = ''\n repo_name = ''\n repo_url = ''\n repo_enabled = ''\n\n repo_result = []\n for line in out.splitlines():\n if line == '' or line in skip_lines:\n continue\n\n repo_id_match = repo_id_re.match(line)\n if repo_id_match:\n repo_id = repo_id_match.group(1)\n continue\n\n repo_name_match = repo_name_re.match(line)\n if repo_name_match:\n repo_name = repo_name_match.group(1)\n continue\n\n repo_url_match = repo_url_re.match(line)\n if repo_url_match:\n repo_url = repo_url_match.group(1)\n continue\n\n repo_enabled_match = repo_enabled_re.match(line)\n if repo_enabled_match:\n repo_enabled = repo_enabled_match.group(1)\n\n repo = {\n \"id\": repo_id,\n \"name\": repo_name,\n \"url\": repo_url,\n \"enabled\": True if repo_enabled == '1' else False\n }\n\n repo_result.append(repo)\n\n return repo_result\n\n\ndef repository_modify(module, state, name, purge=False):\n name = set(name)\n current_repo_list = 
get_repository_list(module, 'list')\n updated_repo_list = deepcopy(current_repo_list)\n matched_existing_repo = {}\n for repoid in name:\n matched_existing_repo[repoid] = []\n for idx, repo in enumerate(current_repo_list):\n if fnmatch(repo['id'], repoid):\n matched_existing_repo[repoid].append(repo)\n # Update current_repo_list to return it as result variable\n updated_repo_list[idx]['enabled'] = True if state == 'enabled' else False\n\n changed = False\n results = []\n diff_before = \"\"\n diff_after = \"\"\n rhsm_arguments = ['repos']\n\n for repoid in matched_existing_repo:\n if len(matched_existing_repo[repoid]) == 0:\n results.append(\"%s is not a valid repository ID\" % repoid)\n module.fail_json(results=results, msg=\"%s is not a valid repository ID\" % repoid)\n for repo in matched_existing_repo[repoid]:\n if state in ['disabled', 'absent']:\n if repo['enabled']:\n changed = True\n diff_before += \"Repository '%s' is enabled for this system\\n\" % repo['id']\n diff_after += \"Repository '%s' is disabled for this system\\n\" % repo['id']\n results.append(\"Repository '%s' is disabled for this system\" % repo['id'])\n rhsm_arguments += ['--disable', repo['id']]\n elif state in ['enabled', 'present']:\n if not repo['enabled']:\n changed = True\n diff_before += \"Repository '%s' is disabled for this system\\n\" % repo['id']\n diff_after += \"Repository '%s' is enabled for this system\\n\" % repo['id']\n results.append(\"Repository '%s' is enabled for this system\" % repo['id'])\n rhsm_arguments += ['--enable', repo['id']]\n\n # Disable all enabled repos on the system that are not in the task and not\n # marked as disabled by the task\n if purge:\n enabled_repo_ids = set(repo['id'] for repo in updated_repo_list if repo['enabled'])\n matched_repoids_set = set(matched_existing_repo.keys())\n difference = enabled_repo_ids.difference(matched_repoids_set)\n if len(difference) > 0:\n for repoid in difference:\n changed = True\n diff_before.join(\"Repository '{repoid}'' is enabled for this system\\n\".format(repoid=repoid))\n diff_after.join(\"Repository '{repoid}' is disabled for this system\\n\".format(repoid=repoid))\n results.append(\"Repository '{repoid}' is disabled for this system\".format(repoid=repoid))\n rhsm_arguments.extend(['--disable', repoid])\n\n diff = {'before': diff_before,\n 'after': diff_after,\n 'before_header': \"RHSM repositories\",\n 'after_header': \"RHSM repositories\"}\n\n if not module.check_mode:\n rc, out, err = run_subscription_manager(module, rhsm_arguments)\n results = out.splitlines()\n module.exit_json(results=results, changed=changed, repositories=updated_repo_list, diff=diff)\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n name=dict(type='list', required=True),\n state=dict(choices=['enabled', 'disabled', 'present', 'absent'], default='enabled'),\n purge=dict(type='bool', default=False),\n ),\n supports_check_mode=True,\n )\n name = module.params['name']\n state = module.params['state']\n purge = module.params['purge']\n\n repository_modify(module, state, name, purge)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/packaging/os/rhsm_repository.py"}]} | 3,592 | 154 |
gh_patches_debug_19989 | rasdani/github-patches | git_diff | AnalogJ__lexicon-276 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
api change of online.net broke lexicon plugin
Hi,
Blame them! I'm working on it...
</issue>
<code>
[start of lexicon/providers/online.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3
4 import json
5 import logging
6
7 import requests
8
9 from .base import Provider as BaseProvider
10
11 logger = logging.getLogger(__name__)
12
13
14 def ProviderParser(subparser):
15 subparser.add_argument("--auth-token", help="specify private api token")
16
17 def to_data(type, content):
18 if type == "TXT":
19 return '"{0}"'.format(content)
20 else:
21 return content
22
23 class Provider(BaseProvider):
24
25 def __init__(self, options, engine_overrides=None):
26 super(Provider, self).__init__(options, engine_overrides)
27 self.zone_name = 'Zone Automatic Lexicon '
28 self.domain_id = None
29 self.passive_zone = None
30 self.active_zone = None
31 self.api_endpoint = self.engine_overrides.get('api_endpoint', 'https://api.online.net/api/v1')
32
33 def authenticate(self):
34 payload = self._get('/domain/')
35 domain = self.options['domain']
36 did = None
37 for row in payload:
38 if row['name'] == domain:
39 did = row['id']
40 break
41
42 if did is None:
43 raise Exception('No domain found')
44
45 self.domain_id = did
46 self.init_zones()
47
48 def list_zones(self):
49 return self._get('/domain/{0}/version'.format(self.domain_id))
50
51 def init_zones(self):
52 # sets current zone version
53 zone_name_a = self.zone_name + 'A'
54 zone_name_b = self.zone_name + 'B'
55 active_row = None
56 passive_row = None
57 for row in self.list_zones():
58 if row['active'] == True:
59 active_row = row
60 elif row['name'] == zone_name_a or row['name'] == zone_name_b:
61 passive_row = row
62
63 if passive_row is None:
64 passive_row = self._post('/domain/{0}/version'.format(self.domain_id), {
65 'name': zone_name_b if active_row['name'] == zone_name_a else zone_name_a
66 })
67
68 self.active_zone = active_row['uuid_ref']
69 self.passive_zone = passive_row['uuid_ref']
70 self.update_passive_zone()
71
72
73 def update_passive_zone(self):
74 self._put(
75 '/domain/{0}/version/{1}/zone_from_bind'.format(
76 self.domain_id,
77 self.passive_zone
78 ),
79 self.get_bind_zone()
80 )
81
82 def get_bind_zone(self):
83 records = self.list_zone_records(self.active_zone)
84 # then convert records to bind format
85 bindStr = ''
86 for record in records:
87 bindStr = bindStr + '{0} {1} IN {2} {3}{4}\n'.format(
88 record['name'] or '@',
89 record['ttl'],
90 record['type'],
91 '{0} '.format(record['aux']) if 'aux' in record else '',
92 record['data'] or ''
93 )
94 return bindStr
95
96 def enable_zone(self):
97 zone = self.passive_zone
98 if zone is None:
99 raise Exception("Could not enable uninitialized passive_zone")
100 payload = self._patch('/domain/{0}/version/{1}/enable'.format(
101 self.domain_id,
102 zone
103 ))
104 self.passive_zone = self.active_zone
105 self.active_zone = zone
106 self.update_passive_zone()
107
108
109 # Create record. If record already exists with the same content, do nothing'
110 def create_record(self, type, name, content):
111 try:
112 record = self.find_record(type, name, content)
113 if record is not None:
114 return True
115
116 record = {
117 'name': self._fqdn_name(name),
118 'type': type,
119 'data': to_data(type, content),
120 'priority': self.options['priority'] or '',
121 'ttl': self.options['ttl'] or ''
122 }
123
124 payload = self._post(
125 '/domain/{0}/version/{1}/zone'.format(
126 self.domain_id,
127 self.passive_zone
128 ),
129 record
130 )
131 except Exception as e:
132 logger.debug(e)
133 return False
134
135 self.enable_zone()
136 logger.debug('create_record: %s', True)
137 return True
138
139 def find_zone_records(self, zone, type=None, name=None, content=None):
140 records = []
141 for record in self.list_zone_records(zone):
142 processed_record = {
143 'id': record['id'],
144 'type': record['type'],
145 'name': self._full_name(record['name']),
146 'ttl': record['ttl'],
147 'content': record['data'],
148 'priority': record['aux'] if 'aux' in record else ''
149 }
150 records.append(self._clean_TXT_record(processed_record))
151
152 if type:
153 records = [record for record in records if record['type'] == type]
154 if name:
155 fullName = self._full_name(name)
156 records = [record for record in records if record['name'] == fullName]
157 if content:
158 records = [record for record in records if record['content'] == content]
159
160 logger.debug('list_records: %s', records)
161 return records
162
163 def list_zone_records(self, zone_id):
164 return self._get('/domain/{0}/version/{1}/zone'.format(self.domain_id, zone_id))
165
166 def list_records(self, type=None, name=None, content=None):
167 return self.find_zone_records(self.passive_zone, type, name, content)
168
169 def find_record(self, type=None, name=None, content=None):
170 record = None
171 records = self.list_records(type, name, content)
172 if len(records) < 1:
173 return None
174 else:
175 return records[0]
176
177
178 # Create or update a record.
179 def update_record(self, id, type=None, name=None, content=None):
180 record = self.find_record(type, name)
181 if record is None:
182 logger.debug("cannot find record to update: %s %s %s", id, type, name)
183 return True
184 if type:
185 record['type'] = type
186 if name:
187 record['name'] = self._fqdn_name(name)
188 if content:
189 record['data'] = to_data(type, content)
190 if self.options.get('ttl'):
191 record['ttl'] = self.options.get('ttl')
192 # it is weird that 'aux' becomes 'priority' in online's api
193 if self.options['priority']:
194 record['priority'] = self.options['priority']
195
196 if id is None:
197 id = record['id']
198
199 record.pop('id')
200
201 try:
202 payload = self._patch('/domain/{0}/version/{1}/zone/{2}'.format(
203 self.domain_id,
204 self.passive_zone,
205 id
206 ), record)
207
208 except Exception as e:
209 logger.debug(e)
210 return False
211
212 self.enable_zone()
213 # If it didn't raise from the http status code, then we're good
214 logger.debug('update_record: %s', id)
215 return True
216
217 # Delete an existing record.
218 # If record does not exist, do nothing.
219 def delete_record(self, id=None, type=None, name=None, content=None):
220 records = self.list_records(type, name, content)
221 if len(records) == 0:
222 logger.debug("Cannot find records %s %s %s", type, name, content)
223 return False
224 logger.debug('delete_records: %s records found', len(records))
225 try:
226 for record in records:
227 payload = self._delete('/domain/{0}/version/{1}/zone/{2}'.format(
228 self.domain_id,
229 self.passive_zone,
230 record['id']
231 ))
232 except Exception as e:
233 logger.debug(e)
234 return False
235
236 self.enable_zone()
237 # is always True at this point, if a non 200 response is returned an error is raised.
238 logger.debug('delete_record: %s', True)
239 return True
240
241 def _patch(self, url='/', data=None, query_params=None):
242 return self._request('PATCH', url, data=data, query_params=query_params)
243
244 # Helpers
245 def _request(self, action='GET', url='/', data=None, query_params=None):
246 if query_params is None:
247 query_params = {}
248
249 headers = {
250 'Accept': 'application/json',
251 'Authorization': 'Bearer {0}'.format(self.options['auth_token'])
252 }
253 if data is not None:
254 if type(data) is str:
255 headers['Content-Type'] = 'text/plain';
256 else:
257 headers['Content-Type'] = 'application/json';
258 data = json.dumps(data)
259
260 r = requests.request(
261 action,
262 self.api_endpoint + url,
263 params=query_params,
264 data=data,
265 headers=headers
266 )
267 r.raise_for_status() # if the request fails for any reason, throw an error.
268
269 return r.text and r.json() or ''
270
[end of lexicon/providers/online.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lexicon/providers/online.py b/lexicon/providers/online.py
--- a/lexicon/providers/online.py
+++ b/lexicon/providers/online.py
@@ -25,24 +25,12 @@
def __init__(self, options, engine_overrides=None):
super(Provider, self).__init__(options, engine_overrides)
self.zone_name = 'Zone Automatic Lexicon '
- self.domain_id = None
self.passive_zone = None
self.active_zone = None
+ self.domain_id = self.options['domain']
self.api_endpoint = self.engine_overrides.get('api_endpoint', 'https://api.online.net/api/v1')
def authenticate(self):
- payload = self._get('/domain/')
- domain = self.options['domain']
- did = None
- for row in payload:
- if row['name'] == domain:
- did = row['id']
- break
-
- if did is None:
- raise Exception('No domain found')
-
- self.domain_id = did
self.init_zones()
def list_zones(self):
| {"golden_diff": "diff --git a/lexicon/providers/online.py b/lexicon/providers/online.py\n--- a/lexicon/providers/online.py\n+++ b/lexicon/providers/online.py\n@@ -25,24 +25,12 @@\n def __init__(self, options, engine_overrides=None):\n super(Provider, self).__init__(options, engine_overrides)\n self.zone_name = 'Zone Automatic Lexicon '\n- self.domain_id = None\n self.passive_zone = None\n self.active_zone = None\n+ self.domain_id = self.options['domain']\n self.api_endpoint = self.engine_overrides.get('api_endpoint', 'https://api.online.net/api/v1')\n \n def authenticate(self):\n- payload = self._get('/domain/')\n- domain = self.options['domain']\n- did = None\n- for row in payload:\n- if row['name'] == domain:\n- did = row['id']\n- break\n-\n- if did is None:\n- raise Exception('No domain found')\n-\n- self.domain_id = did\n self.init_zones()\n \n def list_zones(self):\n", "issue": "api change of online.net broke lexicon plugin\nHi,\r\n\r\nBlame them ! i'm working on it...\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\n\nimport json\nimport logging\n\nimport requests\n\nfrom .base import Provider as BaseProvider\n\nlogger = logging.getLogger(__name__)\n\n\ndef ProviderParser(subparser):\n subparser.add_argument(\"--auth-token\", help=\"specify private api token\")\n\ndef to_data(type, content):\n if type == \"TXT\":\n return '\"{0}\"'.format(content)\n else:\n return content\n\nclass Provider(BaseProvider):\n\n def __init__(self, options, engine_overrides=None):\n super(Provider, self).__init__(options, engine_overrides)\n self.zone_name = 'Zone Automatic Lexicon '\n self.domain_id = None\n self.passive_zone = None\n self.active_zone = None\n self.api_endpoint = self.engine_overrides.get('api_endpoint', 'https://api.online.net/api/v1')\n\n def authenticate(self):\n payload = self._get('/domain/')\n domain = self.options['domain']\n did = None\n for row in payload:\n if row['name'] == domain:\n did = row['id']\n break\n\n if did is None:\n raise Exception('No domain found')\n\n self.domain_id = did\n self.init_zones()\n\n def list_zones(self):\n return self._get('/domain/{0}/version'.format(self.domain_id))\n\n def init_zones(self):\n # sets current zone version\n zone_name_a = self.zone_name + 'A'\n zone_name_b = self.zone_name + 'B'\n active_row = None\n passive_row = None\n for row in self.list_zones():\n if row['active'] == True:\n active_row = row\n elif row['name'] == zone_name_a or row['name'] == zone_name_b:\n passive_row = row\n\n if passive_row is None:\n passive_row = self._post('/domain/{0}/version'.format(self.domain_id), {\n 'name': zone_name_b if active_row['name'] == zone_name_a else zone_name_a\n })\n\n self.active_zone = active_row['uuid_ref']\n self.passive_zone = passive_row['uuid_ref']\n self.update_passive_zone()\n\n\n def update_passive_zone(self):\n self._put(\n '/domain/{0}/version/{1}/zone_from_bind'.format(\n self.domain_id,\n self.passive_zone\n ),\n self.get_bind_zone()\n )\n\n def get_bind_zone(self):\n records = self.list_zone_records(self.active_zone)\n # then convert records to bind format\n bindStr = ''\n for record in records:\n bindStr = bindStr + '{0} {1} IN {2} {3}{4}\\n'.format(\n record['name'] or '@',\n record['ttl'],\n record['type'],\n '{0} '.format(record['aux']) if 'aux' in record else '',\n record['data'] or ''\n )\n return bindStr\n\n def enable_zone(self):\n zone = self.passive_zone\n if zone is None:\n raise Exception(\"Could not enable uninitialized passive_zone\")\n 
payload = self._patch('/domain/{0}/version/{1}/enable'.format(\n self.domain_id,\n zone\n ))\n self.passive_zone = self.active_zone\n self.active_zone = zone\n self.update_passive_zone()\n\n\n # Create record. If record already exists with the same content, do nothing'\n def create_record(self, type, name, content):\n try:\n record = self.find_record(type, name, content)\n if record is not None:\n return True\n\n record = {\n 'name': self._fqdn_name(name),\n 'type': type,\n 'data': to_data(type, content),\n 'priority': self.options['priority'] or '',\n 'ttl': self.options['ttl'] or ''\n }\n\n payload = self._post(\n '/domain/{0}/version/{1}/zone'.format(\n self.domain_id,\n self.passive_zone\n ),\n record\n )\n except Exception as e:\n logger.debug(e)\n return False\n\n self.enable_zone()\n logger.debug('create_record: %s', True)\n return True\n\n def find_zone_records(self, zone, type=None, name=None, content=None):\n records = []\n for record in self.list_zone_records(zone):\n processed_record = {\n 'id': record['id'],\n 'type': record['type'],\n 'name': self._full_name(record['name']),\n 'ttl': record['ttl'],\n 'content': record['data'],\n 'priority': record['aux'] if 'aux' in record else ''\n }\n records.append(self._clean_TXT_record(processed_record))\n\n if type:\n records = [record for record in records if record['type'] == type]\n if name:\n fullName = self._full_name(name)\n records = [record for record in records if record['name'] == fullName]\n if content:\n records = [record for record in records if record['content'] == content]\n\n logger.debug('list_records: %s', records)\n return records\n\n def list_zone_records(self, zone_id):\n return self._get('/domain/{0}/version/{1}/zone'.format(self.domain_id, zone_id))\n\n def list_records(self, type=None, name=None, content=None):\n return self.find_zone_records(self.passive_zone, type, name, content)\n\n def find_record(self, type=None, name=None, content=None):\n record = None\n records = self.list_records(type, name, content)\n if len(records) < 1:\n return None\n else:\n return records[0]\n\n\n # Create or update a record.\n def update_record(self, id, type=None, name=None, content=None):\n record = self.find_record(type, name)\n if record is None:\n logger.debug(\"cannot find record to update: %s %s %s\", id, type, name)\n return True\n if type:\n record['type'] = type\n if name:\n record['name'] = self._fqdn_name(name)\n if content:\n record['data'] = to_data(type, content)\n if self.options.get('ttl'):\n record['ttl'] = self.options.get('ttl')\n # it is weird that 'aux' becomes 'priority' in online's api\n if self.options['priority']:\n record['priority'] = self.options['priority']\n\n if id is None:\n id = record['id']\n\n record.pop('id')\n\n try:\n payload = self._patch('/domain/{0}/version/{1}/zone/{2}'.format(\n self.domain_id,\n self.passive_zone,\n id\n ), record)\n\n except Exception as e:\n logger.debug(e)\n return False\n\n self.enable_zone()\n # If it didn't raise from the http status code, then we're good\n logger.debug('update_record: %s', id)\n return True\n\n # Delete an existing record.\n # If record does not exist, do nothing.\n def delete_record(self, id=None, type=None, name=None, content=None):\n records = self.list_records(type, name, content)\n if len(records) == 0:\n logger.debug(\"Cannot find records %s %s %s\", type, name, content)\n return False\n logger.debug('delete_records: %s records found', len(records))\n try:\n for record in records:\n payload = 
self._delete('/domain/{0}/version/{1}/zone/{2}'.format(\n self.domain_id,\n self.passive_zone,\n record['id']\n ))\n except Exception as e:\n logger.debug(e)\n return False\n\n self.enable_zone()\n # is always True at this point, if a non 200 response is returned an error is raised.\n logger.debug('delete_record: %s', True)\n return True\n\n def _patch(self, url='/', data=None, query_params=None):\n return self._request('PATCH', url, data=data, query_params=query_params)\n\n # Helpers\n def _request(self, action='GET', url='/', data=None, query_params=None):\n if query_params is None:\n query_params = {}\n\n headers = {\n 'Accept': 'application/json',\n 'Authorization': 'Bearer {0}'.format(self.options['auth_token'])\n }\n if data is not None:\n if type(data) is str:\n headers['Content-Type'] = 'text/plain';\n else:\n headers['Content-Type'] = 'application/json';\n data = json.dumps(data)\n\n r = requests.request(\n action,\n self.api_endpoint + url,\n params=query_params,\n data=data,\n headers=headers\n )\n r.raise_for_status() # if the request fails for any reason, throw an error.\n\n return r.text and r.json() or ''\n", "path": "lexicon/providers/online.py"}]} | 3,221 | 250 |
gh_patches_debug_31627 | rasdani/github-patches | git_diff | ddionrails__ddionrails-798 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove StudyRedirectView
~blocked by: #126~
</issue>
<code>
[start of ddionrails/studies/views.py]
1 # -*- coding: utf-8 -*-
2
3 """ Views for ddionrails.studies app """
4
5 from django.http.request import HttpRequest
6 from django.http.response import HttpResponse
7 from django.shortcuts import get_object_or_404, render
8 from django.views.generic import DetailView
9 from django.views.generic.base import RedirectView
10
11 from ddionrails.data.models import Dataset, Variable
12 from ddionrails.instruments.models import Instrument, Question
13
14 from .models import Study
15
16
17 class StudyRedirectView(RedirectView):
18 """ RedirectView for studies.Study model """
19
20 permanent = False
21
22 def get_redirect_url(self, *args, **kwargs):
23 study = get_object_or_404(Study, id=kwargs["id"])
24 return study.get_absolute_url()
25
26
27 class StudyDetailView(DetailView):
28 """ DetailView for studies.Study model """
29
30 model = Study
31 template_name = "studies/study_detail.html"
32 slug_url_kwarg = "study_name"
33 slug_field = "name"
34
35 def get_queryset(self):
36 queryset = super(StudyDetailView, self).get_queryset()
37 return queryset.only("name", "label", "config", "description")
38
39 def get_context_data(self, **kwargs):
40 context = super().get_context_data(**kwargs)
41 context["num_datasets"] = Dataset.objects.filter(study=self.object).count()
42 context["num_variables"] = Variable.objects.filter(
43 dataset__study=self.object
44 ).count()
45 context["num_instruments"] = Instrument.objects.filter(study=self.object).count()
46 context["num_questions"] = Question.objects.filter(
47 instrument__study=self.object
48 ).count()
49
50 context["dataset_list"] = (
51 Dataset.objects.select_related(
52 "study", "conceptual_dataset", "period", "analysis_unit"
53 )
54 .filter(study=self.object)
55 .only(
56 "name",
57 "label",
58 "study__name",
59 "conceptual_dataset__name",
60 "conceptual_dataset__label",
61 "period__name",
62 "period__label",
63 "analysis_unit__name",
64 "analysis_unit__label",
65 )
66 )
67 context["instrument_list"] = (
68 Instrument.objects.select_related("study", "period", "analysis_unit")
69 .filter(study=self.object)
70 .only(
71 "name",
72 "label",
73 "study__name",
74 "period__name",
75 "period__label",
76 "analysis_unit__name",
77 "analysis_unit__label",
78 )
79 )
80 return context
81
82
83 def study_topics(request: HttpRequest, study_name: str, language: str) -> HttpResponse:
84 study = get_object_or_404(Study, name=study_name)
85 context = dict(study=study, language=language)
86 return render(request, "studies/study_topics.html", context=context)
87
[end of ddionrails/studies/views.py]
[start of config/urls.py]
1 # -*- coding: utf-8 -*-
2
3 """ Root URLConf for ddionrails project """
4
5 from django.conf import settings
6 from django.conf.urls.static import static
7 from django.contrib import admin
8 from django.urls import include, path, re_path
9 from django.views.generic.base import TemplateView
10
11 import ddionrails.instruments.views as instruments_views
12 import ddionrails.publications.views as publications_views
13 from config.views import HomePageView
14 from ddionrails.concepts.views import TopicRedirectView
15 from ddionrails.data.views import VariableRedirectView
16 from ddionrails.studies.views import StudyDetailView, StudyRedirectView, study_topics
17
18 # These variable names are desired by Django
19 handler400 = "config.views.bad_request" # pylint: disable=invalid-name
20 handler403 = "config.views.permission_denied" # pylint: disable=invalid-name
21 handler404 = "config.views.page_not_found" # pylint: disable=invalid-name
22 handler500 = "config.views.server_error" # pylint: disable=invalid-name
23
24 admin.site.site_header = "DDI on Rails Admin"
25 admin.site.site_title = "DDI on Rails Admin"
26 admin.site.index_title = "Welcome to DDI on Rails Admin"
27
28 urlpatterns = [
29 path("", HomePageView.as_view(), name="home"),
30 path(
31 "imprint/",
32 TemplateView.as_view(template_name="pages/imprint.html"),
33 name="imprint",
34 ),
35 path(
36 "contact/",
37 TemplateView.as_view(template_name="pages/contact.html"),
38 name="contact",
39 ),
40 path("admin/doc/", include("django.contrib.admindocs.urls")),
41 path("admin/", admin.site.urls),
42 path("concept/", include("ddionrails.concepts.urls", namespace="concepts")),
43 path("workspace/", include("ddionrails.workspace.urls", namespace="workspace")),
44 re_path(
45 (
46 r"^search/((?:all|variables|concepts|questions|publications|topics)"
47 r"\?{0,1}.*){0,1}$"
48 ),
49 TemplateView.as_view(template_name="search/search.html"),
50 name="search",
51 ),
52 path("api/", include("ddionrails.api.urls", namespace="api")),
53 path("django-rq/", include("django_rq.urls")),
54 path("user/", include("django.contrib.auth.urls")),
55 # Study by name
56 path("<slug:study_name>", StudyDetailView.as_view(), name="study_detail"),
57 # Study-specific links
58 path("<slug:study_name>/data/", include("ddionrails.data.urls", namespace="data")),
59 path(
60 "<slug:study_name>/publ/",
61 include("ddionrails.publications.urls", namespace="publ"),
62 ),
63 path(
64 "<slug:study_name>/inst/",
65 include("ddionrails.instruments.urls", namespace="inst"),
66 ),
67 path("<slug:study_name>/topics/<slug:language>", study_topics, name="study_topics"),
68 # Redirects for search interface
69 path(
70 "publication/<uuid:id>",
71 publications_views.PublicationRedirectView.as_view(),
72 name="publication_redirect",
73 ),
74 path("variable/<uuid:id>", VariableRedirectView.as_view(), name="variable_redirect"),
75 path("topic/<uuid:id>", TopicRedirectView.as_view(), name="topic_redirect"),
76 path(
77 "instrument/<uuid:id>",
78 instruments_views.InstrumentRedirectView.as_view(),
79 name="instrument_redirect",
80 ),
81 path(
82 "question/<uuid:id>",
83 instruments_views.QuestionRedirectView.as_view(),
84 name="question_redirect",
85 ),
86 path("study/<uuid:id>", StudyRedirectView.as_view(), name="study_redirect"),
87 ]
88
89 if settings.DEBUG:
90 import debug_toolbar
91
92 urlpatterns = urlpatterns + static(
93 settings.MEDIA_URL, document_root=settings.MEDIA_ROOT
94 )
95 urlpatterns = [path(r"__debug__/", include(debug_toolbar.urls))] + urlpatterns
96
[end of config/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/config/urls.py b/config/urls.py
--- a/config/urls.py
+++ b/config/urls.py
@@ -13,7 +13,7 @@
from config.views import HomePageView
from ddionrails.concepts.views import TopicRedirectView
from ddionrails.data.views import VariableRedirectView
-from ddionrails.studies.views import StudyDetailView, StudyRedirectView, study_topics
+from ddionrails.studies.views import StudyDetailView, study_topics
# These variable names are desired by Django
handler400 = "config.views.bad_request" # pylint: disable=invalid-name
@@ -83,7 +83,6 @@
instruments_views.QuestionRedirectView.as_view(),
name="question_redirect",
),
- path("study/<uuid:id>", StudyRedirectView.as_view(), name="study_redirect"),
]
if settings.DEBUG:
diff --git a/ddionrails/studies/views.py b/ddionrails/studies/views.py
--- a/ddionrails/studies/views.py
+++ b/ddionrails/studies/views.py
@@ -6,7 +6,6 @@
from django.http.response import HttpResponse
from django.shortcuts import get_object_or_404, render
from django.views.generic import DetailView
-from django.views.generic.base import RedirectView
from ddionrails.data.models import Dataset, Variable
from ddionrails.instruments.models import Instrument, Question
@@ -14,16 +13,6 @@
from .models import Study
-class StudyRedirectView(RedirectView):
- """ RedirectView for studies.Study model """
-
- permanent = False
-
- def get_redirect_url(self, *args, **kwargs):
- study = get_object_or_404(Study, id=kwargs["id"])
- return study.get_absolute_url()
-
-
class StudyDetailView(DetailView):
""" DetailView for studies.Study model """
| {"golden_diff": "diff --git a/config/urls.py b/config/urls.py\n--- a/config/urls.py\n+++ b/config/urls.py\n@@ -13,7 +13,7 @@\n from config.views import HomePageView\n from ddionrails.concepts.views import TopicRedirectView\n from ddionrails.data.views import VariableRedirectView\n-from ddionrails.studies.views import StudyDetailView, StudyRedirectView, study_topics\n+from ddionrails.studies.views import StudyDetailView, study_topics\n \n # These variable names are desired by Django\n handler400 = \"config.views.bad_request\" # pylint: disable=invalid-name\n@@ -83,7 +83,6 @@\n instruments_views.QuestionRedirectView.as_view(),\n name=\"question_redirect\",\n ),\n- path(\"study/<uuid:id>\", StudyRedirectView.as_view(), name=\"study_redirect\"),\n ]\n \n if settings.DEBUG:\ndiff --git a/ddionrails/studies/views.py b/ddionrails/studies/views.py\n--- a/ddionrails/studies/views.py\n+++ b/ddionrails/studies/views.py\n@@ -6,7 +6,6 @@\n from django.http.response import HttpResponse\n from django.shortcuts import get_object_or_404, render\n from django.views.generic import DetailView\n-from django.views.generic.base import RedirectView\n \n from ddionrails.data.models import Dataset, Variable\n from ddionrails.instruments.models import Instrument, Question\n@@ -14,16 +13,6 @@\n from .models import Study\n \n \n-class StudyRedirectView(RedirectView):\n- \"\"\" RedirectView for studies.Study model \"\"\"\n-\n- permanent = False\n-\n- def get_redirect_url(self, *args, **kwargs):\n- study = get_object_or_404(Study, id=kwargs[\"id\"])\n- return study.get_absolute_url()\n-\n-\n class StudyDetailView(DetailView):\n \"\"\" DetailView for studies.Study model \"\"\"\n", "issue": "Remove StudyRedirectView\n~blocked by: #126~\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Views for ddionrails.studies app \"\"\"\n\nfrom django.http.request import HttpRequest\nfrom django.http.response import HttpResponse\nfrom django.shortcuts import get_object_or_404, render\nfrom django.views.generic import DetailView\nfrom django.views.generic.base import RedirectView\n\nfrom ddionrails.data.models import Dataset, Variable\nfrom ddionrails.instruments.models import Instrument, Question\n\nfrom .models import Study\n\n\nclass StudyRedirectView(RedirectView):\n \"\"\" RedirectView for studies.Study model \"\"\"\n\n permanent = False\n\n def get_redirect_url(self, *args, **kwargs):\n study = get_object_or_404(Study, id=kwargs[\"id\"])\n return study.get_absolute_url()\n\n\nclass StudyDetailView(DetailView):\n \"\"\" DetailView for studies.Study model \"\"\"\n\n model = Study\n template_name = \"studies/study_detail.html\"\n slug_url_kwarg = \"study_name\"\n slug_field = \"name\"\n\n def get_queryset(self):\n queryset = super(StudyDetailView, self).get_queryset()\n return queryset.only(\"name\", \"label\", \"config\", \"description\")\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"num_datasets\"] = Dataset.objects.filter(study=self.object).count()\n context[\"num_variables\"] = Variable.objects.filter(\n dataset__study=self.object\n ).count()\n context[\"num_instruments\"] = Instrument.objects.filter(study=self.object).count()\n context[\"num_questions\"] = Question.objects.filter(\n instrument__study=self.object\n ).count()\n\n context[\"dataset_list\"] = (\n Dataset.objects.select_related(\n \"study\", \"conceptual_dataset\", \"period\", \"analysis_unit\"\n )\n .filter(study=self.object)\n .only(\n \"name\",\n \"label\",\n \"study__name\",\n 
\"conceptual_dataset__name\",\n \"conceptual_dataset__label\",\n \"period__name\",\n \"period__label\",\n \"analysis_unit__name\",\n \"analysis_unit__label\",\n )\n )\n context[\"instrument_list\"] = (\n Instrument.objects.select_related(\"study\", \"period\", \"analysis_unit\")\n .filter(study=self.object)\n .only(\n \"name\",\n \"label\",\n \"study__name\",\n \"period__name\",\n \"period__label\",\n \"analysis_unit__name\",\n \"analysis_unit__label\",\n )\n )\n return context\n\n\ndef study_topics(request: HttpRequest, study_name: str, language: str) -> HttpResponse:\n study = get_object_or_404(Study, name=study_name)\n context = dict(study=study, language=language)\n return render(request, \"studies/study_topics.html\", context=context)\n", "path": "ddionrails/studies/views.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Root URLConf for ddionrails project \"\"\"\n\nfrom django.conf import settings\nfrom django.conf.urls.static import static\nfrom django.contrib import admin\nfrom django.urls import include, path, re_path\nfrom django.views.generic.base import TemplateView\n\nimport ddionrails.instruments.views as instruments_views\nimport ddionrails.publications.views as publications_views\nfrom config.views import HomePageView\nfrom ddionrails.concepts.views import TopicRedirectView\nfrom ddionrails.data.views import VariableRedirectView\nfrom ddionrails.studies.views import StudyDetailView, StudyRedirectView, study_topics\n\n# These variable names are desired by Django\nhandler400 = \"config.views.bad_request\" # pylint: disable=invalid-name\nhandler403 = \"config.views.permission_denied\" # pylint: disable=invalid-name\nhandler404 = \"config.views.page_not_found\" # pylint: disable=invalid-name\nhandler500 = \"config.views.server_error\" # pylint: disable=invalid-name\n\nadmin.site.site_header = \"DDI on Rails Admin\"\nadmin.site.site_title = \"DDI on Rails Admin\"\nadmin.site.index_title = \"Welcome to DDI on Rails Admin\"\n\nurlpatterns = [\n path(\"\", HomePageView.as_view(), name=\"home\"),\n path(\n \"imprint/\",\n TemplateView.as_view(template_name=\"pages/imprint.html\"),\n name=\"imprint\",\n ),\n path(\n \"contact/\",\n TemplateView.as_view(template_name=\"pages/contact.html\"),\n name=\"contact\",\n ),\n path(\"admin/doc/\", include(\"django.contrib.admindocs.urls\")),\n path(\"admin/\", admin.site.urls),\n path(\"concept/\", include(\"ddionrails.concepts.urls\", namespace=\"concepts\")),\n path(\"workspace/\", include(\"ddionrails.workspace.urls\", namespace=\"workspace\")),\n re_path(\n (\n r\"^search/((?:all|variables|concepts|questions|publications|topics)\"\n r\"\\?{0,1}.*){0,1}$\"\n ),\n TemplateView.as_view(template_name=\"search/search.html\"),\n name=\"search\",\n ),\n path(\"api/\", include(\"ddionrails.api.urls\", namespace=\"api\")),\n path(\"django-rq/\", include(\"django_rq.urls\")),\n path(\"user/\", include(\"django.contrib.auth.urls\")),\n # Study by name\n path(\"<slug:study_name>\", StudyDetailView.as_view(), name=\"study_detail\"),\n # Study-specific links\n path(\"<slug:study_name>/data/\", include(\"ddionrails.data.urls\", namespace=\"data\")),\n path(\n \"<slug:study_name>/publ/\",\n include(\"ddionrails.publications.urls\", namespace=\"publ\"),\n ),\n path(\n \"<slug:study_name>/inst/\",\n include(\"ddionrails.instruments.urls\", namespace=\"inst\"),\n ),\n path(\"<slug:study_name>/topics/<slug:language>\", study_topics, name=\"study_topics\"),\n # Redirects for search interface\n path(\n \"publication/<uuid:id>\",\n 
publications_views.PublicationRedirectView.as_view(),\n name=\"publication_redirect\",\n ),\n path(\"variable/<uuid:id>\", VariableRedirectView.as_view(), name=\"variable_redirect\"),\n path(\"topic/<uuid:id>\", TopicRedirectView.as_view(), name=\"topic_redirect\"),\n path(\n \"instrument/<uuid:id>\",\n instruments_views.InstrumentRedirectView.as_view(),\n name=\"instrument_redirect\",\n ),\n path(\n \"question/<uuid:id>\",\n instruments_views.QuestionRedirectView.as_view(),\n name=\"question_redirect\",\n ),\n path(\"study/<uuid:id>\", StudyRedirectView.as_view(), name=\"study_redirect\"),\n]\n\nif settings.DEBUG:\n import debug_toolbar\n\n urlpatterns = urlpatterns + static(\n settings.MEDIA_URL, document_root=settings.MEDIA_ROOT\n )\n urlpatterns = [path(r\"__debug__/\", include(debug_toolbar.urls))] + urlpatterns\n", "path": "config/urls.py"}]} | 2,356 | 407 |
gh_patches_debug_41159 | rasdani/github-patches | git_diff | azavea__raster-vision-328 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix raster stats bug
If you run compute_raster_stats on 4-channel imagery (yielding stats for 4 channels), use a `channel_order` of [0, 1, 2] in your raster_transformer, and then switch to using 3-channel imagery, you get an error because the `means` currently do not have the `channel_order` applied to them before being subtracted from the raster. In other words, 4-channel means are subtracted from a 3-channel raster.
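
For illustration, a minimal numpy sketch of the shape mismatch (the shapes and values here are made up, not taken from the project):

```python
import numpy as np

# Stats computed on 4-channel imagery yield 4 means (and stds).
means = np.array([100.0, 110.0, 120.0, 130.0])

# A chip coming from 3-channel imagery.
chip = np.zeros((256, 256, 3))

# What happens today: a length-4 vector cannot be broadcast against a
# 3-channel chip, so this raises a ValueError.
# chip - means[np.newaxis, np.newaxis, :]

# Applying the channel_order to the stats first keeps the shapes consistent.
channel_order = [0, 1, 2]
zscores = chip - means[channel_order][np.newaxis, np.newaxis, :]
print(zscores.shape)  # (256, 256, 3)
```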
</issue>
<code>
[start of src/rastervision/builders/raster_transformer_builder.py]
1 from rastervision.core.raster_transformer import RasterTransformer
2
3
4 def build(config):
5 return RasterTransformer(config)
6
[end of src/rastervision/builders/raster_transformer_builder.py]
[start of src/rastervision/core/raster_transformer.py]
1 import numpy as np
2
3 from rastervision.core.raster_stats import RasterStats
4
5
6 class RasterTransformer(object):
7 """Transforms chips according to a config."""
8
9 def __init__(self, options):
10 """Construct a new RasterTransformer.
11
12 Args:
13 options: protos.raster_transformer_pb2.RasterTransformer
14 """
15 self.options = options
16 self.raster_stats = None
17 if options.stats_uri:
18 self.raster_stats = RasterStats()
19 self.raster_stats.load(options.stats_uri)
20
21 def transform(self, chip):
22 """Transform a chip.
23
24 Selects a subset of the channels and transforms non-uint8 to
25 uint8 values using options.stats_uri
26
27 Args:
28 chip: [height, width, channels] numpy array
29
30 Returns:
31 [height, width, channels] uint8 numpy array where channels is equal
32 to len(self.options.channel_order)
33 """
34 if chip.dtype != np.uint8:
35 if self.raster_stats:
36 # Subtract mean and divide by std to get zscores.
37 means = np.array(self.raster_stats.means)
38 means = means[np.newaxis, np.newaxis, :].astype(np.float)
39 stds = np.array(self.raster_stats.stds)
40 stds = stds[np.newaxis, np.newaxis, :].astype(np.float)
41
42 # Don't transform NODATA zero values.
43 nodata = chip == 0
44
45 chip = chip - means
46 chip = chip / stds
47
48 # Make zscores that fall between -3 and 3 span 0 to 255.
49 chip += 3
50 chip /= 6
51
52 chip = np.clip(chip, 0, 1)
53 chip *= 255
54 chip = chip.astype(np.uint8)
55
56 chip[nodata] = 0
57 else:
58 raise ValueError(
59 'Need to provide stats_uri for non-uint8 rasters.')
60
61 if self.options.channel_order:
62 return chip[:, :, self.options.channel_order]
63 return chip
64
[end of src/rastervision/core/raster_transformer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/rastervision/builders/raster_transformer_builder.py b/src/rastervision/builders/raster_transformer_builder.py
--- a/src/rastervision/builders/raster_transformer_builder.py
+++ b/src/rastervision/builders/raster_transformer_builder.py
@@ -1,5 +1,12 @@
from rastervision.core.raster_transformer import RasterTransformer
+from rastervision.core.raster_stats import RasterStats
def build(config):
- return RasterTransformer(config)
+ raster_stats = None
+ if config.stats_uri:
+ raster_stats = RasterStats()
+ raster_stats.load(config.stats_uri)
+
+ return RasterTransformer(
+ channel_order=config.channel_order, raster_stats=raster_stats)
diff --git a/src/rastervision/core/raster_transformer.py b/src/rastervision/core/raster_transformer.py
--- a/src/rastervision/core/raster_transformer.py
+++ b/src/rastervision/core/raster_transformer.py
@@ -1,43 +1,50 @@
import numpy as np
-from rastervision.core.raster_stats import RasterStats
-
class RasterTransformer(object):
- """Transforms chips according to a config."""
+ """Transforms raw chips to be input to a neural network."""
- def __init__(self, options):
+ def __init__(self, channel_order=None, raster_stats=None):
"""Construct a new RasterTransformer.
Args:
- options: protos.raster_transformer_pb2.RasterTransformer
+ channel_order: numpy array of length n where n is the number of
+ channels to use and the values are channel indices
+ raster_stats: (RasterStats) used to transform chip to have
+ desired statistics
"""
- self.options = options
- self.raster_stats = None
- if options.stats_uri:
- self.raster_stats = RasterStats()
- self.raster_stats.load(options.stats_uri)
+ self.channel_order = channel_order
+ self.raster_stats = raster_stats
def transform(self, chip):
"""Transform a chip.
Selects a subset of the channels and transforms non-uint8 to
- uint8 values using options.stats_uri
+ uint8 values using raster_stats.
Args:
chip: [height, width, channels] numpy array
Returns:
[height, width, channels] uint8 numpy array where channels is equal
- to len(self.options.channel_order)
+ to len(channel_order)
"""
+ if self.channel_order is None:
+ channel_order = np.arange(chip.shape[2])
+ else:
+ channel_order = self.channel_order
+
+ chip = chip[:, :, channel_order]
+
if chip.dtype != np.uint8:
if self.raster_stats:
# Subtract mean and divide by std to get zscores.
means = np.array(self.raster_stats.means)
- means = means[np.newaxis, np.newaxis, :].astype(np.float)
+ means = means[np.newaxis, np.newaxis, channel_order].astype(
+ np.float)
stds = np.array(self.raster_stats.stds)
- stds = stds[np.newaxis, np.newaxis, :].astype(np.float)
+ stds = stds[np.newaxis, np.newaxis, channel_order].astype(
+ np.float)
# Don't transform NODATA zero values.
nodata = chip == 0
@@ -56,8 +63,6 @@
chip[nodata] = 0
else:
raise ValueError(
- 'Need to provide stats_uri for non-uint8 rasters.')
+ 'Need to provide raster_stats for non-uint8 rasters.')
- if self.options.channel_order:
- return chip[:, :, self.options.channel_order]
return chip
| {"golden_diff": "diff --git a/src/rastervision/builders/raster_transformer_builder.py b/src/rastervision/builders/raster_transformer_builder.py\n--- a/src/rastervision/builders/raster_transformer_builder.py\n+++ b/src/rastervision/builders/raster_transformer_builder.py\n@@ -1,5 +1,12 @@\n from rastervision.core.raster_transformer import RasterTransformer\n+from rastervision.core.raster_stats import RasterStats\n \n \n def build(config):\n- return RasterTransformer(config)\n+ raster_stats = None\n+ if config.stats_uri:\n+ raster_stats = RasterStats()\n+ raster_stats.load(config.stats_uri)\n+\n+ return RasterTransformer(\n+ channel_order=config.channel_order, raster_stats=raster_stats)\ndiff --git a/src/rastervision/core/raster_transformer.py b/src/rastervision/core/raster_transformer.py\n--- a/src/rastervision/core/raster_transformer.py\n+++ b/src/rastervision/core/raster_transformer.py\n@@ -1,43 +1,50 @@\n import numpy as np\n \n-from rastervision.core.raster_stats import RasterStats\n-\n \n class RasterTransformer(object):\n- \"\"\"Transforms chips according to a config.\"\"\"\n+ \"\"\"Transforms raw chips to be input to a neural network.\"\"\"\n \n- def __init__(self, options):\n+ def __init__(self, channel_order=None, raster_stats=None):\n \"\"\"Construct a new RasterTransformer.\n \n Args:\n- options: protos.raster_transformer_pb2.RasterTransformer\n+ channel_order: numpy array of length n where n is the number of\n+ channels to use and the values are channel indices\n+ raster_stats: (RasterStats) used to transform chip to have\n+ desired statistics\n \"\"\"\n- self.options = options\n- self.raster_stats = None\n- if options.stats_uri:\n- self.raster_stats = RasterStats()\n- self.raster_stats.load(options.stats_uri)\n+ self.channel_order = channel_order\n+ self.raster_stats = raster_stats\n \n def transform(self, chip):\n \"\"\"Transform a chip.\n \n Selects a subset of the channels and transforms non-uint8 to\n- uint8 values using options.stats_uri\n+ uint8 values using raster_stats.\n \n Args:\n chip: [height, width, channels] numpy array\n \n Returns:\n [height, width, channels] uint8 numpy array where channels is equal\n- to len(self.options.channel_order)\n+ to len(channel_order)\n \"\"\"\n+ if self.channel_order is None:\n+ channel_order = np.arange(chip.shape[2])\n+ else:\n+ channel_order = self.channel_order\n+\n+ chip = chip[:, :, channel_order]\n+\n if chip.dtype != np.uint8:\n if self.raster_stats:\n # Subtract mean and divide by std to get zscores.\n means = np.array(self.raster_stats.means)\n- means = means[np.newaxis, np.newaxis, :].astype(np.float)\n+ means = means[np.newaxis, np.newaxis, channel_order].astype(\n+ np.float)\n stds = np.array(self.raster_stats.stds)\n- stds = stds[np.newaxis, np.newaxis, :].astype(np.float)\n+ stds = stds[np.newaxis, np.newaxis, channel_order].astype(\n+ np.float)\n \n # Don't transform NODATA zero values.\n nodata = chip == 0\n@@ -56,8 +63,6 @@\n chip[nodata] = 0\n else:\n raise ValueError(\n- 'Need to provide stats_uri for non-uint8 rasters.')\n+ 'Need to provide raster_stats for non-uint8 rasters.')\n \n- if self.options.channel_order:\n- return chip[:, :, self.options.channel_order]\n return chip\n", "issue": "Fix raster stats bug\nIf you run compute_raster_stats on 4-channel imagery (yielding stats for 4 channels), and use a `channel_order` of [0, 1, 2] in your raster_transformer, and then switch to using 3-channel imagery, it leads to an error because currently the `means` do not have the `channel_order` applied to them before 
being subtracted from the raster. In other words, 4 channel means is subtracted from a 3 channel raster.\n", "before_files": [{"content": "from rastervision.core.raster_transformer import RasterTransformer\n\n\ndef build(config):\n return RasterTransformer(config)\n", "path": "src/rastervision/builders/raster_transformer_builder.py"}, {"content": "import numpy as np\n\nfrom rastervision.core.raster_stats import RasterStats\n\n\nclass RasterTransformer(object):\n \"\"\"Transforms chips according to a config.\"\"\"\n\n def __init__(self, options):\n \"\"\"Construct a new RasterTransformer.\n\n Args:\n options: protos.raster_transformer_pb2.RasterTransformer\n \"\"\"\n self.options = options\n self.raster_stats = None\n if options.stats_uri:\n self.raster_stats = RasterStats()\n self.raster_stats.load(options.stats_uri)\n\n def transform(self, chip):\n \"\"\"Transform a chip.\n\n Selects a subset of the channels and transforms non-uint8 to\n uint8 values using options.stats_uri\n\n Args:\n chip: [height, width, channels] numpy array\n\n Returns:\n [height, width, channels] uint8 numpy array where channels is equal\n to len(self.options.channel_order)\n \"\"\"\n if chip.dtype != np.uint8:\n if self.raster_stats:\n # Subtract mean and divide by std to get zscores.\n means = np.array(self.raster_stats.means)\n means = means[np.newaxis, np.newaxis, :].astype(np.float)\n stds = np.array(self.raster_stats.stds)\n stds = stds[np.newaxis, np.newaxis, :].astype(np.float)\n\n # Don't transform NODATA zero values.\n nodata = chip == 0\n\n chip = chip - means\n chip = chip / stds\n\n # Make zscores that fall between -3 and 3 span 0 to 255.\n chip += 3\n chip /= 6\n\n chip = np.clip(chip, 0, 1)\n chip *= 255\n chip = chip.astype(np.uint8)\n\n chip[nodata] = 0\n else:\n raise ValueError(\n 'Need to provide stats_uri for non-uint8 rasters.')\n\n if self.options.channel_order:\n return chip[:, :, self.options.channel_order]\n return chip\n", "path": "src/rastervision/core/raster_transformer.py"}]} | 1,291 | 854 |
gh_patches_debug_11754 | rasdani/github-patches | git_diff | svthalia__concrexit-1826 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add search parameter to event registrations admin api
### Is your feature request related to a problem? Please describe.
I'm always frustrated when I can't search for a registration in the event admin.
### Describe the solution you'd like
A search parameter (by member.name or name) on `api/v2/admin/events/<id>/registrations/`.
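
For illustration, a minimal sketch of how this could look with DRF's built-in `SearchFilter` (the attribute values below are a sketch for `EventRegistrationAdminListView`, and the member field names are an assumption about the models):

```python
# Hypothetical attributes on the registrations admin list view; DRF's
# SearchFilter exposes a ?search=<query> parameter matched against search_fields.
from rest_framework import filters as framework_filters

filter_backends = (
    framework_filters.OrderingFilter,
    framework_filters.SearchFilter,
)
search_fields = ("member__first_name", "member__last_name", "name")
```

A request like `GET /api/v2/admin/events/1/registrations/?search=jane` would then match registrations by member name or by the registration's own name field.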
### Motivation
Then we can search for registrations. A parameter is desirable for consistency in making pagination available.
### Describe alternatives you've considered
Local search.
</issue>
<code>
[start of website/events/api/v2/admin/views.py]
1 from django.http import Http404
2 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
3 from rest_framework import status
4 from rest_framework.exceptions import ValidationError, PermissionDenied
5 from rest_framework.generics import get_object_or_404
6 from rest_framework.response import Response
7 from rest_framework.views import APIView
8 from rest_framework import filters as framework_filters
9
10 from events import services
11 from events.api.v2.admin import filters
12 from events.api.v2.admin.permissions import IsOrganiser
13 from events.api.v2.admin.serializers.event import (
14 EventListAdminSerializer,
15 EventAdminSerializer,
16 )
17 from events.api.v2.admin.serializers.event_registration import (
18 EventRegistrationAdminSerializer,
19 )
20 from events.exceptions import RegistrationError
21 from events.models import Event, EventRegistration
22 from thaliawebsite.api.v2.admin.views import (
23 AdminListAPIView,
24 AdminRetrieveAPIView,
25 AdminCreateAPIView,
26 AdminUpdateAPIView,
27 AdminDestroyAPIView,
28 AdminPermissionsMixin,
29 )
30 import events.api.v2.filters as normal_filters
31
32
33 class EventAdminListCreateAPIView(AdminListAPIView, AdminCreateAPIView):
34 queryset = Event.objects.prefetch_related("organiser")
35 permission_classes = [IsAuthenticatedOrTokenHasScope]
36 required_scopes = ["events:admin"]
37 filter_backends = [
38 framework_filters.OrderingFilter,
39 normal_filters.CategoryFilter,
40 normal_filters.OrganiserFilter,
41 normal_filters.EventDateFilter,
42 filters.PublishedFilter,
43 ]
44 ordering_fields = (
45 "start",
46 "end",
47 "published",
48 "registration_start",
49 "registration_end",
50 )
51
52 def get_serializer_class(self):
53 if self.request.method.lower() == "post":
54 return EventAdminSerializer
55 return EventListAdminSerializer
56
57
58 class EventAdminDetailAPIView(
59 AdminRetrieveAPIView, AdminUpdateAPIView, AdminDestroyAPIView
60 ):
61 queryset = Event.objects.all()
62 serializer_class = EventAdminSerializer
63 permission_classes = [IsOrganiser, IsAuthenticatedOrTokenHasScope]
64 required_scopes = ["events:admin"]
65
66
67 class EventRegistrationAdminListView(AdminListAPIView, AdminCreateAPIView):
68 """Returns a list of registrations."""
69
70 serializer_class = EventRegistrationAdminSerializer
71 permission_classes = [IsOrganiser, IsAuthenticatedOrTokenHasScope]
72 required_scopes = ["events:admin"]
73 filter_backends = (
74 framework_filters.OrderingFilter,
75 filters.EventRegistrationCancelledFilter,
76 )
77 ordering_fields = ("queue_position", "date", "date_cancelled")
78
79 def get_queryset(self):
80 event = get_object_or_404(Event, pk=self.kwargs.get("pk"))
81 if event:
82 return EventRegistration.objects.filter(event_id=event).prefetch_related(
83 "member", "member__profile"
84 )
85 return EventRegistration.objects.none()
86
87
88 class EventRegistrationAdminDetailView(
89 AdminRetrieveAPIView, AdminUpdateAPIView, AdminDestroyAPIView
90 ):
91 """Returns details of an event registration."""
92
93 serializer_class = EventRegistrationAdminSerializer
94 queryset = EventRegistration.objects.all()
95 permission_classes = [IsOrganiser, IsAuthenticatedOrTokenHasScope]
96 required_scopes = ["events:admin"]
97 event_lookup_field = "event_id"
98
99 def get_queryset(self):
100 return super().get_queryset().filter(event=self.kwargs["event_id"])
101
102
103 class EventRegistrationAdminFieldsView(AdminPermissionsMixin, APIView):
104 """Returns details of an event registration."""
105
106 permission_classes = [IsOrganiser, IsAuthenticatedOrTokenHasScope]
107 required_scopes = ["events:admin"]
108
109 def get_queryset(self):
110 return EventRegistration.objects.filter(event=self.kwargs["event_id"])
111
112 def get_object(self):
113 event_registration = get_object_or_404(
114 EventRegistration,
115 event=self.kwargs["event_id"],
116 pk=self.kwargs["registration_id"],
117 )
118
119 if not event_registration.event.has_fields:
120 raise Http404
121
122 return event_registration
123
124 def get(self, request, *args, **kwargs):
125 return Response(
126 data=services.registration_fields(request, registration=self.get_object()),
127 status=status.HTTP_200_OK,
128 )
129
130 def put(self, request, *args, **kwargs):
131 original = services.registration_fields(request, registration=self.get_object())
132 required_keys = set(original.keys()) - set(request.data.keys())
133 if len(required_keys) > 0:
134 raise ValidationError(
135 f"Missing keys '{', '.join(required_keys)}' in request",
136 status.HTTP_400_BAD_REQUEST,
137 )
138
139 services.update_registration(
140 registration=self.get_object(), field_values=request.data.items()
141 )
142
143 return Response(
144 data=services.registration_fields(request, registration=self.get_object()),
145 status=status.HTTP_200_OK,
146 )
147
148 def patch(self, request, *args, **kwargs):
149 services.update_registration(
150 registration=self.get_object(), field_values=request.data.items()
151 )
152
153 return Response(
154 data=services.registration_fields(request, registration=self.get_object()),
155 status=status.HTTP_200_OK,
156 )
157
[end of website/events/api/v2/admin/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/events/api/v2/admin/views.py b/website/events/api/v2/admin/views.py
--- a/website/events/api/v2/admin/views.py
+++ b/website/events/api/v2/admin/views.py
@@ -72,9 +72,15 @@
required_scopes = ["events:admin"]
filter_backends = (
framework_filters.OrderingFilter,
+ framework_filters.SearchFilter,
filters.EventRegistrationCancelledFilter,
)
ordering_fields = ("queue_position", "date", "date_cancelled")
+ search_fields = (
+ "member__first_name",
+ "member__last_name",
+ "name",
+ )
def get_queryset(self):
event = get_object_or_404(Event, pk=self.kwargs.get("pk"))
| {"golden_diff": "diff --git a/website/events/api/v2/admin/views.py b/website/events/api/v2/admin/views.py\n--- a/website/events/api/v2/admin/views.py\n+++ b/website/events/api/v2/admin/views.py\n@@ -72,9 +72,15 @@\n required_scopes = [\"events:admin\"]\n filter_backends = (\n framework_filters.OrderingFilter,\n+ framework_filters.SearchFilter,\n filters.EventRegistrationCancelledFilter,\n )\n ordering_fields = (\"queue_position\", \"date\", \"date_cancelled\")\n+ search_fields = (\n+ \"member__first_name\",\n+ \"member__last_name\",\n+ \"name\",\n+ )\n \n def get_queryset(self):\n event = get_object_or_404(Event, pk=self.kwargs.get(\"pk\"))\n", "issue": "Add search parameter to event registrations admin api\n### Is your feature request related to a problem? Please describe.\r\nI'm always frustrated when I can't search for a registration in the event admin.\r\n\r\n### Describe the solution you'd like\r\nA search parameter (by member.name or name) on `api/v2/admin/events/<id>/registrations/`.\r\n\r\n### Motivation\r\nThen we can search for registrations. A parameter is desirable for consistency in making pagination available.\r\n\r\n### Describe alternatives you've considered\r\nLocal search.\r\n\n", "before_files": [{"content": "from django.http import Http404\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework import status\nfrom rest_framework.exceptions import ValidationError, PermissionDenied\nfrom rest_framework.generics import get_object_or_404\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\nfrom rest_framework import filters as framework_filters\n\nfrom events import services\nfrom events.api.v2.admin import filters\nfrom events.api.v2.admin.permissions import IsOrganiser\nfrom events.api.v2.admin.serializers.event import (\n EventListAdminSerializer,\n EventAdminSerializer,\n)\nfrom events.api.v2.admin.serializers.event_registration import (\n EventRegistrationAdminSerializer,\n)\nfrom events.exceptions import RegistrationError\nfrom events.models import Event, EventRegistration\nfrom thaliawebsite.api.v2.admin.views import (\n AdminListAPIView,\n AdminRetrieveAPIView,\n AdminCreateAPIView,\n AdminUpdateAPIView,\n AdminDestroyAPIView,\n AdminPermissionsMixin,\n)\nimport events.api.v2.filters as normal_filters\n\n\nclass EventAdminListCreateAPIView(AdminListAPIView, AdminCreateAPIView):\n queryset = Event.objects.prefetch_related(\"organiser\")\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"events:admin\"]\n filter_backends = [\n framework_filters.OrderingFilter,\n normal_filters.CategoryFilter,\n normal_filters.OrganiserFilter,\n normal_filters.EventDateFilter,\n filters.PublishedFilter,\n ]\n ordering_fields = (\n \"start\",\n \"end\",\n \"published\",\n \"registration_start\",\n \"registration_end\",\n )\n\n def get_serializer_class(self):\n if self.request.method.lower() == \"post\":\n return EventAdminSerializer\n return EventListAdminSerializer\n\n\nclass EventAdminDetailAPIView(\n AdminRetrieveAPIView, AdminUpdateAPIView, AdminDestroyAPIView\n):\n queryset = Event.objects.all()\n serializer_class = EventAdminSerializer\n permission_classes = [IsOrganiser, IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"events:admin\"]\n\n\nclass EventRegistrationAdminListView(AdminListAPIView, AdminCreateAPIView):\n \"\"\"Returns a list of registrations.\"\"\"\n\n serializer_class = EventRegistrationAdminSerializer\n permission_classes = [IsOrganiser, 
IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"events:admin\"]\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.EventRegistrationCancelledFilter,\n )\n ordering_fields = (\"queue_position\", \"date\", \"date_cancelled\")\n\n def get_queryset(self):\n event = get_object_or_404(Event, pk=self.kwargs.get(\"pk\"))\n if event:\n return EventRegistration.objects.filter(event_id=event).prefetch_related(\n \"member\", \"member__profile\"\n )\n return EventRegistration.objects.none()\n\n\nclass EventRegistrationAdminDetailView(\n AdminRetrieveAPIView, AdminUpdateAPIView, AdminDestroyAPIView\n):\n \"\"\"Returns details of an event registration.\"\"\"\n\n serializer_class = EventRegistrationAdminSerializer\n queryset = EventRegistration.objects.all()\n permission_classes = [IsOrganiser, IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"events:admin\"]\n event_lookup_field = \"event_id\"\n\n def get_queryset(self):\n return super().get_queryset().filter(event=self.kwargs[\"event_id\"])\n\n\nclass EventRegistrationAdminFieldsView(AdminPermissionsMixin, APIView):\n \"\"\"Returns details of an event registration.\"\"\"\n\n permission_classes = [IsOrganiser, IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"events:admin\"]\n\n def get_queryset(self):\n return EventRegistration.objects.filter(event=self.kwargs[\"event_id\"])\n\n def get_object(self):\n event_registration = get_object_or_404(\n EventRegistration,\n event=self.kwargs[\"event_id\"],\n pk=self.kwargs[\"registration_id\"],\n )\n\n if not event_registration.event.has_fields:\n raise Http404\n\n return event_registration\n\n def get(self, request, *args, **kwargs):\n return Response(\n data=services.registration_fields(request, registration=self.get_object()),\n status=status.HTTP_200_OK,\n )\n\n def put(self, request, *args, **kwargs):\n original = services.registration_fields(request, registration=self.get_object())\n required_keys = set(original.keys()) - set(request.data.keys())\n if len(required_keys) > 0:\n raise ValidationError(\n f\"Missing keys '{', '.join(required_keys)}' in request\",\n status.HTTP_400_BAD_REQUEST,\n )\n\n services.update_registration(\n registration=self.get_object(), field_values=request.data.items()\n )\n\n return Response(\n data=services.registration_fields(request, registration=self.get_object()),\n status=status.HTTP_200_OK,\n )\n\n def patch(self, request, *args, **kwargs):\n services.update_registration(\n registration=self.get_object(), field_values=request.data.items()\n )\n\n return Response(\n data=services.registration_fields(request, registration=self.get_object()),\n status=status.HTTP_200_OK,\n )\n", "path": "website/events/api/v2/admin/views.py"}]} | 2,084 | 172 |
gh_patches_debug_30375 | rasdani/github-patches | git_diff | sublimelsp__LSP-796 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'lsp_symbol_definition' via keybinding using `ccls` as the client.
While trying to bind the 'lsp_symbol_definition' command to my custom keys, I ran into a `... type str does not support Buffer API` error.
On revisiting the relevant file ("plugin/goto.py") I found the TODO comment that mentions future DocumentLink support.
By dumping the response payload of the 'lsp_symbol_definition' command I found that the received payload has different keys than the ones parsed by the 'handle_response()' function.
See here:
```
[{targetUri: file:///usr/include/bash/builtins/bashgetopt.h,
  targetRange: {end: {line: 38, character: 30}, start: {line: 38, character: 0}},
  targetSelectionRange: {end: {line: 38, character: 26}, start: {line: 38, character: 11}}}]
```
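
For context, this looks like the LSP `LocationLink` shape rather than the plain `Location` that `handle_response()` expects. A rough sketch of the two shapes (values copied from the dump above; the explanation of the failure is my reading of the situation, not taken from the plugin):

```python
# Plain Location -- what the current handle_response() parsing assumes:
location = {
    "uri": "file:///usr/include/bash/builtins/bashgetopt.h",
    "range": {"start": {"line": 38, "character": 0},
              "end": {"line": 38, "character": 30}},
}

# LocationLink -- what ccls returns here; note the different key names:
location_link = {
    "targetUri": "file:///usr/include/bash/builtins/bashgetopt.h",
    "targetRange": {"start": {"line": 38, "character": 0},
                    "end": {"line": 38, "character": 30}},
    "targetSelectionRange": {"start": {"line": 38, "character": 11},
                             "end": {"line": 38, "character": 26}},
}

# location_link.get("uri") is None, which is what trips up the current parsing.
print(location_link.get("uri"))  # None
```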
### Solution
So I added the following snippet to provide the right parsing method in case of this setup/scenario:
```python
if 'targetUri' in location:
    file_path = uri_to_filename(location['targetUri'])
    start = Point.from_lsp(location['targetRange']['start'])
else:
    file_path = uri_to_filename(location["uri"])
    start = Point.from_lsp(location['range']['start'])
```
With this addition, the keybinding worked as desired.
I just wanted to share this in the hope it might be useful.
Best regards!
</issue>
<code>
[start of plugin/goto.py]
1 import sublime
2
3 from .core.registry import LspTextCommand
4 from .core.protocol import Request, Point
5 from .core.documents import get_document_position, get_position, is_at_word
6 from .core.url import uri_to_filename
7 from .core.logging import debug
8 from Default.history_list import get_jump_history_for_view
9
10 try:
11 from typing import List, Dict, Optional, Any
12 assert List and Dict and Optional and Any
13 except ImportError:
14 pass
15
16
17 class LspGotoCommand(LspTextCommand):
18
19 def __init__(self, view: sublime.View) -> None:
20 super().__init__(view)
21 self.goto_kind = "definition"
22
23 def is_enabled(self, event: 'Optional[dict]' = None) -> bool:
24 if self.has_client_with_capability(self.goto_kind + "Provider"):
25 return is_at_word(self.view, event)
26 return False
27
28 def run(self, edit: sublime.Edit, event: 'Optional[dict]' = None) -> None:
29 client = self.client_with_capability(self.goto_kind + "Provider")
30 if client:
31 pos = get_position(self.view, event)
32 document_position = get_document_position(self.view, pos)
33 if document_position:
34 request_type = getattr(Request, self.goto_kind)
35 if not request_type:
36 debug("unrecognized goto kind:", self.goto_kind)
37 return
38 request = request_type(document_position)
39 client.send_request(
40 request, lambda response: self.handle_response(response, pos))
41
42 def handle_response(self, response: 'Optional[Any]', position: int) -> None:
43 window = sublime.active_window()
44 if response:
45 # Save to jump back history.
46 get_jump_history_for_view(self.view).push_selection(self.view)
47 # TODO: DocumentLink support.
48 location = response if isinstance(response, dict) else response[0]
49 file_path = uri_to_filename(location.get("uri"))
50 start = Point.from_lsp(location['range']['start'])
51 file_location = "{}:{}:{}".format(file_path, start.row + 1, start.col + 1)
52 debug("opening location", location)
53 window.open_file(file_location, sublime.ENCODED_POSITION)
54 # TODO: can add region here.
55 else:
56 sublime.status_message("Empty response from language server, "
57 "reverting to Sublime's built-in Goto Definition")
58 window.run_command("goto_definition")
59
60 def want_event(self) -> bool:
61 return True
62
63
64 class LspSymbolDefinitionCommand(LspGotoCommand):
65
66 def __init__(self, view: sublime.View) -> None:
67 super().__init__(view)
68 self.goto_kind = "definition"
69
70
71 class LspSymbolTypeDefinitionCommand(LspGotoCommand):
72
73 def __init__(self, view: sublime.View) -> None:
74 super().__init__(view)
75 self.goto_kind = "typeDefinition"
76
77
78 class LspSymbolDeclarationCommand(LspGotoCommand):
79
80 def __init__(self, view: sublime.View) -> None:
81 super().__init__(view)
82 self.goto_kind = "declaration"
83
84
85 class LspSymbolImplementationCommand(LspGotoCommand):
86
87 def __init__(self, view: sublime.View) -> None:
88 super().__init__(view)
89 self.goto_kind = "implementation"
90
[end of plugin/goto.py]
[start of plugin/core/sessions.py]
1 from .types import ClientConfig, ClientStates, Settings
2 from .protocol import Request
3 from .transports import start_tcp_transport, start_tcp_listener, TCPTransport, Transport
4 from .rpc import Client, attach_stdio_client
5 from .process import start_server
6 from .url import filename_to_uri
7 from .logging import debug
8 import os
9 from .protocol import completion_item_kinds, symbol_kinds
10 try:
11 from typing import Callable, Dict, Any, Optional
12 assert Callable and Dict and Any and Optional and Transport
13 except ImportError:
14 pass
15
16
17 def create_session(config: ClientConfig,
18 project_path: str,
19 env: dict,
20 settings: Settings,
21 on_pre_initialize: 'Optional[Callable[[Session], None]]' = None,
22 on_post_initialize: 'Optional[Callable[[Session], None]]' = None,
23 on_post_exit: 'Optional[Callable[[str], None]]' = None,
24 bootstrap_client: 'Optional[Any]' = None) -> 'Optional[Session]':
25
26 def with_client(client: Client) -> 'Session':
27 return Session(
28 config=config,
29 project_path=project_path,
30 client=client,
31 on_pre_initialize=on_pre_initialize,
32 on_post_initialize=on_post_initialize,
33 on_post_exit=on_post_exit)
34
35 session = None
36 if config.binary_args:
37 tcp_port = config.tcp_port
38 server_args = config.binary_args
39
40 if config.tcp_mode == "host":
41 socket = start_tcp_listener(tcp_port or 0)
42 tcp_port = socket.getsockname()[1]
43 server_args = list(s.replace("{port}", str(tcp_port)) for s in config.binary_args)
44
45 process = start_server(server_args, project_path, env, settings.log_stderr)
46 if process:
47 if config.tcp_mode == "host":
48 client_socket, address = socket.accept()
49 transport = TCPTransport(client_socket) # type: Transport
50 session = with_client(Client(transport, settings))
51 elif tcp_port:
52 transport = start_tcp_transport(tcp_port, config.tcp_host)
53 if transport:
54 session = with_client(Client(transport, settings))
55 else:
56 # try to terminate the process
57 try:
58 process.terminate()
59 except Exception:
60 pass
61 else:
62 session = with_client(attach_stdio_client(process, settings))
63 else:
64 if config.tcp_port:
65 transport = start_tcp_transport(config.tcp_port)
66 session = with_client(Client(transport, settings))
67 elif bootstrap_client:
68 session = with_client(bootstrap_client)
69 else:
70 debug("No way to start session")
71 return session
72
73
74 def get_initialize_params(project_path: str, config: ClientConfig) -> dict:
75 initializeParams = {
76 "processId": os.getpid(),
77 "rootUri": filename_to_uri(project_path),
78 "rootPath": project_path,
79 "capabilities": {
80 "textDocument": {
81 "synchronization": {
82 "didSave": True,
83 "willSaveWaitUntil": True
84 },
85 "hover": {
86 "contentFormat": ["markdown", "plaintext"]
87 },
88 "completion": {
89 "completionItem": {
90 "snippetSupport": True
91 },
92 "completionItemKind": {
93 "valueSet": completion_item_kinds
94 }
95 },
96 "signatureHelp": {
97 "signatureInformation": {
98 "documentationFormat": ["markdown", "plaintext"],
99 "parameterInformation": {
100 "labelOffsetSupport": True
101 }
102 }
103 },
104 "references": {},
105 "documentHighlight": {},
106 "documentSymbol": {
107 "symbolKind": {
108 "valueSet": symbol_kinds
109 }
110 },
111 "formatting": {},
112 "rangeFormatting": {},
113 "declaration": {},
114 "definition": {},
115 "typeDefinition": {},
116 "implementation": {},
117 "codeAction": {
118 "codeActionLiteralSupport": {
119 "codeActionKind": {
120 "valueSet": []
121 }
122 }
123 },
124 "rename": {},
125 "colorProvider": {},
126 "publishDiagnostics": {
127 "relatedInformation": True
128 }
129 },
130 "workspace": {
131 "applyEdit": True,
132 "didChangeConfiguration": {},
133 "executeCommand": {},
134 "symbol": {
135 "symbolKind": {
136 "valueSet": symbol_kinds
137 }
138 }
139 }
140 }
141 }
142 if config.init_options:
143 initializeParams['initializationOptions'] = config.init_options
144
145 return initializeParams
146
147
148 class Session(object):
149 def __init__(self,
150 config: ClientConfig,
151 project_path: str,
152 client: Client,
153 on_pre_initialize: 'Optional[Callable[[Session], None]]' = None,
154 on_post_initialize: 'Optional[Callable[[Session], None]]' = None,
155 on_post_exit: 'Optional[Callable[[str], None]]' = None) -> None:
156 self.config = config
157 self.state = ClientStates.STARTING
158 self._on_post_initialize = on_post_initialize
159 self._on_post_exit = on_post_exit
160 self.capabilities = dict() # type: Dict[str, Any]
161 self.client = client
162 if on_pre_initialize:
163 on_pre_initialize(self)
164 self._initialize(project_path)
165
166 def has_capability(self, capability: str) -> bool:
167 return capability in self.capabilities and self.capabilities[capability] is not False
168
169 def get_capability(self, capability: str) -> 'Optional[Any]':
170 return self.capabilities.get(capability)
171
172 def _initialize(self, project_path: str) -> None:
173 params = get_initialize_params(project_path, self.config)
174 self.client.send_request(
175 Request.initialize(params),
176 lambda result: self._handle_initialize_result(result))
177
178 def _handle_initialize_result(self, result: 'Any') -> None:
179 self.state = ClientStates.READY
180 self.capabilities = result.get('capabilities', dict())
181 if self._on_post_initialize:
182 self._on_post_initialize(self)
183
184 def end(self) -> None:
185 self.state = ClientStates.STOPPING
186 self.client.send_request(
187 Request.shutdown(),
188 lambda result: self._handle_shutdown_result(),
189 lambda error: self._handle_shutdown_result())
190
191 def _handle_shutdown_result(self) -> None:
192 self.client.exit()
193 self.client = None # type: ignore
194 self.capabilities = dict()
195 if self._on_post_exit:
196 self._on_post_exit(self.config.name)
197
[end of plugin/core/sessions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/core/sessions.py b/plugin/core/sessions.py
--- a/plugin/core/sessions.py
+++ b/plugin/core/sessions.py
@@ -110,10 +110,10 @@
},
"formatting": {},
"rangeFormatting": {},
- "declaration": {},
- "definition": {},
- "typeDefinition": {},
- "implementation": {},
+ "declaration": {"linkSupport": True},
+ "definition": {"linkSupport": True},
+ "typeDefinition": {"linkSupport": True},
+ "implementation": {"linkSupport": True},
"codeAction": {
"codeActionLiteralSupport": {
"codeActionKind": {
diff --git a/plugin/goto.py b/plugin/goto.py
--- a/plugin/goto.py
+++ b/plugin/goto.py
@@ -46,8 +46,13 @@
get_jump_history_for_view(self.view).push_selection(self.view)
# TODO: DocumentLink support.
location = response if isinstance(response, dict) else response[0]
- file_path = uri_to_filename(location.get("uri"))
- start = Point.from_lsp(location['range']['start'])
+ if "targetUri" in location:
+ # TODO: Do something clever with originSelectionRange and targetRange.
+ file_path = uri_to_filename(location["targetUri"])
+ start = Point.from_lsp(location["targetSelectionRange"]["start"])
+ else:
+ file_path = uri_to_filename(location["uri"])
+ start = Point.from_lsp(location["range"]["start"])
file_location = "{}:{}:{}".format(file_path, start.row + 1, start.col + 1)
debug("opening location", location)
window.open_file(file_location, sublime.ENCODED_POSITION)
| {"golden_diff": "diff --git a/plugin/core/sessions.py b/plugin/core/sessions.py\n--- a/plugin/core/sessions.py\n+++ b/plugin/core/sessions.py\n@@ -110,10 +110,10 @@\n },\n \"formatting\": {},\n \"rangeFormatting\": {},\n- \"declaration\": {},\n- \"definition\": {},\n- \"typeDefinition\": {},\n- \"implementation\": {},\n+ \"declaration\": {\"linkSupport\": True},\n+ \"definition\": {\"linkSupport\": True},\n+ \"typeDefinition\": {\"linkSupport\": True},\n+ \"implementation\": {\"linkSupport\": True},\n \"codeAction\": {\n \"codeActionLiteralSupport\": {\n \"codeActionKind\": {\ndiff --git a/plugin/goto.py b/plugin/goto.py\n--- a/plugin/goto.py\n+++ b/plugin/goto.py\n@@ -46,8 +46,13 @@\n get_jump_history_for_view(self.view).push_selection(self.view)\n # TODO: DocumentLink support.\n location = response if isinstance(response, dict) else response[0]\n- file_path = uri_to_filename(location.get(\"uri\"))\n- start = Point.from_lsp(location['range']['start'])\n+ if \"targetUri\" in location:\n+ # TODO: Do something clever with originSelectionRange and targetRange.\n+ file_path = uri_to_filename(location[\"targetUri\"])\n+ start = Point.from_lsp(location[\"targetSelectionRange\"][\"start\"])\n+ else:\n+ file_path = uri_to_filename(location[\"uri\"])\n+ start = Point.from_lsp(location[\"range\"][\"start\"])\n file_location = \"{}:{}:{}\".format(file_path, start.row + 1, start.col + 1)\n debug(\"opening location\", location)\n window.open_file(file_location, sublime.ENCODED_POSITION)\n", "issue": "'lsp_symbol_definition' via keybinding using `ccls' as client.\nWhile I was trying to bind the 'lsp_symbol_definition'-command to my custom keys, I had so suffer from a `... type str does not support Buffer API\u00b4.\r\nOn revisiting the relevant file (\"plugin/goto.py\") I found the TODO comment which mentions the future implementation of DocumentLink-support.\r\n\r\nBy dumping the response-payload of the 'lsp_symbol_definition'-command I found out that the received payload has different keys as beeing parsed by the 'handle_response()' function.\r\nSee here:\r\n```\r\n[{targetUri: file:///usr/include/bash/builtins/bashgetopt.h, targetRange: {end: {line: 38, character: 30}, start: {line:\r\n38, character: 0}}, targetSelectionRange: {end: {line: 38, character: 26}, start: {line: 38, character: 11}}}]\r\n```\r\n\r\n### Solution\r\n\r\nSo I added the following snippet to provide the right parsing method in case of this setup/scenario:\r\n\r\n```python\r\n if 'targetUri' in location:\r\n file_path = uri_to_filename(location['targetUri'])\r\n start = Point.from_lsp(location['targetRange']['start'])\r\n else:\r\n file_path = uri_to_filename(location[\"uri\"])\r\n start = Point.from_lsp(location['range']['start'])\r\n```\r\n\r\nBy adding this the desired function of my keybinding worked.\r\n\r\nI just wanted to share this in the hope it might be useful.\r\nBest regards !\r\n\n", "before_files": [{"content": "import sublime\n\nfrom .core.registry import LspTextCommand\nfrom .core.protocol import Request, Point\nfrom .core.documents import get_document_position, get_position, is_at_word\nfrom .core.url import uri_to_filename\nfrom .core.logging import debug\nfrom Default.history_list import get_jump_history_for_view\n\ntry:\n from typing import List, Dict, Optional, Any\n assert List and Dict and Optional and Any\nexcept ImportError:\n pass\n\n\nclass LspGotoCommand(LspTextCommand):\n\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self.goto_kind = \"definition\"\n\n def 
is_enabled(self, event: 'Optional[dict]' = None) -> bool:\n if self.has_client_with_capability(self.goto_kind + \"Provider\"):\n return is_at_word(self.view, event)\n return False\n\n def run(self, edit: sublime.Edit, event: 'Optional[dict]' = None) -> None:\n client = self.client_with_capability(self.goto_kind + \"Provider\")\n if client:\n pos = get_position(self.view, event)\n document_position = get_document_position(self.view, pos)\n if document_position:\n request_type = getattr(Request, self.goto_kind)\n if not request_type:\n debug(\"unrecognized goto kind:\", self.goto_kind)\n return\n request = request_type(document_position)\n client.send_request(\n request, lambda response: self.handle_response(response, pos))\n\n def handle_response(self, response: 'Optional[Any]', position: int) -> None:\n window = sublime.active_window()\n if response:\n # Save to jump back history.\n get_jump_history_for_view(self.view).push_selection(self.view)\n # TODO: DocumentLink support.\n location = response if isinstance(response, dict) else response[0]\n file_path = uri_to_filename(location.get(\"uri\"))\n start = Point.from_lsp(location['range']['start'])\n file_location = \"{}:{}:{}\".format(file_path, start.row + 1, start.col + 1)\n debug(\"opening location\", location)\n window.open_file(file_location, sublime.ENCODED_POSITION)\n # TODO: can add region here.\n else:\n sublime.status_message(\"Empty response from language server, \"\n \"reverting to Sublime's built-in Goto Definition\")\n window.run_command(\"goto_definition\")\n\n def want_event(self) -> bool:\n return True\n\n\nclass LspSymbolDefinitionCommand(LspGotoCommand):\n\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self.goto_kind = \"definition\"\n\n\nclass LspSymbolTypeDefinitionCommand(LspGotoCommand):\n\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self.goto_kind = \"typeDefinition\"\n\n\nclass LspSymbolDeclarationCommand(LspGotoCommand):\n\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self.goto_kind = \"declaration\"\n\n\nclass LspSymbolImplementationCommand(LspGotoCommand):\n\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self.goto_kind = \"implementation\"\n", "path": "plugin/goto.py"}, {"content": "from .types import ClientConfig, ClientStates, Settings\nfrom .protocol import Request\nfrom .transports import start_tcp_transport, start_tcp_listener, TCPTransport, Transport\nfrom .rpc import Client, attach_stdio_client\nfrom .process import start_server\nfrom .url import filename_to_uri\nfrom .logging import debug\nimport os\nfrom .protocol import completion_item_kinds, symbol_kinds\ntry:\n from typing import Callable, Dict, Any, Optional\n assert Callable and Dict and Any and Optional and Transport\nexcept ImportError:\n pass\n\n\ndef create_session(config: ClientConfig,\n project_path: str,\n env: dict,\n settings: Settings,\n on_pre_initialize: 'Optional[Callable[[Session], None]]' = None,\n on_post_initialize: 'Optional[Callable[[Session], None]]' = None,\n on_post_exit: 'Optional[Callable[[str], None]]' = None,\n bootstrap_client: 'Optional[Any]' = None) -> 'Optional[Session]':\n\n def with_client(client: Client) -> 'Session':\n return Session(\n config=config,\n project_path=project_path,\n client=client,\n on_pre_initialize=on_pre_initialize,\n on_post_initialize=on_post_initialize,\n on_post_exit=on_post_exit)\n\n session = None\n if config.binary_args:\n tcp_port = config.tcp_port\n server_args = 
config.binary_args\n\n if config.tcp_mode == \"host\":\n socket = start_tcp_listener(tcp_port or 0)\n tcp_port = socket.getsockname()[1]\n server_args = list(s.replace(\"{port}\", str(tcp_port)) for s in config.binary_args)\n\n process = start_server(server_args, project_path, env, settings.log_stderr)\n if process:\n if config.tcp_mode == \"host\":\n client_socket, address = socket.accept()\n transport = TCPTransport(client_socket) # type: Transport\n session = with_client(Client(transport, settings))\n elif tcp_port:\n transport = start_tcp_transport(tcp_port, config.tcp_host)\n if transport:\n session = with_client(Client(transport, settings))\n else:\n # try to terminate the process\n try:\n process.terminate()\n except Exception:\n pass\n else:\n session = with_client(attach_stdio_client(process, settings))\n else:\n if config.tcp_port:\n transport = start_tcp_transport(config.tcp_port)\n session = with_client(Client(transport, settings))\n elif bootstrap_client:\n session = with_client(bootstrap_client)\n else:\n debug(\"No way to start session\")\n return session\n\n\ndef get_initialize_params(project_path: str, config: ClientConfig) -> dict:\n initializeParams = {\n \"processId\": os.getpid(),\n \"rootUri\": filename_to_uri(project_path),\n \"rootPath\": project_path,\n \"capabilities\": {\n \"textDocument\": {\n \"synchronization\": {\n \"didSave\": True,\n \"willSaveWaitUntil\": True\n },\n \"hover\": {\n \"contentFormat\": [\"markdown\", \"plaintext\"]\n },\n \"completion\": {\n \"completionItem\": {\n \"snippetSupport\": True\n },\n \"completionItemKind\": {\n \"valueSet\": completion_item_kinds\n }\n },\n \"signatureHelp\": {\n \"signatureInformation\": {\n \"documentationFormat\": [\"markdown\", \"plaintext\"],\n \"parameterInformation\": {\n \"labelOffsetSupport\": True\n }\n }\n },\n \"references\": {},\n \"documentHighlight\": {},\n \"documentSymbol\": {\n \"symbolKind\": {\n \"valueSet\": symbol_kinds\n }\n },\n \"formatting\": {},\n \"rangeFormatting\": {},\n \"declaration\": {},\n \"definition\": {},\n \"typeDefinition\": {},\n \"implementation\": {},\n \"codeAction\": {\n \"codeActionLiteralSupport\": {\n \"codeActionKind\": {\n \"valueSet\": []\n }\n }\n },\n \"rename\": {},\n \"colorProvider\": {},\n \"publishDiagnostics\": {\n \"relatedInformation\": True\n }\n },\n \"workspace\": {\n \"applyEdit\": True,\n \"didChangeConfiguration\": {},\n \"executeCommand\": {},\n \"symbol\": {\n \"symbolKind\": {\n \"valueSet\": symbol_kinds\n }\n }\n }\n }\n }\n if config.init_options:\n initializeParams['initializationOptions'] = config.init_options\n\n return initializeParams\n\n\nclass Session(object):\n def __init__(self,\n config: ClientConfig,\n project_path: str,\n client: Client,\n on_pre_initialize: 'Optional[Callable[[Session], None]]' = None,\n on_post_initialize: 'Optional[Callable[[Session], None]]' = None,\n on_post_exit: 'Optional[Callable[[str], None]]' = None) -> None:\n self.config = config\n self.state = ClientStates.STARTING\n self._on_post_initialize = on_post_initialize\n self._on_post_exit = on_post_exit\n self.capabilities = dict() # type: Dict[str, Any]\n self.client = client\n if on_pre_initialize:\n on_pre_initialize(self)\n self._initialize(project_path)\n\n def has_capability(self, capability: str) -> bool:\n return capability in self.capabilities and self.capabilities[capability] is not False\n\n def get_capability(self, capability: str) -> 'Optional[Any]':\n return self.capabilities.get(capability)\n\n def _initialize(self, project_path: str) -> 
None:\n params = get_initialize_params(project_path, self.config)\n self.client.send_request(\n Request.initialize(params),\n lambda result: self._handle_initialize_result(result))\n\n def _handle_initialize_result(self, result: 'Any') -> None:\n self.state = ClientStates.READY\n self.capabilities = result.get('capabilities', dict())\n if self._on_post_initialize:\n self._on_post_initialize(self)\n\n def end(self) -> None:\n self.state = ClientStates.STOPPING\n self.client.send_request(\n Request.shutdown(),\n lambda result: self._handle_shutdown_result(),\n lambda error: self._handle_shutdown_result())\n\n def _handle_shutdown_result(self) -> None:\n self.client.exit()\n self.client = None # type: ignore\n self.capabilities = dict()\n if self._on_post_exit:\n self._on_post_exit(self.config.name)\n", "path": "plugin/core/sessions.py"}]} | 3,617 | 384 |
gh_patches_debug_19041 | rasdani/github-patches | git_diff | spacetelescope__jwql-530 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a 'conda env export' to logging module
We could add the output of `conda env export` to the beginning of log files so that the exact environment being used to run a script is logged.
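A minimal sketch of one way to do this (a non-authoritative illustration: it assumes `conda` is available on the `PATH` and that the module's `logging` configuration is already set up; the helper name `log_conda_environment` is only illustrative):

```python
import logging
import subprocess

def log_conda_environment():
    """Capture `conda env export` and write it to the active log, line by line."""
    environment = subprocess.check_output(['conda', 'env', 'export'],
                                          universal_newlines=True)
    logging.info('Environment:')
    for line in environment.split('\n'):
        logging.info(line)
```

Logging the export line by line keeps each entry consistent with the existing `%(asctime)s %(levelname)s: %(message)s` log format.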
</issue>
<code>
[start of jwql/utils/logging_functions.py]
1 """ Logging functions for the ``jwql`` automation platform.
2
3 This module provides decorators to log the execution of modules. Log
4 files are written to the ``logs/`` directory in the ``jwql`` central
5 storage area, named by module name and timestamp, e.g.
6 ``monitor_filesystem/monitor_filesystem_2018-06-20-15:22:51.log``
7
8
9 Authors
10 -------
11
12 - Catherine Martlin
13 - Alex Viana (WFC3 QL Version)
14 - Matthew Bourque
15
16 Use
17 ---
18
19 To log the execution of a module, use:
20 ::
21
22 import os
23 import logging
24
25 from jwql.logging.logging_functions import configure_logging
26 from jwql.logging.logging_functions import log_info
27 from jwql.logging.logging_functions import log_fail
28
29 @log_info
30 @log_fail
31 def my_main_function():
32 pass
33
34 if __name__ == '__main__':
35
36 module = os.path.basename(__file__).replace('.py', '')
37 configure_logging(module)
38
39 my_main_function()
40
41 Dependencies
42 ------------
43
44 The user must have a configuration file named ``config.json``
45 placed in the ``utils`` directory and it must contain keys for
46 ``log_dir`` and ``admin_account``.
47
48 References
49 ----------
50 This code is adopted and updated from python routine
51 ``logging_functions.py`` written by Alex Viana, 2013 for the WFC3
52 Quicklook automation platform.
53 """
54
55 import datetime
56 import getpass
57 import importlib
58 import logging
59 import os
60 import pwd
61 import socket
62 import sys
63 import time
64 import traceback
65
66 from functools import wraps
67
68 from jwql.utils.permissions import set_permissions
69 from jwql.utils.utils import get_config, ensure_dir_exists
70
71
72 def configure_logging(module):
73 """Configure the log file with a standard logging format.
74
75 Parameters
76 ----------
77 module : str
78 The name of the module being logged.
79 production_mode : bool
80 Whether or not the output should be written to the production
81         environment.
82 path : str
83 Where to write the log if user-supplied path; default to working dir.
84
85 Returns
86 -------
87 log_file : str
88 The path to the file where the log is written to.
89 """
90
91 # Determine log file location
92 log_file = make_log_file(module)
93
94     # Make sure no other root handlers exist before configuring the logger
95 for handler in logging.root.handlers[:]:
96 logging.root.removeHandler(handler)
97
98 # Create the log file and set the permissions
99 logging.basicConfig(filename=log_file,
100 format='%(asctime)s %(levelname)s: %(message)s',
101 datefmt='%m/%d/%Y %H:%M:%S %p',
102 level=logging.INFO)
103 print('Log file initialized to {}'.format(log_file))
104 set_permissions(log_file)
105
106 return log_file
107
108
109 def get_log_status(log_file):
110 """Returns the end status of the given ``log_file`` (i.e.
111 ``SUCCESS`` or ``FAILURE``)
112
113 Parameters
114 ----------
115 log_file : str
116 The path to the file where the log is written to
117
118 Returns
119 -------
120 status : bool
121 The status of the execution of the script described by the log
122 file (i.e. ``SUCCESS`` or ``FAILURE``)
123 """
124
125 with open(log_file, 'r') as f:
126 data = f.readlines()
127 last_line = data[-1].strip()
128
129 if 'Completed Successfully' in last_line:
130 return 'SUCCESS'
131 else:
132 return 'FAILURE'
133
134
135 def make_log_file(module):
136 """Create the log file name based on the module name.
137
138 The name of the ``log_file`` is a combination of the name of the
139 module being logged and the current datetime.
140
141 Parameters
142 ----------
143 module : str
144 The name of the module being logged.
145 production_mode : bool
146 Whether or not the output should be written to the production
147 environment.
148 path : str
149 Where to write the log if user-supplied path; default to
150 working dir.
151
152 Returns
153 -------
154 log_file : str
155 The full path to where the log file will be written to.
156 """
157
158 # Build filename
159 timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M')
160 filename = '{0}_{1}.log'.format(module, timestamp)
161
162 # Determine save location
163 user = pwd.getpwuid(os.getuid()).pw_name
164 admin_account = get_config()['admin_account']
165 log_path = get_config()['log_dir']
166
167 # For production
168 if user == admin_account and socket.gethostname()[0] == 'p':
169 log_file = os.path.join(log_path, 'prod', module, filename)
170
171 # For test
172 elif user == admin_account and socket.gethostname()[0] == 't':
173 log_file = os.path.join(log_path, 'test', module, filename)
174
175 # For dev
176 elif user == admin_account and socket.gethostname()[0] == 'd':
177 log_file = os.path.join(log_path, 'dev', module, filename)
178
179 # For local (also write to dev)
180 else:
181 log_file = os.path.join(log_path, 'dev', module, filename)
182
183 # Make sure parent directory exists
184 ensure_dir_exists(os.path.dirname(log_file))
185
186 return log_file
187
188
189 def log_info(func):
190 """Decorator to log useful system information.
191
192 This function can be used as a decorator to log user environment
193 and system information. Future packages we want to track can be
194 added or removed as necessary.
195
196 Parameters
197 ----------
198 func : func
199 The function to decorate.
200
201 Returns
202 -------
203 wrapped : func
204 The wrapped function.
205 """
206
207 @wraps(func)
208 def wrapped(*args, **kwargs):
209
210 # Log environment information
211 logging.info('User: ' + getpass.getuser())
212 logging.info('System: ' + socket.gethostname())
213 logging.info('Python Version: ' + sys.version.replace('\n', ''))
214 logging.info('Python Executable Path: ' + sys.executable)
215
216 # Read in setup.py file to build list of required modules
217 with open(get_config()['setup_file']) as f:
218 data = f.readlines()
219
220 for i, line in enumerate(data):
221 if 'REQUIRES = [' in line:
222 begin = i + 1
223 elif 'setup(' in line:
224 end = i - 2
225 required_modules = data[begin:end]
226
227 # Clean up the module list
228 module_list = [item.strip().replace("'", "").replace(",", "").split("=")[0].split(">")[0].split("<")[0] for item in required_modules]
229
230 # Log common module version information
231 for module in module_list:
232 try:
233 mod = importlib.import_module(module)
234 logging.info(module + ' Version: ' + mod.__version__)
235 logging.info(module + ' Path: ' + mod.__path__[0])
236 except (ImportError, AttributeError) as err:
237 logging.warning(err)
238
239 logging.info('')
240
241 # Call the function and time it
242 t1_cpu = time.clock()
243 t1_time = time.time()
244 func(*args, **kwargs)
245 t2_cpu = time.clock()
246 t2_time = time.time()
247
248 # Log execution time
249 hours_cpu, remainder_cpu = divmod(t2_cpu - t1_cpu, 60 * 60)
250 minutes_cpu, seconds_cpu = divmod(remainder_cpu, 60)
251 hours_time, remainder_time = divmod(t2_time - t1_time, 60 * 60)
252 minutes_time, seconds_time = divmod(remainder_time, 60)
253 logging.info('Elapsed Real Time: {}:{}:{}'.format(int(hours_time), int(minutes_time), int(seconds_time)))
254 logging.info('Elapsed CPU Time: {}:{}:{}'.format(int(hours_cpu), int(minutes_cpu), int(seconds_cpu)))
255
256 return wrapped
257
258
259 def log_fail(func):
260 """Decorator to log crashes in the decorated code.
261
262 Parameters
263 ----------
264 func : func
265 The function to decorate.
266
267 Returns
268 -------
269 wrapped : func
270 The wrapped function.
271 """
272
273 @wraps(func)
274 def wrapped(*args, **kwargs):
275
276 try:
277
278 # Run the function
279 func(*args, **kwargs)
280 logging.info('Completed Successfully')
281
282 except Exception:
283 logging.critical(traceback.format_exc())
284 logging.critical('CRASHED')
285
286 return wrapped
287
288
289 def log_timing(func):
290 """Decorator to time a module or function within a code.
291
292 Parameters
293 ----------
294 func : func
295 The function to time.
296
297 Returns
298 -------
299 wrapped : func
300 The wrapped function. Will log the time."""
301
302 def wrapped(*args, **kwargs):
303
304 # Call the function and time it
305 t1_cpu = time.process_time()
306 t1_time = time.time()
307 func(*args, **kwargs)
308 t2_cpu = time.process_time()
309 t2_time = time.time()
310
311 # Log execution time
312 hours_cpu, remainder_cpu = divmod(t2_cpu - t1_cpu, 60 * 60)
313 minutes_cpu, seconds_cpu = divmod(remainder_cpu, 60)
314 hours_time, remainder_time = divmod(t2_time - t1_time, 60 * 60)
315 minutes_time, seconds_time = divmod(remainder_time, 60)
316 logging.info('Elapsed Real Time of {}: {}:{}:{}'.format(func.__name__, int(hours_time), int(minutes_time), int(seconds_time)))
317 logging.info('Elapsed CPU Time of {}: {}:{}:{}'.format(func.__name__, int(hours_cpu), int(minutes_cpu), int(seconds_cpu)))
318
319 return wrapped
320
[end of jwql/utils/logging_functions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jwql/utils/logging_functions.py b/jwql/utils/logging_functions.py
--- a/jwql/utils/logging_functions.py
+++ b/jwql/utils/logging_functions.py
@@ -10,8 +10,9 @@
-------
- Catherine Martlin
- - Alex Viana (WFC3 QL Version)
+ - Alex Viana (wfc3ql Version)
- Matthew Bourque
+ - Jason Neal
Use
---
@@ -59,6 +60,7 @@
import os
import pwd
import socket
+import subprocess
import sys
import time
import traceback
@@ -236,7 +238,10 @@
except (ImportError, AttributeError) as err:
logging.warning(err)
- logging.info('')
+ environment = subprocess.check_output(['conda', 'env', 'export'], universal_newlines=True)
+ logging.info('Environment:')
+ for line in environment.split('\n'):
+ logging.info(line)
# Call the function and time it
t1_cpu = time.clock()
| {"golden_diff": "diff --git a/jwql/utils/logging_functions.py b/jwql/utils/logging_functions.py\n--- a/jwql/utils/logging_functions.py\n+++ b/jwql/utils/logging_functions.py\n@@ -10,8 +10,9 @@\n -------\n \n - Catherine Martlin\n- - Alex Viana (WFC3 QL Version)\n+ - Alex Viana (wfc3ql Version)\n - Matthew Bourque\n+ - Jason Neal\n \n Use\n ---\n@@ -59,6 +60,7 @@\n import os\n import pwd\n import socket\n+import subprocess\n import sys\n import time\n import traceback\n@@ -236,7 +238,10 @@\n except (ImportError, AttributeError) as err:\n logging.warning(err)\n \n- logging.info('')\n+ environment = subprocess.check_output(['conda', 'env', 'export'], universal_newlines=True)\n+ logging.info('Environment:')\n+ for line in environment.split('\\n'):\n+ logging.info(line)\n \n # Call the function and time it\n t1_cpu = time.clock()\n", "issue": "Add a 'conda env export' to logging module\nWe could add the output of `conda env export` to the beginning of log files so that the exact environment being used to run a script is logged. \n", "before_files": [{"content": "\"\"\" Logging functions for the ``jwql`` automation platform.\n\nThis module provides decorators to log the execution of modules. Log\nfiles are written to the ``logs/`` directory in the ``jwql`` central\nstorage area, named by module name and timestamp, e.g.\n``monitor_filesystem/monitor_filesystem_2018-06-20-15:22:51.log``\n\n\nAuthors\n-------\n\n - Catherine Martlin\n - Alex Viana (WFC3 QL Version)\n - Matthew Bourque\n\nUse\n---\n\n To log the execution of a module, use:\n ::\n\n import os\n import logging\n\n from jwql.logging.logging_functions import configure_logging\n from jwql.logging.logging_functions import log_info\n from jwql.logging.logging_functions import log_fail\n\n @log_info\n @log_fail\n def my_main_function():\n pass\n\n if __name__ == '__main__':\n\n module = os.path.basename(__file__).replace('.py', '')\n configure_logging(module)\n\n my_main_function()\n\nDependencies\n------------\n\n The user must have a configuration file named ``config.json``\n placed in the ``utils`` directory and it must contain keys for\n ``log_dir`` and ``admin_account``.\n\nReferences\n----------\n This code is adopted and updated from python routine\n ``logging_functions.py`` written by Alex Viana, 2013 for the WFC3\n Quicklook automation platform.\n\"\"\"\n\nimport datetime\nimport getpass\nimport importlib\nimport logging\nimport os\nimport pwd\nimport socket\nimport sys\nimport time\nimport traceback\n\nfrom functools import wraps\n\nfrom jwql.utils.permissions import set_permissions\nfrom jwql.utils.utils import get_config, ensure_dir_exists\n\n\ndef configure_logging(module):\n \"\"\"Configure the log file with a standard logging format.\n\n Parameters\n ----------\n module : str\n The name of the module being logged.\n production_mode : bool\n Whether or not the output should be written to the production\n environement.\n path : str\n Where to write the log if user-supplied path; default to working dir.\n\n Returns\n -------\n log_file : str\n The path to the file where the log is written to.\n \"\"\"\n\n # Determine log file location\n log_file = make_log_file(module)\n\n # Make sure no other root lhandlers exist before configuring the logger\n for handler in logging.root.handlers[:]:\n logging.root.removeHandler(handler)\n\n # Create the log file and set the permissions\n logging.basicConfig(filename=log_file,\n format='%(asctime)s %(levelname)s: %(message)s',\n datefmt='%m/%d/%Y %H:%M:%S %p',\n level=logging.INFO)\n 
print('Log file initialized to {}'.format(log_file))\n set_permissions(log_file)\n\n return log_file\n\n\ndef get_log_status(log_file):\n \"\"\"Returns the end status of the given ``log_file`` (i.e.\n ``SUCCESS`` or ``FAILURE``)\n\n Parameters\n ----------\n log_file : str\n The path to the file where the log is written to\n\n Returns\n -------\n status : bool\n The status of the execution of the script described by the log\n file (i.e. ``SUCCESS`` or ``FAILURE``)\n \"\"\"\n\n with open(log_file, 'r') as f:\n data = f.readlines()\n last_line = data[-1].strip()\n\n if 'Completed Successfully' in last_line:\n return 'SUCCESS'\n else:\n return 'FAILURE'\n\n\ndef make_log_file(module):\n \"\"\"Create the log file name based on the module name.\n\n The name of the ``log_file`` is a combination of the name of the\n module being logged and the current datetime.\n\n Parameters\n ----------\n module : str\n The name of the module being logged.\n production_mode : bool\n Whether or not the output should be written to the production\n environment.\n path : str\n Where to write the log if user-supplied path; default to\n working dir.\n\n Returns\n -------\n log_file : str\n The full path to where the log file will be written to.\n \"\"\"\n\n # Build filename\n timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M')\n filename = '{0}_{1}.log'.format(module, timestamp)\n\n # Determine save location\n user = pwd.getpwuid(os.getuid()).pw_name\n admin_account = get_config()['admin_account']\n log_path = get_config()['log_dir']\n\n # For production\n if user == admin_account and socket.gethostname()[0] == 'p':\n log_file = os.path.join(log_path, 'prod', module, filename)\n\n # For test\n elif user == admin_account and socket.gethostname()[0] == 't':\n log_file = os.path.join(log_path, 'test', module, filename)\n\n # For dev\n elif user == admin_account and socket.gethostname()[0] == 'd':\n log_file = os.path.join(log_path, 'dev', module, filename)\n\n # For local (also write to dev)\n else:\n log_file = os.path.join(log_path, 'dev', module, filename)\n\n # Make sure parent directory exists\n ensure_dir_exists(os.path.dirname(log_file))\n\n return log_file\n\n\ndef log_info(func):\n \"\"\"Decorator to log useful system information.\n\n This function can be used as a decorator to log user environment\n and system information. 
Future packages we want to track can be\n added or removed as necessary.\n\n Parameters\n ----------\n func : func\n The function to decorate.\n\n Returns\n -------\n wrapped : func\n The wrapped function.\n \"\"\"\n\n @wraps(func)\n def wrapped(*args, **kwargs):\n\n # Log environment information\n logging.info('User: ' + getpass.getuser())\n logging.info('System: ' + socket.gethostname())\n logging.info('Python Version: ' + sys.version.replace('\\n', ''))\n logging.info('Python Executable Path: ' + sys.executable)\n\n # Read in setup.py file to build list of required modules\n with open(get_config()['setup_file']) as f:\n data = f.readlines()\n\n for i, line in enumerate(data):\n if 'REQUIRES = [' in line:\n begin = i + 1\n elif 'setup(' in line:\n end = i - 2\n required_modules = data[begin:end]\n\n # Clean up the module list\n module_list = [item.strip().replace(\"'\", \"\").replace(\",\", \"\").split(\"=\")[0].split(\">\")[0].split(\"<\")[0] for item in required_modules]\n\n # Log common module version information\n for module in module_list:\n try:\n mod = importlib.import_module(module)\n logging.info(module + ' Version: ' + mod.__version__)\n logging.info(module + ' Path: ' + mod.__path__[0])\n except (ImportError, AttributeError) as err:\n logging.warning(err)\n\n logging.info('')\n\n # Call the function and time it\n t1_cpu = time.clock()\n t1_time = time.time()\n func(*args, **kwargs)\n t2_cpu = time.clock()\n t2_time = time.time()\n\n # Log execution time\n hours_cpu, remainder_cpu = divmod(t2_cpu - t1_cpu, 60 * 60)\n minutes_cpu, seconds_cpu = divmod(remainder_cpu, 60)\n hours_time, remainder_time = divmod(t2_time - t1_time, 60 * 60)\n minutes_time, seconds_time = divmod(remainder_time, 60)\n logging.info('Elapsed Real Time: {}:{}:{}'.format(int(hours_time), int(minutes_time), int(seconds_time)))\n logging.info('Elapsed CPU Time: {}:{}:{}'.format(int(hours_cpu), int(minutes_cpu), int(seconds_cpu)))\n\n return wrapped\n\n\ndef log_fail(func):\n \"\"\"Decorator to log crashes in the decorated code.\n\n Parameters\n ----------\n func : func\n The function to decorate.\n\n Returns\n -------\n wrapped : func\n The wrapped function.\n \"\"\"\n\n @wraps(func)\n def wrapped(*args, **kwargs):\n\n try:\n\n # Run the function\n func(*args, **kwargs)\n logging.info('Completed Successfully')\n\n except Exception:\n logging.critical(traceback.format_exc())\n logging.critical('CRASHED')\n\n return wrapped\n\n\ndef log_timing(func):\n \"\"\"Decorator to time a module or function within a code.\n\n Parameters\n ----------\n func : func\n The function to time.\n\n Returns\n -------\n wrapped : func\n The wrapped function. Will log the time.\"\"\"\n\n def wrapped(*args, **kwargs):\n\n # Call the function and time it\n t1_cpu = time.process_time()\n t1_time = time.time()\n func(*args, **kwargs)\n t2_cpu = time.process_time()\n t2_time = time.time()\n\n # Log execution time\n hours_cpu, remainder_cpu = divmod(t2_cpu - t1_cpu, 60 * 60)\n minutes_cpu, seconds_cpu = divmod(remainder_cpu, 60)\n hours_time, remainder_time = divmod(t2_time - t1_time, 60 * 60)\n minutes_time, seconds_time = divmod(remainder_time, 60)\n logging.info('Elapsed Real Time of {}: {}:{}:{}'.format(func.__name__, int(hours_time), int(minutes_time), int(seconds_time)))\n logging.info('Elapsed CPU Time of {}: {}:{}:{}'.format(func.__name__, int(hours_cpu), int(minutes_cpu), int(seconds_cpu)))\n\n return wrapped\n", "path": "jwql/utils/logging_functions.py"}]} | 3,594 | 235 |
gh_patches_debug_140 | rasdani/github-patches | git_diff | d2l-ai__d2l-en-2078 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[MXNet] matplotlib >=3.5 raises TypeError with ax.plot_wireframe in MXNet ndarray
With the latest version of matplotlib, multiple notebooks fail with a type error in mxnet (mxnet==1.7.0 & CUDA 10.2). Some of the affected sections include [optimization intro](https://d2l.ai/chapter_optimization/optimization-intro.html), [integral calculus](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/integral-calculus.html), [multivariable calculus](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/multivariable-calculus.html) etc.
```
TypeError: no implementation found for 'numpy.column_stack' on types that implement __array_function__: [<class 'mxnet.numpy.ndarray'>, <class 'numpy.ndarray'>]
```
Please see attached traceback and reproduction instructions below.
Steps to reproduce the issue.
1. Setup the d2l environment (using `static/build.yml`)
2. While setting up the environment, it will automatically install the latest version of matplotlib (i.e. `matplotlib==3.5.1` as of today).
3. Run one of the notebooks which is affected (mentioned above).
<details>
<summary>Click to expand: Error Traceback</summary>
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [7], in <module>
9 # Plot function
10 ax = d2l.plt.figure().add_subplot(111, projection='3d')
---> 11 ax.plot_wireframe(x, y, z, **{'rstride': 10, 'cstride': 10})
12 ax.plot_wireframe(x, y, w, **{'rstride': 10, 'cstride': 10}, color='purple')
13 d2l.plt.xlabel('x')
File ~/miniconda3/envs/mpl_d2l/lib/python3.8/site-packages/matplotlib/_api/deprecation.py:412, in delete_parameter.<locals>.wrapper(*inner_args, **inner_kwargs)
402 deprecation_addendum = (
403 f"If any parameter follows {name!r}, they should be passed as "
404 f"keyword, not positionally.")
405 warn_deprecated(
406 since,
407 name=repr(name),
(...)
410 else deprecation_addendum,
411 **kwargs)
--> 412 return func(*inner_args, **inner_kwargs)
File ~/miniconda3/envs/mpl_d2l/lib/python3.8/site-packages/mpl_toolkits/mplot3d/axes3d.py:1908, in Axes3D.plot_wireframe(self, X, Y, Z, *args, **kwargs)
1906 linec = art3d.Line3DCollection(lines, *args, **kwargs)
1907 self.add_collection(linec)
-> 1908 self.auto_scale_xyz(X, Y, Z, had_data)
1910 return linec
File ~/miniconda3/envs/mpl_d2l/lib/python3.8/site-packages/mpl_toolkits/mplot3d/axes3d.py:658, in Axes3D.auto_scale_xyz(self, X, Y, Z, had_data)
656 self.xy_dataLim.update_from_data_y(Y, not had_data)
657 if Z is not None:
--> 658 self.zz_dataLim.update_from_data_x(Z, not had_data)
659 # Let autoscale_view figure out how to use this data.
660 self.autoscale_view()
File ~/miniconda3/envs/mpl_d2l/lib/python3.8/site-packages/matplotlib/transforms.py:922, in Bbox.update_from_data_x(self, x, ignore)
906 """
907 Update the x-bounds of the `Bbox` based on the passed in data. After
908 updating, the bounds will have positive *width*, and *x0* will be the
(...)
919 - When ``None``, use the last value passed to :meth:`ignore`.
920 """
921 x = np.ravel(x)
--> 922 self.update_from_data_xy(np.column_stack([x, np.ones(x.size)]),
923 ignore=ignore, updatey=False)
File <__array_function__ internals>:180, in column_stack(*args, **kwargs)
TypeError: no implementation found for 'numpy.column_stack' on types that implement __array_function__: [<class 'mxnet.numpy.ndarray'>, <class 'numpy.ndarray'>]
```
</details>
This is another issue validating the need of #2044.
A simple solution for now is to pin the matplotlib version to 3.4. I'll send a PR for this.
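An alternative, per-notebook workaround (not the pin proposed above, just a sketch based on the traceback earlier in this issue) is to hand matplotlib plain NumPy data so the `__array_function__` dispatch on mxnet ndarrays is never triggered; the surface below is a placeholder, not the book's actual example:

```python
from mxnet import np as mnp
from d2l import mxnet as d2l

x, y = mnp.meshgrid(mnp.linspace(-1, 1, 101), mnp.linspace(-1, 1, 101))
z = x ** 2 + y ** 2  # placeholder surface

ax = d2l.plt.figure().add_subplot(111, projection='3d')
# Convert to plain NumPy before plotting so matplotlib never sees mxnet ndarrays.
ax.plot_wireframe(x.asnumpy(), y.asnumpy(), z.asnumpy(), rstride=10, cstride=10)
d2l.plt.show()
```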
cc @astonzhang
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 import d2l
3
4 requirements = [
5 'jupyter',
6 'numpy',
7 'matplotlib==3.4',
8 'requests',
9 'pandas',
10 'gym'
11 ]
12
13 setup(
14 name='d2l',
15 version=d2l.__version__,
16 python_requires='>=3.5',
17 author='D2L Developers',
18 author_email='[email protected]',
19 url='https://d2l.ai',
20 description='Dive into Deep Learning',
21 license='MIT-0',
22 packages=find_packages(),
23 zip_safe=True,
24 install_requires=requirements,
25 )
26
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,7 +4,7 @@
requirements = [
'jupyter',
'numpy',
- 'matplotlib==3.4',
+ 'matplotlib',
'requests',
'pandas',
'gym'
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,7 +4,7 @@\n requirements = [\n 'jupyter',\n 'numpy',\n- 'matplotlib==3.4',\n+ 'matplotlib',\n 'requests',\n 'pandas',\n 'gym'\n", "issue": "[MXNet] matplotlib >=3.5 raises TypeError with ax.plot_wireframe in MXNet ndarray\nWith the latest version of matplotlib, multiple notebooks fail with a type error in mxnet (mxnet==1.7.0 & CUDA 10.2). Some of the affected sections include [optimization intro](https://d2l.ai/chapter_optimization/optimization-intro.html), [integral calculus](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/integral-calculus.html), [multivariable calculus](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/multivariable-calculus.html) etc.\r\n\r\n```\r\nTypeError: no implementation found for 'numpy.column_stack' on types that implement __array_function__: [<class 'mxnet.numpy.ndarray'>, <class 'numpy.ndarray'>]\r\n```\r\n\r\nPlease see attached traceback and reproduction instructions below.\r\n\r\nSteps to reproduce the issue.\r\n\r\n1. Setup the d2l environment (using `static/build.yml`)\r\n2. While setting up the environment, it will automatically install the latest version of matplotlib (i.e. `matplotlib==3.5.1` as of today). \r\n\r\nRun one of the notebooks which is affected (mentioned above) \r\n\r\n<details>\r\n <summary>Click to expand: Error Traceback</summary>\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nInput In [7], in <module>\r\n 9 # Plot function\r\n 10 ax = d2l.plt.figure().add_subplot(111, projection='3d')\r\n---> 11 ax.plot_wireframe(x, y, z, **{'rstride': 10, 'cstride': 10})\r\n 12 ax.plot_wireframe(x, y, w, **{'rstride': 10, 'cstride': 10}, color='purple')\r\n 13 d2l.plt.xlabel('x')\r\n\r\nFile ~/miniconda3/envs/mpl_d2l/lib/python3.8/site-packages/matplotlib/_api/deprecation.py:412, in delete_parameter.<locals>.wrapper(*inner_args, **inner_kwargs)\r\n 402 deprecation_addendum = (\r\n 403 f\"If any parameter follows {name!r}, they should be passed as \"\r\n 404 f\"keyword, not positionally.\")\r\n 405 warn_deprecated(\r\n 406 since,\r\n 407 name=repr(name),\r\n (...)\r\n 410 else deprecation_addendum,\r\n 411 **kwargs)\r\n--> 412 return func(*inner_args, **inner_kwargs)\r\n\r\nFile ~/miniconda3/envs/mpl_d2l/lib/python3.8/site-packages/mpl_toolkits/mplot3d/axes3d.py:1908, in Axes3D.plot_wireframe(self, X, Y, Z, *args, **kwargs)\r\n 1906 linec = art3d.Line3DCollection(lines, *args, **kwargs)\r\n 1907 self.add_collection(linec)\r\n-> 1908 self.auto_scale_xyz(X, Y, Z, had_data)\r\n 1910 return linec\r\n\r\nFile ~/miniconda3/envs/mpl_d2l/lib/python3.8/site-packages/mpl_toolkits/mplot3d/axes3d.py:658, in Axes3D.auto_scale_xyz(self, X, Y, Z, had_data)\r\n 656 self.xy_dataLim.update_from_data_y(Y, not had_data)\r\n 657 if Z is not None:\r\n--> 658 self.zz_dataLim.update_from_data_x(Z, not had_data)\r\n 659 # Let autoscale_view figure out how to use this data.\r\n 660 self.autoscale_view()\r\n\r\nFile ~/miniconda3/envs/mpl_d2l/lib/python3.8/site-packages/matplotlib/transforms.py:922, in Bbox.update_from_data_x(self, x, ignore)\r\n 906 \"\"\"\r\n 907 Update the x-bounds of the `Bbox` based on the passed in data. 
After\r\n 908 updating, the bounds will have positive *width*, and *x0* will be the\r\n (...)\r\n 919 - When ``None``, use the last value passed to :meth:`ignore`.\r\n 920 \"\"\"\r\n 921 x = np.ravel(x)\r\n--> 922 self.update_from_data_xy(np.column_stack([x, np.ones(x.size)]),\r\n 923 ignore=ignore, updatey=False)\r\n\r\nFile <__array_function__ internals>:180, in column_stack(*args, **kwargs)\r\n\r\nTypeError: no implementation found for 'numpy.column_stack' on types that implement __array_function__: [<class 'mxnet.numpy.ndarray'>, <class 'numpy.ndarray'>]\r\n```\r\n\r\n</details>\r\n\r\nThis is another issue validating the need of #2044.\r\n\r\nA simple solution for now is to pin the matplotlib version to 1.4. I'll send a PR for this.\r\n\r\ncc @astonzhang \n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport d2l\n\nrequirements = [\n 'jupyter',\n 'numpy',\n 'matplotlib==3.4',\n 'requests',\n 'pandas',\n 'gym'\n]\n\nsetup(\n name='d2l',\n version=d2l.__version__,\n python_requires='>=3.5',\n author='D2L Developers',\n author_email='[email protected]',\n url='https://d2l.ai',\n description='Dive into Deep Learning',\n license='MIT-0',\n packages=find_packages(),\n zip_safe=True,\n install_requires=requirements,\n)\n", "path": "setup.py"}]} | 1,831 | 70 |
gh_patches_debug_23097 | rasdani/github-patches | git_diff | CTFd__CTFd-428 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Configuration can be auto-filled potentially causing issues
</issue>
<code>
[start of CTFd/admin/__init__.py]
1 import hashlib
2 import json
3 import os
4 import datetime
5
6 from flask import current_app as app, render_template, request, redirect, jsonify, url_for, Blueprint, \
7 abort, render_template_string, send_file
8 from passlib.hash import bcrypt_sha256
9 from sqlalchemy.sql import not_
10 from sqlalchemy.exc import IntegrityError
11
12 from CTFd.utils import admins_only, is_admin, cache, export_ctf, import_ctf
13 from CTFd.models import db, Teams, Solves, Awards, Challenges, WrongKeys, Keys, Tags, Files, Tracking, Pages, Config, DatabaseError
14 from CTFd.plugins.keys import get_key_class, KEY_CLASSES
15
16 from CTFd.admin.statistics import admin_statistics
17 from CTFd.admin.challenges import admin_challenges
18 from CTFd.admin.scoreboard import admin_scoreboard
19 from CTFd.admin.pages import admin_pages
20 from CTFd.admin.keys import admin_keys
21 from CTFd.admin.teams import admin_teams
22
23 from CTFd import utils
24
25
26 admin = Blueprint('admin', __name__)
27
28
29 @admin.route('/admin', methods=['GET'])
30 def admin_view():
31 if is_admin():
32 return redirect(url_for('admin_statistics.admin_graphs'))
33
34 return redirect(url_for('auth.login'))
35
36
37 @admin.route('/admin/plugins/<plugin>', methods=['GET', 'POST'])
38 @admins_only
39 def admin_plugin_config(plugin):
40 if request.method == 'GET':
41 plugins_path = os.path.join(app.root_path, 'plugins')
42
43 config_html_plugins = [name for name in os.listdir(plugins_path)
44 if os.path.isfile(os.path.join(plugins_path, name, 'config.html'))]
45
46 if plugin in config_html_plugins:
47 config = open(os.path.join(app.root_path, 'plugins', plugin, 'config.html')).read()
48 return render_template_string(config)
49 abort(404)
50 elif request.method == 'POST':
51 for k, v in request.form.items():
52 if k == "nonce":
53 continue
54 utils.set_config(k, v)
55 with app.app_context():
56 cache.clear()
57 return '1'
58
59
60 @admin.route('/admin/import', methods=['GET', 'POST'])
61 @admins_only
62 def admin_import_ctf():
63 backup = request.files['backup']
64 segments = request.form.get('segments')
65 errors = []
66 try:
67 if segments:
68 import_ctf(backup, segments=segments.split(','))
69 else:
70 import_ctf(backup)
71 except Exception as e:
72 print(e)
73 errors.append(type(e).__name__)
74
75 if errors:
76 return errors[0], 500
77 else:
78 return redirect(url_for('admin.admin_config'))
79
80
81 @admin.route('/admin/export', methods=['GET', 'POST'])
82 @admins_only
83 def admin_export_ctf():
84 segments = request.args.get('segments')
85 if segments:
86 backup = export_ctf(segments.split(','))
87 else:
88 backup = export_ctf()
89 ctf_name = utils.ctf_name()
90 day = datetime.datetime.now().strftime("%Y-%m-%d")
91 full_name = "{}.{}.zip".format(ctf_name, day)
92 return send_file(backup, as_attachment=True, attachment_filename=full_name)
93
94
95 @admin.route('/admin/config', methods=['GET', 'POST'])
96 @admins_only
97 def admin_config():
98 if request.method == "POST":
99 start = None
100 end = None
101 freeze = None
102 if request.form.get('start'):
103 start = int(request.form['start'])
104 if request.form.get('end'):
105 end = int(request.form['end'])
106 if request.form.get('freeze'):
107 freeze = int(request.form['freeze'])
108
109 try:
110 view_challenges_unregistered = bool(request.form.get('view_challenges_unregistered', None))
111 view_scoreboard_if_authed = bool(request.form.get('view_scoreboard_if_authed', None))
112 hide_scores = bool(request.form.get('hide_scores', None))
113 prevent_registration = bool(request.form.get('prevent_registration', None))
114 prevent_name_change = bool(request.form.get('prevent_name_change', None))
115 view_after_ctf = bool(request.form.get('view_after_ctf', None))
116 verify_emails = bool(request.form.get('verify_emails', None))
117 mail_tls = bool(request.form.get('mail_tls', None))
118 mail_ssl = bool(request.form.get('mail_ssl', None))
119 mail_useauth = bool(request.form.get('mail_useauth', None))
120 except (ValueError, TypeError):
121 view_challenges_unregistered = None
122 view_scoreboard_if_authed = None
123 hide_scores = None
124 prevent_registration = None
125 prevent_name_change = None
126 view_after_ctf = None
127 verify_emails = None
128 mail_tls = None
129 mail_ssl = None
130 mail_useauth = None
131 finally:
132 view_challenges_unregistered = utils.set_config('view_challenges_unregistered', view_challenges_unregistered)
133 view_scoreboard_if_authed = utils.set_config('view_scoreboard_if_authed', view_scoreboard_if_authed)
134 hide_scores = utils.set_config('hide_scores', hide_scores)
135 prevent_registration = utils.set_config('prevent_registration', prevent_registration)
136 prevent_name_change = utils.set_config('prevent_name_change', prevent_name_change)
137 view_after_ctf = utils.set_config('view_after_ctf', view_after_ctf)
138 verify_emails = utils.set_config('verify_emails', verify_emails)
139 mail_tls = utils.set_config('mail_tls', mail_tls)
140 mail_ssl = utils.set_config('mail_ssl', mail_ssl)
141 mail_useauth = utils.set_config('mail_useauth', mail_useauth)
142
143 mail_server = utils.set_config("mail_server", request.form.get('mail_server', None))
144 mail_port = utils.set_config("mail_port", request.form.get('mail_port', None))
145
146 mail_username = utils.set_config("mail_username", request.form.get('mail_username', None))
147 mail_password = utils.set_config("mail_password", request.form.get('mail_password', None))
148
149 ctf_name = utils.set_config("ctf_name", request.form.get('ctf_name', None))
150 ctf_theme = utils.set_config("ctf_theme", request.form.get('ctf_theme', None))
151
152 mailfrom_addr = utils.set_config("mailfrom_addr", request.form.get('mailfrom_addr', None))
153 mg_base_url = utils.set_config("mg_base_url", request.form.get('mg_base_url', None))
154 mg_api_key = utils.set_config("mg_api_key", request.form.get('mg_api_key', None))
155
156 db_freeze = utils.set_config("freeze", freeze)
157
158 db_start = Config.query.filter_by(key='start').first()
159 db_start.value = start
160
161 db_end = Config.query.filter_by(key='end').first()
162 db_end.value = end
163
164 db.session.add(db_start)
165 db.session.add(db_end)
166
167 db.session.commit()
168 db.session.close()
169 with app.app_context():
170 cache.clear()
171 return redirect(url_for('admin.admin_config'))
172
173 with app.app_context():
174 cache.clear()
175 ctf_name = utils.get_config('ctf_name')
176 ctf_theme = utils.get_config('ctf_theme')
177 hide_scores = utils.get_config('hide_scores')
178
179 mail_server = utils.get_config('mail_server')
180 mail_port = utils.get_config('mail_port')
181 mail_username = utils.get_config('mail_username')
182 mail_password = utils.get_config('mail_password')
183
184 mailfrom_addr = utils.get_config('mailfrom_addr')
185 mg_api_key = utils.get_config('mg_api_key')
186 mg_base_url = utils.get_config('mg_base_url')
187
188 view_after_ctf = utils.get_config('view_after_ctf')
189 start = utils.get_config('start')
190 end = utils.get_config('end')
191 freeze = utils.get_config('freeze')
192
193 mail_tls = utils.get_config('mail_tls')
194 mail_ssl = utils.get_config('mail_ssl')
195 mail_useauth = utils.get_config('mail_useauth')
196
197 view_challenges_unregistered = utils.get_config('view_challenges_unregistered')
198 view_scoreboard_if_authed = utils.get_config('view_scoreboard_if_authed')
199 prevent_registration = utils.get_config('prevent_registration')
200 prevent_name_change = utils.get_config('prevent_name_change')
201 verify_emails = utils.get_config('verify_emails')
202
203 db.session.commit()
204 db.session.close()
205
206 themes = utils.get_themes()
207 themes.remove(ctf_theme)
208
209 return render_template('admin/config.html',
210 ctf_name=ctf_name,
211 ctf_theme_config=ctf_theme,
212 start=start,
213 end=end,
214 freeze=freeze,
215 hide_scores=hide_scores,
216 mail_server=mail_server,
217 mail_port=mail_port,
218 mail_useauth=mail_useauth,
219 mail_username=mail_username,
220 mail_password=mail_password,
221 mail_tls=mail_tls,
222 mail_ssl=mail_ssl,
223 view_challenges_unregistered=view_challenges_unregistered,
224 view_scoreboard_if_authed=view_scoreboard_if_authed,
225 prevent_registration=prevent_registration,
226 mailfrom_addr=mailfrom_addr,
227 mg_base_url=mg_base_url,
228 mg_api_key=mg_api_key,
229 prevent_name_change=prevent_name_change,
230 verify_emails=verify_emails,
231 view_after_ctf=view_after_ctf,
232 themes=themes)
233
[end of CTFd/admin/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/admin/__init__.py b/CTFd/admin/__init__.py
--- a/CTFd/admin/__init__.py
+++ b/CTFd/admin/__init__.py
@@ -143,8 +143,15 @@
mail_server = utils.set_config("mail_server", request.form.get('mail_server', None))
mail_port = utils.set_config("mail_port", request.form.get('mail_port', None))
- mail_username = utils.set_config("mail_username", request.form.get('mail_username', None))
- mail_password = utils.set_config("mail_password", request.form.get('mail_password', None))
+ if request.form.get('mail_useauth', None) and (request.form.get('mail_u', None) or request.form.get('mail_p', None)):
+ if len(request.form.get('mail_u')) > 0:
+ mail_username = utils.set_config("mail_username", request.form.get('mail_u', None))
+ if len(request.form.get('mail_p')) > 0:
+ mail_password = utils.set_config("mail_password", request.form.get('mail_p', None))
+
+ elif request.form.get('mail_useauth', None) is None:
+ utils.set_config("mail_username", None)
+ utils.set_config("mail_password", None)
ctf_name = utils.set_config("ctf_name", request.form.get('ctf_name', None))
ctf_theme = utils.set_config("ctf_theme", request.form.get('ctf_theme', None))
| {"golden_diff": "diff --git a/CTFd/admin/__init__.py b/CTFd/admin/__init__.py\n--- a/CTFd/admin/__init__.py\n+++ b/CTFd/admin/__init__.py\n@@ -143,8 +143,15 @@\n mail_server = utils.set_config(\"mail_server\", request.form.get('mail_server', None))\n mail_port = utils.set_config(\"mail_port\", request.form.get('mail_port', None))\n \n- mail_username = utils.set_config(\"mail_username\", request.form.get('mail_username', None))\n- mail_password = utils.set_config(\"mail_password\", request.form.get('mail_password', None))\n+ if request.form.get('mail_useauth', None) and (request.form.get('mail_u', None) or request.form.get('mail_p', None)):\n+ if len(request.form.get('mail_u')) > 0:\n+ mail_username = utils.set_config(\"mail_username\", request.form.get('mail_u', None))\n+ if len(request.form.get('mail_p')) > 0:\n+ mail_password = utils.set_config(\"mail_password\", request.form.get('mail_p', None))\n+\n+ elif request.form.get('mail_useauth', None) is None:\n+ utils.set_config(\"mail_username\", None)\n+ utils.set_config(\"mail_password\", None)\n \n ctf_name = utils.set_config(\"ctf_name\", request.form.get('ctf_name', None))\n ctf_theme = utils.set_config(\"ctf_theme\", request.form.get('ctf_theme', None))\n", "issue": "Configuration can be auto-filled potentially causing issues\n\n", "before_files": [{"content": "import hashlib\nimport json\nimport os\nimport datetime\n\nfrom flask import current_app as app, render_template, request, redirect, jsonify, url_for, Blueprint, \\\n abort, render_template_string, send_file\nfrom passlib.hash import bcrypt_sha256\nfrom sqlalchemy.sql import not_\nfrom sqlalchemy.exc import IntegrityError\n\nfrom CTFd.utils import admins_only, is_admin, cache, export_ctf, import_ctf\nfrom CTFd.models import db, Teams, Solves, Awards, Challenges, WrongKeys, Keys, Tags, Files, Tracking, Pages, Config, DatabaseError\nfrom CTFd.plugins.keys import get_key_class, KEY_CLASSES\n\nfrom CTFd.admin.statistics import admin_statistics\nfrom CTFd.admin.challenges import admin_challenges\nfrom CTFd.admin.scoreboard import admin_scoreboard\nfrom CTFd.admin.pages import admin_pages\nfrom CTFd.admin.keys import admin_keys\nfrom CTFd.admin.teams import admin_teams\n\nfrom CTFd import utils\n\n\nadmin = Blueprint('admin', __name__)\n\n\[email protected]('/admin', methods=['GET'])\ndef admin_view():\n if is_admin():\n return redirect(url_for('admin_statistics.admin_graphs'))\n\n return redirect(url_for('auth.login'))\n\n\[email protected]('/admin/plugins/<plugin>', methods=['GET', 'POST'])\n@admins_only\ndef admin_plugin_config(plugin):\n if request.method == 'GET':\n plugins_path = os.path.join(app.root_path, 'plugins')\n\n config_html_plugins = [name for name in os.listdir(plugins_path)\n if os.path.isfile(os.path.join(plugins_path, name, 'config.html'))]\n\n if plugin in config_html_plugins:\n config = open(os.path.join(app.root_path, 'plugins', plugin, 'config.html')).read()\n return render_template_string(config)\n abort(404)\n elif request.method == 'POST':\n for k, v in request.form.items():\n if k == \"nonce\":\n continue\n utils.set_config(k, v)\n with app.app_context():\n cache.clear()\n return '1'\n\n\[email protected]('/admin/import', methods=['GET', 'POST'])\n@admins_only\ndef admin_import_ctf():\n backup = request.files['backup']\n segments = request.form.get('segments')\n errors = []\n try:\n if segments:\n import_ctf(backup, segments=segments.split(','))\n else:\n import_ctf(backup)\n except Exception as e:\n print(e)\n errors.append(type(e).__name__)\n\n if 
errors:\n return errors[0], 500\n else:\n return redirect(url_for('admin.admin_config'))\n\n\[email protected]('/admin/export', methods=['GET', 'POST'])\n@admins_only\ndef admin_export_ctf():\n segments = request.args.get('segments')\n if segments:\n backup = export_ctf(segments.split(','))\n else:\n backup = export_ctf()\n ctf_name = utils.ctf_name()\n day = datetime.datetime.now().strftime(\"%Y-%m-%d\")\n full_name = \"{}.{}.zip\".format(ctf_name, day)\n return send_file(backup, as_attachment=True, attachment_filename=full_name)\n\n\[email protected]('/admin/config', methods=['GET', 'POST'])\n@admins_only\ndef admin_config():\n if request.method == \"POST\":\n start = None\n end = None\n freeze = None\n if request.form.get('start'):\n start = int(request.form['start'])\n if request.form.get('end'):\n end = int(request.form['end'])\n if request.form.get('freeze'):\n freeze = int(request.form['freeze'])\n\n try:\n view_challenges_unregistered = bool(request.form.get('view_challenges_unregistered', None))\n view_scoreboard_if_authed = bool(request.form.get('view_scoreboard_if_authed', None))\n hide_scores = bool(request.form.get('hide_scores', None))\n prevent_registration = bool(request.form.get('prevent_registration', None))\n prevent_name_change = bool(request.form.get('prevent_name_change', None))\n view_after_ctf = bool(request.form.get('view_after_ctf', None))\n verify_emails = bool(request.form.get('verify_emails', None))\n mail_tls = bool(request.form.get('mail_tls', None))\n mail_ssl = bool(request.form.get('mail_ssl', None))\n mail_useauth = bool(request.form.get('mail_useauth', None))\n except (ValueError, TypeError):\n view_challenges_unregistered = None\n view_scoreboard_if_authed = None\n hide_scores = None\n prevent_registration = None\n prevent_name_change = None\n view_after_ctf = None\n verify_emails = None\n mail_tls = None\n mail_ssl = None\n mail_useauth = None\n finally:\n view_challenges_unregistered = utils.set_config('view_challenges_unregistered', view_challenges_unregistered)\n view_scoreboard_if_authed = utils.set_config('view_scoreboard_if_authed', view_scoreboard_if_authed)\n hide_scores = utils.set_config('hide_scores', hide_scores)\n prevent_registration = utils.set_config('prevent_registration', prevent_registration)\n prevent_name_change = utils.set_config('prevent_name_change', prevent_name_change)\n view_after_ctf = utils.set_config('view_after_ctf', view_after_ctf)\n verify_emails = utils.set_config('verify_emails', verify_emails)\n mail_tls = utils.set_config('mail_tls', mail_tls)\n mail_ssl = utils.set_config('mail_ssl', mail_ssl)\n mail_useauth = utils.set_config('mail_useauth', mail_useauth)\n\n mail_server = utils.set_config(\"mail_server\", request.form.get('mail_server', None))\n mail_port = utils.set_config(\"mail_port\", request.form.get('mail_port', None))\n\n mail_username = utils.set_config(\"mail_username\", request.form.get('mail_username', None))\n mail_password = utils.set_config(\"mail_password\", request.form.get('mail_password', None))\n\n ctf_name = utils.set_config(\"ctf_name\", request.form.get('ctf_name', None))\n ctf_theme = utils.set_config(\"ctf_theme\", request.form.get('ctf_theme', None))\n\n mailfrom_addr = utils.set_config(\"mailfrom_addr\", request.form.get('mailfrom_addr', None))\n mg_base_url = utils.set_config(\"mg_base_url\", request.form.get('mg_base_url', None))\n mg_api_key = utils.set_config(\"mg_api_key\", request.form.get('mg_api_key', None))\n\n db_freeze = utils.set_config(\"freeze\", freeze)\n\n db_start = 
Config.query.filter_by(key='start').first()\n db_start.value = start\n\n db_end = Config.query.filter_by(key='end').first()\n db_end.value = end\n\n db.session.add(db_start)\n db.session.add(db_end)\n\n db.session.commit()\n db.session.close()\n with app.app_context():\n cache.clear()\n return redirect(url_for('admin.admin_config'))\n\n with app.app_context():\n cache.clear()\n ctf_name = utils.get_config('ctf_name')\n ctf_theme = utils.get_config('ctf_theme')\n hide_scores = utils.get_config('hide_scores')\n\n mail_server = utils.get_config('mail_server')\n mail_port = utils.get_config('mail_port')\n mail_username = utils.get_config('mail_username')\n mail_password = utils.get_config('mail_password')\n\n mailfrom_addr = utils.get_config('mailfrom_addr')\n mg_api_key = utils.get_config('mg_api_key')\n mg_base_url = utils.get_config('mg_base_url')\n\n view_after_ctf = utils.get_config('view_after_ctf')\n start = utils.get_config('start')\n end = utils.get_config('end')\n freeze = utils.get_config('freeze')\n\n mail_tls = utils.get_config('mail_tls')\n mail_ssl = utils.get_config('mail_ssl')\n mail_useauth = utils.get_config('mail_useauth')\n\n view_challenges_unregistered = utils.get_config('view_challenges_unregistered')\n view_scoreboard_if_authed = utils.get_config('view_scoreboard_if_authed')\n prevent_registration = utils.get_config('prevent_registration')\n prevent_name_change = utils.get_config('prevent_name_change')\n verify_emails = utils.get_config('verify_emails')\n\n db.session.commit()\n db.session.close()\n\n themes = utils.get_themes()\n themes.remove(ctf_theme)\n\n return render_template('admin/config.html',\n ctf_name=ctf_name,\n ctf_theme_config=ctf_theme,\n start=start,\n end=end,\n freeze=freeze,\n hide_scores=hide_scores,\n mail_server=mail_server,\n mail_port=mail_port,\n mail_useauth=mail_useauth,\n mail_username=mail_username,\n mail_password=mail_password,\n mail_tls=mail_tls,\n mail_ssl=mail_ssl,\n view_challenges_unregistered=view_challenges_unregistered,\n view_scoreboard_if_authed=view_scoreboard_if_authed,\n prevent_registration=prevent_registration,\n mailfrom_addr=mailfrom_addr,\n mg_base_url=mg_base_url,\n mg_api_key=mg_api_key,\n prevent_name_change=prevent_name_change,\n verify_emails=verify_emails,\n view_after_ctf=view_after_ctf,\n themes=themes)\n", "path": "CTFd/admin/__init__.py"}]} | 3,159 | 334 |
gh_patches_debug_31133 | rasdani/github-patches | git_diff | Mailu__Mailu-2130 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fix KeyError WEBMAIL_ADDRESS
## What type of PR?
Bugfix: WEBMAIL_ADDRESS is not initialized in admin/mailu/configuration.py, leading to a lot of errors in the log.
## What does this PR do?
Initialize 'WEBMAIL_ADDRESS' to None in the admin configuration
### Related issue(s)
- closes #2125
## Prerequisites
None
</issue>
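For illustration, a minimal sketch of the fix the issue asks for: giving `WEBMAIL_ADDRESS` a default so that `app.config['WEBMAIL_ADDRESS']` always exists when `rate_limit_ip()` (shown in the code below) reads it. It assumes the defaults in `core/admin/mailu/configuration.py` live in a `DEFAULT_CONFIG` dict; that file is not included here, so treat this as a sketch rather than the actual patch:

```python
# Hypothetical excerpt of core/admin/mailu/configuration.py (file not shown above).
# A default of None prevents the KeyError raised in limiter.rate_limit_ip()
# when the key was never configured.
DEFAULT_CONFIG = {
    # ... existing defaults ...
    'WEBMAIL_ADDRESS': None,
}
```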
<code>
[start of core/admin/mailu/limiter.py]
1 from mailu import utils
2 from flask import current_app as app
3 import base64
4 import limits
5 import limits.storage
6 import limits.strategies
7
8 import hmac
9 import secrets
10
11 class LimitWrapper(object):
12 """ Wraps a limit by providing the storage, item and identifiers
13 """
14
15 def __init__(self, limiter, limit, *identifiers):
16 self.limiter = limiter
17 self.limit = limit
18 self.base_identifiers = identifiers
19
20 def test(self, *args):
21 return self.limiter.test(self.limit, *(self.base_identifiers + args))
22
23 def hit(self, *args):
24 return self.limiter.hit(self.limit, *(self.base_identifiers + args))
25
26 def get_window_stats(self, *args):
27 return self.limiter.get_window_stats(self.limit, *(self.base_identifiers + args))
28
29
30 class LimitWraperFactory(object):
31 """ Global limiter, to be used as a factory
32 """
33
34 def init_app(self, app):
35 self.storage = limits.storage.storage_from_string(app.config["RATELIMIT_STORAGE_URL"])
36 self.limiter = limits.strategies.MovingWindowRateLimiter(self.storage)
37
38 def get_limiter(self, limit, *args):
39 return LimitWrapper(self.limiter, limits.parse(limit), *args)
40
41 def is_subject_to_rate_limits(self, ip):
42 return False if utils.is_exempt_from_ratelimits(ip) else not (self.storage.get(f'exempt-{ip}') > 0)
43
44 def exempt_ip_from_ratelimits(self, ip):
45 self.storage.incr(f'exempt-{ip}', app.config["AUTH_RATELIMIT_EXEMPTION_LENGTH"], True)
46
47 def should_rate_limit_ip(self, ip):
48 limiter = self.get_limiter(app.config["AUTH_RATELIMIT_IP"], 'auth-ip')
49 client_network = utils.extract_network_from_ip(ip)
50 is_rate_limited = self.is_subject_to_rate_limits(ip) and not limiter.test(client_network)
51 if is_rate_limited:
52 app.logger.warn(f'Authentication attempt from {ip} has been rate-limited.')
53 return is_rate_limited
54
55 def rate_limit_ip(self, ip):
56 if ip != app.config['WEBMAIL_ADDRESS']:
57 limiter = self.get_limiter(app.config["AUTH_RATELIMIT_IP"], 'auth-ip')
58 client_network = utils.extract_network_from_ip(ip)
59 if self.is_subject_to_rate_limits(ip):
60 limiter.hit(client_network)
61
62 def should_rate_limit_user(self, username, ip, device_cookie=None, device_cookie_name=None):
63 limiter = self.get_limiter(app.config["AUTH_RATELIMIT_USER"], 'auth-user')
64 is_rate_limited = self.is_subject_to_rate_limits(ip) and not limiter.test(device_cookie if device_cookie_name == username else username)
65 if is_rate_limited:
66 app.logger.warn(f'Authentication attempt from {ip} for {username} has been rate-limited.')
67 return is_rate_limited
68
69 def rate_limit_user(self, username, ip, device_cookie=None, device_cookie_name=None):
70 limiter = self.get_limiter(app.config["AUTH_RATELIMIT_USER"], 'auth-user')
71 if self.is_subject_to_rate_limits(ip):
72 limiter.hit(device_cookie if device_cookie_name == username else username)
73
74 """ Device cookies as described on:
75 https://owasp.org/www-community/Slow_Down_Online_Guessing_Attacks_with_Device_Cookies
76 """
77 def parse_device_cookie(self, cookie):
78 try:
79 login, nonce, _ = cookie.split('$')
80 if hmac.compare_digest(cookie, self.device_cookie(login, nonce)):
81 return nonce, login
82 except:
83 pass
84 return None, None
85
86 """ Device cookies don't require strong crypto:
87 72bits of nonce, 96bits of signature is more than enough
88 and these values avoid padding in most cases
89 """
90 def device_cookie(self, username, nonce=None):
91 if not nonce:
92 nonce = secrets.token_urlsafe(9)
93 sig = str(base64.urlsafe_b64encode(hmac.new(app.device_cookie_key, bytearray(f'device_cookie|{username}|{nonce}', 'utf-8'), 'sha256').digest()[20:]), 'utf-8')
94 return f'{username}${nonce}${sig}'
95
[end of core/admin/mailu/limiter.py]
[start of core/admin/mailu/internal/views/auth.py]
1 from mailu import models, utils
2 from mailu.internal import internal, nginx
3 from flask import current_app as app
4
5 import flask
6 import flask_login
7 import base64
8
9 @internal.route("/auth/email")
10 def nginx_authentication():
11 """ Main authentication endpoint for Nginx email server
12 """
13 client_ip = flask.request.headers["Client-Ip"]
14 headers = flask.request.headers
15 if headers["Auth-Port"] == '25' and headers['Auth-Method'] == 'plain':
16 response = flask.Response()
17 response.headers['Auth-Status'] = 'AUTH not supported'
18 response.headers['Auth-Error-Code'] = '502 5.5.1'
19 utils.limiter.rate_limit_ip(client_ip)
20 return response
21 if utils.limiter.should_rate_limit_ip(client_ip):
22 status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')
23 response = flask.Response()
24 response.headers['Auth-Status'] = status
25 response.headers['Auth-Error-Code'] = code
26 if int(flask.request.headers['Auth-Login-Attempt']) < 10:
27 response.headers['Auth-Wait'] = '3'
28 return response
29 headers = nginx.handle_authentication(flask.request.headers)
30 response = flask.Response()
31 for key, value in headers.items():
32 response.headers[key] = str(value)
33 is_valid_user = False
34 if response.headers.get("Auth-User-Exists"):
35 username = response.headers["Auth-User"]
36 if utils.limiter.should_rate_limit_user(username, client_ip):
37 # FIXME could be done before handle_authentication()
38 status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')
39 response = flask.Response()
40 response.headers['Auth-Status'] = status
41 response.headers['Auth-Error-Code'] = code
42 if int(flask.request.headers['Auth-Login-Attempt']) < 10:
43 response.headers['Auth-Wait'] = '3'
44 return response
45 is_valid_user = True
46 if headers.get("Auth-Status") == "OK":
47 utils.limiter.exempt_ip_from_ratelimits(client_ip)
48 elif is_valid_user:
49 utils.limiter.rate_limit_user(username, client_ip)
50 else:
51 utils.limiter.rate_limit_ip(client_ip)
52 return response
53
54 @internal.route("/auth/admin")
55 def admin_authentication():
56 """ Fails if the user is not an authenticated admin.
57 """
58 if (not flask_login.current_user.is_anonymous
59 and flask_login.current_user.global_admin
60 and flask_login.current_user.enabled):
61 return ""
62 return flask.abort(403)
63
64 @internal.route("/auth/user")
65 def user_authentication():
66 """ Fails if the user is not authenticated.
67 """
68 if (not flask_login.current_user.is_anonymous
69 and flask_login.current_user.enabled):
70 response = flask.Response()
71 email = flask_login.current_user.get_id()
72 response.headers["X-User"] = models.IdnaEmail.process_bind_param(flask_login, email, "")
73 response.headers["X-User-Token"] = utils.gen_temp_token(email, flask.session)
74 return response
75 return flask.abort(403)
76
77
78 @internal.route("/auth/basic")
79 def basic_authentication():
80 """ Tries to authenticate using the Authorization header.
81 """
82 client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)
83 if utils.limiter.should_rate_limit_ip(client_ip):
84 response = flask.Response(status=401)
85 response.headers["WWW-Authenticate"] = 'Basic realm="Authentication rate limit from one source exceeded"'
86 response.headers['Retry-After'] = '60'
87 return response
88 authorization = flask.request.headers.get("Authorization")
89 if authorization and authorization.startswith("Basic "):
90 encoded = authorization.replace("Basic ", "")
91 user_email, password = base64.b64decode(encoded).split(b":", 1)
92 user_email = user_email.decode("utf8")
93 if utils.limiter.should_rate_limit_user(user_email, client_ip):
94 response = flask.Response(status=401)
95 response.headers["WWW-Authenticate"] = 'Basic realm="Authentication rate limit for this username exceeded"'
96 response.headers['Retry-After'] = '60'
97 return response
98 user = models.User.query.get(user_email)
99 if user and nginx.check_credentials(user, password.decode('utf-8'), client_ip, "web"):
100 response = flask.Response()
101 response.headers["X-User"] = models.IdnaEmail.process_bind_param(flask_login, user.email, "")
102 utils.limiter.exempt_ip_from_ratelimits(client_ip)
103 return response
104 utils.limiter.rate_limit_user(user_email, client_ip) if user else utils.limiter.rate_limit_ip(client_ip)
105 response = flask.Response(status=401)
106 response.headers["WWW-Authenticate"] = 'Basic realm="Login Required"'
107 return response
108
[end of core/admin/mailu/internal/views/auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/internal/views/auth.py b/core/admin/mailu/internal/views/auth.py
--- a/core/admin/mailu/internal/views/auth.py
+++ b/core/admin/mailu/internal/views/auth.py
@@ -31,6 +31,7 @@
for key, value in headers.items():
response.headers[key] = str(value)
is_valid_user = False
+ is_from_webmail = headers['Auth-Port'] in ['10143', '10025']
if response.headers.get("Auth-User-Exists"):
username = response.headers["Auth-User"]
if utils.limiter.should_rate_limit_user(username, client_ip):
@@ -47,7 +48,7 @@
utils.limiter.exempt_ip_from_ratelimits(client_ip)
elif is_valid_user:
utils.limiter.rate_limit_user(username, client_ip)
- else:
+ elif not is_from_webmail:
utils.limiter.rate_limit_ip(client_ip)
return response
diff --git a/core/admin/mailu/limiter.py b/core/admin/mailu/limiter.py
--- a/core/admin/mailu/limiter.py
+++ b/core/admin/mailu/limiter.py
@@ -53,11 +53,10 @@
return is_rate_limited
def rate_limit_ip(self, ip):
- if ip != app.config['WEBMAIL_ADDRESS']:
- limiter = self.get_limiter(app.config["AUTH_RATELIMIT_IP"], 'auth-ip')
- client_network = utils.extract_network_from_ip(ip)
- if self.is_subject_to_rate_limits(ip):
- limiter.hit(client_network)
+ limiter = self.get_limiter(app.config["AUTH_RATELIMIT_IP"], 'auth-ip')
+ client_network = utils.extract_network_from_ip(ip)
+ if self.is_subject_to_rate_limits(ip):
+ limiter.hit(client_network)
def should_rate_limit_user(self, username, ip, device_cookie=None, device_cookie_name=None):
limiter = self.get_limiter(app.config["AUTH_RATELIMIT_USER"], 'auth-user')
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/auth.py b/core/admin/mailu/internal/views/auth.py\n--- a/core/admin/mailu/internal/views/auth.py\n+++ b/core/admin/mailu/internal/views/auth.py\n@@ -31,6 +31,7 @@\n for key, value in headers.items():\n response.headers[key] = str(value)\n is_valid_user = False\n+ is_from_webmail = headers['Auth-Port'] in ['10143', '10025']\n if response.headers.get(\"Auth-User-Exists\"):\n username = response.headers[\"Auth-User\"]\n if utils.limiter.should_rate_limit_user(username, client_ip):\n@@ -47,7 +48,7 @@\n utils.limiter.exempt_ip_from_ratelimits(client_ip)\n elif is_valid_user:\n utils.limiter.rate_limit_user(username, client_ip)\n- else:\n+ elif not is_from_webmail:\n utils.limiter.rate_limit_ip(client_ip)\n return response\n \ndiff --git a/core/admin/mailu/limiter.py b/core/admin/mailu/limiter.py\n--- a/core/admin/mailu/limiter.py\n+++ b/core/admin/mailu/limiter.py\n@@ -53,11 +53,10 @@\n return is_rate_limited\n \n def rate_limit_ip(self, ip):\n- if ip != app.config['WEBMAIL_ADDRESS']:\n- limiter = self.get_limiter(app.config[\"AUTH_RATELIMIT_IP\"], 'auth-ip')\n- client_network = utils.extract_network_from_ip(ip)\n- if self.is_subject_to_rate_limits(ip):\n- limiter.hit(client_network)\n+ limiter = self.get_limiter(app.config[\"AUTH_RATELIMIT_IP\"], 'auth-ip')\n+ client_network = utils.extract_network_from_ip(ip)\n+ if self.is_subject_to_rate_limits(ip):\n+ limiter.hit(client_network)\n \n def should_rate_limit_user(self, username, ip, device_cookie=None, device_cookie_name=None):\n limiter = self.get_limiter(app.config[\"AUTH_RATELIMIT_USER\"], 'auth-user')\n", "issue": "fix keyError WEBMAIL_ADDRESS\n## What type of PR?\r\nbugfix WEBMAIL_ADDRESS not initialized in admin/mailu/configuration.py, leading to lot of errors in log.\r\n\r\n## What does this PR do?\r\nInitialize 'WEBMAIL_ADDRESS' to None in the admin configuration\r\n\r\n### Related issue(s)\r\n- closes #2125\r\n\r\n## Prerequisites\r\nNone\n", "before_files": [{"content": "from mailu import utils\nfrom flask import current_app as app\nimport base64\nimport limits\nimport limits.storage\nimport limits.strategies\n\nimport hmac\nimport secrets\n\nclass LimitWrapper(object):\n \"\"\" Wraps a limit by providing the storage, item and identifiers\n \"\"\"\n\n def __init__(self, limiter, limit, *identifiers):\n self.limiter = limiter\n self.limit = limit\n self.base_identifiers = identifiers\n\n def test(self, *args):\n return self.limiter.test(self.limit, *(self.base_identifiers + args))\n\n def hit(self, *args):\n return self.limiter.hit(self.limit, *(self.base_identifiers + args))\n\n def get_window_stats(self, *args):\n return self.limiter.get_window_stats(self.limit, *(self.base_identifiers + args))\n\n\nclass LimitWraperFactory(object):\n \"\"\" Global limiter, to be used as a factory\n \"\"\"\n\n def init_app(self, app):\n self.storage = limits.storage.storage_from_string(app.config[\"RATELIMIT_STORAGE_URL\"])\n self.limiter = limits.strategies.MovingWindowRateLimiter(self.storage)\n\n def get_limiter(self, limit, *args):\n return LimitWrapper(self.limiter, limits.parse(limit), *args)\n\n def is_subject_to_rate_limits(self, ip):\n return False if utils.is_exempt_from_ratelimits(ip) else not (self.storage.get(f'exempt-{ip}') > 0)\n\n def exempt_ip_from_ratelimits(self, ip):\n self.storage.incr(f'exempt-{ip}', app.config[\"AUTH_RATELIMIT_EXEMPTION_LENGTH\"], True)\n\n def should_rate_limit_ip(self, ip):\n limiter = self.get_limiter(app.config[\"AUTH_RATELIMIT_IP\"], 
'auth-ip')\n client_network = utils.extract_network_from_ip(ip)\n is_rate_limited = self.is_subject_to_rate_limits(ip) and not limiter.test(client_network)\n if is_rate_limited:\n app.logger.warn(f'Authentication attempt from {ip} has been rate-limited.')\n return is_rate_limited\n\n def rate_limit_ip(self, ip):\n if ip != app.config['WEBMAIL_ADDRESS']:\n limiter = self.get_limiter(app.config[\"AUTH_RATELIMIT_IP\"], 'auth-ip')\n client_network = utils.extract_network_from_ip(ip)\n if self.is_subject_to_rate_limits(ip):\n limiter.hit(client_network)\n\n def should_rate_limit_user(self, username, ip, device_cookie=None, device_cookie_name=None):\n limiter = self.get_limiter(app.config[\"AUTH_RATELIMIT_USER\"], 'auth-user')\n is_rate_limited = self.is_subject_to_rate_limits(ip) and not limiter.test(device_cookie if device_cookie_name == username else username)\n if is_rate_limited:\n app.logger.warn(f'Authentication attempt from {ip} for {username} has been rate-limited.')\n return is_rate_limited\n\n def rate_limit_user(self, username, ip, device_cookie=None, device_cookie_name=None):\n limiter = self.get_limiter(app.config[\"AUTH_RATELIMIT_USER\"], 'auth-user')\n if self.is_subject_to_rate_limits(ip):\n limiter.hit(device_cookie if device_cookie_name == username else username)\n\n \"\"\" Device cookies as described on:\n https://owasp.org/www-community/Slow_Down_Online_Guessing_Attacks_with_Device_Cookies\n \"\"\"\n def parse_device_cookie(self, cookie):\n try:\n login, nonce, _ = cookie.split('$')\n if hmac.compare_digest(cookie, self.device_cookie(login, nonce)):\n return nonce, login\n except:\n pass\n return None, None\n\n \"\"\" Device cookies don't require strong crypto:\n 72bits of nonce, 96bits of signature is more than enough\n and these values avoid padding in most cases\n \"\"\"\n def device_cookie(self, username, nonce=None):\n if not nonce:\n nonce = secrets.token_urlsafe(9)\n sig = str(base64.urlsafe_b64encode(hmac.new(app.device_cookie_key, bytearray(f'device_cookie|{username}|{nonce}', 'utf-8'), 'sha256').digest()[20:]), 'utf-8')\n return f'{username}${nonce}${sig}'\n", "path": "core/admin/mailu/limiter.py"}, {"content": "from mailu import models, utils\nfrom mailu.internal import internal, nginx\nfrom flask import current_app as app\n\nimport flask\nimport flask_login\nimport base64\n\[email protected](\"/auth/email\")\ndef nginx_authentication():\n \"\"\" Main authentication endpoint for Nginx email server\n \"\"\"\n client_ip = flask.request.headers[\"Client-Ip\"]\n headers = flask.request.headers\n if headers[\"Auth-Port\"] == '25' and headers['Auth-Method'] == 'plain':\n response = flask.Response()\n response.headers['Auth-Status'] = 'AUTH not supported'\n response.headers['Auth-Error-Code'] = '502 5.5.1'\n utils.limiter.rate_limit_ip(client_ip)\n return response\n if utils.limiter.should_rate_limit_ip(client_ip):\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n headers = nginx.handle_authentication(flask.request.headers)\n response = flask.Response()\n for key, value in headers.items():\n response.headers[key] = str(value)\n is_valid_user = False\n if response.headers.get(\"Auth-User-Exists\"):\n username = response.headers[\"Auth-User\"]\n if utils.limiter.should_rate_limit_user(username, 
client_ip):\n # FIXME could be done before handle_authentication()\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n is_valid_user = True\n if headers.get(\"Auth-Status\") == \"OK\":\n utils.limiter.exempt_ip_from_ratelimits(client_ip)\n elif is_valid_user:\n utils.limiter.rate_limit_user(username, client_ip)\n else:\n utils.limiter.rate_limit_ip(client_ip)\n return response\n\[email protected](\"/auth/admin\")\ndef admin_authentication():\n \"\"\" Fails if the user is not an authenticated admin.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.global_admin\n and flask_login.current_user.enabled):\n return \"\"\n return flask.abort(403)\n\[email protected](\"/auth/user\")\ndef user_authentication():\n \"\"\" Fails if the user is not authenticated.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.enabled):\n response = flask.Response()\n email = flask_login.current_user.get_id()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, email, \"\")\n response.headers[\"X-User-Token\"] = utils.gen_temp_token(email, flask.session)\n return response\n return flask.abort(403)\n\n\[email protected](\"/auth/basic\")\ndef basic_authentication():\n \"\"\" Tries to authenticate using the Authorization header.\n \"\"\"\n client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)\n if utils.limiter.should_rate_limit_ip(client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit from one source exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n authorization = flask.request.headers.get(\"Authorization\")\n if authorization and authorization.startswith(\"Basic \"):\n encoded = authorization.replace(\"Basic \", \"\")\n user_email, password = base64.b64decode(encoded).split(b\":\", 1)\n user_email = user_email.decode(\"utf8\")\n if utils.limiter.should_rate_limit_user(user_email, client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit for this username exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n user = models.User.query.get(user_email)\n if user and nginx.check_credentials(user, password.decode('utf-8'), client_ip, \"web\"):\n response = flask.Response()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, user.email, \"\")\n utils.limiter.exempt_ip_from_ratelimits(client_ip)\n return response\n utils.limiter.rate_limit_user(user_email, client_ip) if user else utils.limiter.rate_limit_ip(client_ip)\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Login Required\"'\n return response\n", "path": "core/admin/mailu/internal/views/auth.py"}]} | 3,011 | 448 |
gh_patches_debug_929 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-5586 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SEO et signature : <a rel="nofollow" />
In the signature, we should see whether we can easily add a `rel="nofollow"` attribute to preserve our SEO. https://github.com/zestedesavoir/zmarkdown/blob/1dded309a2670689a4a3353f9e38b80624c6df1a/packages/zmarkdown/server/handlers.js#L139
> limit links in signatures to nofollow or internal links.
Sharing a link is fine (:evil), but if A-312 replies 4 times on the same page, they send link juice 4 times to their Twitter account, 4 times to Coding Game, … which has several negative effects
Source: https://zestedesavoir.com/forums/sujet/12099/seo-et-spam/?page=1#p199005
</issue>
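For illustration, one way to apply the suggested `rel="nofollow"` is at the point where signatures are rendered, rewriting anchors after Markdown rendering rather than inside zmarkdown itself. This is a sketch that assumes the renderer emits plain `<a href=` anchors; internal links would need to be excluded separately if they should stay followable. The body of `emarkdown_inline` in `zds/utils/templatetags/emarkdown.py` (shown below) could then read:

```python
# Sketch: force rel="nofollow" on every link in a rendered signature.
rendered = emarkdown(text, inline=True)
return mark_safe(rendered.replace('<a href=', '<a rel="nofollow" href='))
```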
<code>
[start of zds/utils/templatetags/emarkdown.py]
1 import re
2 import json
3 import logging
4 from requests import post, HTTPError
5
6 from django import template
7 from django.conf import settings
8 from django.template.defaultfilters import stringfilter
9 from django.utils.safestring import mark_safe
10 from django.utils.translation import ugettext_lazy as _
11
12 logger = logging.getLogger(__name__)
13 register = template.Library()
14 """
15 Markdown related filters.
16 """
17
18 # Constants
19 MAX_ATTEMPTS = 3
20 MD_PARSING_ERROR = _('Une erreur est survenue dans la génération de texte Markdown. Veuillez rapporter le bug.')
21
22 FORMAT_ENDPOINTS = {
23 'html': '/html',
24 'texfile': '/latex-document',
25 'epub': '/epub',
26 'tex': '/latex',
27 }
28
29
30 def _render_markdown_once(md_input, *, output_format='html', **kwargs):
31 """
32 Returns None on error (error details are logged). No retry mechanism.
33 """
34 def log_args():
35 logger.error('md_input: {!r}'.format(md_input))
36 logger.error('kwargs: {!r}'.format(kwargs))
37
38 inline = kwargs.get('inline', False) is True
39
40 if settings.ZDS_APP['zmd']['disable_pings'] is True:
41 kwargs['disable_ping'] = True
42
43 endpoint = FORMAT_ENDPOINTS[output_format]
44
45 try:
46 timeout = 10
47 if output_format.startswith('tex'):
48 # latex may be really long to generate but it is also restrained by server configuration
49 timeout = 120
50 response = post('{}{}'.format(settings.ZDS_APP['zmd']['server'], endpoint), json={
51 'opts': kwargs,
52 'md': str(md_input),
53 }, timeout=timeout)
54 except HTTPError:
55 logger.exception('An HTTP error happened, markdown rendering failed')
56 log_args()
57 return '', {}, []
58
59 if response.status_code == 413:
60 return '', {}, [{'message': str(_('Texte trop volumineux.'))}]
61
62 if response.status_code != 200:
63 logger.error('The markdown server replied with status {} (expected 200)'.format(response.status_code))
64 log_args()
65 return '', {}, []
66
67 try:
68 content, metadata, messages = response.json()
69 logger.debug('Result %s, %s, %s', content, metadata, messages)
70 if messages:
71 logger.error('Markdown errors %s', json.dumps(messages))
72 content = content.strip()
73 if inline:
74 content = content.replace('</p>\n', '\n\n').replace('\n<p>', '\n')
75 return mark_safe(content), metadata, messages
76 except: # noqa
77 logger.exception('Unexpected exception raised')
78 log_args()
79 return '', {}, []
80
81
82 def render_markdown(md_input, *, on_error=None, **kwargs):
83 """Render a markdown string.
84
85 Returns a tuple ``(rendered_content, metadata)``, where
86 ``rendered_content`` is a string and ``metadata`` is a dict.
87
88 Handles errors gracefully by returning an user-friendly HTML
89 string which explains that the Markdown rendering has failed
90 (without any technical details).
91
92 """
93 content, metadata, messages = _render_markdown_once(md_input, **kwargs)
94 if messages and on_error:
95 on_error([m['message'] for m in messages])
96 if content is not None:
97 # Success!
98 return content, metadata, messages
99
100 # Oops, something went wrong
101
102 attempts = kwargs.get('attempts', 0)
103 inline = kwargs.get('inline', False) is True
104
105 if attempts < MAX_ATTEMPTS:
106 if not kwargs:
107 kwargs = dict()
108 return render_markdown(md_input, **dict(kwargs, attempts=attempts + 1))
109
110 logger.error('Max attempt count reached, giving up')
111 logger.error('md_input: {!r}'.format(md_input))
112 logger.error('kwargs: {!r}'.format(kwargs))
113
114 # FIXME: This cannot work with LaTeX.
115 if inline:
116 return mark_safe('<p>{}</p>'.format(json.dumps(messages))), metadata, []
117 else:
118 return mark_safe('<div class="error ico-after"><p>{}</p></div>'.format(json.dumps(messages))), metadata, []
119
120
121 @register.filter(name='epub_markdown', needs_autoescape=False)
122 def epub_markdown(md_input, image_directory):
123 return emarkdown(md_input, output_format='epub', images_download_dir=image_directory.absolute,
124 local_url_to_local_path=[settings.MEDIA_URL + 'galleries/[0-9]+', image_directory.relative])
125
126
127 @register.filter(needs_autoescape=False)
128 @stringfilter
129 def emarkdown(md_input, use_jsfiddle='', **kwargs):
130 """
131 :param str md_input: Markdown string.
132 :return: HTML string.
133 :rtype: str
134 """
135 disable_jsfiddle = (use_jsfiddle != 'js')
136
137 content, metadata, messages = render_markdown(
138 md_input,
139 on_error=lambda m: logger.error('Markdown errors %s', str(m)),
140 **dict(kwargs, disable_jsfiddle=disable_jsfiddle))
141
142 return content or ''
143
144
145 @register.filter(needs_autoescape=False)
146 @stringfilter
147 def emarkdown_preview(md_input, use_jsfiddle='', **kwargs):
148 """
149 Filter markdown string and render it to html.
150
151 :param str md_input: Markdown string.
152 :return: HTML string.
153 :rtype: str
154 """
155 disable_jsfiddle = (use_jsfiddle != 'js')
156
157 content, metadata, messages = render_markdown(
158 md_input,
159 **dict(kwargs, disable_jsfiddle=disable_jsfiddle))
160
161 if messages:
162 content = _('</div><div class="preview-error"><strong>Erreur du serveur Markdown:</strong>\n{}'
163 .format('<br>- '.join([m['message'] for m in messages])))
164 content = mark_safe(content)
165
166 return content
167
168
169 @register.filter(needs_autoescape=False)
170 @stringfilter
171 def emarkdown_inline(text):
172 """
173 Parses inline elements only and renders HTML. Mainly for member signatures.
174 Although they are inline elements, pings are disabled.
175
176 :param str text: Markdown string.
177 :return: HTML string.
178 :rtype: str
179 """
180 rendered = emarkdown(text, inline=True)
181 return rendered
182
183
184 def sub_hd(match, count):
185 """Replace header shifted."""
186 subt = match.group(1)
187 lvl = match.group('level')
188 header = match.group('header')
189 end = match.group(4)
190
191 new_content = subt + '#' * count + lvl + header + end
192
193 return new_content
194
195
196 def shift_heading(text, count):
197 """
198 Shift header in markdown document.
199
200 :param str text: Text to filter.
201 :param int count:
202 :return: Filtered text.
203 :rtype: str
204 """
205 text_by_code = re.split('(```|~~~)', text)
206 starting_code = None
207 for i, element in enumerate(text_by_code):
208 if element in ['```', '~~~'] and not starting_code:
209 starting_code = element
210 elif element == starting_code:
211 starting_code = None
212 elif starting_code is None:
213 text_by_code[i] = re.sub(r'(^|\n)(?P<level>#{1,4})(?P<header>.*?)#*(\n|$)',
214 lambda t: sub_hd(t, count), text_by_code[i])
215
216 return ''.join(text_by_code)
217
218
219 @register.filter('shift_heading_1')
220 def shift_heading_1(text):
221 return shift_heading(text, 1)
222
223
224 @register.filter('shift_heading_2')
225 def shift_heading_2(text):
226 return shift_heading(text, 2)
227
228
229 @register.filter('shift_heading_3')
230 def shift_heading_3(text):
231 return shift_heading(text, 3)
232
[end of zds/utils/templatetags/emarkdown.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zds/utils/templatetags/emarkdown.py b/zds/utils/templatetags/emarkdown.py
--- a/zds/utils/templatetags/emarkdown.py
+++ b/zds/utils/templatetags/emarkdown.py
@@ -178,7 +178,7 @@
:rtype: str
"""
rendered = emarkdown(text, inline=True)
- return rendered
+ return mark_safe(rendered.replace('<a href=', '<a rel="nofollow" href='))
def sub_hd(match, count):
| {"golden_diff": "diff --git a/zds/utils/templatetags/emarkdown.py b/zds/utils/templatetags/emarkdown.py\n--- a/zds/utils/templatetags/emarkdown.py\n+++ b/zds/utils/templatetags/emarkdown.py\n@@ -178,7 +178,7 @@\n :rtype: str\n \"\"\"\n rendered = emarkdown(text, inline=True)\n- return rendered\n+ return mark_safe(rendered.replace('<a href=', '<a rel=\"nofollow\" href='))\n \n \n def sub_hd(match, count):\n", "issue": "SEO et signature : <a rel=\"nofollow\" />\nDans la signature il faudrait voir si on peut facilement ajouter un attribut `rel=\"nofollow\"` pour pr\u00e9server notre SEO. https://github.com/zestedesavoir/zmarkdown/blob/1dded309a2670689a4a3353f9e38b80624c6df1a/packages/zmarkdown/server/handlers.js#L139\r\n\r\n> limitez les liens en signatures \u00e0 des no follow or lien interne.\r\nc\u2019est pas mal (:evil) de partager un lien, mais si A-312 r\u00e9pond 4 fois dans la m\u00eame page, il renvoie 4 fois du jus sur son compte twitter, 4 coding game, \u2026 ca a plusieurs effet n\u00e9gatifs\r\nSource: https://zestedesavoir.com/forums/sujet/12099/seo-et-spam/?page=1#p199005\r\n\r\n\n", "before_files": [{"content": "import re\nimport json\nimport logging\nfrom requests import post, HTTPError\n\nfrom django import template\nfrom django.conf import settings\nfrom django.template.defaultfilters import stringfilter\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import ugettext_lazy as _\n\nlogger = logging.getLogger(__name__)\nregister = template.Library()\n\"\"\"\nMarkdown related filters.\n\"\"\"\n\n# Constants\nMAX_ATTEMPTS = 3\nMD_PARSING_ERROR = _('Une erreur est survenue dans la g\u00e9n\u00e9ration de texte Markdown. Veuillez rapporter le bug.')\n\nFORMAT_ENDPOINTS = {\n 'html': '/html',\n 'texfile': '/latex-document',\n 'epub': '/epub',\n 'tex': '/latex',\n}\n\n\ndef _render_markdown_once(md_input, *, output_format='html', **kwargs):\n \"\"\"\n Returns None on error (error details are logged). 
No retry mechanism.\n \"\"\"\n def log_args():\n logger.error('md_input: {!r}'.format(md_input))\n logger.error('kwargs: {!r}'.format(kwargs))\n\n inline = kwargs.get('inline', False) is True\n\n if settings.ZDS_APP['zmd']['disable_pings'] is True:\n kwargs['disable_ping'] = True\n\n endpoint = FORMAT_ENDPOINTS[output_format]\n\n try:\n timeout = 10\n if output_format.startswith('tex'):\n # latex may be really long to generate but it is also restrained by server configuration\n timeout = 120\n response = post('{}{}'.format(settings.ZDS_APP['zmd']['server'], endpoint), json={\n 'opts': kwargs,\n 'md': str(md_input),\n }, timeout=timeout)\n except HTTPError:\n logger.exception('An HTTP error happened, markdown rendering failed')\n log_args()\n return '', {}, []\n\n if response.status_code == 413:\n return '', {}, [{'message': str(_('Texte trop volumineux.'))}]\n\n if response.status_code != 200:\n logger.error('The markdown server replied with status {} (expected 200)'.format(response.status_code))\n log_args()\n return '', {}, []\n\n try:\n content, metadata, messages = response.json()\n logger.debug('Result %s, %s, %s', content, metadata, messages)\n if messages:\n logger.error('Markdown errors %s', json.dumps(messages))\n content = content.strip()\n if inline:\n content = content.replace('</p>\\n', '\\n\\n').replace('\\n<p>', '\\n')\n return mark_safe(content), metadata, messages\n except: # noqa\n logger.exception('Unexpected exception raised')\n log_args()\n return '', {}, []\n\n\ndef render_markdown(md_input, *, on_error=None, **kwargs):\n \"\"\"Render a markdown string.\n\n Returns a tuple ``(rendered_content, metadata)``, where\n ``rendered_content`` is a string and ``metadata`` is a dict.\n\n Handles errors gracefully by returning an user-friendly HTML\n string which explains that the Markdown rendering has failed\n (without any technical details).\n\n \"\"\"\n content, metadata, messages = _render_markdown_once(md_input, **kwargs)\n if messages and on_error:\n on_error([m['message'] for m in messages])\n if content is not None:\n # Success!\n return content, metadata, messages\n\n # Oops, something went wrong\n\n attempts = kwargs.get('attempts', 0)\n inline = kwargs.get('inline', False) is True\n\n if attempts < MAX_ATTEMPTS:\n if not kwargs:\n kwargs = dict()\n return render_markdown(md_input, **dict(kwargs, attempts=attempts + 1))\n\n logger.error('Max attempt count reached, giving up')\n logger.error('md_input: {!r}'.format(md_input))\n logger.error('kwargs: {!r}'.format(kwargs))\n\n # FIXME: This cannot work with LaTeX.\n if inline:\n return mark_safe('<p>{}</p>'.format(json.dumps(messages))), metadata, []\n else:\n return mark_safe('<div class=\"error ico-after\"><p>{}</p></div>'.format(json.dumps(messages))), metadata, []\n\n\[email protected](name='epub_markdown', needs_autoescape=False)\ndef epub_markdown(md_input, image_directory):\n return emarkdown(md_input, output_format='epub', images_download_dir=image_directory.absolute,\n local_url_to_local_path=[settings.MEDIA_URL + 'galleries/[0-9]+', image_directory.relative])\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown(md_input, use_jsfiddle='', **kwargs):\n \"\"\"\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = (use_jsfiddle != 'js')\n\n content, metadata, messages = render_markdown(\n md_input,\n on_error=lambda m: logger.error('Markdown errors %s', str(m)),\n **dict(kwargs, disable_jsfiddle=disable_jsfiddle))\n\n return content or 
''\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_preview(md_input, use_jsfiddle='', **kwargs):\n \"\"\"\n Filter markdown string and render it to html.\n\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = (use_jsfiddle != 'js')\n\n content, metadata, messages = render_markdown(\n md_input,\n **dict(kwargs, disable_jsfiddle=disable_jsfiddle))\n\n if messages:\n content = _('</div><div class=\"preview-error\"><strong>Erreur du serveur Markdown:</strong>\\n{}'\n .format('<br>- '.join([m['message'] for m in messages])))\n content = mark_safe(content)\n\n return content\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_inline(text):\n \"\"\"\n Parses inline elements only and renders HTML. Mainly for member signatures.\n Although they are inline elements, pings are disabled.\n\n :param str text: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n rendered = emarkdown(text, inline=True)\n return rendered\n\n\ndef sub_hd(match, count):\n \"\"\"Replace header shifted.\"\"\"\n subt = match.group(1)\n lvl = match.group('level')\n header = match.group('header')\n end = match.group(4)\n\n new_content = subt + '#' * count + lvl + header + end\n\n return new_content\n\n\ndef shift_heading(text, count):\n \"\"\"\n Shift header in markdown document.\n\n :param str text: Text to filter.\n :param int count:\n :return: Filtered text.\n :rtype: str\n \"\"\"\n text_by_code = re.split('(```|~~~)', text)\n starting_code = None\n for i, element in enumerate(text_by_code):\n if element in ['```', '~~~'] and not starting_code:\n starting_code = element\n elif element == starting_code:\n starting_code = None\n elif starting_code is None:\n text_by_code[i] = re.sub(r'(^|\\n)(?P<level>#{1,4})(?P<header>.*?)#*(\\n|$)',\n lambda t: sub_hd(t, count), text_by_code[i])\n\n return ''.join(text_by_code)\n\n\[email protected]('shift_heading_1')\ndef shift_heading_1(text):\n return shift_heading(text, 1)\n\n\[email protected]('shift_heading_2')\ndef shift_heading_2(text):\n return shift_heading(text, 2)\n\n\[email protected]('shift_heading_3')\ndef shift_heading_3(text):\n return shift_heading(text, 3)\n", "path": "zds/utils/templatetags/emarkdown.py"}]} | 3,062 | 127 |
gh_patches_debug_11902 | rasdani/github-patches | git_diff | ansible-collections__community.general-2204 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't create github repo
### Summary
When I try to create a GitHub repo with the community.general 2.4.0 github_repo module, I get an assertion error related to the port.
### Issue Type
Bug Report
### Component Name
github_repo
### Ansible Version
ansible 2.10.7
community.general 2.4.0
PyGithub 1.54.1
### Configuration
_No response_
### OS / Environment
MacOS Mojave 10.14.6
### Steps to Reproduce
This is the task I've used
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Create a Github repository
github_repo:
access_token: "{{ lookup('env', 'GITHUB_ACESS_TOKEN') }}"
organization: xxx
name: xxx
description: "xxx"
private: yes
```
### Expected Results
It should create a new GitHub repo without error.
### Actual Results
```console (paste below)
The full traceback is:
File "/var/folders/kx/5h602b9s6fq3r5h7lxpxy8sr0000gn/T/ansible_github_repo_payload_8ng_7p6h/ansible_github_repo_payload.zip/ansible_collections/community/general/plugins/modules/github_repo.py", line 233, in main
File "/var/folders/kx/5h602b9s6fq3r5h7lxpxy8sr0000gn/T/ansible_github_repo_payload_8ng_7p6h/ansible_github_repo_payload.zip/ansible_collections/community/general/plugins/modules/github_repo.py", line 197, in run_module
File "/var/folders/kx/5h602b9s6fq3r5h7lxpxy8sr0000gn/T/ansible_github_repo_payload_8ng_7p6h/ansible_github_repo_payload.zip/ansible_collections/community/general/plugins/modules/github_repo.py", line 144, in create_repo
File "/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Organization.py", line 578, in create_repo
headers, data = self._requester.requestJsonAndCheck(
File "/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Requester.py", line 316, in requestJsonAndCheck
*self.requestJson(
File "/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Requester.py", line 408, in requestJson
return self.__requestEncode(cnx, verb, url, parameters, headers, input, encode)
File "/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Requester.py", line 475, in __requestEncode
url = self.__makeAbsoluteUrl(url)
File "/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Requester.py", line 557, in __makeAbsoluteUrl
assert o.port == self.__port
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"access_token": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"description": "xxx",
"name": "xxx",
"organization": "xxx",
"password": null,
"private": true,
"state": "present",
"username": null
}
},
"msg": "Unexpected error. AssertionError()"
}
```
### Code of Conduct
I agree to follow the Ansible Code of Conduct
</issue>
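The traceback ends in PyGithub's `Requester.__makeAbsoluteUrl`, whose `assert o.port == self.__port` trips because the module authenticates with `base_url="https://api.github.com:443"` while the absolute URLs the API returns carry no explicit port. A sketch of the module's `authenticate()` helper (see the code below) that simply relies on PyGithub's default base URL avoids the mismatch, assuming GitHub Enterprise endpoints do not need to be supported:

```python
# Sketch: let PyGithub use its default https://api.github.com base URL so the
# port comparison in Requester.__makeAbsoluteUrl no longer fails.
from github import Github

def authenticate(username=None, password=None, access_token=None):
    if access_token:
        return Github(login_or_token=access_token)
    return Github(login_or_token=username, password=password)
```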
<code>
[start of plugins/modules/source_control/github/github_repo.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright: (c) 2021, Álvaro Torres Cogollo
5 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
6
7 from __future__ import absolute_import, division, print_function
8 __metaclass__ = type
9
10 DOCUMENTATION = '''
11 ---
12 module: github_repo
13 short_description: Manage your repositories on Github
14 version_added: 2.2.0
15 description:
16 - Manages Github repositories using PyGithub library.
17 - Authentication can be done with I(access_token) or with I(username) and I(password).
18 options:
19 username:
20 description:
21 - Username used for authentication.
22 - This is only needed when not using I(access_token).
23 type: str
24 required: false
25 password:
26 description:
27 - Password used for authentication.
28 - This is only needed when not using I(access_token).
29 type: str
30 required: false
31 access_token:
32 description:
33 - Token parameter for authentication.
34 - This is only needed when not using I(username) and I(password).
35 type: str
36 required: false
37 name:
38 description:
39 - Repository name.
40 type: str
41 required: true
42 description:
43 description:
44 - Description for the repository.
45 - This is only used when I(state) is C(present).
46 type: str
47 default: ''
48 required: false
49 private:
50 description:
51 - Whether the new repository should be private or not.
52 - This is only used when I(state) is C(present).
53 type: bool
54 default: no
55 required: false
56 state:
57 description:
58 - Whether the repository should exist or not.
59 type: str
60 default: present
61 choices: [ absent, present ]
62 required: false
63 organization:
64 description:
65 - Organization for the repository.
66 - When I(state) is C(present), the repository will be created in the current user profile.
67 type: str
68 required: false
69 requirements:
70 - PyGithub>=1.54
71 notes:
72 - For Python 3, PyGithub>=1.54 should be used.
73 - "For Python 3.5, PyGithub==1.54 should be used. More information: U(https://pygithub.readthedocs.io/en/latest/changes.html#version-1-54-november-30-2020)."
74 - "For Python 2.7, PyGithub==1.45 should be used. More information: U(https://pygithub.readthedocs.io/en/latest/changes.html#version-1-45-december-29-2019)."
75 - Supports C(check_mode).
76 author:
77 - Álvaro Torres Cogollo (@atorrescogollo)
78 '''
79
80 EXAMPLES = '''
81 - name: Create a Github repository
82 community.general.github_repo:
83 access_token: mytoken
84 organization: MyOrganization
85 name: myrepo
86 description: "Just for fun"
87 private: yes
88 state: present
89 register: result
90
91 - name: Delete the repository
92 community.general.github_repo:
93 username: octocat
94 password: password
95 organization: MyOrganization
96 name: myrepo
97 state: absent
98 register: result
99 '''
100
101 RETURN = '''
102 repo:
103 description: Repository information as JSON. See U(https://docs.github.com/en/rest/reference/repos#get-a-repository).
104 returned: success and I(state) is C(present)
105 type: dict
106 '''
107
108 import traceback
109 from ansible.module_utils.basic import AnsibleModule, missing_required_lib
110 import sys
111
112 GITHUB_IMP_ERR = None
113 try:
114 from github import Github, GithubException
115 from github.GithubException import UnknownObjectException
116 HAS_GITHUB_PACKAGE = True
117 except Exception:
118 GITHUB_IMP_ERR = traceback.format_exc()
119 HAS_GITHUB_PACKAGE = False
120
121
122 def authenticate(username=None, password=None, access_token=None):
123 if access_token:
124 return Github(base_url="https://api.github.com:443", login_or_token=access_token)
125 else:
126 return Github(base_url="https://api.github.com:443", login_or_token=username, password=password)
127
128
129 def create_repo(gh, name, organization=None, private=False, description='', check_mode=False):
130 result = dict(
131 changed=False,
132 repo=dict())
133 if organization:
134 target = gh.get_organization(organization)
135 else:
136 target = gh.get_user()
137
138 repo = None
139 try:
140 repo = target.get_repo(name=name)
141 result['repo'] = repo.raw_data
142 except UnknownObjectException:
143 if not check_mode:
144 repo = target.create_repo(
145 name=name, private=private, description=description)
146 result['repo'] = repo.raw_data
147
148 result['changed'] = True
149
150 changes = {}
151 if repo is None or repo.raw_data['private'] != private:
152 changes['private'] = private
153 if repo is None or repo.raw_data['description'] != description:
154 changes['description'] = description
155
156 if changes:
157 if not check_mode:
158 repo.edit(**changes)
159
160 result['repo'].update({
161 'private': repo._private.value if not check_mode else private,
162 'description': repo._description.value if not check_mode else description,
163 })
164 result['changed'] = True
165
166 return result
167
168
169 def delete_repo(gh, name, organization=None, check_mode=False):
170 result = dict(changed=False)
171 if organization:
172 target = gh.get_organization(organization)
173 else:
174 target = gh.get_user()
175 try:
176 repo = target.get_repo(name=name)
177 if not check_mode:
178 repo.delete()
179 result['changed'] = True
180 except UnknownObjectException:
181 pass
182
183 return result
184
185
186 def run_module(params, check_mode=False):
187 gh = authenticate(
188 username=params['username'], password=params['password'], access_token=params['access_token'])
189 if params['state'] == "absent":
190 return delete_repo(
191 gh=gh,
192 name=params['name'],
193 organization=params['organization'],
194 check_mode=check_mode
195 )
196 else:
197 return create_repo(
198 gh=gh,
199 name=params['name'],
200 organization=params['organization'],
201 private=params['private'],
202 description=params['description'],
203 check_mode=check_mode
204 )
205
206
207 def main():
208 module_args = dict(
209 username=dict(type='str', required=False, default=None),
210 password=dict(type='str', required=False, default=None, no_log=True),
211 access_token=dict(type='str', required=False,
212 default=None, no_log=True),
213 name=dict(type='str', required=True),
214 state=dict(type='str', required=False, default="present",
215 choices=["present", "absent"]),
216 organization=dict(type='str', required=False, default=None),
217 private=dict(type='bool', required=False, default=False),
218 description=dict(type='str', required=False, default=''),
219 )
220 module = AnsibleModule(
221 argument_spec=module_args,
222 supports_check_mode=True,
223 required_together=[('username', 'password')],
224 required_one_of=[('username', 'access_token')],
225 mutually_exclusive=[('username', 'access_token')]
226 )
227
228 if not HAS_GITHUB_PACKAGE:
229 module.fail_json(msg=missing_required_lib(
230 "PyGithub"), exception=GITHUB_IMP_ERR)
231
232 try:
233 result = run_module(module.params, module.check_mode)
234 module.exit_json(**result)
235 except GithubException as e:
236 module.fail_json(msg="Github error. {0}".format(repr(e)))
237 except Exception as e:
238 module.fail_json(msg="Unexpected error. {0}".format(repr(e)))
239
240
241 if __name__ == '__main__':
242 main()
243
[end of plugins/modules/source_control/github/github_repo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/modules/source_control/github/github_repo.py b/plugins/modules/source_control/github/github_repo.py
--- a/plugins/modules/source_control/github/github_repo.py
+++ b/plugins/modules/source_control/github/github_repo.py
@@ -121,9 +121,9 @@
def authenticate(username=None, password=None, access_token=None):
if access_token:
- return Github(base_url="https://api.github.com:443", login_or_token=access_token)
+ return Github(base_url="https://api.github.com", login_or_token=access_token)
else:
- return Github(base_url="https://api.github.com:443", login_or_token=username, password=password)
+ return Github(base_url="https://api.github.com", login_or_token=username, password=password)
def create_repo(gh, name, organization=None, private=False, description='', check_mode=False):
| {"golden_diff": "diff --git a/plugins/modules/source_control/github/github_repo.py b/plugins/modules/source_control/github/github_repo.py\n--- a/plugins/modules/source_control/github/github_repo.py\n+++ b/plugins/modules/source_control/github/github_repo.py\n@@ -121,9 +121,9 @@\n \n def authenticate(username=None, password=None, access_token=None):\n if access_token:\n- return Github(base_url=\"https://api.github.com:443\", login_or_token=access_token)\n+ return Github(base_url=\"https://api.github.com\", login_or_token=access_token)\n else:\n- return Github(base_url=\"https://api.github.com:443\", login_or_token=username, password=password)\n+ return Github(base_url=\"https://api.github.com\", login_or_token=username, password=password)\n \n \n def create_repo(gh, name, organization=None, private=False, description='', check_mode=False):\n", "issue": "Can't create github repo\n### Summary\r\n\r\nWhen I try to create a github repo with community general 2.4.0 github_repo module, I got an assertion error related to port.\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ngithub_repo\r\n\r\n### Ansible Version\r\n\r\nansible 2.10.7\r\ncommunity.general 2.4.0\r\nPyGithub 1.54.1\r\n\r\n### Configuration\r\n\r\n_No response_\r\n\r\n### OS / Environment\r\n\r\nMacOS Mojave 10.14.6\r\n\r\n### Steps to Reproduce\r\n\r\nThis is the task I've used\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n- name: Create a Github repository\r\n github_repo:\r\n access_token: \"{{ lookup('env', 'GITHUB_ACESS_TOKEN') }}\"\r\n organization: xxx\r\n name: xxx\r\n description: \"xxx\"\r\n private: yes\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nit should create a new Github Repo without error\r\n\r\n### Actual Results\r\n\r\n```console (paste below)\r\nThe full traceback is:\r\n File \"/var/folders/kx/5h602b9s6fq3r5h7lxpxy8sr0000gn/T/ansible_github_repo_payload_8ng_7p6h/ansible_github_repo_payload.zip/ansible_collections/community/general/plugins/modules/github_repo.py\", line 233, in main\r\n File \"/var/folders/kx/5h602b9s6fq3r5h7lxpxy8sr0000gn/T/ansible_github_repo_payload_8ng_7p6h/ansible_github_repo_payload.zip/ansible_collections/community/general/plugins/modules/github_repo.py\", line 197, in run_module\r\n File \"/var/folders/kx/5h602b9s6fq3r5h7lxpxy8sr0000gn/T/ansible_github_repo_payload_8ng_7p6h/ansible_github_repo_payload.zip/ansible_collections/community/general/plugins/modules/github_repo.py\", line 144, in create_repo\r\n File \"/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Organization.py\", line 578, in create_repo\r\n headers, data = self._requester.requestJsonAndCheck(\r\n File \"/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Requester.py\", line 316, in requestJsonAndCheck\r\n *self.requestJson(\r\n File \"/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Requester.py\", line 408, in requestJson\r\n return self.__requestEncode(cnx, verb, url, parameters, headers, input, encode)\r\n File \"/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Requester.py\", line 475, in __requestEncode\r\n url = self.__makeAbsoluteUrl(url)\r\n File \"/Users/kakarukeys/.local/share/virtualenvs/infra_setup-AVEq_0GX/lib/python3.9/site-packages/github/Requester.py\", line 557, in __makeAbsoluteUrl\r\n assert o.port == self.__port\r\nfatal: 
[localhost]: FAILED! => {\r\n \"changed\": false,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"access_token\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\",\r\n \"description\": \"xxx\",\r\n \"name\": \"xxx\",\r\n \"organization\": \"xxx\",\r\n \"password\": null,\r\n \"private\": true,\r\n \"state\": \"present\",\r\n \"username\": null\r\n }\r\n },\r\n \"msg\": \"Unexpected error. AssertionError()\"\r\n}\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\nI agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2021, \u00c1lvaro Torres Cogollo\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\nDOCUMENTATION = '''\n---\nmodule: github_repo\nshort_description: Manage your repositories on Github\nversion_added: 2.2.0\ndescription:\n- Manages Github repositories using PyGithub library.\n- Authentication can be done with I(access_token) or with I(username) and I(password).\noptions:\n username:\n description:\n - Username used for authentication.\n - This is only needed when not using I(access_token).\n type: str\n required: false\n password:\n description:\n - Password used for authentication.\n - This is only needed when not using I(access_token).\n type: str\n required: false\n access_token:\n description:\n - Token parameter for authentication.\n - This is only needed when not using I(username) and I(password).\n type: str\n required: false\n name:\n description:\n - Repository name.\n type: str\n required: true\n description:\n description:\n - Description for the repository.\n - This is only used when I(state) is C(present).\n type: str\n default: ''\n required: false\n private:\n description:\n - Whether the new repository should be private or not.\n - This is only used when I(state) is C(present).\n type: bool\n default: no\n required: false\n state:\n description:\n - Whether the repository should exist or not.\n type: str\n default: present\n choices: [ absent, present ]\n required: false\n organization:\n description:\n - Organization for the repository.\n - When I(state) is C(present), the repository will be created in the current user profile.\n type: str\n required: false\nrequirements:\n- PyGithub>=1.54\nnotes:\n- For Python 3, PyGithub>=1.54 should be used.\n- \"For Python 3.5, PyGithub==1.54 should be used. More information: U(https://pygithub.readthedocs.io/en/latest/changes.html#version-1-54-november-30-2020).\"\n- \"For Python 2.7, PyGithub==1.45 should be used. More information: U(https://pygithub.readthedocs.io/en/latest/changes.html#version-1-45-december-29-2019).\"\n- Supports C(check_mode).\nauthor:\n- \u00c1lvaro Torres Cogollo (@atorrescogollo)\n'''\n\nEXAMPLES = '''\n- name: Create a Github repository\n community.general.github_repo:\n access_token: mytoken\n organization: MyOrganization\n name: myrepo\n description: \"Just for fun\"\n private: yes\n state: present\n register: result\n\n- name: Delete the repository\n community.general.github_repo:\n username: octocat\n password: password\n organization: MyOrganization\n name: myrepo\n state: absent\n register: result\n'''\n\nRETURN = '''\nrepo:\n description: Repository information as JSON. 
See U(https://docs.github.com/en/rest/reference/repos#get-a-repository).\n returned: success and I(state) is C(present)\n type: dict\n'''\n\nimport traceback\nfrom ansible.module_utils.basic import AnsibleModule, missing_required_lib\nimport sys\n\nGITHUB_IMP_ERR = None\ntry:\n from github import Github, GithubException\n from github.GithubException import UnknownObjectException\n HAS_GITHUB_PACKAGE = True\nexcept Exception:\n GITHUB_IMP_ERR = traceback.format_exc()\n HAS_GITHUB_PACKAGE = False\n\n\ndef authenticate(username=None, password=None, access_token=None):\n if access_token:\n return Github(base_url=\"https://api.github.com:443\", login_or_token=access_token)\n else:\n return Github(base_url=\"https://api.github.com:443\", login_or_token=username, password=password)\n\n\ndef create_repo(gh, name, organization=None, private=False, description='', check_mode=False):\n result = dict(\n changed=False,\n repo=dict())\n if organization:\n target = gh.get_organization(organization)\n else:\n target = gh.get_user()\n\n repo = None\n try:\n repo = target.get_repo(name=name)\n result['repo'] = repo.raw_data\n except UnknownObjectException:\n if not check_mode:\n repo = target.create_repo(\n name=name, private=private, description=description)\n result['repo'] = repo.raw_data\n\n result['changed'] = True\n\n changes = {}\n if repo is None or repo.raw_data['private'] != private:\n changes['private'] = private\n if repo is None or repo.raw_data['description'] != description:\n changes['description'] = description\n\n if changes:\n if not check_mode:\n repo.edit(**changes)\n\n result['repo'].update({\n 'private': repo._private.value if not check_mode else private,\n 'description': repo._description.value if not check_mode else description,\n })\n result['changed'] = True\n\n return result\n\n\ndef delete_repo(gh, name, organization=None, check_mode=False):\n result = dict(changed=False)\n if organization:\n target = gh.get_organization(organization)\n else:\n target = gh.get_user()\n try:\n repo = target.get_repo(name=name)\n if not check_mode:\n repo.delete()\n result['changed'] = True\n except UnknownObjectException:\n pass\n\n return result\n\n\ndef run_module(params, check_mode=False):\n gh = authenticate(\n username=params['username'], password=params['password'], access_token=params['access_token'])\n if params['state'] == \"absent\":\n return delete_repo(\n gh=gh,\n name=params['name'],\n organization=params['organization'],\n check_mode=check_mode\n )\n else:\n return create_repo(\n gh=gh,\n name=params['name'],\n organization=params['organization'],\n private=params['private'],\n description=params['description'],\n check_mode=check_mode\n )\n\n\ndef main():\n module_args = dict(\n username=dict(type='str', required=False, default=None),\n password=dict(type='str', required=False, default=None, no_log=True),\n access_token=dict(type='str', required=False,\n default=None, no_log=True),\n name=dict(type='str', required=True),\n state=dict(type='str', required=False, default=\"present\",\n choices=[\"present\", \"absent\"]),\n organization=dict(type='str', required=False, default=None),\n private=dict(type='bool', required=False, default=False),\n description=dict(type='str', required=False, default=''),\n )\n module = AnsibleModule(\n argument_spec=module_args,\n supports_check_mode=True,\n required_together=[('username', 'password')],\n required_one_of=[('username', 'access_token')],\n mutually_exclusive=[('username', 'access_token')]\n )\n\n if not HAS_GITHUB_PACKAGE:\n 
module.fail_json(msg=missing_required_lib(\n \"PyGithub\"), exception=GITHUB_IMP_ERR)\n\n try:\n result = run_module(module.params, module.check_mode)\n module.exit_json(**result)\n except GithubException as e:\n module.fail_json(msg=\"Github error. {0}\".format(repr(e)))\n except Exception as e:\n module.fail_json(msg=\"Unexpected error. {0}\".format(repr(e)))\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/source_control/github/github_repo.py"}]} | 3,745 | 190 |
gh_patches_debug_4192 | rasdani/github-patches | git_diff | OpenMined__PySyft-2276 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dim does not work with multipointer tensors
**Describe the bug**
Calling `dim()` on a multipointer tensor returns a multipointer tensor whose children are all ints. The return type should be a plain int.
**To Reproduce**
create a multipointer tensor and call .dim()
**Expected behavior**
the value returned should be an int
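
A minimal reproduction sketch (not part of the original report; the virtual-worker setup and worker names below are only illustrative of a typical PySyft session):

```python
# Illustrative reproduction sketch -- assumes a typical PySyft virtual-worker setup;
# the worker names "bob" and "alice" are hypothetical.
import torch
import syft as sy

hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
mpt = x.send(bob, alice)  # sending to several workers yields a MultiPointerTensor

print(x.dim())    # plain int: 2
print(mpt.dim())  # bug: a multipointer wrapping per-worker ints instead of a plain int
```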
</issue>
<code>
[start of syft/frameworks/torch/tensors/interpreters/multi_pointer.py]
1 import torch
2 from typing import List
3 from typing import Union
4
5 import syft as sy
6 from syft.frameworks.torch.tensors.interpreters.abstract import AbstractTensor
7 from syft.frameworks.torch.tensors.interpreters import AdditiveSharingTensor
8 from syft.workers import BaseWorker
9 from syft.frameworks.torch.overload_torch import overloaded
10
11 from syft.workers import AbstractWorker
12
13
14 class MultiPointerTensor(AbstractTensor):
15 ""
16
17 def __init__(
18 self,
19 location: BaseWorker = None,
20 id_at_location: Union[str, int] = None,
21 register: bool = False,
22 owner: BaseWorker = None,
23 id: Union[str, int] = None,
24 garbage_collect_data: bool = True,
25 point_to_attr: str = None,
26 tags: List[str] = None,
27 description: str = None,
28 children: List[AbstractTensor] = [],
29 ):
30
31 super().__init__(tags, description)
32
33 self.location = location
34 self.id_at_location = id_at_location
35 self.owner = owner
36 self.id = id
37 self.garbage_collect_data = garbage_collect_data
38 self.point_to_attr = point_to_attr
39
40 self.child = {}
41 for c in children:
42 assert c.shape == children[0].shape
43 self.child[c.location.id] = c
44
45 def __str__(self):
46 type_name = type(self).__name__
47 out = f"[" f"{type_name}]"
48 for v in self.child.values():
49 out += "\n\t-> " + str(v)
50 return out
51
52 def __eq__(self, other):
53 return torch.eq(self, other)
54
55 def __add__(self, other):
56 """
57 Adding a MultiPointer (MPT) and an AdditiveShared Tensor (AST) should return an
58 AdditiveShared Tensor, so if we have this configuration, we permute self and
59 other to use the fact that other.__add__(...) return an object of type other
60
61 Else, we just redirect to .add which works well
62 """
63 if isinstance(other, AdditiveSharingTensor):
64 return other.__add__(self)
65 else:
66 return self.add(other)
67
68 def __mul__(self, other):
69 """
70 See __add__ for details but, MPT * AST should return AST
71 """
72 if isinstance(other, AdditiveSharingTensor):
73 return other.__mul__(self)
74 else:
75 return self.mul(other)
76
77 @property
78 def shape(self) -> torch.Size:
79 """This method returns the shape of the data being pointed to.
80 This shape information SHOULD be cached on self._shape, but
81 occasionally this information may not be present. If this is the
82 case, then it requests the shape information from the remote object
83 directly (which is inefficient and should be avoided)."""
84
85 return list(self.child.values())[0].shape
86
87 def get(self, sum_results: bool = False) -> torch.Tensor:
88
89 results = list()
90 for v in self.child.values():
91 results.append(v.get())
92
93 if sum_results:
94 return sum(results)
95
96 return results
97
98 def virtual_get(self, sum_results: bool = False):
99 """Get the value of the tensor without calling get - Only for VirtualWorkers"""
100
101 results = list()
102 for v in self.child.values():
103 value = v.location._objects[v.id_at_location]
104 results.append(value)
105
106 if sum_results:
107 return sum(results)
108
109 return results
110
111 @staticmethod
112 def dispatch(args, worker):
113 """
114 utility function for handle_func_command which help to select
115 shares (seen as elements of dict) in an argument set. It could
116 perhaps be put elsewhere
117
118 Args:
119 args: arguments to give to a functions
120 worker: owner of the shares to select
121
122 Return:
123 args where the MultiPointerTensor are replaced by
124 the appropriate share
125 """
126 return map(lambda x: x[worker] if isinstance(x, dict) else x, args)
127
128 @classmethod
129 def handle_func_command(cls, command):
130 """
131 Receive an instruction for a function to be applied on a Syft Tensor,
132 Replace in the args all the LogTensors with
133 their child attribute, forward the command instruction to the
134 handle_function_command of the type of the child attributes, get the
135 response and replace a Syft Tensor on top of all tensors found in
136 the response.
137
138 Args:
139 command: instruction of a function command: (command name,
140 <no self>, arguments[, kwargs])
141
142 Returns:
143 the response of the function command
144 """
145
146 cmd, _, args, kwargs = command
147
148 tensor = args[0]
149
150 # Check that the function has not been overwritten
151 try:
152 # Try to get recursively the attributes in cmd = "<attr1>.<attr2>.<attr3>..."
153 cmd = cls.rgetattr(cls, cmd)
154 return cmd(*args, **kwargs)
155 except AttributeError:
156 pass
157
158 # TODO: I can't manage the import issue, can you?
159 # Replace all LoggingTensor with their child attribute
160 new_args, new_kwargs, new_type = sy.frameworks.torch.hook_args.hook_function_args(
161 cmd, args, kwargs
162 )
163
164 results = {}
165 for worker, share in new_args[0].items():
166 new_type = type(share)
167 new_args_worker = tuple(MultiPointerTensor.dispatch(new_args, worker))
168
169 # build the new command
170 new_command = (cmd, None, new_args_worker, new_kwargs)
171
172 # Send it to the appropriate class and get the response
173 results[worker] = new_type.handle_func_command(new_command)
174
175 # Put back MultiPointerTensor on the tensors found in the response
176 response = sy.frameworks.torch.hook_args.hook_response(
177 cmd, results, wrap_type=cls, wrap_args=tensor.get_class_attributes()
178 )
179
180 return response
181
182 def set_garbage_collect_data(self, value):
183 shares = self.child
184 for _, share in shares.items():
185 share.child.garbage_collect_data = value
186
187 @staticmethod
188 def simplify(tensor: "MultiPointerTensor") -> tuple:
189 """
190 This function takes the attributes of a MultiPointerTensor and saves them in a tuple
191 Args:
192 tensor (MultiPointerTensor): a MultiPointerTensor
193 Returns:
194 tuple: a tuple holding the unique attributes of the additive shared tensor
195 Examples:
196 data = simplify(tensor)
197 """
198
199 chain = None
200 if hasattr(tensor, "child"):
201 chain = sy.serde.simplify(tensor.child)
202 return (tensor.id, chain)
203
204 @staticmethod
205 def detail(worker: AbstractWorker, tensor_tuple: tuple) -> "MultiPointerTensor":
206 """
207 This function reconstructs a MultiPointerTensor given it's attributes in form of a tuple.
208 Args:
209 worker: the worker doing the deserialization
210 tensor_tuple: a tuple holding the attributes of the MultiPointerTensor
211 Returns:
212 MultiPointerTensor: a MultiPointerTensor
213 Examples:
214 multi_pointer_tensor = detail(data)
215 """
216
217 tensor_id, chain = tensor_tuple
218
219 tensor = sy.MultiPointerTensor(owner=worker, id=tensor_id)
220
221 if chain is not None:
222 chain = sy.serde._detail(worker, chain)
223 tensor.child = chain
224
225 return tensor
226
[end of syft/frameworks/torch/tensors/interpreters/multi_pointer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/syft/frameworks/torch/tensors/interpreters/multi_pointer.py b/syft/frameworks/torch/tensors/interpreters/multi_pointer.py
--- a/syft/frameworks/torch/tensors/interpreters/multi_pointer.py
+++ b/syft/frameworks/torch/tensors/interpreters/multi_pointer.py
@@ -84,6 +84,12 @@
return list(self.child.values())[0].shape
+ def dim(self) -> int:
+ """This method fixes the error that the result of dim was a list of ints
+ stored inside a multipointer tensor"""
+
+ return len(self.shape)
+
def get(self, sum_results: bool = False) -> torch.Tensor:
results = list()
| {"golden_diff": "diff --git a/syft/frameworks/torch/tensors/interpreters/multi_pointer.py b/syft/frameworks/torch/tensors/interpreters/multi_pointer.py\n--- a/syft/frameworks/torch/tensors/interpreters/multi_pointer.py\n+++ b/syft/frameworks/torch/tensors/interpreters/multi_pointer.py\n@@ -84,6 +84,12 @@\n \n return list(self.child.values())[0].shape\n \n+ def dim(self) -> int:\n+ \"\"\"This method fixes the error that the result of dim was a list of ints\n+ stored inside a multipointer tensor\"\"\"\n+\n+ return len(self.shape)\n+\n def get(self, sum_results: bool = False) -> torch.Tensor:\n \n results = list()\n", "issue": "Dim does not work with multipointer tensors\n**Describe the bug**\r\nCalling dim on a multipointer tensor returns a multipointer tensor where the values of the children are all ints. The return signature should be an int\r\n\r\n**To Reproduce**\r\ncreate a multipointer tensor and call .dim()\r\n\r\n**Expected behavior**\r\nthe value returned should be an int\r\n\n", "before_files": [{"content": "import torch\nfrom typing import List\nfrom typing import Union\n\nimport syft as sy\nfrom syft.frameworks.torch.tensors.interpreters.abstract import AbstractTensor\nfrom syft.frameworks.torch.tensors.interpreters import AdditiveSharingTensor\nfrom syft.workers import BaseWorker\nfrom syft.frameworks.torch.overload_torch import overloaded\n\nfrom syft.workers import AbstractWorker\n\n\nclass MultiPointerTensor(AbstractTensor):\n \"\"\n\n def __init__(\n self,\n location: BaseWorker = None,\n id_at_location: Union[str, int] = None,\n register: bool = False,\n owner: BaseWorker = None,\n id: Union[str, int] = None,\n garbage_collect_data: bool = True,\n point_to_attr: str = None,\n tags: List[str] = None,\n description: str = None,\n children: List[AbstractTensor] = [],\n ):\n\n super().__init__(tags, description)\n\n self.location = location\n self.id_at_location = id_at_location\n self.owner = owner\n self.id = id\n self.garbage_collect_data = garbage_collect_data\n self.point_to_attr = point_to_attr\n\n self.child = {}\n for c in children:\n assert c.shape == children[0].shape\n self.child[c.location.id] = c\n\n def __str__(self):\n type_name = type(self).__name__\n out = f\"[\" f\"{type_name}]\"\n for v in self.child.values():\n out += \"\\n\\t-> \" + str(v)\n return out\n\n def __eq__(self, other):\n return torch.eq(self, other)\n\n def __add__(self, other):\n \"\"\"\n Adding a MultiPointer (MPT) and an AdditiveShared Tensor (AST) should return an\n AdditiveShared Tensor, so if we have this configuration, we permute self and\n other to use the fact that other.__add__(...) return an object of type other\n\n Else, we just redirect to .add which works well\n \"\"\"\n if isinstance(other, AdditiveSharingTensor):\n return other.__add__(self)\n else:\n return self.add(other)\n\n def __mul__(self, other):\n \"\"\"\n See __add__ for details but, MPT * AST should return AST\n \"\"\"\n if isinstance(other, AdditiveSharingTensor):\n return other.__mul__(self)\n else:\n return self.mul(other)\n\n @property\n def shape(self) -> torch.Size:\n \"\"\"This method returns the shape of the data being pointed to.\n This shape information SHOULD be cached on self._shape, but\n occasionally this information may not be present. 
If this is the\n case, then it requests the shape information from the remote object\n directly (which is inefficient and should be avoided).\"\"\"\n\n return list(self.child.values())[0].shape\n\n def get(self, sum_results: bool = False) -> torch.Tensor:\n\n results = list()\n for v in self.child.values():\n results.append(v.get())\n\n if sum_results:\n return sum(results)\n\n return results\n\n def virtual_get(self, sum_results: bool = False):\n \"\"\"Get the value of the tensor without calling get - Only for VirtualWorkers\"\"\"\n\n results = list()\n for v in self.child.values():\n value = v.location._objects[v.id_at_location]\n results.append(value)\n\n if sum_results:\n return sum(results)\n\n return results\n\n @staticmethod\n def dispatch(args, worker):\n \"\"\"\n utility function for handle_func_command which help to select\n shares (seen as elements of dict) in an argument set. It could\n perhaps be put elsewhere\n\n Args:\n args: arguments to give to a functions\n worker: owner of the shares to select\n\n Return:\n args where the MultiPointerTensor are replaced by\n the appropriate share\n \"\"\"\n return map(lambda x: x[worker] if isinstance(x, dict) else x, args)\n\n @classmethod\n def handle_func_command(cls, command):\n \"\"\"\n Receive an instruction for a function to be applied on a Syft Tensor,\n Replace in the args all the LogTensors with\n their child attribute, forward the command instruction to the\n handle_function_command of the type of the child attributes, get the\n response and replace a Syft Tensor on top of all tensors found in\n the response.\n\n Args:\n command: instruction of a function command: (command name,\n <no self>, arguments[, kwargs])\n\n Returns:\n the response of the function command\n \"\"\"\n\n cmd, _, args, kwargs = command\n\n tensor = args[0]\n\n # Check that the function has not been overwritten\n try:\n # Try to get recursively the attributes in cmd = \"<attr1>.<attr2>.<attr3>...\"\n cmd = cls.rgetattr(cls, cmd)\n return cmd(*args, **kwargs)\n except AttributeError:\n pass\n\n # TODO: I can't manage the import issue, can you?\n # Replace all LoggingTensor with their child attribute\n new_args, new_kwargs, new_type = sy.frameworks.torch.hook_args.hook_function_args(\n cmd, args, kwargs\n )\n\n results = {}\n for worker, share in new_args[0].items():\n new_type = type(share)\n new_args_worker = tuple(MultiPointerTensor.dispatch(new_args, worker))\n\n # build the new command\n new_command = (cmd, None, new_args_worker, new_kwargs)\n\n # Send it to the appropriate class and get the response\n results[worker] = new_type.handle_func_command(new_command)\n\n # Put back MultiPointerTensor on the tensors found in the response\n response = sy.frameworks.torch.hook_args.hook_response(\n cmd, results, wrap_type=cls, wrap_args=tensor.get_class_attributes()\n )\n\n return response\n\n def set_garbage_collect_data(self, value):\n shares = self.child\n for _, share in shares.items():\n share.child.garbage_collect_data = value\n\n @staticmethod\n def simplify(tensor: \"MultiPointerTensor\") -> tuple:\n \"\"\"\n This function takes the attributes of a MultiPointerTensor and saves them in a tuple\n Args:\n tensor (MultiPointerTensor): a MultiPointerTensor\n Returns:\n tuple: a tuple holding the unique attributes of the additive shared tensor\n Examples:\n data = simplify(tensor)\n \"\"\"\n\n chain = None\n if hasattr(tensor, \"child\"):\n chain = sy.serde.simplify(tensor.child)\n return (tensor.id, chain)\n\n @staticmethod\n def detail(worker: AbstractWorker, 
tensor_tuple: tuple) -> \"MultiPointerTensor\":\n \"\"\"\n This function reconstructs a MultiPointerTensor given it's attributes in form of a tuple.\n Args:\n worker: the worker doing the deserialization\n tensor_tuple: a tuple holding the attributes of the MultiPointerTensor\n Returns:\n MultiPointerTensor: a MultiPointerTensor\n Examples:\n multi_pointer_tensor = detail(data)\n \"\"\"\n\n tensor_id, chain = tensor_tuple\n\n tensor = sy.MultiPointerTensor(owner=worker, id=tensor_id)\n\n if chain is not None:\n chain = sy.serde._detail(worker, chain)\n tensor.child = chain\n\n return tensor\n", "path": "syft/frameworks/torch/tensors/interpreters/multi_pointer.py"}]} | 2,815 | 170 |
gh_patches_debug_37794 | rasdani/github-patches | git_diff | urllib3__urllib3-3311 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add http_version property to BaseHTTPResponse
Now that HTTP/2 is coming, we should add a value to `HTTPResponse` that reports the protocol version of the response. The normal `HTTPResponse` will be either HTTP/1.1 or HTTP/1.0; check `_http_vsn_str` for that value.
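
As a rough sketch of the idea (the class/property names and the mapping below are illustrative only, not the final urllib3 API), the property could translate the CPython-style integer version into the familiar string form:

```python
# Minimal, self-contained sketch -- not the actual urllib3 implementation.
class ResponseSketch:
    def __init__(self, version: int) -> None:
        # Follows CPython's http.client convention: major * 10 + minor.
        self.version = version

    @property
    def http_version(self) -> str:
        return {10: "HTTP/1.0", 11: "HTTP/1.1", 20: "HTTP/2"}.get(self.version, "Unknown")


print(ResponseSketch(11).http_version)  # -> HTTP/1.1
```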
</issue>
<code>
[start of src/urllib3/http2.py]
1 from __future__ import annotations
2
3 import contextlib
4 import threading
5 import typing
6
7 import h2.config # type: ignore[import]
8 import h2.connection # type: ignore[import]
9 import h2.events # type: ignore[import]
10
11 import urllib3.connection
12 import urllib3.util.ssl_
13
14 from ._collections import HTTPHeaderDict
15 from .connection import HTTPSConnection
16 from .connectionpool import HTTPSConnectionPool
17
18 orig_HTTPSConnection = HTTPSConnection
19
20
21 class HTTP2Connection(HTTPSConnection):
22 def __init__(
23 self, host: str, port: int | None = None, **kwargs: typing.Any
24 ) -> None:
25 self._h2_lock = threading.RLock()
26 self._h2_conn = h2.connection.H2Connection(
27 config=h2.config.H2Configuration(client_side=True)
28 )
29 self._h2_stream: int | None = None
30 self._h2_headers: list[tuple[bytes, bytes]] = []
31
32 if "proxy" in kwargs or "proxy_config" in kwargs: # Defensive:
33 raise NotImplementedError("Proxies aren't supported with HTTP/2")
34
35 super().__init__(host, port, **kwargs)
36
37 @contextlib.contextmanager
38 def _lock_h2_conn(self) -> typing.Generator[h2.connection.H2Connection, None, None]:
39 with self._h2_lock:
40 yield self._h2_conn
41
42 def connect(self) -> None:
43 super().connect()
44
45 with self._lock_h2_conn() as h2_conn:
46 h2_conn.initiate_connection()
47 self.sock.sendall(h2_conn.data_to_send())
48
49 def putrequest(
50 self,
51 method: str,
52 url: str,
53 skip_host: bool = False,
54 skip_accept_encoding: bool = False,
55 ) -> None:
56 with self._lock_h2_conn() as h2_conn:
57 self._h2_stream = h2_conn.get_next_available_stream_id()
58
59 if ":" in self.host:
60 authority = f"[{self.host}]:{self.port or 443}"
61 else:
62 authority = f"{self.host}:{self.port or 443}"
63
64 self._h2_headers.extend(
65 (
66 (b":scheme", b"https"),
67 (b":method", method.encode()),
68 (b":authority", authority.encode()),
69 (b":path", url.encode()),
70 )
71 )
72
73 def putheader(self, header: str, *values: str) -> None:
74 for value in values:
75 self._h2_headers.append(
76 (header.encode("utf-8").lower(), value.encode("utf-8"))
77 )
78
79 def endheaders(self) -> None: # type: ignore[override]
80 with self._lock_h2_conn() as h2_conn:
81 h2_conn.send_headers(
82 stream_id=self._h2_stream,
83 headers=self._h2_headers,
84 end_stream=True,
85 )
86 if data_to_send := h2_conn.data_to_send():
87 self.sock.sendall(data_to_send)
88
89 def send(self, data: bytes) -> None: # type: ignore[override] # Defensive:
90 if not data:
91 return
92 raise NotImplementedError("Sending data isn't supported yet")
93
94 def getresponse( # type: ignore[override]
95 self,
96 ) -> HTTP2Response:
97 status = None
98 data = bytearray()
99 with self._lock_h2_conn() as h2_conn:
100 end_stream = False
101 while not end_stream:
102 # TODO: Arbitrary read value.
103 if received_data := self.sock.recv(65535):
104 events = h2_conn.receive_data(received_data)
105 for event in events:
106 if isinstance(
107 event, h2.events.InformationalResponseReceived
108 ): # Defensive:
109 continue # TODO: Does the stdlib do anything with these responses?
110
111 elif isinstance(event, h2.events.ResponseReceived):
112 headers = HTTPHeaderDict()
113 for header, value in event.headers:
114 if header == b":status":
115 status = int(value.decode())
116 else:
117 headers.add(
118 header.decode("ascii"), value.decode("ascii")
119 )
120
121 elif isinstance(event, h2.events.DataReceived):
122 data += event.data
123 h2_conn.acknowledge_received_data(
124 event.flow_controlled_length, event.stream_id
125 )
126
127 elif isinstance(event, h2.events.StreamEnded):
128 end_stream = True
129
130 if data_to_send := h2_conn.data_to_send():
131 self.sock.sendall(data_to_send)
132
133 # We always close to not have to handle connection management.
134 self.close()
135
136 assert status is not None
137 return HTTP2Response(status=status, headers=headers, data=bytes(data))
138
139 def close(self) -> None:
140 with self._lock_h2_conn() as h2_conn:
141 try:
142 self._h2_conn.close_connection()
143 if data := h2_conn.data_to_send():
144 self.sock.sendall(data)
145 except Exception:
146 pass
147
148 # Reset all our HTTP/2 connection state.
149 self._h2_conn = h2.connection.H2Connection(
150 config=h2.config.H2Configuration(client_side=True)
151 )
152 self._h2_stream = None
153 self._h2_headers = []
154
155 super().close()
156
157
158 class HTTP2Response:
159 # TODO: This is a woefully incomplete response object, but works for non-streaming.
160 def __init__(self, status: int, headers: HTTPHeaderDict, data: bytes) -> None:
161 self.status = status
162 self.headers = headers
163 self.data = data
164 self.length_remaining = 0
165
166 def get_redirect_location(self) -> None:
167 return None
168
169
170 def inject_into_urllib3() -> None:
171 HTTPSConnectionPool.ConnectionCls = HTTP2Connection # type: ignore[assignment]
172 urllib3.connection.HTTPSConnection = HTTP2Connection # type: ignore[misc]
173
174 # TODO: Offer 'http/1.1' as well, but for testing purposes this is handy.
175 urllib3.util.ssl_.ALPN_PROTOCOLS = ["h2"]
176
177
178 def extract_from_urllib3() -> None:
179 HTTPSConnectionPool.ConnectionCls = orig_HTTPSConnection
180 urllib3.connection.HTTPSConnection = orig_HTTPSConnection # type: ignore[misc]
181
182 urllib3.util.ssl_.ALPN_PROTOCOLS = ["http/1.1"]
183
[end of src/urllib3/http2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/urllib3/http2.py b/src/urllib3/http2.py
--- a/src/urllib3/http2.py
+++ b/src/urllib3/http2.py
@@ -10,6 +10,7 @@
import urllib3.connection
import urllib3.util.ssl_
+from urllib3.response import BaseHTTPResponse
from ._collections import HTTPHeaderDict
from .connection import HTTPSConnection
@@ -54,6 +55,7 @@
skip_accept_encoding: bool = False,
) -> None:
with self._lock_h2_conn() as h2_conn:
+ self._request_url = url
self._h2_stream = h2_conn.get_next_available_stream_id()
if ":" in self.host:
@@ -134,7 +136,12 @@
self.close()
assert status is not None
- return HTTP2Response(status=status, headers=headers, data=bytes(data))
+ return HTTP2Response(
+ status=status,
+ headers=headers,
+ request_url=self._request_url,
+ data=bytes(data),
+ )
def close(self) -> None:
with self._lock_h2_conn() as h2_conn:
@@ -155,20 +162,39 @@
super().close()
-class HTTP2Response:
+class HTTP2Response(BaseHTTPResponse):
# TODO: This is a woefully incomplete response object, but works for non-streaming.
- def __init__(self, status: int, headers: HTTPHeaderDict, data: bytes) -> None:
- self.status = status
- self.headers = headers
- self.data = data
+ def __init__(
+ self,
+ status: int,
+ headers: HTTPHeaderDict,
+ request_url: str,
+ data: bytes,
+ decode_content: bool = False, # TODO: support decoding
+ ) -> None:
+ super().__init__(
+ status=status,
+ headers=headers,
+ # Following CPython, we map HTTP versions to major * 10 + minor integers
+ version=20,
+ # No reason phrase in HTTP/2
+ reason=None,
+ decode_content=decode_content,
+ request_url=request_url,
+ )
+ self._data = data
self.length_remaining = 0
+ @property
+ def data(self) -> bytes:
+ return self._data
+
def get_redirect_location(self) -> None:
return None
def inject_into_urllib3() -> None:
- HTTPSConnectionPool.ConnectionCls = HTTP2Connection # type: ignore[assignment]
+ HTTPSConnectionPool.ConnectionCls = HTTP2Connection
urllib3.connection.HTTPSConnection = HTTP2Connection # type: ignore[misc]
# TODO: Offer 'http/1.1' as well, but for testing purposes this is handy.
| {"golden_diff": "diff --git a/src/urllib3/http2.py b/src/urllib3/http2.py\n--- a/src/urllib3/http2.py\n+++ b/src/urllib3/http2.py\n@@ -10,6 +10,7 @@\n \n import urllib3.connection\n import urllib3.util.ssl_\n+from urllib3.response import BaseHTTPResponse\n \n from ._collections import HTTPHeaderDict\n from .connection import HTTPSConnection\n@@ -54,6 +55,7 @@\n skip_accept_encoding: bool = False,\n ) -> None:\n with self._lock_h2_conn() as h2_conn:\n+ self._request_url = url\n self._h2_stream = h2_conn.get_next_available_stream_id()\n \n if \":\" in self.host:\n@@ -134,7 +136,12 @@\n self.close()\n \n assert status is not None\n- return HTTP2Response(status=status, headers=headers, data=bytes(data))\n+ return HTTP2Response(\n+ status=status,\n+ headers=headers,\n+ request_url=self._request_url,\n+ data=bytes(data),\n+ )\n \n def close(self) -> None:\n with self._lock_h2_conn() as h2_conn:\n@@ -155,20 +162,39 @@\n super().close()\n \n \n-class HTTP2Response:\n+class HTTP2Response(BaseHTTPResponse):\n # TODO: This is a woefully incomplete response object, but works for non-streaming.\n- def __init__(self, status: int, headers: HTTPHeaderDict, data: bytes) -> None:\n- self.status = status\n- self.headers = headers\n- self.data = data\n+ def __init__(\n+ self,\n+ status: int,\n+ headers: HTTPHeaderDict,\n+ request_url: str,\n+ data: bytes,\n+ decode_content: bool = False, # TODO: support decoding\n+ ) -> None:\n+ super().__init__(\n+ status=status,\n+ headers=headers,\n+ # Following CPython, we map HTTP versions to major * 10 + minor integers\n+ version=20,\n+ # No reason phrase in HTTP/2\n+ reason=None,\n+ decode_content=decode_content,\n+ request_url=request_url,\n+ )\n+ self._data = data\n self.length_remaining = 0\n \n+ @property\n+ def data(self) -> bytes:\n+ return self._data\n+\n def get_redirect_location(self) -> None:\n return None\n \n \n def inject_into_urllib3() -> None:\n- HTTPSConnectionPool.ConnectionCls = HTTP2Connection # type: ignore[assignment]\n+ HTTPSConnectionPool.ConnectionCls = HTTP2Connection\n urllib3.connection.HTTPSConnection = HTTP2Connection # type: ignore[misc]\n \n # TODO: Offer 'http/1.1' as well, but for testing purposes this is handy.\n", "issue": "Add http_version property to BaseHTTPResponse\nNow that HTTP/2 is coming we should add a value to HTTPResponse to provide this value. The normal HTTPResponse will either be HTTP/1.1 or HTTP/1.0. 
Check `_http_vsn_str` for that value.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport contextlib\nimport threading\nimport typing\n\nimport h2.config # type: ignore[import]\nimport h2.connection # type: ignore[import]\nimport h2.events # type: ignore[import]\n\nimport urllib3.connection\nimport urllib3.util.ssl_\n\nfrom ._collections import HTTPHeaderDict\nfrom .connection import HTTPSConnection\nfrom .connectionpool import HTTPSConnectionPool\n\norig_HTTPSConnection = HTTPSConnection\n\n\nclass HTTP2Connection(HTTPSConnection):\n def __init__(\n self, host: str, port: int | None = None, **kwargs: typing.Any\n ) -> None:\n self._h2_lock = threading.RLock()\n self._h2_conn = h2.connection.H2Connection(\n config=h2.config.H2Configuration(client_side=True)\n )\n self._h2_stream: int | None = None\n self._h2_headers: list[tuple[bytes, bytes]] = []\n\n if \"proxy\" in kwargs or \"proxy_config\" in kwargs: # Defensive:\n raise NotImplementedError(\"Proxies aren't supported with HTTP/2\")\n\n super().__init__(host, port, **kwargs)\n\n @contextlib.contextmanager\n def _lock_h2_conn(self) -> typing.Generator[h2.connection.H2Connection, None, None]:\n with self._h2_lock:\n yield self._h2_conn\n\n def connect(self) -> None:\n super().connect()\n\n with self._lock_h2_conn() as h2_conn:\n h2_conn.initiate_connection()\n self.sock.sendall(h2_conn.data_to_send())\n\n def putrequest(\n self,\n method: str,\n url: str,\n skip_host: bool = False,\n skip_accept_encoding: bool = False,\n ) -> None:\n with self._lock_h2_conn() as h2_conn:\n self._h2_stream = h2_conn.get_next_available_stream_id()\n\n if \":\" in self.host:\n authority = f\"[{self.host}]:{self.port or 443}\"\n else:\n authority = f\"{self.host}:{self.port or 443}\"\n\n self._h2_headers.extend(\n (\n (b\":scheme\", b\"https\"),\n (b\":method\", method.encode()),\n (b\":authority\", authority.encode()),\n (b\":path\", url.encode()),\n )\n )\n\n def putheader(self, header: str, *values: str) -> None:\n for value in values:\n self._h2_headers.append(\n (header.encode(\"utf-8\").lower(), value.encode(\"utf-8\"))\n )\n\n def endheaders(self) -> None: # type: ignore[override]\n with self._lock_h2_conn() as h2_conn:\n h2_conn.send_headers(\n stream_id=self._h2_stream,\n headers=self._h2_headers,\n end_stream=True,\n )\n if data_to_send := h2_conn.data_to_send():\n self.sock.sendall(data_to_send)\n\n def send(self, data: bytes) -> None: # type: ignore[override] # Defensive:\n if not data:\n return\n raise NotImplementedError(\"Sending data isn't supported yet\")\n\n def getresponse( # type: ignore[override]\n self,\n ) -> HTTP2Response:\n status = None\n data = bytearray()\n with self._lock_h2_conn() as h2_conn:\n end_stream = False\n while not end_stream:\n # TODO: Arbitrary read value.\n if received_data := self.sock.recv(65535):\n events = h2_conn.receive_data(received_data)\n for event in events:\n if isinstance(\n event, h2.events.InformationalResponseReceived\n ): # Defensive:\n continue # TODO: Does the stdlib do anything with these responses?\n\n elif isinstance(event, h2.events.ResponseReceived):\n headers = HTTPHeaderDict()\n for header, value in event.headers:\n if header == b\":status\":\n status = int(value.decode())\n else:\n headers.add(\n header.decode(\"ascii\"), value.decode(\"ascii\")\n )\n\n elif isinstance(event, h2.events.DataReceived):\n data += event.data\n h2_conn.acknowledge_received_data(\n event.flow_controlled_length, event.stream_id\n )\n\n elif isinstance(event, h2.events.StreamEnded):\n 
end_stream = True\n\n if data_to_send := h2_conn.data_to_send():\n self.sock.sendall(data_to_send)\n\n # We always close to not have to handle connection management.\n self.close()\n\n assert status is not None\n return HTTP2Response(status=status, headers=headers, data=bytes(data))\n\n def close(self) -> None:\n with self._lock_h2_conn() as h2_conn:\n try:\n self._h2_conn.close_connection()\n if data := h2_conn.data_to_send():\n self.sock.sendall(data)\n except Exception:\n pass\n\n # Reset all our HTTP/2 connection state.\n self._h2_conn = h2.connection.H2Connection(\n config=h2.config.H2Configuration(client_side=True)\n )\n self._h2_stream = None\n self._h2_headers = []\n\n super().close()\n\n\nclass HTTP2Response:\n # TODO: This is a woefully incomplete response object, but works for non-streaming.\n def __init__(self, status: int, headers: HTTPHeaderDict, data: bytes) -> None:\n self.status = status\n self.headers = headers\n self.data = data\n self.length_remaining = 0\n\n def get_redirect_location(self) -> None:\n return None\n\n\ndef inject_into_urllib3() -> None:\n HTTPSConnectionPool.ConnectionCls = HTTP2Connection # type: ignore[assignment]\n urllib3.connection.HTTPSConnection = HTTP2Connection # type: ignore[misc]\n\n # TODO: Offer 'http/1.1' as well, but for testing purposes this is handy.\n urllib3.util.ssl_.ALPN_PROTOCOLS = [\"h2\"]\n\n\ndef extract_from_urllib3() -> None:\n HTTPSConnectionPool.ConnectionCls = orig_HTTPSConnection\n urllib3.connection.HTTPSConnection = orig_HTTPSConnection # type: ignore[misc]\n\n urllib3.util.ssl_.ALPN_PROTOCOLS = [\"http/1.1\"]\n", "path": "src/urllib3/http2.py"}]} | 2,437 | 645 |
gh_patches_debug_43259 | rasdani/github-patches | git_diff | doccano__doccano-1783 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exporting data returns nothing if I choose only approved documents
If I choose only approved documents, the zip file contains nothing.

If I don't choose it, the zip file contains "all.json".

But I have already approved (checked) those documents.
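
For reference, a quick way one might confirm what the current "approved only" filter actually matches, from a Django shell on the backend (illustrative only; the project name is hypothetical, and the field name comes from the exporter code below):

```python
# Rough check in a Django shell -- not part of the original report.
from examples.models import Example
from projects.models import Project

project = Project.objects.get(name="my-project")  # hypothetical project name
examples = Example.objects.filter(project=project)

print(examples.count())                                        # total documents
print(examples.exclude(annotations_approved_by=None).count())  # what "only approved" currently keeps
```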
</issue>
<code>
[start of backend/data_export/pipeline/repositories.py]
1 import abc
2 import itertools
3 from collections import defaultdict
4 from typing import Any, Dict, Iterator, List, Tuple
5
6 from .data import Record
7 from examples.models import Example
8 from projects.models import Project
9
10 SpanType = Tuple[int, int, str]
11
12
13 class BaseRepository:
14 def __init__(self, project: Project):
15 self.project = project
16
17 def list(self, export_approved=False) -> Iterator[Record]:
18 raise NotImplementedError()
19
20
21 class FileRepository(BaseRepository):
22 def list(self, export_approved=False) -> Iterator[Record]:
23 examples = self.project.examples.all()
24 if export_approved:
25 examples = examples.exclude(annotations_approved_by=None)
26
27 for example in examples:
28 label_per_user = self.label_per_user(example)
29 if self.project.collaborative_annotation:
30 label_per_user = self.reduce_user(label_per_user)
31 for user, label in label_per_user.items():
32 yield Record(
33 data_id=example.id,
34 data=example.upload_name,
35 label=label,
36 user=user,
37 metadata=example.meta,
38 )
39 # todo:
40 # If there is no label, export the doc with `unknown` user.
41 # This is a quick solution.
42 # In the future, the doc without label will be exported
43 # with the user who approved the doc.
44 # This means I will allow each user to be able to approve the doc.
45 if len(label_per_user) == 0:
46 yield Record(data_id=example.id, data=example.upload_name, label=[], user="unknown", metadata={})
47
48 def label_per_user(self, example) -> Dict:
49 label_per_user = defaultdict(list)
50 for a in example.categories.all():
51 label_per_user[a.user.username].append(a.label.text)
52 return label_per_user
53
54 def reduce_user(self, label_per_user: Dict[str, Any]):
55 value = list(itertools.chain(*label_per_user.values()))
56 return {"all": value}
57
58
59 class Speech2TextRepository(FileRepository):
60 def label_per_user(self, example) -> Dict:
61 label_per_user = defaultdict(list)
62 for a in example.texts.all():
63 label_per_user[a.user.username].append(a.text)
64 return label_per_user
65
66
67 class TextRepository(BaseRepository):
68 @property
69 def docs(self):
70 return Example.objects.filter(project=self.project)
71
72 def list(self, export_approved=False):
73 docs = self.docs
74 if export_approved:
75 docs = docs.exclude(annotations_approved_by=None)
76
77 for doc in docs:
78 label_per_user = self.label_per_user(doc)
79 if self.project.collaborative_annotation:
80 label_per_user = self.reduce_user(label_per_user)
81 for user, label in label_per_user.items():
82 yield Record(data_id=doc.id, data=doc.text, label=label, user=user, metadata=doc.meta)
83 # todo:
84 # If there is no label, export the doc with `unknown` user.
85 # This is a quick solution.
86 # In the future, the doc without label will be exported
87 # with the user who approved the doc.
88 # This means I will allow each user to be able to approve the doc.
89 if len(label_per_user) == 0:
90 yield Record(data_id=doc.id, data=doc.text, label=[], user="unknown", metadata={})
91
92 @abc.abstractmethod
93 def label_per_user(self, doc) -> Dict:
94 raise NotImplementedError()
95
96 def reduce_user(self, label_per_user: Dict[str, Any]):
97 value = list(itertools.chain(*label_per_user.values()))
98 return {"all": value}
99
100
101 class TextClassificationRepository(TextRepository):
102 @property
103 def docs(self):
104 return Example.objects.filter(project=self.project).prefetch_related("categories__user", "categories__label")
105
106 def label_per_user(self, doc) -> Dict:
107 label_per_user = defaultdict(list)
108 for a in doc.categories.all():
109 label_per_user[a.user.username].append(a.label.text)
110 return label_per_user
111
112
113 class SequenceLabelingRepository(TextRepository):
114 @property
115 def docs(self):
116 return Example.objects.filter(project=self.project).prefetch_related("spans__user", "spans__label")
117
118 def label_per_user(self, doc) -> Dict:
119 label_per_user = defaultdict(list)
120 for a in doc.spans.all():
121 label = (a.start_offset, a.end_offset, a.label.text)
122 label_per_user[a.user.username].append(label)
123 return label_per_user
124
125
126 class RelationExtractionRepository(TextRepository):
127 @property
128 def docs(self):
129 return Example.objects.filter(project=self.project).prefetch_related(
130 "spans__user", "spans__label", "relations__user", "relations__type"
131 )
132
133 def label_per_user(self, doc) -> Dict:
134 relation_per_user: Dict = defaultdict(list)
135 span_per_user: Dict = defaultdict(list)
136 label_per_user: Dict = defaultdict(dict)
137 for relation in doc.relations.all():
138 relation_per_user[relation.user.username].append(
139 {
140 "id": relation.id,
141 "from_id": relation.from_id.id,
142 "to_id": relation.to_id.id,
143 "type": relation.type.text,
144 }
145 )
146 for span in doc.spans.all():
147 span_per_user[span.user.username].append(
148 {
149 "id": span.id,
150 "start_offset": span.start_offset,
151 "end_offset": span.end_offset,
152 "label": span.label.text,
153 }
154 )
155 for user, relations in relation_per_user.items():
156 label_per_user[user]["relations"] = relations
157 for user, span in span_per_user.items():
158 label_per_user[user]["entities"] = span
159 return label_per_user
160
161 def reduce_user(self, label_per_user: Dict[str, Any]):
162 entities = []
163 relations = []
164 for user, label in label_per_user.items():
165 entities.extend(label.get("entities", []))
166 relations.extend(label.get("relations", []))
167 return {"all": {"entities": entities, "relations": relations}}
168
169
170 class Seq2seqRepository(TextRepository):
171 @property
172 def docs(self):
173 return Example.objects.filter(project=self.project).prefetch_related("texts__user")
174
175 def label_per_user(self, doc) -> Dict:
176 label_per_user = defaultdict(list)
177 for a in doc.texts.all():
178 label_per_user[a.user.username].append(a.text)
179 return label_per_user
180
181
182 class IntentDetectionSlotFillingRepository(TextRepository):
183 @property
184 def docs(self):
185 return Example.objects.filter(project=self.project).prefetch_related(
186 "categories__user", "categories__label", "spans__user", "spans__label"
187 )
188
189 def label_per_user(self, doc) -> Dict:
190 category_per_user: Dict[str, List[str]] = defaultdict(list)
191 span_per_user: Dict[str, List[SpanType]] = defaultdict(list)
192 label_per_user: Dict[str, Dict[str, List]] = defaultdict(dict)
193 for a in doc.categories.all():
194 category_per_user[a.user.username].append(a.label.text)
195 for a in doc.spans.all():
196 span_per_user[a.user.username].append((a.start_offset, a.end_offset, a.label.text))
197 for user, cats in category_per_user.items():
198 label_per_user[user]["cats"] = cats
199 for user, span in span_per_user.items():
200 label_per_user[user]["entities"] = span
201 for label in label_per_user.values():
202 label.setdefault("cats", [])
203 label.setdefault("entities", [])
204 return label_per_user
205
206 def reduce_user(self, label_per_user: Dict[str, Any]):
207 cats = []
208 entities = []
209 for user, label in label_per_user.items():
210 cats.extend(label.get("cats", []))
211 entities.extend(label.get("entities", []))
212 return {"all": {"entities": entities, "cats": cats}}
213
[end of backend/data_export/pipeline/repositories.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/data_export/pipeline/repositories.py b/backend/data_export/pipeline/repositories.py
--- a/backend/data_export/pipeline/repositories.py
+++ b/backend/data_export/pipeline/repositories.py
@@ -17,12 +17,15 @@
def list(self, export_approved=False) -> Iterator[Record]:
raise NotImplementedError()
+ def create_unlabeled_record(self, example: Example) -> Record:
+ raise NotImplementedError()
+
class FileRepository(BaseRepository):
def list(self, export_approved=False) -> Iterator[Record]:
examples = self.project.examples.all()
if export_approved:
- examples = examples.exclude(annotations_approved_by=None)
+ examples = examples.exclude(states=None)
for example in examples:
label_per_user = self.label_per_user(example)
@@ -43,7 +46,10 @@
# with the user who approved the doc.
# This means I will allow each user to be able to approve the doc.
if len(label_per_user) == 0:
- yield Record(data_id=example.id, data=example.upload_name, label=[], user="unknown", metadata={})
+ yield self.create_unlabeled_record(example)
+
+ def create_unlabeled_record(self, example: Example) -> Record:
+ return Record(data_id=example.id, data=example.upload_name, label=[], user="unknown", metadata=example.meta)
def label_per_user(self, example) -> Dict:
label_per_user = defaultdict(list)
@@ -72,7 +78,7 @@
def list(self, export_approved=False):
docs = self.docs
if export_approved:
- docs = docs.exclude(annotations_approved_by=None)
+ docs = docs.exclude(states=None)
for doc in docs:
label_per_user = self.label_per_user(doc)
@@ -87,7 +93,10 @@
# with the user who approved the doc.
# This means I will allow each user to be able to approve the doc.
if len(label_per_user) == 0:
- yield Record(data_id=doc.id, data=doc.text, label=[], user="unknown", metadata={})
+ yield self.create_unlabeled_record(doc)
+
+ def create_unlabeled_record(self, example: Example) -> Record:
+ return Record(data_id=example.id, data=example.text, label=[], user="unknown", metadata=example.meta)
@abc.abstractmethod
def label_per_user(self, doc) -> Dict:
@@ -130,6 +139,15 @@
"spans__user", "spans__label", "relations__user", "relations__type"
)
+ def create_unlabeled_record(self, example: Example) -> Record:
+ return Record(
+ data_id=example.id,
+ data=example.text,
+ label={"entities": [], "relations": []},
+ user="unknown",
+ metadata=example.meta,
+ )
+
def label_per_user(self, doc) -> Dict:
relation_per_user: Dict = defaultdict(list)
span_per_user: Dict = defaultdict(list)
@@ -186,6 +204,15 @@
"categories__user", "categories__label", "spans__user", "spans__label"
)
+ def create_unlabeled_record(self, example: Example) -> Record:
+ return Record(
+ data_id=example.id,
+ data=example.text,
+ label={"entities": [], "cats": []},
+ user="unknown",
+ metadata=example.meta,
+ )
+
def label_per_user(self, doc) -> Dict:
category_per_user: Dict[str, List[str]] = defaultdict(list)
span_per_user: Dict[str, List[SpanType]] = defaultdict(list)
| {"golden_diff": "diff --git a/backend/data_export/pipeline/repositories.py b/backend/data_export/pipeline/repositories.py\n--- a/backend/data_export/pipeline/repositories.py\n+++ b/backend/data_export/pipeline/repositories.py\n@@ -17,12 +17,15 @@\n def list(self, export_approved=False) -> Iterator[Record]:\n raise NotImplementedError()\n \n+ def create_unlabeled_record(self, example: Example) -> Record:\n+ raise NotImplementedError()\n+\n \n class FileRepository(BaseRepository):\n def list(self, export_approved=False) -> Iterator[Record]:\n examples = self.project.examples.all()\n if export_approved:\n- examples = examples.exclude(annotations_approved_by=None)\n+ examples = examples.exclude(states=None)\n \n for example in examples:\n label_per_user = self.label_per_user(example)\n@@ -43,7 +46,10 @@\n # with the user who approved the doc.\n # This means I will allow each user to be able to approve the doc.\n if len(label_per_user) == 0:\n- yield Record(data_id=example.id, data=example.upload_name, label=[], user=\"unknown\", metadata={})\n+ yield self.create_unlabeled_record(example)\n+\n+ def create_unlabeled_record(self, example: Example) -> Record:\n+ return Record(data_id=example.id, data=example.upload_name, label=[], user=\"unknown\", metadata=example.meta)\n \n def label_per_user(self, example) -> Dict:\n label_per_user = defaultdict(list)\n@@ -72,7 +78,7 @@\n def list(self, export_approved=False):\n docs = self.docs\n if export_approved:\n- docs = docs.exclude(annotations_approved_by=None)\n+ docs = docs.exclude(states=None)\n \n for doc in docs:\n label_per_user = self.label_per_user(doc)\n@@ -87,7 +93,10 @@\n # with the user who approved the doc.\n # This means I will allow each user to be able to approve the doc.\n if len(label_per_user) == 0:\n- yield Record(data_id=doc.id, data=doc.text, label=[], user=\"unknown\", metadata={})\n+ yield self.create_unlabeled_record(doc)\n+\n+ def create_unlabeled_record(self, example: Example) -> Record:\n+ return Record(data_id=example.id, data=example.text, label=[], user=\"unknown\", metadata=example.meta)\n \n @abc.abstractmethod\n def label_per_user(self, doc) -> Dict:\n@@ -130,6 +139,15 @@\n \"spans__user\", \"spans__label\", \"relations__user\", \"relations__type\"\n )\n \n+ def create_unlabeled_record(self, example: Example) -> Record:\n+ return Record(\n+ data_id=example.id,\n+ data=example.text,\n+ label={\"entities\": [], \"relations\": []},\n+ user=\"unknown\",\n+ metadata=example.meta,\n+ )\n+\n def label_per_user(self, doc) -> Dict:\n relation_per_user: Dict = defaultdict(list)\n span_per_user: Dict = defaultdict(list)\n@@ -186,6 +204,15 @@\n \"categories__user\", \"categories__label\", \"spans__user\", \"spans__label\"\n )\n \n+ def create_unlabeled_record(self, example: Example) -> Record:\n+ return Record(\n+ data_id=example.id,\n+ data=example.text,\n+ label={\"entities\": [], \"cats\": []},\n+ user=\"unknown\",\n+ metadata=example.meta,\n+ )\n+\n def label_per_user(self, doc) -> Dict:\n category_per_user: Dict[str, List[str]] = defaultdict(list)\n span_per_user: Dict[str, List[SpanType]] = defaultdict(list)\n", "issue": "Export data will get nothing, if I choose only approved documents.\nif I choose only approved documents, the zip file contains nothing\r\n\r\nif I don't choose it, the zip file contains \"all.json\".\r\n\r\nBut those txt I have checked.\n", "before_files": [{"content": "import abc\nimport itertools\nfrom collections import defaultdict\nfrom typing import Any, Dict, Iterator, List, Tuple\n\nfrom .data 
import Record\nfrom examples.models import Example\nfrom projects.models import Project\n\nSpanType = Tuple[int, int, str]\n\n\nclass BaseRepository:\n def __init__(self, project: Project):\n self.project = project\n\n def list(self, export_approved=False) -> Iterator[Record]:\n raise NotImplementedError()\n\n\nclass FileRepository(BaseRepository):\n def list(self, export_approved=False) -> Iterator[Record]:\n examples = self.project.examples.all()\n if export_approved:\n examples = examples.exclude(annotations_approved_by=None)\n\n for example in examples:\n label_per_user = self.label_per_user(example)\n if self.project.collaborative_annotation:\n label_per_user = self.reduce_user(label_per_user)\n for user, label in label_per_user.items():\n yield Record(\n data_id=example.id,\n data=example.upload_name,\n label=label,\n user=user,\n metadata=example.meta,\n )\n # todo:\n # If there is no label, export the doc with `unknown` user.\n # This is a quick solution.\n # In the future, the doc without label will be exported\n # with the user who approved the doc.\n # This means I will allow each user to be able to approve the doc.\n if len(label_per_user) == 0:\n yield Record(data_id=example.id, data=example.upload_name, label=[], user=\"unknown\", metadata={})\n\n def label_per_user(self, example) -> Dict:\n label_per_user = defaultdict(list)\n for a in example.categories.all():\n label_per_user[a.user.username].append(a.label.text)\n return label_per_user\n\n def reduce_user(self, label_per_user: Dict[str, Any]):\n value = list(itertools.chain(*label_per_user.values()))\n return {\"all\": value}\n\n\nclass Speech2TextRepository(FileRepository):\n def label_per_user(self, example) -> Dict:\n label_per_user = defaultdict(list)\n for a in example.texts.all():\n label_per_user[a.user.username].append(a.text)\n return label_per_user\n\n\nclass TextRepository(BaseRepository):\n @property\n def docs(self):\n return Example.objects.filter(project=self.project)\n\n def list(self, export_approved=False):\n docs = self.docs\n if export_approved:\n docs = docs.exclude(annotations_approved_by=None)\n\n for doc in docs:\n label_per_user = self.label_per_user(doc)\n if self.project.collaborative_annotation:\n label_per_user = self.reduce_user(label_per_user)\n for user, label in label_per_user.items():\n yield Record(data_id=doc.id, data=doc.text, label=label, user=user, metadata=doc.meta)\n # todo:\n # If there is no label, export the doc with `unknown` user.\n # This is a quick solution.\n # In the future, the doc without label will be exported\n # with the user who approved the doc.\n # This means I will allow each user to be able to approve the doc.\n if len(label_per_user) == 0:\n yield Record(data_id=doc.id, data=doc.text, label=[], user=\"unknown\", metadata={})\n\n @abc.abstractmethod\n def label_per_user(self, doc) -> Dict:\n raise NotImplementedError()\n\n def reduce_user(self, label_per_user: Dict[str, Any]):\n value = list(itertools.chain(*label_per_user.values()))\n return {\"all\": value}\n\n\nclass TextClassificationRepository(TextRepository):\n @property\n def docs(self):\n return Example.objects.filter(project=self.project).prefetch_related(\"categories__user\", \"categories__label\")\n\n def label_per_user(self, doc) -> Dict:\n label_per_user = defaultdict(list)\n for a in doc.categories.all():\n label_per_user[a.user.username].append(a.label.text)\n return label_per_user\n\n\nclass SequenceLabelingRepository(TextRepository):\n @property\n def docs(self):\n return 
Example.objects.filter(project=self.project).prefetch_related(\"spans__user\", \"spans__label\")\n\n def label_per_user(self, doc) -> Dict:\n label_per_user = defaultdict(list)\n for a in doc.spans.all():\n label = (a.start_offset, a.end_offset, a.label.text)\n label_per_user[a.user.username].append(label)\n return label_per_user\n\n\nclass RelationExtractionRepository(TextRepository):\n @property\n def docs(self):\n return Example.objects.filter(project=self.project).prefetch_related(\n \"spans__user\", \"spans__label\", \"relations__user\", \"relations__type\"\n )\n\n def label_per_user(self, doc) -> Dict:\n relation_per_user: Dict = defaultdict(list)\n span_per_user: Dict = defaultdict(list)\n label_per_user: Dict = defaultdict(dict)\n for relation in doc.relations.all():\n relation_per_user[relation.user.username].append(\n {\n \"id\": relation.id,\n \"from_id\": relation.from_id.id,\n \"to_id\": relation.to_id.id,\n \"type\": relation.type.text,\n }\n )\n for span in doc.spans.all():\n span_per_user[span.user.username].append(\n {\n \"id\": span.id,\n \"start_offset\": span.start_offset,\n \"end_offset\": span.end_offset,\n \"label\": span.label.text,\n }\n )\n for user, relations in relation_per_user.items():\n label_per_user[user][\"relations\"] = relations\n for user, span in span_per_user.items():\n label_per_user[user][\"entities\"] = span\n return label_per_user\n\n def reduce_user(self, label_per_user: Dict[str, Any]):\n entities = []\n relations = []\n for user, label in label_per_user.items():\n entities.extend(label.get(\"entities\", []))\n relations.extend(label.get(\"relations\", []))\n return {\"all\": {\"entities\": entities, \"relations\": relations}}\n\n\nclass Seq2seqRepository(TextRepository):\n @property\n def docs(self):\n return Example.objects.filter(project=self.project).prefetch_related(\"texts__user\")\n\n def label_per_user(self, doc) -> Dict:\n label_per_user = defaultdict(list)\n for a in doc.texts.all():\n label_per_user[a.user.username].append(a.text)\n return label_per_user\n\n\nclass IntentDetectionSlotFillingRepository(TextRepository):\n @property\n def docs(self):\n return Example.objects.filter(project=self.project).prefetch_related(\n \"categories__user\", \"categories__label\", \"spans__user\", \"spans__label\"\n )\n\n def label_per_user(self, doc) -> Dict:\n category_per_user: Dict[str, List[str]] = defaultdict(list)\n span_per_user: Dict[str, List[SpanType]] = defaultdict(list)\n label_per_user: Dict[str, Dict[str, List]] = defaultdict(dict)\n for a in doc.categories.all():\n category_per_user[a.user.username].append(a.label.text)\n for a in doc.spans.all():\n span_per_user[a.user.username].append((a.start_offset, a.end_offset, a.label.text))\n for user, cats in category_per_user.items():\n label_per_user[user][\"cats\"] = cats\n for user, span in span_per_user.items():\n label_per_user[user][\"entities\"] = span\n for label in label_per_user.values():\n label.setdefault(\"cats\", [])\n label.setdefault(\"entities\", [])\n return label_per_user\n\n def reduce_user(self, label_per_user: Dict[str, Any]):\n cats = []\n entities = []\n for user, label in label_per_user.items():\n cats.extend(label.get(\"cats\", []))\n entities.extend(label.get(\"entities\", []))\n return {\"all\": {\"entities\": entities, \"cats\": cats}}\n", "path": "backend/data_export/pipeline/repositories.py"}]} | 2,823 | 836 |
gh_patches_debug_34829 | rasdani/github-patches | git_diff | liqd__adhocracy4-168 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
improve accessibility of image upload
* add alt attribute with the filename
* if there is no image uploaded, the image tag should not be there
* the label's `for` attribute doesn't reference the file input's id.
The first part causes the HTML to be invalid, which is part of the BITV Test "4.1.1a Valides HTML".
The third part is part of the BITV Test "3.3.2a Formularfelder richtig beschriftet".
</issue>
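The third bullet comes down to the widget deriving a single HTML id, putting it on the rendered file input, and handing the same id to the template so the label's `for` attribute can reference it. A minimal sketch of that wiring follows; `AccessibleImageInput` is a hypothetical name, and the template variable names are assumptions rather than the project's actual template contract.

```py
# Sketch only: a hypothetical subclass showing the id wiring, not the project's fix.
from django.forms import widgets
from django.template import loader


class AccessibleImageInput(widgets.ClearableFileInput):
    def render(self, name, value, attrs=None):
        html_id = (attrs or {}).get('id', name)  # one id shared by the input and the label
        file_input = super().render(name, None, {
            'id': html_id,
            'class': 'form-control form-control-file',
        })
        context = {
            'id': html_id,  # template can emit <label for="{{ id }}">
            'file_input': file_input,
        }
        return loader.render_to_string('a4images/image_upload_widget.html', context)
```

The same id can also drive the clear checkbox's `data-upload-clear` hook, so the upload JavaScript keeps working when several widgets appear on one page.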
<code>
[start of adhocracy4/images/widgets.py]
1 from os.path import basename
2
3 from django.contrib.staticfiles.storage import staticfiles_storage
4 from django.forms import widgets
5 from django.template import loader
6 from django.utils.html import conditional_escape
7 from django.utils.translation import ugettext
8
9
10 class ImageInputWidget(widgets.ClearableFileInput):
11
12 """
13 A project-specific improved version of the clearable file upload.
14
15 Allows to upload and delete uploaded files. It doesn't passing attributes
16 using the positional `attrs` argument and hard codes css files.
17 """
18 class Media:
19 js = (staticfiles_storage.url('a4images/imageUploader.js'),)
20
21 def render(self, name, value, attrs=None):
22
23 has_image_set = self.is_initial(value)
24 is_required = self.is_required
25
26 file_placeholder = ugettext('Select a picture from your local folder.')
27 file_input = super().render(name, None, {
28 'id': name,
29 'class': 'form-control form-control-file'
30 })
31
32 if has_image_set:
33 file_name = basename(value.name)
34 file_url = conditional_escape(value.url)
35 else:
36 file_name = ""
37 file_url = ""
38
39 text_input = widgets.TextInput().render('__noname__', file_name, {
40 'class': 'form-control form-control-file-dummy',
41 'placeholder': file_placeholder,
42 'tabindex': '-1'
43 })
44
45 checkbox_id = self.clear_checkbox_id(name)
46 checkbox_name = self.clear_checkbox_name(name)
47 checkbox_input = widgets.CheckboxInput().render(checkbox_name, False, {
48 'id': checkbox_id,
49 'class': 'clear-image',
50 'data-upload-clear': name,
51 })
52
53 context = {
54 'name': name,
55 'has_image_set': has_image_set,
56 'is_required': is_required,
57 'file_url': file_url,
58 'file_input': file_input,
59 'file_id': name + '-file',
60 'text_input': text_input,
61 'checkbox_input': checkbox_input,
62 'checkbox_id': checkbox_id
63 }
64
65 return loader.render_to_string(
66 'a4images/image_upload_widget.html',
67 context
68 )
69
70 def value_from_datadict(self, data, files, name):
71 """
72 Modify value_from_datadict, so that delete takes precedence over
73 upload.
74 """
75 file_value = super(widgets.ClearableFileInput, self)\
76 .value_from_datadict(data, files, name)
77 checkbox_value = widgets.CheckboxInput()\
78 .value_from_datadict(data, files, self.clear_checkbox_name(name))
79 if not self.is_required and checkbox_value:
80 return False
81 return file_value
82
[end of adhocracy4/images/widgets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/adhocracy4/images/widgets.py b/adhocracy4/images/widgets.py
--- a/adhocracy4/images/widgets.py
+++ b/adhocracy4/images/widgets.py
@@ -19,13 +19,13 @@
js = (staticfiles_storage.url('a4images/imageUploader.js'),)
def render(self, name, value, attrs=None):
-
+ html_id = attrs and attrs.get('id', name) or name
has_image_set = self.is_initial(value)
is_required = self.is_required
file_placeholder = ugettext('Select a picture from your local folder.')
file_input = super().render(name, None, {
- 'id': name,
+ 'id': html_id,
'class': 'form-control form-control-file'
})
@@ -39,7 +39,8 @@
text_input = widgets.TextInput().render('__noname__', file_name, {
'class': 'form-control form-control-file-dummy',
'placeholder': file_placeholder,
- 'tabindex': '-1'
+ 'tabindex': '-1',
+ 'id': 'text-{}'.format(html_id)
})
checkbox_id = self.clear_checkbox_id(name)
@@ -47,16 +48,16 @@
checkbox_input = widgets.CheckboxInput().render(checkbox_name, False, {
'id': checkbox_id,
'class': 'clear-image',
- 'data-upload-clear': name,
+ 'data-upload-clear': html_id,
})
context = {
- 'name': name,
+ 'id': html_id,
'has_image_set': has_image_set,
'is_required': is_required,
'file_url': file_url,
'file_input': file_input,
- 'file_id': name + '-file',
+ 'file_id': html_id + '-file',
'text_input': text_input,
'checkbox_input': checkbox_input,
'checkbox_id': checkbox_id
| {"golden_diff": "diff --git a/adhocracy4/images/widgets.py b/adhocracy4/images/widgets.py\n--- a/adhocracy4/images/widgets.py\n+++ b/adhocracy4/images/widgets.py\n@@ -19,13 +19,13 @@\n js = (staticfiles_storage.url('a4images/imageUploader.js'),)\n \n def render(self, name, value, attrs=None):\n-\n+ html_id = attrs and attrs.get('id', name) or name\n has_image_set = self.is_initial(value)\n is_required = self.is_required\n \n file_placeholder = ugettext('Select a picture from your local folder.')\n file_input = super().render(name, None, {\n- 'id': name,\n+ 'id': html_id,\n 'class': 'form-control form-control-file'\n })\n \n@@ -39,7 +39,8 @@\n text_input = widgets.TextInput().render('__noname__', file_name, {\n 'class': 'form-control form-control-file-dummy',\n 'placeholder': file_placeholder,\n- 'tabindex': '-1'\n+ 'tabindex': '-1',\n+ 'id': 'text-{}'.format(html_id)\n })\n \n checkbox_id = self.clear_checkbox_id(name)\n@@ -47,16 +48,16 @@\n checkbox_input = widgets.CheckboxInput().render(checkbox_name, False, {\n 'id': checkbox_id,\n 'class': 'clear-image',\n- 'data-upload-clear': name,\n+ 'data-upload-clear': html_id,\n })\n \n context = {\n- 'name': name,\n+ 'id': html_id,\n 'has_image_set': has_image_set,\n 'is_required': is_required,\n 'file_url': file_url,\n 'file_input': file_input,\n- 'file_id': name + '-file',\n+ 'file_id': html_id + '-file',\n 'text_input': text_input,\n 'checkbox_input': checkbox_input,\n 'checkbox_id': checkbox_id\n", "issue": "improve accessibility of image upload\n* add alt attribute with the filename\r\n* if there is no image uploaded the image tag should not be there\r\n* the label's `for` attribute doesn't reference the file input's id.\r\n\r\nThe first part causes the HTML to be invalid, which is part of the BITV Test \"4.1.1a Valides HTML\".\r\nThe third part is part of the BITV Test \"3.3.2a Formularfelder richtig beschriftet\".\n", "before_files": [{"content": "from os.path import basename\n\nfrom django.contrib.staticfiles.storage import staticfiles_storage\nfrom django.forms import widgets\nfrom django.template import loader\nfrom django.utils.html import conditional_escape\nfrom django.utils.translation import ugettext\n\n\nclass ImageInputWidget(widgets.ClearableFileInput):\n\n \"\"\"\n A project-specific improved version of the clearable file upload.\n\n Allows to upload and delete uploaded files. 
It doesn't passing attributes\n using the positional `attrs` argument and hard codes css files.\n \"\"\"\n class Media:\n js = (staticfiles_storage.url('a4images/imageUploader.js'),)\n\n def render(self, name, value, attrs=None):\n\n has_image_set = self.is_initial(value)\n is_required = self.is_required\n\n file_placeholder = ugettext('Select a picture from your local folder.')\n file_input = super().render(name, None, {\n 'id': name,\n 'class': 'form-control form-control-file'\n })\n\n if has_image_set:\n file_name = basename(value.name)\n file_url = conditional_escape(value.url)\n else:\n file_name = \"\"\n file_url = \"\"\n\n text_input = widgets.TextInput().render('__noname__', file_name, {\n 'class': 'form-control form-control-file-dummy',\n 'placeholder': file_placeholder,\n 'tabindex': '-1'\n })\n\n checkbox_id = self.clear_checkbox_id(name)\n checkbox_name = self.clear_checkbox_name(name)\n checkbox_input = widgets.CheckboxInput().render(checkbox_name, False, {\n 'id': checkbox_id,\n 'class': 'clear-image',\n 'data-upload-clear': name,\n })\n\n context = {\n 'name': name,\n 'has_image_set': has_image_set,\n 'is_required': is_required,\n 'file_url': file_url,\n 'file_input': file_input,\n 'file_id': name + '-file',\n 'text_input': text_input,\n 'checkbox_input': checkbox_input,\n 'checkbox_id': checkbox_id\n }\n\n return loader.render_to_string(\n 'a4images/image_upload_widget.html',\n context\n )\n\n def value_from_datadict(self, data, files, name):\n \"\"\"\n Modify value_from_datadict, so that delete takes precedence over\n upload.\n \"\"\"\n file_value = super(widgets.ClearableFileInput, self)\\\n .value_from_datadict(data, files, name)\n checkbox_value = widgets.CheckboxInput()\\\n .value_from_datadict(data, files, self.clear_checkbox_name(name))\n if not self.is_required and checkbox_value:\n return False\n return file_value\n", "path": "adhocracy4/images/widgets.py"}]} | 1,366 | 436 |
gh_patches_debug_15533 | rasdani/github-patches | git_diff | voxel51__fiftyone-1660 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Support Fortran ordered masks in the App
Currently Fortran-ordered masks are flipped.
```py
import fiftyone as fo
import fiftyone.zoo as foz
import numpy as np
dataset = foz.load_zoo_dataset("quickstart", max_samples=1).select_fields().clone()
sample = dataset.first()
contiguous = np.asarray([[True, False], [True, False]])
sample["contiguous"] = fo.Segmentation(mask=contiguous)
sample["fortran"] = fo.Segmentation(mask=np.asfortranarray(contiguous))
sample.save()
session = fo.Session(dataset)
```
<img width="1792" alt="flipped" src="https://user-images.githubusercontent.com/19821840/159953546-5eef71bc-d111-4667-a271-6c4e34e1b7da.png">
</issue>
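The reproduction suggests the mask bytes keep their column-major layout all the way to the App, which then reads them as row-major. One defensive pattern is to normalize the array to C order before serializing; the sketch below assumes the helper name `serialize_mask` (it is not FiftyOne API) and reuses the `fou.serialize_numpy_array` call that appears in the server code that follows.

```py
import numpy as np

import fiftyone.core.utils as fou


def serialize_mask(mask: np.ndarray) -> str:
    # np.isfortran() is True for column-major arrays that are not C-contiguous;
    # copying with np.ascontiguousarray() gives a row-major buffer before serialization.
    if np.isfortran(mask):
        mask = np.ascontiguousarray(mask)
    return fou.serialize_numpy_array(mask, ascii=True)
```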
<code>
[start of fiftyone/server/json_util.py]
1 """
2 FiftyOne server json utilies.
3
4 | Copyright 2017-2022, Voxel51, Inc.
5 | `voxel51.com <https://voxel51.com/>`_
6 |
7 """
8 from bson import ObjectId, json_util
9 from collections import OrderedDict
10 from datetime import date, datetime
11 from json import JSONEncoder
12 import math
13
14 from fiftyone.core.sample import Sample, SampleView
15 from fiftyone.core.stages import ViewStage
16 import fiftyone.core.utils as fou
17
18
19 _MASK_CLASSES = {"Detection", "Heatmap", "Segmentation"}
20
21
22 def _handle_bytes(o):
23 for k, v in o.items():
24 if isinstance(v, bytes):
25 o[k] = str(fou.deserialize_numpy_array(v).shape)
26 elif isinstance(v, dict):
27 o[k] = _handle_bytes(v)
28
29 return o
30
31
32 def _handle_numpy_array(raw, _cls=None):
33 if _cls not in _MASK_CLASSES:
34 return str(fou.deserialize_numpy_array(raw).shape)
35
36 return fou.serialize_numpy_array(
37 fou.deserialize_numpy_array(raw), ascii=True
38 )
39
40
41 def _handle_date(dt):
42 return {
43 "_cls": "DateTime",
44 "datetime": fou.datetime_to_timestamp(dt),
45 }
46
47
48 def _is_invalid_number(value):
49 if not isinstance(value, float):
50 return False
51
52 return math.isnan(value) or math.isinf(value)
53
54
55 def convert(d):
56 if isinstance(d, (dict, OrderedDict)):
57 for k, v in d.items():
58 if isinstance(v, bytes):
59 d[k] = _handle_numpy_array(v, d.get("_cls", None))
60 elif isinstance(v, (date, datetime)):
61 d[k] = _handle_date(v)
62 elif isinstance(v, ObjectId):
63 d[k] = str(v)
64 elif isinstance(v, (dict, OrderedDict, list)):
65 convert(v)
66 elif _is_invalid_number(v):
67 d[k] = str(v)
68
69 if isinstance(d, list):
70 for idx, i in enumerate(d):
71 if isinstance(i, tuple):
72 d[idx] = list(i)
73 i = d[idx]
74
75 if isinstance(i, bytes):
76 d[idx] = _handle_numpy_array(i)
77 elif isinstance(i, (date, datetime)):
78 d[idx] = _handle_date(i)
79 elif isinstance(i, ObjectId):
80 d[idx] = str(i)
81 elif isinstance(i, (dict, OrderedDict, list)):
82 convert(i)
83 elif _is_invalid_number(i):
84 d[idx] = str(i)
85
86
87 class FiftyOneJSONEncoder(JSONEncoder):
88 """JSON encoder for the FiftyOne server.
89
90 Any classes with non-standard serialization methods should
91 be accounted for in the `default()` method.
92 """
93
94 def default(self, o): # pylint: disable=E0202
95 """Returns the serialized representation of the objects
96
97 Args:
98 o: the object
99
100 Returns:
101 str
102 """
103 if isinstance(o, (Sample, SampleView)):
104 return _handle_bytes(o.to_mongo_dict(include_id=True))
105 if issubclass(type(o), ViewStage):
106 return o._serialize()
107 if isinstance(o, ObjectId):
108 return str(o)
109 if isinstance(o, float):
110 return json_util.dumps(o)
111 return super().default(o)
112
113 @staticmethod
114 def dumps(*args, **kwargs):
115 """Defined for overriding the default SocketIO `json` interface"""
116 kwargs["cls"] = FiftyOneJSONEncoder
117 return json_util.dumps(
118 json_util.loads(
119 json_util.dumps(*args, **kwargs), parse_constant=lambda c: c
120 ),
121 **kwargs
122 )
123
124 @staticmethod
125 def loads(*args, **kwargs):
126 """Defined for overriding the default SocketIO `json` interface"""
127 return json_util.loads(*args, **kwargs)
128
[end of fiftyone/server/json_util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/fiftyone/server/json_util.py b/fiftyone/server/json_util.py
--- a/fiftyone/server/json_util.py
+++ b/fiftyone/server/json_util.py
@@ -10,6 +10,7 @@
from datetime import date, datetime
from json import JSONEncoder
import math
+import numpy as np
from fiftyone.core.sample import Sample, SampleView
from fiftyone.core.stages import ViewStage
@@ -33,9 +34,12 @@
if _cls not in _MASK_CLASSES:
return str(fou.deserialize_numpy_array(raw).shape)
- return fou.serialize_numpy_array(
- fou.deserialize_numpy_array(raw), ascii=True
- )
+ array = fou.deserialize_numpy_array(raw)
+
+ if np.isfortran(array):
+ array = np.ascontiguousarray(array)
+
+ return fou.serialize_numpy_array(array, ascii=True)
def _handle_date(dt):
| {"golden_diff": "diff --git a/fiftyone/server/json_util.py b/fiftyone/server/json_util.py\n--- a/fiftyone/server/json_util.py\n+++ b/fiftyone/server/json_util.py\n@@ -10,6 +10,7 @@\n from datetime import date, datetime\n from json import JSONEncoder\n import math\n+import numpy as np\n \n from fiftyone.core.sample import Sample, SampleView\n from fiftyone.core.stages import ViewStage\n@@ -33,9 +34,12 @@\n if _cls not in _MASK_CLASSES:\n return str(fou.deserialize_numpy_array(raw).shape)\n \n- return fou.serialize_numpy_array(\n- fou.deserialize_numpy_array(raw), ascii=True\n- )\n+ array = fou.deserialize_numpy_array(raw)\n+\n+ if np.isfortran(array):\n+ array = np.ascontiguousarray(array)\n+\n+ return fou.serialize_numpy_array(array, ascii=True)\n \n \n def _handle_date(dt):\n", "issue": "[BUG] Support Fortran ordered masks in the App\nCurrently fortran ordered masks are flipped.\r\n\r\n```py\r\nimport fiftyone as fo\r\nimport fiftyone.zoo as foz\r\nimport numpy as np\r\n\r\ndataset = foz.load_zoo_dataset(\"quickstart\", max_samples=1).select_fields().clone()\r\nsample = dataset.first()\r\n\r\ncontiguous = np.asarray([[True, False], [True, False]])\r\nsample[\"contiguous\"] = fo.Segmentation(mask=contiguous)\r\nsample[\"fortran\"] = fo.Segmentation(mask=np.asfortranarray(contiguous))\r\nsample.save()\r\n\r\nsession = fo.Session(dataset)\r\n```\r\n<img width=\"1792\" alt=\"flipped\" src=\"https://user-images.githubusercontent.com/19821840/159953546-5eef71bc-d111-4667-a271-6c4e34e1b7da.png\">\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nFiftyOne server json utilies.\n\n| Copyright 2017-2022, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\nfrom bson import ObjectId, json_util\nfrom collections import OrderedDict\nfrom datetime import date, datetime\nfrom json import JSONEncoder\nimport math\n\nfrom fiftyone.core.sample import Sample, SampleView\nfrom fiftyone.core.stages import ViewStage\nimport fiftyone.core.utils as fou\n\n\n_MASK_CLASSES = {\"Detection\", \"Heatmap\", \"Segmentation\"}\n\n\ndef _handle_bytes(o):\n for k, v in o.items():\n if isinstance(v, bytes):\n o[k] = str(fou.deserialize_numpy_array(v).shape)\n elif isinstance(v, dict):\n o[k] = _handle_bytes(v)\n\n return o\n\n\ndef _handle_numpy_array(raw, _cls=None):\n if _cls not in _MASK_CLASSES:\n return str(fou.deserialize_numpy_array(raw).shape)\n\n return fou.serialize_numpy_array(\n fou.deserialize_numpy_array(raw), ascii=True\n )\n\n\ndef _handle_date(dt):\n return {\n \"_cls\": \"DateTime\",\n \"datetime\": fou.datetime_to_timestamp(dt),\n }\n\n\ndef _is_invalid_number(value):\n if not isinstance(value, float):\n return False\n\n return math.isnan(value) or math.isinf(value)\n\n\ndef convert(d):\n if isinstance(d, (dict, OrderedDict)):\n for k, v in d.items():\n if isinstance(v, bytes):\n d[k] = _handle_numpy_array(v, d.get(\"_cls\", None))\n elif isinstance(v, (date, datetime)):\n d[k] = _handle_date(v)\n elif isinstance(v, ObjectId):\n d[k] = str(v)\n elif isinstance(v, (dict, OrderedDict, list)):\n convert(v)\n elif _is_invalid_number(v):\n d[k] = str(v)\n\n if isinstance(d, list):\n for idx, i in enumerate(d):\n if isinstance(i, tuple):\n d[idx] = list(i)\n i = d[idx]\n\n if isinstance(i, bytes):\n d[idx] = _handle_numpy_array(i)\n elif isinstance(i, (date, datetime)):\n d[idx] = _handle_date(i)\n elif isinstance(i, ObjectId):\n d[idx] = str(i)\n elif isinstance(i, (dict, OrderedDict, list)):\n convert(i)\n elif _is_invalid_number(i):\n d[idx] = str(i)\n\n\nclass 
FiftyOneJSONEncoder(JSONEncoder):\n \"\"\"JSON encoder for the FiftyOne server.\n\n Any classes with non-standard serialization methods should\n be accounted for in the `default()` method.\n \"\"\"\n\n def default(self, o): # pylint: disable=E0202\n \"\"\"Returns the serialized representation of the objects\n\n Args:\n o: the object\n\n Returns:\n str\n \"\"\"\n if isinstance(o, (Sample, SampleView)):\n return _handle_bytes(o.to_mongo_dict(include_id=True))\n if issubclass(type(o), ViewStage):\n return o._serialize()\n if isinstance(o, ObjectId):\n return str(o)\n if isinstance(o, float):\n return json_util.dumps(o)\n return super().default(o)\n\n @staticmethod\n def dumps(*args, **kwargs):\n \"\"\"Defined for overriding the default SocketIO `json` interface\"\"\"\n kwargs[\"cls\"] = FiftyOneJSONEncoder\n return json_util.dumps(\n json_util.loads(\n json_util.dumps(*args, **kwargs), parse_constant=lambda c: c\n ),\n **kwargs\n )\n\n @staticmethod\n def loads(*args, **kwargs):\n \"\"\"Defined for overriding the default SocketIO `json` interface\"\"\"\n return json_util.loads(*args, **kwargs)\n", "path": "fiftyone/server/json_util.py"}]} | 1,846 | 204 |
gh_patches_debug_1764 | rasdani/github-patches | git_diff | apple__coremltools-298 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Why is six pinned to 1.10.0?
Is there any reason for [six to be pinned to version 1.10.0](https://github.com/apple/coremltools/blob/master/setup.py#L44)? This sometimes causes transitive dependency issues.
/cc @mats-claassen
</issue>
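If the exact pin is not required, one option is to declare a lower bound so downstream projects can resolve newer `six` releases. The snippet below is only a sketch, with the other entries copied from the `setup.py` shown underneath.

```py
# Sketch of a relaxed constraint list for setup()
install_requires = [
    'numpy >= 1.10.0',
    'protobuf >= 3.1.0',
    'six >= 1.10.0',  # a lower bound instead of the exact '==1.10.0' pin
]
```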
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import os
4 from setuptools import setup
5
6 README = os.path.join(os.getcwd(), "README.rst")
7
8
9 with open(README) as f:
10 long_description = f.read()
11
12 setup(name='coremltools',
13 version='2.0',
14 description='Community Tools for CoreML',
15 long_description=long_description,
16 author='Apple Inc.',
17 author_email='[email protected]',
18 url='',
19 packages=[
20 'coremltools',
21 'coremltools._deps',
22 'coremltools.converters',
23 'coremltools.converters.caffe',
24 'coremltools.converters.sklearn',
25 'coremltools.converters.xgboost',
26 'coremltools.converters.libsvm',
27 'coremltools.converters.keras',
28 'coremltools.graph_visualization',
29 'coremltools.models',
30 'coremltools.models.neural_network',
31 'coremltools.proto',
32 'coremltools._scripts'
33 ],
34 package_data={'': ['LICENSE.txt', 'README.rst', 'libcaffeconverter.so', 'libcoremlpython.so'],
35 'coremltools': ['graph_visualization/__init__.py',
36 'graph_visualization/app.js',
37 'graph_visualization/index.html',
38 'graph_visualization/style.css',
39 'graph_visualization/assets/*',
40 'graph_visualization/icons/*']
41 },
42 install_requires=[
43 'numpy >= 1.10.0',
44 'protobuf >= 3.1.0',
45 'six==1.10.0'
46 ],
47 entry_points = {
48 'console_scripts': ['coremlconverter = coremltools:_main']
49 },
50 classifiers=[
51 'Development Status :: 4 - Beta',
52 'Intended Audience :: End Users/Desktop',
53 'Intended Audience :: Developers',
54 'Operating System :: MacOS :: MacOS X',
55 'Programming Language :: Python :: 2.7',
56 'Programming Language :: Python :: 3.5',
57 'Programming Language :: Python :: 3.6',
58 'Topic :: Scientific/Engineering',
59 'Topic :: Software Development'
60 ],
61 license='BSD'
62 )
63
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -42,7 +42,7 @@
install_requires=[
'numpy >= 1.10.0',
'protobuf >= 3.1.0',
- 'six==1.10.0'
+ 'six>=1.10.0'
],
entry_points = {
'console_scripts': ['coremlconverter = coremltools:_main']
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -42,7 +42,7 @@\n install_requires=[\n 'numpy >= 1.10.0',\n 'protobuf >= 3.1.0',\n- 'six==1.10.0'\n+ 'six>=1.10.0'\n ],\n entry_points = {\n 'console_scripts': ['coremlconverter = coremltools:_main']\n", "issue": "Why is six pinned to 1.10.0?\nIs there any reason for [six to be pinned to version 1.10.0](https://github.com/apple/coremltools/blob/master/setup.py#L44). This gives transitive dependency issues sometimes.\r\n\r\n/cc @mats-claassen\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nfrom setuptools import setup\n\nREADME = os.path.join(os.getcwd(), \"README.rst\")\n\n\nwith open(README) as f:\n long_description = f.read()\n\nsetup(name='coremltools',\n version='2.0',\n description='Community Tools for CoreML',\n long_description=long_description,\n author='Apple Inc.',\n author_email='[email protected]',\n url='',\n packages=[\n 'coremltools',\n 'coremltools._deps',\n 'coremltools.converters',\n 'coremltools.converters.caffe',\n 'coremltools.converters.sklearn',\n 'coremltools.converters.xgboost',\n 'coremltools.converters.libsvm',\n 'coremltools.converters.keras',\n 'coremltools.graph_visualization',\n 'coremltools.models',\n 'coremltools.models.neural_network',\n 'coremltools.proto',\n 'coremltools._scripts'\n ],\n package_data={'': ['LICENSE.txt', 'README.rst', 'libcaffeconverter.so', 'libcoremlpython.so'],\n 'coremltools': ['graph_visualization/__init__.py',\n 'graph_visualization/app.js',\n 'graph_visualization/index.html',\n 'graph_visualization/style.css',\n 'graph_visualization/assets/*',\n 'graph_visualization/icons/*']\n },\n install_requires=[\n 'numpy >= 1.10.0',\n 'protobuf >= 3.1.0',\n 'six==1.10.0'\n ],\n entry_points = {\n 'console_scripts': ['coremlconverter = coremltools:_main']\n },\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: End Users/Desktop',\n 'Intended Audience :: Developers',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development'\n ],\n license='BSD'\n)\n", "path": "setup.py"}]} | 1,175 | 106 |
gh_patches_debug_36300 | rasdani/github-patches | git_diff | fidals__shopelectro-767 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use SiteDriver class instead of seleniumrequests.Remote
It will bring the ability to use `shopelectro.selenium` classes in tests.
</issue>
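A sketch of the kind of test this enables is below; the `site_driver` fixture is hypothetical, while the page-object calls and import paths follow the classes shown in the code section underneath.

```py
# Illustrative test sketch; the site_driver fixture is an assumption.
from shopelectro.selenium import SiteDriver
from shopelectro.selenium.pages.order import OrderPage
from shopelectro.selenium.pages.success import SuccessPage


def test_order_flow(site_driver: SiteDriver):
    page = OrderPage(site_driver)
    page.load()
    page.fill_contacts(name="Test", phone="9999999999", email="[email protected]")
    page.make_order()
    assert SuccessPage(site_driver).is_success()
```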
<code>
[start of shopelectro/selenium/pages/order.py]
1 from shopelectro.models import PaymentOptions
2 from shopelectro.selenium.elements import Input, Button
3 from shopelectro.selenium.pages import Page
4
5 from selenium.webdriver.common.by import By
6
7 from pages.models import CustomPage
8
9 # @todo #682:120m Implement and reuse shopelectro.selenium.OrderPage for selenium tests.
10
11
12 class OrderPage(Page):
13
14 def __init__(self, driver):
15 super().__init__(driver)
16 self.submit_button = Button(self.driver, (By.ID, 'submit-order'))
17
18 @property
19 def path(self):
20 return CustomPage.objects.get(slug='order').url
21
22 def fill_contacts(
23 self, name='Name', city='Санкт-Петербург', phone='2222222222', email='[email protected]',
24 ):
25 contacts = {
26 'id_name': name,
27 'id_city': city,
28 'id_phone': phone,
29 'id_email': email,
30 }
31
32 for id_, value in contacts.items():
33 Input(self.driver, (By.ID, id_)).send_keys(value)
34
35 def make_order(self):
36 self.submit_button.click()
37
38 def select_payment_type(self, payment_option: PaymentOptions):
39 if payment_option not in PaymentOptions:
40 raise ValueError(
41 'An invalid payment type provided.'
42 f'It should be one of: {PaymentOptions}'
43 )
44
45 item = Button(
46 self.driver,
47 (By.CSS, f'input[name="payment_type"][value="{payment_option.name}"]'),
48 )
49 item.click()
50
[end of shopelectro/selenium/pages/order.py]
[start of shopelectro/selenium/pages/page.py]
1 from shopelectro.selenium import SiteDriver
2
3 from selenium.webdriver.common.by import By
4 from selenium.webdriver.support import expected_conditions as EC
5
6
7 class Page:
8 """
9 Represent a typical Shopelectro's page.
10
11 Contains cross-page elements: header, footer, ...
12 """
13
14 def __init__(self, driver: SiteDriver):
15 if not isinstance(driver, SiteDriver):
16 raise TypeError('Driver must be an instance of shopelectro.selenium.SiteDriver')
17 self.driver = driver
18 self.path: str
19
20 def load(self):
21 if not self.path:
22 raise ValueError(f'Set a page path to {self.__class__.__name__}')
23 self.driver.get(self.path)
24 self.driver.wait.until(EC.visibility_of_element_located(
25 (By.TAG_NAME, 'body')
26 ))
27
[end of shopelectro/selenium/pages/page.py]
[start of shopelectro/selenium/pages/success.py]
1 from shopelectro.selenium.pages import Page
2
3 from pages.models import CustomPage
4
5
6 class SuccessPage(Page):
7
8 @property
9 def path(self):
10 CustomPage.objects.get(slug='order-success').url
11
12 def is_success(self):
13 return 'Заказ принят' in self.driver.find_element_by_tag_name('h1').text
14
[end of shopelectro/selenium/pages/success.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/shopelectro/selenium/pages/order.py b/shopelectro/selenium/pages/order.py
--- a/shopelectro/selenium/pages/order.py
+++ b/shopelectro/selenium/pages/order.py
@@ -3,6 +3,7 @@
from shopelectro.selenium.pages import Page
from selenium.webdriver.common.by import By
+from selenium.webdriver.support import expected_conditions as EC
from pages.models import CustomPage
@@ -34,6 +35,7 @@
def make_order(self):
self.submit_button.click()
+ self.driver.wait.until(EC.url_changes(self.path))
def select_payment_type(self, payment_option: PaymentOptions):
if payment_option not in PaymentOptions:
diff --git a/shopelectro/selenium/pages/page.py b/shopelectro/selenium/pages/page.py
--- a/shopelectro/selenium/pages/page.py
+++ b/shopelectro/selenium/pages/page.py
@@ -1,3 +1,5 @@
+from functools import wraps
+
from shopelectro.selenium import SiteDriver
from selenium.webdriver.common.by import By
@@ -17,10 +19,17 @@
self.driver = driver
self.path: str
+ def wait_loaded(self):
+ def loaded(driver):
+ is_sync = EC.url_contains(self.path)
+ is_rendered = EC.visibility_of_element_located(
+ (By.TAG_NAME, 'body')
+ )
+ return is_sync(driver) and is_rendered(driver)
+ self.driver.wait.until(loaded)
+
def load(self):
if not self.path:
raise ValueError(f'Set a page path to {self.__class__.__name__}')
self.driver.get(self.path)
- self.driver.wait.until(EC.visibility_of_element_located(
- (By.TAG_NAME, 'body')
- ))
+ self.wait_loaded()
diff --git a/shopelectro/selenium/pages/success.py b/shopelectro/selenium/pages/success.py
--- a/shopelectro/selenium/pages/success.py
+++ b/shopelectro/selenium/pages/success.py
@@ -1,3 +1,6 @@
+from selenium.webdriver.common.by import By
+from selenium.webdriver.support import expected_conditions as EC
+
from shopelectro.selenium.pages import Page
from pages.models import CustomPage
@@ -7,7 +10,10 @@
@property
def path(self):
- CustomPage.objects.get(slug='order-success').url
+ return CustomPage.objects.get(slug='order-success').url
def is_success(self):
- return 'Заказ принят' in self.driver.find_element_by_tag_name('h1').text
+ h1 = self.driver.wait.until(
+ EC.visibility_of_element_located((By.TAG_NAME, 'h1'))
+ ).text
+ return 'Заказ принят' in h1
| {"golden_diff": "diff --git a/shopelectro/selenium/pages/order.py b/shopelectro/selenium/pages/order.py\n--- a/shopelectro/selenium/pages/order.py\n+++ b/shopelectro/selenium/pages/order.py\n@@ -3,6 +3,7 @@\n from shopelectro.selenium.pages import Page\n \n from selenium.webdriver.common.by import By\n+from selenium.webdriver.support import expected_conditions as EC\n \n from pages.models import CustomPage\n \n@@ -34,6 +35,7 @@\n \n def make_order(self):\n self.submit_button.click()\n+ self.driver.wait.until(EC.url_changes(self.path))\n \n def select_payment_type(self, payment_option: PaymentOptions):\n if payment_option not in PaymentOptions:\ndiff --git a/shopelectro/selenium/pages/page.py b/shopelectro/selenium/pages/page.py\n--- a/shopelectro/selenium/pages/page.py\n+++ b/shopelectro/selenium/pages/page.py\n@@ -1,3 +1,5 @@\n+from functools import wraps\n+\n from shopelectro.selenium import SiteDriver\n \n from selenium.webdriver.common.by import By\n@@ -17,10 +19,17 @@\n self.driver = driver\n self.path: str\n \n+ def wait_loaded(self):\n+ def loaded(driver):\n+ is_sync = EC.url_contains(self.path)\n+ is_rendered = EC.visibility_of_element_located(\n+ (By.TAG_NAME, 'body')\n+ )\n+ return is_sync(driver) and is_rendered(driver)\n+ self.driver.wait.until(loaded)\n+\n def load(self):\n if not self.path:\n raise ValueError(f'Set a page path to {self.__class__.__name__}')\n self.driver.get(self.path)\n- self.driver.wait.until(EC.visibility_of_element_located(\n- (By.TAG_NAME, 'body')\n- ))\n+ self.wait_loaded()\ndiff --git a/shopelectro/selenium/pages/success.py b/shopelectro/selenium/pages/success.py\n--- a/shopelectro/selenium/pages/success.py\n+++ b/shopelectro/selenium/pages/success.py\n@@ -1,3 +1,6 @@\n+from selenium.webdriver.common.by import By\n+from selenium.webdriver.support import expected_conditions as EC\n+\n from shopelectro.selenium.pages import Page\n \n from pages.models import CustomPage\n@@ -7,7 +10,10 @@\n \n @property\n def path(self):\n- CustomPage.objects.get(slug='order-success').url\n+ return CustomPage.objects.get(slug='order-success').url\n \n def is_success(self):\n- return '\u0417\u0430\u043a\u0430\u0437 \u043f\u0440\u0438\u043d\u044f\u0442' in self.driver.find_element_by_tag_name('h1').text\n+ h1 = self.driver.wait.until(\n+ EC.visibility_of_element_located((By.TAG_NAME, 'h1'))\n+ ).text\n+ return '\u0417\u0430\u043a\u0430\u0437 \u043f\u0440\u0438\u043d\u044f\u0442' in h1\n", "issue": "Use SiteDriver class instead of seleniumrequests.Remote\nIt will bring ability to use `shopelectro.selenium` classes in tests. 
\n", "before_files": [{"content": "from shopelectro.models import PaymentOptions\nfrom shopelectro.selenium.elements import Input, Button\nfrom shopelectro.selenium.pages import Page\n\nfrom selenium.webdriver.common.by import By\n\nfrom pages.models import CustomPage\n\n# @todo #682:120m Implement and reuse shopelectro.selenium.OrderPage for selenium tests.\n\n\nclass OrderPage(Page):\n\n def __init__(self, driver):\n super().__init__(driver)\n self.submit_button = Button(self.driver, (By.ID, 'submit-order'))\n\n @property\n def path(self):\n return CustomPage.objects.get(slug='order').url\n\n def fill_contacts(\n self, name='Name', city='\u0421\u0430\u043d\u043a\u0442-\u041f\u0435\u0442\u0435\u0440\u0431\u0443\u0440\u0433', phone='2222222222', email='[email protected]',\n ):\n contacts = {\n 'id_name': name,\n 'id_city': city,\n 'id_phone': phone,\n 'id_email': email,\n }\n\n for id_, value in contacts.items():\n Input(self.driver, (By.ID, id_)).send_keys(value)\n\n def make_order(self):\n self.submit_button.click()\n\n def select_payment_type(self, payment_option: PaymentOptions):\n if payment_option not in PaymentOptions:\n raise ValueError(\n 'An invalid payment type provided.'\n f'It should be one of: {PaymentOptions}'\n )\n\n item = Button(\n self.driver,\n (By.CSS, f'input[name=\"payment_type\"][value=\"{payment_option.name}\"]'),\n )\n item.click()\n", "path": "shopelectro/selenium/pages/order.py"}, {"content": "from shopelectro.selenium import SiteDriver\n\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\n\nclass Page:\n \"\"\"\n Represent a typical Shopelectro's page.\n\n Contains cross-page elements: header, footer, ...\n \"\"\"\n\n def __init__(self, driver: SiteDriver):\n if not isinstance(driver, SiteDriver):\n raise TypeError('Driver must be an instance of shopelectro.selenium.SiteDriver')\n self.driver = driver\n self.path: str\n\n def load(self):\n if not self.path:\n raise ValueError(f'Set a page path to {self.__class__.__name__}')\n self.driver.get(self.path)\n self.driver.wait.until(EC.visibility_of_element_located(\n (By.TAG_NAME, 'body')\n ))\n", "path": "shopelectro/selenium/pages/page.py"}, {"content": "from shopelectro.selenium.pages import Page\n\nfrom pages.models import CustomPage\n\n\nclass SuccessPage(Page):\n\n @property\n def path(self):\n CustomPage.objects.get(slug='order-success').url\n\n def is_success(self):\n return '\u0417\u0430\u043a\u0430\u0437 \u043f\u0440\u0438\u043d\u044f\u0442' in self.driver.find_element_by_tag_name('h1').text\n", "path": "shopelectro/selenium/pages/success.py"}]} | 1,368 | 629 |
gh_patches_debug_564 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1695 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
✨ Memory Pool Optimizations
### Thanks for stopping by to let us know something could be better!
**Is your feature request related to a problem? Please describe.** _A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]_
**Describe the solution you'd like** _A clear and concise description of what you want to happen._
**Describe alternatives you've considered** _A clear and concise description of any alternative solutions or features you've considered._
**Additional context** _Add any other context or screenshots about the feature request here._
</issue>
<code>
[start of opteryx/__version__.py]
1 __build__ = 527
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 16
31 _revision = 0
32 _status = VersionStatus.ALPHA
33
34 __author__ = "@joocer"
35 __version__ = f"{_major}.{_minor}.{_revision}" + (
36 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
37 )
38
[end of opteryx/__version__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 527
+__build__ = 532
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
| {"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 527\n+__build__ = 532\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n", "issue": "\u2728 Memory Pool Optimizations\n### Thanks for stopping by to let us know something could be better!\r\n\r\n**Is your feature request related to a problem? Please describe.** _A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]_\r\n\r\n**Describe the solution you'd like** _A clear and concise description of what you want to happen._\r\n\r\n**Describe alternatives you've considered** _A clear and concise description of any alternative solutions or features you've considered._\r\n\r\n**Additional context** _Add any other context or screenshots about the feature request here._\r\n\n", "before_files": [{"content": "__build__ = 527\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 16\n_revision = 0\n_status = VersionStatus.ALPHA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]} | 1,012 | 101 |
gh_patches_debug_308 | rasdani/github-patches | git_diff | zulip__zulip-13077 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade pip from 19.1.1 and pip-tools from 3.8.0
Follow-up issue from #13067. pip-tools 3.9.0 or 4.0.0 fails to resolve dependencies from Git URLs (jazzband/pip-tools#851):
`pip._internal.exceptions.DistributionNotFound: No matching distribution found for zulip==0.6.1_git (from -r requirements/common.in (line 135))`
while pip 19.2 breaks pip-tools 3.8.0 (jazzband/pip-tools#853):
`TypeError: __init__() got an unexpected keyword argument 'find_links'`
</issue>
<code>
[start of version.py]
1 import os
2
3 ZULIP_VERSION = "2.0.4+git"
4 # Add information on number of commits and commit hash to version, if available
5 zulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')
6 if os.path.exists(zulip_git_version_file):
7 with open(zulip_git_version_file) as f:
8 version = f.read().strip()
9 if version:
10 ZULIP_VERSION = version
11
12 LATEST_MAJOR_VERSION = "2.0"
13 LATEST_RELEASE_VERSION = "2.0.4"
14 LATEST_RELEASE_ANNOUNCEMENT = "https://blog.zulip.org/2019/03/01/zulip-2-0-released/"
15
16 # Bump the minor PROVISION_VERSION to indicate that folks should provision
17 # only when going from an old version of the code to a newer version. Bump
18 # the major version to indicate that folks should provision in both
19 # directions.
20
21 # Typically,
22 # * adding a dependency only requires a minor version bump;
23 # * removing a dependency requires a major version bump;
24 # * upgrading a dependency requires a major version bump, unless the
25 # upgraded dependency is backwards compatible with all of our
26 # historical commits sharing the same major version, in which case a
27 # minor version bump suffices.
28
29 PROVISION_VERSION = '49.2'
30
[end of version.py]
</code>
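The comments in `version.py` above describe the bump rule in prose; the sketch below restates that decision rule as code. It is purely illustrative and is not Zulip's actual provisioning check; the function name and version strings are made up for the example.

```python
def should_provision(previous: str, current: str) -> bool:
    """Decision rule described in the version.py comments (illustrative only):
    any forward bump requires provisioning; a backward move only requires it
    when the major component changed."""
    prev = tuple(map(int, previous.split(".")))
    cur = tuple(map(int, current.split(".")))
    if cur == prev:
        return False
    if cur > prev:
        return True  # moving from older code to newer code
    return cur[0] != prev[0]  # downgrades only matter across a major bump


print(should_provision("49.2", "49.3"))  # True  -- the minor bump in this patch
print(should_provision("49.3", "49.2"))  # False -- minor-only downgrade
```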
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -26,4 +26,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = '49.2'
+PROVISION_VERSION = '49.3'
| {"golden_diff": "diff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -26,4 +26,4 @@\n # historical commits sharing the same major version, in which case a\n # minor version bump suffices.\n \n-PROVISION_VERSION = '49.2'\n+PROVISION_VERSION = '49.3'\n", "issue": "Upgrade pip from 19.1.1 and pip-tools from 3.8.0\nFollowup issue from #13067. pip-tools 3.9.0 or 4.0.0 fails to resolve dependencies from Git URLs (jazzband/pip-tools#851):\r\n\r\n`pip._internal.exceptions.DistributionNotFound: No matching distribution found for zulip==0.6.1_git (from -r requirements/common.in (line 135))`\r\n\r\nwhile pip 19.2 breaks pip-tools 3.8.0 (jazzband/pip-tools#853):\r\n\r\n`TypeError: __init__() got an unexpected keyword argument 'find_links'`\n", "before_files": [{"content": "import os\n\nZULIP_VERSION = \"2.0.4+git\"\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n version = f.read().strip()\n if version:\n ZULIP_VERSION = version\n\nLATEST_MAJOR_VERSION = \"2.0\"\nLATEST_RELEASE_VERSION = \"2.0.4\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.org/2019/03/01/zulip-2-0-released/\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = '49.2'\n", "path": "version.py"}]} | 1,033 | 78 |
gh_patches_debug_4166 | rasdani/github-patches | git_diff | ray-project__ray-9517 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tune] Population-based training: broken when using keep_checkpoint_num
When using **population-based** training, Tune stops after some time, throwing the following error:
`There are paused trials, but no more pending trials with sufficient resources.`
This is caused by not finding the latest checkpoint:
```
Failure # 1 (occurred at 2020-06-19_11-26-36)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py", line 294, in start_trial
self._start_trial(trial, checkpoint, train=train)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py", line 235, in _start_trial
self.restore(trial, checkpoint)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py", line 673, in restore
data_dict = TrainableUtil.pickle_checkpoint(value)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py", line 62, in pickle_checkpoint
checkpoint_dir = TrainableUtil.find_checkpoint_dir(checkpoint_path)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py", line 87, in find_checkpoint_dir
raise FileNotFoundError("Path does not exist", checkpoint_path)
FileNotFoundError: [Errno Path does not exist] /content/TRASH_TUNE_PBT_oversampling_mimic_densenet121/TUNE_Model_0_2020-06-19_11-24-215xncry9c/checkpoint_6/
```
The error appears to be somewhat random, since it only shows up after quite a few iterations.
The error can be reproduced in this [colab notebook](https://colab.research.google.com/drive/1-o896bEUm7DTvS24Do0btlqbSHre49MH?usp=sharing). **It is not a Colab-related issue, since the same problem arises on our own server.**
@richardliaw Is this related to #8772 ?
</issue>
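Before the code listing, a small self-contained sketch of the failure mode hinted at by the traceback. It assumes the `Checkpoint` and `CheckpointManager` classes from the file listed below (module path taken from that file); the paths, metric name, and scores are placeholders, and the reading that `on_checkpoint` can be handed the same persistent path twice is an inference from the accepted fix, not something stated in the issue.

```python
from ray.tune.checkpoint_manager import Checkpoint, CheckpointManager

deleted = []
manager = CheckpointManager(
    keep_checkpoints_num=1,
    checkpoint_score_attr="metric",
    delete_fn=lambda ckpt: deleted.append(ckpt.value),
)

# A high-scoring checkpoint takes the single "best" slot.
manager.on_checkpoint(
    Checkpoint(Checkpoint.PERSISTENT, "/tmp/ckpt_best", result={"metric": 2.0}))
# A newer, lower-scoring checkpoint becomes "newest" but is not kept as "best".
manager.on_checkpoint(
    Checkpoint(Checkpoint.PERSISTENT, "/tmp/ckpt_6", result={"metric": 1.0}))
# Registering the same path again: without the guard added by the fix below,
# the earlier registration of "/tmp/ckpt_6" is treated as stale and deleted,
# which is exactly the file the trial later fails to restore from.
manager.on_checkpoint(
    Checkpoint(Checkpoint.PERSISTENT, "/tmp/ckpt_6", result={"metric": 1.0}))

print(deleted)  # unpatched: ['/tmp/ckpt_6']; with the patch: []
```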
<code>
[start of python/ray/tune/checkpoint_manager.py]
1 # coding: utf-8
2 import heapq
3 import logging
4
5 from ray.tune.result import TRAINING_ITERATION
6
7 logger = logging.getLogger(__name__)
8
9
10 class Checkpoint:
11 """Describes a checkpoint of trial state.
12
13 Checkpoint may be saved in different storage.
14
15 Attributes:
16 storage (str): Storage type.
17 value (str): If storage==MEMORY, it is a Python object.
18 If storage==PERSISTENT, it is a path to persistent storage,
19 or a future that will be resolved to such a path.
20 """
21
22 MEMORY = "memory"
23 PERSISTENT = "persistent"
24
25 def __init__(self, storage, value, result=None):
26 self.storage = storage
27 self.value = value
28 self.result = result or {}
29
30 @staticmethod
31 def from_object(value=None):
32 """Creates a checkpoint from a Python object."""
33 return Checkpoint(Checkpoint.MEMORY, value)
34
35 @property
36 def is_ready(self):
37 """Returns whether the checkpoint is ready to be used for restoration.
38
39 A PERSISTENT checkpoint is considered ready once its value is resolved
40 to an actual path. MEMORY checkpoints are always considered ready since
41 they are transient.
42 """
43 if self.storage == Checkpoint.PERSISTENT:
44 return isinstance(self.value, str)
45 return self.storage == Checkpoint.MEMORY
46
47
48 class QueueItem:
49 def __init__(self, priority, value):
50 self.priority = priority
51 self.value = value
52
53 def __lt__(self, other):
54 return self.priority < other.priority
55
56
57 class CheckpointManager:
58 """Manages checkpoints on the driver for a trial."""
59
60 def __init__(self, keep_checkpoints_num, checkpoint_score_attr, delete_fn):
61 """Initializes a new CheckpointManager.
62
63 `newest_persistent_checkpoint` and `newest_memory_checkpoint` are
64 initialized to Checkpoint objects with values of None.
65
66 Args:
67 keep_checkpoints_num (int): Keep at least this many checkpoints.
68 checkpoint_score_attr (str): Attribute to use to determine which
69 checkpoints to keep.
70 delete_fn (function): Function that deletes checkpoints. Must be
71 idempotent.
72 """
73 self.keep_checkpoints_num = keep_checkpoints_num or float("inf")
74 assert self.keep_checkpoints_num > 0, (
75 "keep_checkpoints_num must be greater than 0.")
76 self._checkpoint_score_desc = checkpoint_score_attr.startswith("min-")
77 if self._checkpoint_score_desc:
78 self._checkpoint_score_attr = checkpoint_score_attr[4:]
79 else:
80 self._checkpoint_score_attr = checkpoint_score_attr
81
82 self.delete = delete_fn
83 self.newest_persistent_checkpoint = Checkpoint(Checkpoint.PERSISTENT,
84 None)
85 self.newest_memory_checkpoint = Checkpoint(Checkpoint.MEMORY, None)
86 self._best_checkpoints = []
87 self._membership = set()
88
89 @property
90 def newest_checkpoint(self):
91 """Returns the newest checkpoint (based on training iteration)."""
92 newest_checkpoint = max(
93 [self.newest_persistent_checkpoint, self.newest_memory_checkpoint],
94 key=lambda c: c.result.get(TRAINING_ITERATION, -1))
95 return newest_checkpoint
96
97 def on_checkpoint(self, checkpoint):
98 """Starts tracking checkpoint metadata on checkpoint.
99
100 Sets the newest checkpoint. For PERSISTENT checkpoints: Deletes
101 previous checkpoint as long as it isn't one of the best ones. Also
102 deletes the worst checkpoint if at capacity.
103
104 Args:
105 checkpoint (Checkpoint): Trial state checkpoint.
106 """
107 if checkpoint.storage == Checkpoint.MEMORY:
108 self.newest_memory_checkpoint = checkpoint
109 return
110
111 old_checkpoint = self.newest_persistent_checkpoint
112 self.newest_persistent_checkpoint = checkpoint
113
114 # Remove the old checkpoint if it isn't one of the best ones.
115 if old_checkpoint.value and old_checkpoint not in self._membership:
116 self.delete(old_checkpoint)
117
118 try:
119 queue_item = QueueItem(self._priority(checkpoint), checkpoint)
120 except KeyError:
121 logger.error("Result dict has no key: {}. "
122 "checkpoint_score_attr must be set to a key in the "
123 "result dict.".format(self._checkpoint_score_attr))
124 return
125
126 if len(self._best_checkpoints) < self.keep_checkpoints_num:
127 heapq.heappush(self._best_checkpoints, queue_item)
128 self._membership.add(checkpoint)
129 elif queue_item.priority >= self._best_checkpoints[0].priority:
130 worst = heapq.heappushpop(self._best_checkpoints, queue_item).value
131 self._membership.add(checkpoint)
132 if worst in self._membership:
133 self._membership.remove(worst)
134 # Don't delete the newest checkpoint. It will be deleted on the
135 # next on_checkpoint() call since it isn't in self._membership.
136 if worst != checkpoint:
137 self.delete(worst)
138
139 def best_checkpoints(self):
140 """Returns best PERSISTENT checkpoints, sorted by score."""
141 checkpoints = sorted(self._best_checkpoints, key=lambda c: c.priority)
142 return [queue_item.value for queue_item in checkpoints]
143
144 def _priority(self, checkpoint):
145 priority = checkpoint.result[self._checkpoint_score_attr]
146 return -priority if self._checkpoint_score_desc else priority
147
148 def __getstate__(self):
149 state = self.__dict__.copy()
150 # Avoid serializing lambda since it may capture cyclical dependencies.
151 state.pop("delete")
152 return state
153
154 def __setstate__(self, state):
155 self.__dict__.update(state)
156 self.delete = None
157
[end of python/ray/tune/checkpoint_manager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/ray/tune/checkpoint_manager.py b/python/ray/tune/checkpoint_manager.py
--- a/python/ray/tune/checkpoint_manager.py
+++ b/python/ray/tune/checkpoint_manager.py
@@ -109,6 +109,10 @@
return
old_checkpoint = self.newest_persistent_checkpoint
+
+ if old_checkpoint.value == checkpoint.value:
+ return
+
self.newest_persistent_checkpoint = checkpoint
# Remove the old checkpoint if it isn't one of the best ones.
| {"golden_diff": "diff --git a/python/ray/tune/checkpoint_manager.py b/python/ray/tune/checkpoint_manager.py\n--- a/python/ray/tune/checkpoint_manager.py\n+++ b/python/ray/tune/checkpoint_manager.py\n@@ -109,6 +109,10 @@\n return\n \n old_checkpoint = self.newest_persistent_checkpoint\n+\n+ if old_checkpoint.value == checkpoint.value:\n+ return\n+\n self.newest_persistent_checkpoint = checkpoint\n \n # Remove the old checkpoint if it isn't one of the best ones.\n", "issue": "[tune] Population-based training: broken when using keep_checkpoint_num\nWhen using **population-based** training TUNE stops after some times throwing the following error:\r\n\r\n`There are paused trials, but no more pending trials with sufficient resources.`\r\n\r\nThis is caused by not finding the latest checkpoint:\r\n\r\n```\r\nFailure # 1 (occurred at 2020-06-19_11-26-36)\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py\", line 294, in start_trial\r\n self._start_trial(trial, checkpoint, train=train)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py\", line 235, in _start_trial\r\n self.restore(trial, checkpoint)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py\", line 673, in restore\r\n data_dict = TrainableUtil.pickle_checkpoint(value)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 62, in pickle_checkpoint\r\n checkpoint_dir = TrainableUtil.find_checkpoint_dir(checkpoint_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 87, in find_checkpoint_dir\r\n raise FileNotFoundError(\"Path does not exist\", checkpoint_path)\r\nFileNotFoundError: [Errno Path does not exist] /content/TRASH_TUNE_PBT_oversampling_mimic_densenet121/TUNE_Model_0_2020-06-19_11-24-215xncry9c/checkpoint_6/\r\n```\r\n\r\nThe error appears to be somewhat random since it only appears after quite some iterations\r\n\r\nThe error can be reproduced in this [colab notebook](https://colab.research.google.com/drive/1-o896bEUm7DTvS24Do0btlqbSHre49MH?usp=sharing). **It is not a COLAB related issue since the same problem arises on our own server.**\r\n\r\n@richardliaw Is this related to #8772 ?\n", "before_files": [{"content": "# coding: utf-8\nimport heapq\nimport logging\n\nfrom ray.tune.result import TRAINING_ITERATION\n\nlogger = logging.getLogger(__name__)\n\n\nclass Checkpoint:\n \"\"\"Describes a checkpoint of trial state.\n\n Checkpoint may be saved in different storage.\n\n Attributes:\n storage (str): Storage type.\n value (str): If storage==MEMORY, it is a Python object.\n If storage==PERSISTENT, it is a path to persistent storage,\n or a future that will be resolved to such a path.\n \"\"\"\n\n MEMORY = \"memory\"\n PERSISTENT = \"persistent\"\n\n def __init__(self, storage, value, result=None):\n self.storage = storage\n self.value = value\n self.result = result or {}\n\n @staticmethod\n def from_object(value=None):\n \"\"\"Creates a checkpoint from a Python object.\"\"\"\n return Checkpoint(Checkpoint.MEMORY, value)\n\n @property\n def is_ready(self):\n \"\"\"Returns whether the checkpoint is ready to be used for restoration.\n\n A PERSISTENT checkpoint is considered ready once its value is resolved\n to an actual path. 
MEMORY checkpoints are always considered ready since\n they are transient.\n \"\"\"\n if self.storage == Checkpoint.PERSISTENT:\n return isinstance(self.value, str)\n return self.storage == Checkpoint.MEMORY\n\n\nclass QueueItem:\n def __init__(self, priority, value):\n self.priority = priority\n self.value = value\n\n def __lt__(self, other):\n return self.priority < other.priority\n\n\nclass CheckpointManager:\n \"\"\"Manages checkpoints on the driver for a trial.\"\"\"\n\n def __init__(self, keep_checkpoints_num, checkpoint_score_attr, delete_fn):\n \"\"\"Initializes a new CheckpointManager.\n\n `newest_persistent_checkpoint` and `newest_memory_checkpoint` are\n initialized to Checkpoint objects with values of None.\n\n Args:\n keep_checkpoints_num (int): Keep at least this many checkpoints.\n checkpoint_score_attr (str): Attribute to use to determine which\n checkpoints to keep.\n delete_fn (function): Function that deletes checkpoints. Must be\n idempotent.\n \"\"\"\n self.keep_checkpoints_num = keep_checkpoints_num or float(\"inf\")\n assert self.keep_checkpoints_num > 0, (\n \"keep_checkpoints_num must be greater than 0.\")\n self._checkpoint_score_desc = checkpoint_score_attr.startswith(\"min-\")\n if self._checkpoint_score_desc:\n self._checkpoint_score_attr = checkpoint_score_attr[4:]\n else:\n self._checkpoint_score_attr = checkpoint_score_attr\n\n self.delete = delete_fn\n self.newest_persistent_checkpoint = Checkpoint(Checkpoint.PERSISTENT,\n None)\n self.newest_memory_checkpoint = Checkpoint(Checkpoint.MEMORY, None)\n self._best_checkpoints = []\n self._membership = set()\n\n @property\n def newest_checkpoint(self):\n \"\"\"Returns the newest checkpoint (based on training iteration).\"\"\"\n newest_checkpoint = max(\n [self.newest_persistent_checkpoint, self.newest_memory_checkpoint],\n key=lambda c: c.result.get(TRAINING_ITERATION, -1))\n return newest_checkpoint\n\n def on_checkpoint(self, checkpoint):\n \"\"\"Starts tracking checkpoint metadata on checkpoint.\n\n Sets the newest checkpoint. For PERSISTENT checkpoints: Deletes\n previous checkpoint as long as it isn't one of the best ones. Also\n deletes the worst checkpoint if at capacity.\n\n Args:\n checkpoint (Checkpoint): Trial state checkpoint.\n \"\"\"\n if checkpoint.storage == Checkpoint.MEMORY:\n self.newest_memory_checkpoint = checkpoint\n return\n\n old_checkpoint = self.newest_persistent_checkpoint\n self.newest_persistent_checkpoint = checkpoint\n\n # Remove the old checkpoint if it isn't one of the best ones.\n if old_checkpoint.value and old_checkpoint not in self._membership:\n self.delete(old_checkpoint)\n\n try:\n queue_item = QueueItem(self._priority(checkpoint), checkpoint)\n except KeyError:\n logger.error(\"Result dict has no key: {}. \"\n \"checkpoint_score_attr must be set to a key in the \"\n \"result dict.\".format(self._checkpoint_score_attr))\n return\n\n if len(self._best_checkpoints) < self.keep_checkpoints_num:\n heapq.heappush(self._best_checkpoints, queue_item)\n self._membership.add(checkpoint)\n elif queue_item.priority >= self._best_checkpoints[0].priority:\n worst = heapq.heappushpop(self._best_checkpoints, queue_item).value\n self._membership.add(checkpoint)\n if worst in self._membership:\n self._membership.remove(worst)\n # Don't delete the newest checkpoint. 
It will be deleted on the\n # next on_checkpoint() call since it isn't in self._membership.\n if worst != checkpoint:\n self.delete(worst)\n\n def best_checkpoints(self):\n \"\"\"Returns best PERSISTENT checkpoints, sorted by score.\"\"\"\n checkpoints = sorted(self._best_checkpoints, key=lambda c: c.priority)\n return [queue_item.value for queue_item in checkpoints]\n\n def _priority(self, checkpoint):\n priority = checkpoint.result[self._checkpoint_score_attr]\n return -priority if self._checkpoint_score_desc else priority\n\n def __getstate__(self):\n state = self.__dict__.copy()\n # Avoid serializing lambda since it may capture cyclical dependencies.\n state.pop(\"delete\")\n return state\n\n def __setstate__(self, state):\n self.__dict__.update(state)\n self.delete = None\n", "path": "python/ray/tune/checkpoint_manager.py"}]} | 2,599 | 119 |
gh_patches_debug_17290 | rasdani/github-patches | git_diff | joke2k__faker-919 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Brazilian RG (identity card)
Add a generator for the Brazilian RG (identity card).
### Steps to reproduce
fake = Faker('pt_Br')
fake.rg()
### Expected behavior
Returns a value that follows these rules:
https://www.ngmatematica.com/2014/02/como-determinar-o-digito-verificador-do.html
8 digits + 1 checksum digit
### Actual behavior
New feature
</issue>
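For reference, a standalone sketch of the check-digit rule from the linked article, mirroring what the accepted patch below implements: weight the eight digits by 2 through 9, take 11 minus (sum mod 11), and map 10 to 'X' and 11 to 0. The helper name and the sample digits are made up; this is not part of faker's API.

```python
def rg_check_digit(digits):
    """Check digit for an 8-digit RG (illustration of the rule linked above)."""
    assert len(digits) == 8
    total = sum(weight * digit for weight, digit in zip(range(2, 10), digits))
    check = 11 - (total % 11)
    if check == 10:
        return "X"
    if check == 11:
        return "0"
    return str(check)


print(rg_check_digit([2, 4, 6, 8, 1, 3, 5, 7]))  # 'X' (weighted sum 210, 210 % 11 == 1)
```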
<code>
[start of faker/providers/ssn/pt_BR/__init__.py]
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4 from .. import Provider as SsnProvider
5
6
7 def checksum(digits):
8 """
9 Returns the checksum of CPF digits.
10 References to the algorithm:
11 https://pt.wikipedia.org/wiki/Cadastro_de_pessoas_f%C3%ADsicas#Algoritmo
12 https://metacpan.org/source/MAMAWE/Algorithm-CheckDigits-v1.3.0/lib/Algorithm/CheckDigits/M11_004.pm
13 """
14 s = 0
15 p = len(digits) + 1
16 for i in range(0, len(digits)):
17 s += digits[i] * p
18 p -= 1
19
20 reminder = s % 11
21 if reminder == 0 or reminder == 1:
22 return 0
23 else:
24 return 11 - reminder
25
26
27 class Provider(SsnProvider):
28 """
29 Provider for Brazilian SSN also known in Brazil as CPF.
30 There are two methods Provider.ssn and Provider.cpf
31 The snn returns a valid number with numbers only
32 The cpf return a valid number formatted with brazilian mask. eg nnn.nnn.nnn-nn
33 """
34
35 def ssn(self):
36 digits = self.generator.random.sample(range(10), 9)
37
38 dv = checksum(digits)
39 digits.append(dv)
40 digits.append(checksum(digits))
41
42 return ''.join(map(str, digits))
43
44 def cpf(self):
45 c = self.ssn()
46 return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]
47
[end of faker/providers/ssn/pt_BR/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/providers/ssn/pt_BR/__init__.py b/faker/providers/ssn/pt_BR/__init__.py
--- a/faker/providers/ssn/pt_BR/__init__.py
+++ b/faker/providers/ssn/pt_BR/__init__.py
@@ -1,6 +1,7 @@
# coding=utf-8
from __future__ import unicode_literals
+
from .. import Provider as SsnProvider
@@ -44,3 +45,22 @@
def cpf(self):
c = self.ssn()
return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]
+
+ def rg(self):
+ """
+ Brazilian RG, return plain numbers.
+ Check: https://www.ngmatematica.com/2014/02/como-determinar-o-digito-verificador-do.html
+ """
+
+ digits = self.generator.random.sample(range(0, 9), 8)
+ checksum = sum(i * digits[i - 2] for i in range(2, 10))
+ last_digit = 11 - (checksum % 11)
+
+ if last_digit == 10:
+ digits.append('X')
+ elif last_digit == 11:
+ digits.append(0)
+ else:
+ digits.append(last_digit)
+
+ return ''.join(map(str, digits))
| {"golden_diff": "diff --git a/faker/providers/ssn/pt_BR/__init__.py b/faker/providers/ssn/pt_BR/__init__.py\n--- a/faker/providers/ssn/pt_BR/__init__.py\n+++ b/faker/providers/ssn/pt_BR/__init__.py\n@@ -1,6 +1,7 @@\n # coding=utf-8\n \n from __future__ import unicode_literals\n+\n from .. import Provider as SsnProvider\n \n \n@@ -44,3 +45,22 @@\n def cpf(self):\n c = self.ssn()\n return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]\n+\n+ def rg(self):\n+ \"\"\"\n+ Brazilian RG, return plain numbers.\n+ Check: https://www.ngmatematica.com/2014/02/como-determinar-o-digito-verificador-do.html\n+ \"\"\"\n+\n+ digits = self.generator.random.sample(range(0, 9), 8)\n+ checksum = sum(i * digits[i - 2] for i in range(2, 10))\n+ last_digit = 11 - (checksum % 11)\n+\n+ if last_digit == 10:\n+ digits.append('X')\n+ elif last_digit == 11:\n+ digits.append(0)\n+ else:\n+ digits.append(last_digit)\n+\n+ return ''.join(map(str, digits))\n", "issue": "Brazilian RG (identity card)\nAdd Generator to Brazilian RG (identity card)\r\n\r\n### Steps to reproduce\r\nfake = Faker('pt_Br')\r\nfake.rg()\r\n\r\n### Expected behavior\r\nreturn like this rules:\r\nhttps://www.ngmatematica.com/2014/02/como-determinar-o-digito-verificador-do.html\r\n8 digits + 1 checksum digit\r\n### Actual behavior\r\nNew feature\r\n\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\nfrom .. import Provider as SsnProvider\n\n\ndef checksum(digits):\n \"\"\"\n Returns the checksum of CPF digits.\n References to the algorithm:\n https://pt.wikipedia.org/wiki/Cadastro_de_pessoas_f%C3%ADsicas#Algoritmo\n https://metacpan.org/source/MAMAWE/Algorithm-CheckDigits-v1.3.0/lib/Algorithm/CheckDigits/M11_004.pm\n \"\"\"\n s = 0\n p = len(digits) + 1\n for i in range(0, len(digits)):\n s += digits[i] * p\n p -= 1\n\n reminder = s % 11\n if reminder == 0 or reminder == 1:\n return 0\n else:\n return 11 - reminder\n\n\nclass Provider(SsnProvider):\n \"\"\"\n Provider for Brazilian SSN also known in Brazil as CPF.\n There are two methods Provider.ssn and Provider.cpf\n The snn returns a valid number with numbers only\n The cpf return a valid number formatted with brazilian mask. eg nnn.nnn.nnn-nn\n \"\"\"\n\n def ssn(self):\n digits = self.generator.random.sample(range(10), 9)\n\n dv = checksum(digits)\n digits.append(dv)\n digits.append(checksum(digits))\n\n return ''.join(map(str, digits))\n\n def cpf(self):\n c = self.ssn()\n return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]\n", "path": "faker/providers/ssn/pt_BR/__init__.py"}]} | 1,079 | 324 |
gh_patches_debug_34130 | rasdani/github-patches | git_diff | azavea__raster-vision-1560 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve Dataset.from_uris methods
When using the `from_uris` methods (such as in `SemanticSegmentationSlidingWindowGeoDataset`), it's easy to forget to pass in an important argument due to the use of kwargs. For example, size and stride are needed, and `label_vector_default_class_id` defaults to None, which counterintuitively removes all the vectors. We should fix these and related problems.
This issue was originally noted in https://github.com/azavea/raster-vision/pull/1476
</issue>
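A minimal sketch of the pitfall described above, using the `ClassInferenceTransformer` listed below. The import path is assumed from the package imports in that file, the GeoJSON feature is made up, and the example assumes `features_to_geojson` returns a standard FeatureCollection dict.

```python
from rastervision.core.data.vector_transformer import ClassInferenceTransformer

feature = {
    "type": "Feature",
    "properties": {"name": "building 12"},  # no class_id, class_name, or label
    "geometry": {"type": "Point", "coordinates": [0.0, 0.0]},
}
geojson = {"type": "FeatureCollection", "features": [feature]}

# With default_class_id=None, rules 1-3 all fail and the feature is silently dropped.
dropped = ClassInferenceTransformer(default_class_id=None).transform(geojson)
print(len(dropped["features"]))  # 0

# With an explicit default, the feature is kept and tagged with that class_id.
kept = ClassInferenceTransformer(default_class_id=0).transform(geojson)
print(len(kept["features"]))  # 1
```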
<code>
[start of rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py]
1 from typing import TYPE_CHECKING, Dict, Optional
2 from copy import deepcopy
3
4 from rastervision.core.data.vector_transformer import VectorTransformer
5 from rastervision.core.data.vector_transformer.label_maker.filter import (
6 create_filter)
7 from rastervision.core.data.utils.geojson import features_to_geojson
8
9 if TYPE_CHECKING:
10 from rastervision.core.data import ClassConfig, CRSTransformer
11
12
13 class ClassInferenceTransformer(VectorTransformer):
14 """Infers missing class_ids from GeoJSON features.
15
16 Rules:
17 1) If class_id is in feature['properties'], use it.
18 2) If class_config is set and class_name or label are in
19 feature['properties'] and in class_config, use corresponding
20 class_id.
21 3) If class_id_to_filter is set and filter is true when applied to
22 feature, use corresponding class_id.
23 4) Otherwise, return the default_class_id
24 """
25
26 def __init__(self,
27 default_class_id: Optional[int],
28 class_config: Optional['ClassConfig'] = None,
29 class_id_to_filter: Optional[Dict[int, list]] = None):
30 self.class_config = class_config
31 self.class_id_to_filter = class_id_to_filter
32 self.default_class_id = default_class_id
33
34 if self.class_id_to_filter is not None:
35 self.class_id_to_filter = {}
36 for class_id, filter_exp in class_id_to_filter.items():
37 self.class_id_to_filter[int(class_id)] = create_filter(
38 filter_exp)
39
40 @staticmethod
41 def infer_feature_class_id(
42 feature: dict,
43 default_class_id: Optional[int],
44 class_config: Optional['ClassConfig'] = None,
45 class_id_to_filter: Optional[Dict[int, list]] = None
46 ) -> Optional[int]:
47 """Infer the class_id for a GeoJSON feature.
48
49 Rules:
50 1) If class_id is in feature['properties'], use it.
51 2) If class_config is set and class_name or label are in
52 feature['properties'] and in class_config, use corresponding
53 class_id.
54 3) If class_id_to_filter is set and filter is true when applied to
55 feature, use corresponding class_id.
56 4) Otherwise, return the default_class_id.
57
58 Args:
59 feature (dict): GeoJSON feature.
60
61 Returns:
62 Optional[int]: Inferred class ID.
63 """
64 class_id = feature.get('properties', {}).get('class_id')
65 if class_id is not None:
66 return class_id
67
68 if class_config is not None:
69 class_name = feature.get('properties', {}).get('class_name')
70 if class_name in class_config.names:
71 return class_config.names.index(class_name)
72
73 label = feature.get('properties', {}).get('label')
74 if label in class_config.names:
75 return class_config.names.index(label)
76
77 if class_id_to_filter is not None:
78 for class_id, filter_fn in class_id_to_filter.items():
79 if filter_fn(feature):
80 return class_id
81
82 return default_class_id
83
84 def transform(self,
85 geojson: dict,
86 crs_transformer: Optional['CRSTransformer'] = None) -> dict:
87 """Add class_id to feature properties and drop features with no class.
88
89 For each feature in geojson, the class_id is inferred and is set into
90 feature['properties']. If the class_id is None (because none of the
91 rules apply and the default_class_id is None), the feature is dropped.
92 """
93 new_features = []
94 for feature in geojson['features']:
95 class_id = self.infer_feature_class_id(
96 feature,
97 default_class_id=self.default_class_id,
98 class_config=self.class_config,
99 class_id_to_filter=self.class_id_to_filter)
100 if class_id is not None:
101 feature = deepcopy(feature)
102 properties = feature.get('properties', {})
103 properties['class_id'] = class_id
104 feature['properties'] = properties
105 new_features.append(feature)
106 new_geojson = features_to_geojson(new_features)
107 return new_geojson
108
[end of rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py b/rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py
--- a/rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py
+++ b/rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py
@@ -1,5 +1,6 @@
from typing import TYPE_CHECKING, Dict, Optional
from copy import deepcopy
+import logging
from rastervision.core.data.vector_transformer import VectorTransformer
from rastervision.core.data.vector_transformer.label_maker.filter import (
@@ -9,6 +10,8 @@
if TYPE_CHECKING:
from rastervision.core.data import ClassConfig, CRSTransformer
+log = logging.getLogger(__name__)
+
class ClassInferenceTransformer(VectorTransformer):
"""Infers missing class_ids from GeoJSON features.
@@ -91,6 +94,7 @@
rules apply and the default_class_id is None), the feature is dropped.
"""
new_features = []
+ warned = False
for feature in geojson['features']:
class_id = self.infer_feature_class_id(
feature,
@@ -103,5 +107,13 @@
properties['class_id'] = class_id
feature['properties'] = properties
new_features.append(feature)
+ elif not warned:
+ log.warning(
+ 'ClassInferenceTransformer is dropping vector features because '
+ 'class_id cannot be inferred. To avoid this behavior, '
+ 'set default_class_id to a non-None value in '
+ 'ClassInferenceTransformer.')
+ warned = True
+
new_geojson = features_to_geojson(new_features)
return new_geojson
| {"golden_diff": "diff --git a/rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py b/rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py\n--- a/rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py\n+++ b/rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py\n@@ -1,5 +1,6 @@\n from typing import TYPE_CHECKING, Dict, Optional\n from copy import deepcopy\n+import logging\n \n from rastervision.core.data.vector_transformer import VectorTransformer\n from rastervision.core.data.vector_transformer.label_maker.filter import (\n@@ -9,6 +10,8 @@\n if TYPE_CHECKING:\n from rastervision.core.data import ClassConfig, CRSTransformer\n \n+log = logging.getLogger(__name__)\n+\n \n class ClassInferenceTransformer(VectorTransformer):\n \"\"\"Infers missing class_ids from GeoJSON features.\n@@ -91,6 +94,7 @@\n rules apply and the default_class_id is None), the feature is dropped.\n \"\"\"\n new_features = []\n+ warned = False\n for feature in geojson['features']:\n class_id = self.infer_feature_class_id(\n feature,\n@@ -103,5 +107,13 @@\n properties['class_id'] = class_id\n feature['properties'] = properties\n new_features.append(feature)\n+ elif not warned:\n+ log.warning(\n+ 'ClassInferenceTransformer is dropping vector features because '\n+ 'class_id cannot be inferred. To avoid this behavior, '\n+ 'set default_class_id to a non-None value in '\n+ 'ClassInferenceTransformer.')\n+ warned = True\n+\n new_geojson = features_to_geojson(new_features)\n return new_geojson\n", "issue": "Improve Dataset.from_uris methods\nWhen using the `from_uris` methods (such as in `SemanticSegmentationSlidingWindowGeoDataset`), it's easy to forget to pass in an important argument due to the use of kwargs. For example, size and stride are needed, and `label_vector_default_class_id` defaults to None which counterintuitively removes all the vectors. 
We should fix these and related problems.\r\n\r\nThis issue was originally noted in https://github.com/azavea/raster-vision/pull/1476\r\n\r\n\n", "before_files": [{"content": "from typing import TYPE_CHECKING, Dict, Optional\nfrom copy import deepcopy\n\nfrom rastervision.core.data.vector_transformer import VectorTransformer\nfrom rastervision.core.data.vector_transformer.label_maker.filter import (\n create_filter)\nfrom rastervision.core.data.utils.geojson import features_to_geojson\n\nif TYPE_CHECKING:\n from rastervision.core.data import ClassConfig, CRSTransformer\n\n\nclass ClassInferenceTransformer(VectorTransformer):\n \"\"\"Infers missing class_ids from GeoJSON features.\n\n Rules:\n 1) If class_id is in feature['properties'], use it.\n 2) If class_config is set and class_name or label are in\n feature['properties'] and in class_config, use corresponding\n class_id.\n 3) If class_id_to_filter is set and filter is true when applied to\n feature, use corresponding class_id.\n 4) Otherwise, return the default_class_id\n \"\"\"\n\n def __init__(self,\n default_class_id: Optional[int],\n class_config: Optional['ClassConfig'] = None,\n class_id_to_filter: Optional[Dict[int, list]] = None):\n self.class_config = class_config\n self.class_id_to_filter = class_id_to_filter\n self.default_class_id = default_class_id\n\n if self.class_id_to_filter is not None:\n self.class_id_to_filter = {}\n for class_id, filter_exp in class_id_to_filter.items():\n self.class_id_to_filter[int(class_id)] = create_filter(\n filter_exp)\n\n @staticmethod\n def infer_feature_class_id(\n feature: dict,\n default_class_id: Optional[int],\n class_config: Optional['ClassConfig'] = None,\n class_id_to_filter: Optional[Dict[int, list]] = None\n ) -> Optional[int]:\n \"\"\"Infer the class_id for a GeoJSON feature.\n\n Rules:\n 1) If class_id is in feature['properties'], use it.\n 2) If class_config is set and class_name or label are in\n feature['properties'] and in class_config, use corresponding\n class_id.\n 3) If class_id_to_filter is set and filter is true when applied to\n feature, use corresponding class_id.\n 4) Otherwise, return the default_class_id.\n\n Args:\n feature (dict): GeoJSON feature.\n\n Returns:\n Optional[int]: Inferred class ID.\n \"\"\"\n class_id = feature.get('properties', {}).get('class_id')\n if class_id is not None:\n return class_id\n\n if class_config is not None:\n class_name = feature.get('properties', {}).get('class_name')\n if class_name in class_config.names:\n return class_config.names.index(class_name)\n\n label = feature.get('properties', {}).get('label')\n if label in class_config.names:\n return class_config.names.index(label)\n\n if class_id_to_filter is not None:\n for class_id, filter_fn in class_id_to_filter.items():\n if filter_fn(feature):\n return class_id\n\n return default_class_id\n\n def transform(self,\n geojson: dict,\n crs_transformer: Optional['CRSTransformer'] = None) -> dict:\n \"\"\"Add class_id to feature properties and drop features with no class.\n\n For each feature in geojson, the class_id is inferred and is set into\n feature['properties']. 
If the class_id is None (because none of the\n rules apply and the default_class_id is None), the feature is dropped.\n \"\"\"\n new_features = []\n for feature in geojson['features']:\n class_id = self.infer_feature_class_id(\n feature,\n default_class_id=self.default_class_id,\n class_config=self.class_config,\n class_id_to_filter=self.class_id_to_filter)\n if class_id is not None:\n feature = deepcopy(feature)\n properties = feature.get('properties', {})\n properties['class_id'] = class_id\n feature['properties'] = properties\n new_features.append(feature)\n new_geojson = features_to_geojson(new_features)\n return new_geojson\n", "path": "rastervision_core/rastervision/core/data/vector_transformer/class_inference_transformer.py"}]} | 1,785 | 404 |
gh_patches_debug_29012 | rasdani/github-patches | git_diff | mlflow__mlflow-4002 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FR] AzureBlobArtifactRepository connection using Service Principal credentials
## Willingness to contribute
The MLflow Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature (either as an MLflow Plugin or an enhancement to the MLflow code base)?
- [x] Yes. I can contribute this feature independently.
- [x] Yes. I would be willing to contribute this feature with guidance from the MLflow community.
- [ ] No. I cannot contribute this feature at this time.
## Proposal Summary
Allow the user to connect with the AzureBlobArtifactRepository using service principal credentials taken from the following environment variables:
* AZURE_TENANT_ID
* AZURE_CLIENT_ID
* AZURE_CLIENT_SECRET
## Motivation
- What is the use case for this feature?
Having the AZURE_STORAGE_CONNECTION_STRING or AZURE_STORAGE_ACCESS_KEY variables set will give the user of that environment complete control over the whole storage account. This is not always desirable in a situation where a storage account has multiple blob containers and the users of one blob container should not have access to another user's container.
- Why is this use case valuable to support for MLflow users in general?
It gives the user more options to choose from while connecting with their artifact repository.
- Why is this use case valuable to support for your project(s) or organization
My organization hosts multiple tracking servers (with corresponding artifact repositories) in which groups of users should be separated. Allowing the connection via service principal credentials gives us the ability to assign permissions on a container to a service principal.
- Why is it currently difficult to achieve this use case? (please be as specific as possible about why related MLflow features and components are insufficient)
The AzureBlobArtifactRepository requires either the AZURE_STORAGE_CONNECTION_STRING or AZURE_STORAGE_ACCESS_KEY to be set, which gives the user full control of the whole storage account.
### What component(s), interfaces, languages, and integrations does this feature affect?
Components
- [x] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [x] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: Local serving, model deployment tools, spark UDFs
- [ ] `area/server-infra`: MLflow server, JavaScript dev server
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
Interfaces
- [ ] `area/uiux`: Front-end, user experience, JavaScript, plotting
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
Languages
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
Integrations
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
## Details
The solution would entail adding another elif branch in the AzureBlobArtifactRepository __init__ function that checks for the AZURE_TENANT_ID, AZURE_CLIENT_ID and AZURE_CLIENT_SECRET environment variables, creates a [ClientSecretCredential](https://docs.microsoft.com/en-us/python/api/azure-identity/azure.identity.clientsecretcredential?view=azure-python), and uses it to create the BlobServiceClient.
</issue>
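A minimal sketch of the proposal in the Details section, written as a standalone helper rather than as the actual `__init__` edit; the function name is made up and this is not the change that was ultimately merged. The accepted patch below instead falls back to `DefaultAzureCredential`, whose environment-credential step reads the same three variables.

```python
import os

from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient


def blob_client_from_service_principal(account: str) -> BlobServiceClient:
    """Hypothetical helper: build a BlobServiceClient from service principal env vars."""
    credential = ClientSecretCredential(
        tenant_id=os.environ["AZURE_TENANT_ID"],
        client_id=os.environ["AZURE_CLIENT_ID"],
        client_secret=os.environ["AZURE_CLIENT_SECRET"],
    )
    account_url = "https://{account}.blob.core.windows.net".format(account=account)
    return BlobServiceClient(account_url=account_url, credential=credential)
```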
<code>
[start of mlflow/store/artifact/azure_blob_artifact_repo.py]
1 import os
2 import posixpath
3 import re
4 import urllib.parse
5
6 from mlflow.entities import FileInfo
7 from mlflow.exceptions import MlflowException
8 from mlflow.store.artifact.artifact_repo import ArtifactRepository
9
10
11 class AzureBlobArtifactRepository(ArtifactRepository):
12 """
13 Stores artifacts on Azure Blob Storage.
14
15 This repository is used with URIs of the form
16 ``wasbs://<container-name>@<ystorage-account-name>.blob.core.windows.net/<path>``,
17 following the same URI scheme as Hadoop on Azure blob storage. It requires that your Azure
18 storage access key be available in the environment variable ``AZURE_STORAGE_ACCESS_KEY``.
19 """
20
21 def __init__(self, artifact_uri, client=None):
22 super().__init__(artifact_uri)
23
24 # Allow override for testing
25 if client:
26 self.client = client
27 return
28
29 from azure.storage.blob import BlobServiceClient
30
31 (_, account, _) = AzureBlobArtifactRepository.parse_wasbs_uri(artifact_uri)
32 if "AZURE_STORAGE_CONNECTION_STRING" in os.environ:
33 self.client = BlobServiceClient.from_connection_string(
34 conn_str=os.environ.get("AZURE_STORAGE_CONNECTION_STRING")
35 )
36 elif "AZURE_STORAGE_ACCESS_KEY" in os.environ:
37 account_url = "https://{account}.blob.core.windows.net".format(account=account)
38 self.client = BlobServiceClient(
39 account_url=account_url, credential=os.environ.get("AZURE_STORAGE_ACCESS_KEY")
40 )
41 else:
42 raise Exception(
43 "You need to set one of AZURE_STORAGE_CONNECTION_STRING or "
44 "AZURE_STORAGE_ACCESS_KEY to access Azure storage."
45 )
46
47 @staticmethod
48 def parse_wasbs_uri(uri):
49 """Parse a wasbs:// URI, returning (container, storage_account, path)."""
50 parsed = urllib.parse.urlparse(uri)
51 if parsed.scheme != "wasbs":
52 raise Exception("Not a WASBS URI: %s" % uri)
53 match = re.match("([^@]+)@([^.]+)\\.blob\\.core\\.windows\\.net", parsed.netloc)
54 if match is None:
55 raise Exception(
56 "WASBS URI must be of the form " "<container>@<account>.blob.core.windows.net"
57 )
58 container = match.group(1)
59 storage_account = match.group(2)
60 path = parsed.path
61 if path.startswith("/"):
62 path = path[1:]
63 return container, storage_account, path
64
65 def log_artifact(self, local_file, artifact_path=None):
66 (container, _, dest_path) = self.parse_wasbs_uri(self.artifact_uri)
67 container_client = self.client.get_container_client(container)
68 if artifact_path:
69 dest_path = posixpath.join(dest_path, artifact_path)
70 dest_path = posixpath.join(dest_path, os.path.basename(local_file))
71 with open(local_file, "rb") as file:
72 container_client.upload_blob(dest_path, file)
73
74 def log_artifacts(self, local_dir, artifact_path=None):
75 (container, _, dest_path) = self.parse_wasbs_uri(self.artifact_uri)
76 container_client = self.client.get_container_client(container)
77 if artifact_path:
78 dest_path = posixpath.join(dest_path, artifact_path)
79 local_dir = os.path.abspath(local_dir)
80 for (root, _, filenames) in os.walk(local_dir):
81 upload_path = dest_path
82 if root != local_dir:
83 rel_path = os.path.relpath(root, local_dir)
84 upload_path = posixpath.join(dest_path, rel_path)
85 for f in filenames:
86 remote_file_path = posixpath.join(upload_path, f)
87 local_file_path = os.path.join(root, f)
88 with open(local_file_path, "rb") as file:
89 container_client.upload_blob(remote_file_path, file)
90
91 def list_artifacts(self, path=None):
92 # Newer versions of `azure-storage-blob` (>= 12.4.0) provide a public
93 # `azure.storage.blob.BlobPrefix` object to signify that a blob is a directory,
94 # while older versions only expose this API internally as
95 # `azure.storage.blob._models.BlobPrefix`
96 try:
97 from azure.storage.blob import BlobPrefix
98 except ImportError:
99 from azure.storage.blob._models import BlobPrefix
100
101 (container, _, artifact_path) = self.parse_wasbs_uri(self.artifact_uri)
102 container_client = self.client.get_container_client(container)
103 dest_path = artifact_path
104 if path:
105 dest_path = posixpath.join(dest_path, path)
106 infos = []
107 prefix = dest_path if dest_path.endswith("/") else dest_path + "/"
108 results = container_client.walk_blobs(name_starts_with=prefix)
109 for r in results:
110 if not r.name.startswith(artifact_path):
111 raise MlflowException(
112 "The name of the listed Azure blob does not begin with the specified"
113 " artifact path. Artifact path: {artifact_path}. Blob name:"
114 " {blob_name}".format(artifact_path=artifact_path, blob_name=r.name)
115 )
116 if isinstance(r, BlobPrefix): # This is a prefix for items in a subdirectory
117 subdir = posixpath.relpath(path=r.name, start=artifact_path)
118 if subdir.endswith("/"):
119 subdir = subdir[:-1]
120 infos.append(FileInfo(subdir, True, None))
121 else: # Just a plain old blob
122 file_name = posixpath.relpath(path=r.name, start=artifact_path)
123 infos.append(FileInfo(file_name, False, r.size))
124 # The list_artifacts API expects us to return an empty list if the
125 # the path references a single file.
126 rel_path = dest_path[len(artifact_path) + 1 :]
127 if (len(infos) == 1) and not infos[0].is_dir and (infos[0].path == rel_path):
128 return []
129 return sorted(infos, key=lambda f: f.path)
130
131 def _download_file(self, remote_file_path, local_path):
132 (container, _, remote_root_path) = self.parse_wasbs_uri(self.artifact_uri)
133 container_client = self.client.get_container_client(container)
134 remote_full_path = posixpath.join(remote_root_path, remote_file_path)
135 with open(local_path, "wb") as file:
136 container_client.download_blob(remote_full_path).readinto(file)
137
138 def delete_artifacts(self, artifact_path=None):
139 raise MlflowException("Not implemented yet")
140
[end of mlflow/store/artifact/azure_blob_artifact_repo.py]
</code>
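As a quick illustration of the URI scheme the docstring above describes, `parse_wasbs_uri` can be exercised on its own; the container and account names are placeholders.

```python
from mlflow.store.artifact.azure_blob_artifact_repo import AzureBlobArtifactRepository

print(AzureBlobArtifactRepository.parse_wasbs_uri(
    "wasbs://mycontainer@myaccount.blob.core.windows.net/experiments/1"))
# ('mycontainer', 'myaccount', 'experiments/1')
```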
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlflow/store/artifact/azure_blob_artifact_repo.py b/mlflow/store/artifact/azure_blob_artifact_repo.py
--- a/mlflow/store/artifact/azure_blob_artifact_repo.py
+++ b/mlflow/store/artifact/azure_blob_artifact_repo.py
@@ -14,8 +14,10 @@
This repository is used with URIs of the form
``wasbs://<container-name>@<ystorage-account-name>.blob.core.windows.net/<path>``,
- following the same URI scheme as Hadoop on Azure blob storage. It requires that your Azure
- storage access key be available in the environment variable ``AZURE_STORAGE_ACCESS_KEY``.
+ following the same URI scheme as Hadoop on Azure blob storage. It requires either that:
+ - Azure storage connection string is in the env var ``AZURE_STORAGE_CONNECTION_STRING``
+ - Azure storage access key is in the env var ``AZURE_STORAGE_ACCESS_KEY``
+ - DefaultAzureCredential is configured
"""
def __init__(self, artifact_uri, client=None):
@@ -39,9 +41,17 @@
account_url=account_url, credential=os.environ.get("AZURE_STORAGE_ACCESS_KEY")
)
else:
- raise Exception(
- "You need to set one of AZURE_STORAGE_CONNECTION_STRING or "
- "AZURE_STORAGE_ACCESS_KEY to access Azure storage."
+ try:
+ from azure.identity import DefaultAzureCredential
+ except ImportError as exc:
+ raise ImportError(
+ "Using DefaultAzureCredential requires the azure-identity package. "
+ "Please install it via: pip install azure-identity"
+ ) from exc
+
+ account_url = "https://{account}.blob.core.windows.net".format(account=account)
+ self.client = BlobServiceClient(
+ account_url=account_url, credential=DefaultAzureCredential()
)
@staticmethod
| {"golden_diff": "diff --git a/mlflow/store/artifact/azure_blob_artifact_repo.py b/mlflow/store/artifact/azure_blob_artifact_repo.py\n--- a/mlflow/store/artifact/azure_blob_artifact_repo.py\n+++ b/mlflow/store/artifact/azure_blob_artifact_repo.py\n@@ -14,8 +14,10 @@\n \n This repository is used with URIs of the form\n ``wasbs://<container-name>@<ystorage-account-name>.blob.core.windows.net/<path>``,\n- following the same URI scheme as Hadoop on Azure blob storage. It requires that your Azure\n- storage access key be available in the environment variable ``AZURE_STORAGE_ACCESS_KEY``.\n+ following the same URI scheme as Hadoop on Azure blob storage. It requires either that:\n+ - Azure storage connection string is in the env var ``AZURE_STORAGE_CONNECTION_STRING``\n+ - Azure storage access key is in the env var ``AZURE_STORAGE_ACCESS_KEY``\n+ - DefaultAzureCredential is configured\n \"\"\"\n \n def __init__(self, artifact_uri, client=None):\n@@ -39,9 +41,17 @@\n account_url=account_url, credential=os.environ.get(\"AZURE_STORAGE_ACCESS_KEY\")\n )\n else:\n- raise Exception(\n- \"You need to set one of AZURE_STORAGE_CONNECTION_STRING or \"\n- \"AZURE_STORAGE_ACCESS_KEY to access Azure storage.\"\n+ try:\n+ from azure.identity import DefaultAzureCredential\n+ except ImportError as exc:\n+ raise ImportError(\n+ \"Using DefaultAzureCredential requires the azure-identity package. \"\n+ \"Please install it via: pip install azure-identity\"\n+ ) from exc\n+\n+ account_url = \"https://{account}.blob.core.windows.net\".format(account=account)\n+ self.client = BlobServiceClient(\n+ account_url=account_url, credential=DefaultAzureCredential()\n )\n \n @staticmethod\n", "issue": "[FR] AzureBlobArtifactRepository connection using Service Principal credentials\n## Willingness to contribute\r\nThe MLflow Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature (either as an MLflow Plugin or an enhancement to the MLflow code base)?\r\n\r\n- [x] Yes. I can contribute this feature independently.\r\n- [x] Yes. I would be willing to contribute this feature with guidance from the MLflow community.\r\n- [ ] No. I cannot contribute this feature at this time.\r\n\r\n## Proposal Summary\r\n\r\nAllow the user to connect with the AzureBlobArtifactRepository using service principal credentials taken from the following environment variables:\r\n* AZURE_TENANT_ID\r\n* AZURE_CLIENT_ID\r\n* AZURE_CLIENT_SECRET\r\n\r\n## Motivation\r\n- What is the use case for this feature?\r\nHaving the AZURE_STORAGE_CONNECTION_STRING or AZURE_STORAGE_ACCESS_KEY variables set will give the user of that environment complete control over the whole storage account. This is not always desirable in a situation where a storage account has multiple blob containers and the users of one blob container should not have access to another user's container. \r\n\r\n- Why is this use case valuable to support for MLflow users in general?\r\nIt gives the user more options to choose from while connecting with their artifact repository.\r\n\r\n- Why is this use case valuable to support for your project(s) or organization\r\nMy organizations hosts multiple tracking servers (with corresponding artifact repositories) in which groups of users should be separated. Allowing the connection via service principal credentials gives us the ability assign permissions on a container to a service principal\r\n\r\n- Why is it currently difficult to achieve this use case? 
(please be as specific as possible about why related MLflow features and components are insufficient)\r\nThe AzureBlobArtifactRepository requires either the AZURE_STORAGE_CONNECTION_STRING or AZURE_STORAGE_ACCESS_KEY to be set, which gives the user full control of the whole storage account.\r\n\r\n### What component(s), interfaces, languages, and integrations does this feature affect?\r\nComponents \r\n- [x] `area/artifacts`: Artifact stores and artifact logging\r\n- [ ] `area/build`: Build and test infrastructure for MLflow\r\n- [x] `area/docs`: MLflow documentation pages\r\n- [ ] `area/examples`: Example code\r\n- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry\r\n- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors\r\n- [ ] `area/projects`: MLproject format, project running backends\r\n- [ ] `area/scoring`: Local serving, model deployment tools, spark UDFs\r\n- [ ] `area/server-infra`: MLflow server, JavaScript dev server\r\n- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging\r\n\r\nInterfaces\r\n- [ ] `area/uiux`: Front-end, user experience, JavaScript, plotting\r\n- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models\r\n- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry\r\n- [ ] `area/windows`: Windows support\r\n\r\nLanguages \r\n- [ ] `language/r`: R APIs and clients\r\n- [ ] `language/java`: Java APIs and clients\r\n- [ ] `language/new`: Proposals for new client languages\r\n\r\nIntegrations\r\n- [ ] `integrations/azure`: Azure and Azure ML integrations\r\n- [ ] `integrations/sagemaker`: SageMaker integrations\r\n- [ ] `integrations/databricks`: Databricks integrations\r\n\r\n## Details\r\n\r\nThe solution would entail adding another elif branch in the AzureBlobArtifactRepository __init__ function, that checks for the AZURE_TENANT_ID, AZURE_CLIENT_ID and AZURE_CLIENT_SECRET environment variables, creates a [ClientSecretCredential](https://docs.microsoft.com/en-us/python/api/azure-identity/azure.identity.clientsecretcredential?view=azure-python) and uses it to create the BlobServiceClient.\r\n\r\n\n", "before_files": [{"content": "import os\nimport posixpath\nimport re\nimport urllib.parse\n\nfrom mlflow.entities import FileInfo\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository\n\n\nclass AzureBlobArtifactRepository(ArtifactRepository):\n \"\"\"\n Stores artifacts on Azure Blob Storage.\n\n This repository is used with URIs of the form\n ``wasbs://<container-name>@<ystorage-account-name>.blob.core.windows.net/<path>``,\n following the same URI scheme as Hadoop on Azure blob storage. 
It requires that your Azure\n storage access key be available in the environment variable ``AZURE_STORAGE_ACCESS_KEY``.\n \"\"\"\n\n def __init__(self, artifact_uri, client=None):\n super().__init__(artifact_uri)\n\n # Allow override for testing\n if client:\n self.client = client\n return\n\n from azure.storage.blob import BlobServiceClient\n\n (_, account, _) = AzureBlobArtifactRepository.parse_wasbs_uri(artifact_uri)\n if \"AZURE_STORAGE_CONNECTION_STRING\" in os.environ:\n self.client = BlobServiceClient.from_connection_string(\n conn_str=os.environ.get(\"AZURE_STORAGE_CONNECTION_STRING\")\n )\n elif \"AZURE_STORAGE_ACCESS_KEY\" in os.environ:\n account_url = \"https://{account}.blob.core.windows.net\".format(account=account)\n self.client = BlobServiceClient(\n account_url=account_url, credential=os.environ.get(\"AZURE_STORAGE_ACCESS_KEY\")\n )\n else:\n raise Exception(\n \"You need to set one of AZURE_STORAGE_CONNECTION_STRING or \"\n \"AZURE_STORAGE_ACCESS_KEY to access Azure storage.\"\n )\n\n @staticmethod\n def parse_wasbs_uri(uri):\n \"\"\"Parse a wasbs:// URI, returning (container, storage_account, path).\"\"\"\n parsed = urllib.parse.urlparse(uri)\n if parsed.scheme != \"wasbs\":\n raise Exception(\"Not a WASBS URI: %s\" % uri)\n match = re.match(\"([^@]+)@([^.]+)\\\\.blob\\\\.core\\\\.windows\\\\.net\", parsed.netloc)\n if match is None:\n raise Exception(\n \"WASBS URI must be of the form \" \"<container>@<account>.blob.core.windows.net\"\n )\n container = match.group(1)\n storage_account = match.group(2)\n path = parsed.path\n if path.startswith(\"/\"):\n path = path[1:]\n return container, storage_account, path\n\n def log_artifact(self, local_file, artifact_path=None):\n (container, _, dest_path) = self.parse_wasbs_uri(self.artifact_uri)\n container_client = self.client.get_container_client(container)\n if artifact_path:\n dest_path = posixpath.join(dest_path, artifact_path)\n dest_path = posixpath.join(dest_path, os.path.basename(local_file))\n with open(local_file, \"rb\") as file:\n container_client.upload_blob(dest_path, file)\n\n def log_artifacts(self, local_dir, artifact_path=None):\n (container, _, dest_path) = self.parse_wasbs_uri(self.artifact_uri)\n container_client = self.client.get_container_client(container)\n if artifact_path:\n dest_path = posixpath.join(dest_path, artifact_path)\n local_dir = os.path.abspath(local_dir)\n for (root, _, filenames) in os.walk(local_dir):\n upload_path = dest_path\n if root != local_dir:\n rel_path = os.path.relpath(root, local_dir)\n upload_path = posixpath.join(dest_path, rel_path)\n for f in filenames:\n remote_file_path = posixpath.join(upload_path, f)\n local_file_path = os.path.join(root, f)\n with open(local_file_path, \"rb\") as file:\n container_client.upload_blob(remote_file_path, file)\n\n def list_artifacts(self, path=None):\n # Newer versions of `azure-storage-blob` (>= 12.4.0) provide a public\n # `azure.storage.blob.BlobPrefix` object to signify that a blob is a directory,\n # while older versions only expose this API internally as\n # `azure.storage.blob._models.BlobPrefix`\n try:\n from azure.storage.blob import BlobPrefix\n except ImportError:\n from azure.storage.blob._models import BlobPrefix\n\n (container, _, artifact_path) = self.parse_wasbs_uri(self.artifact_uri)\n container_client = self.client.get_container_client(container)\n dest_path = artifact_path\n if path:\n dest_path = posixpath.join(dest_path, path)\n infos = []\n prefix = dest_path if dest_path.endswith(\"/\") else dest_path + \"/\"\n 
results = container_client.walk_blobs(name_starts_with=prefix)\n for r in results:\n if not r.name.startswith(artifact_path):\n raise MlflowException(\n \"The name of the listed Azure blob does not begin with the specified\"\n \" artifact path. Artifact path: {artifact_path}. Blob name:\"\n \" {blob_name}\".format(artifact_path=artifact_path, blob_name=r.name)\n )\n if isinstance(r, BlobPrefix): # This is a prefix for items in a subdirectory\n subdir = posixpath.relpath(path=r.name, start=artifact_path)\n if subdir.endswith(\"/\"):\n subdir = subdir[:-1]\n infos.append(FileInfo(subdir, True, None))\n else: # Just a plain old blob\n file_name = posixpath.relpath(path=r.name, start=artifact_path)\n infos.append(FileInfo(file_name, False, r.size))\n # The list_artifacts API expects us to return an empty list if the\n # the path references a single file.\n rel_path = dest_path[len(artifact_path) + 1 :]\n if (len(infos) == 1) and not infos[0].is_dir and (infos[0].path == rel_path):\n return []\n return sorted(infos, key=lambda f: f.path)\n\n def _download_file(self, remote_file_path, local_path):\n (container, _, remote_root_path) = self.parse_wasbs_uri(self.artifact_uri)\n container_client = self.client.get_container_client(container)\n remote_full_path = posixpath.join(remote_root_path, remote_file_path)\n with open(local_path, \"wb\") as file:\n container_client.download_blob(remote_full_path).readinto(file)\n\n def delete_artifacts(self, artifact_path=None):\n raise MlflowException(\"Not implemented yet\")\n", "path": "mlflow/store/artifact/azure_blob_artifact_repo.py"}]} | 3,073 | 414 |
gh_patches_debug_1246 | rasdani/github-patches | git_diff | getsentry__sentry-15491 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Simple typo in the compact docstring for utils.functional
## Important Details
How are you running Sentry?
* [ ] On-Premise docker [Version xyz]
* [ ] Saas (sentry.io)
* [x] Other [briefly describe your environment]
Observed documentation - not running sentry.
## Description
Simple typo should be values rather than valules.
## Steps to Reproduce
1. Observe docstring in utils.functional.compact method
### What you expected to happen
Should be values rather than valules.
### Possible Solution
Replace valules with values.
</issue>
<code>
[start of src/sentry/utils/functional.py]
1 from __future__ import absolute_import
2
3 import six
4
5 from django.utils.functional import empty
6
7
8 def extract_lazy_object(lo):
9 """
10 Unwrap a LazyObject and return the inner object. Whatever that may be.
11
12 ProTip: This is relying on `django.utils.functional.empty`, which may
13 or may not be removed in the future. It's 100% undocumented.
14 """
15 if not hasattr(lo, "_wrapped"):
16 return lo
17 if lo._wrapped is empty:
18 lo._setup()
19 return lo._wrapped
20
21
22 def apply_values(function, mapping):
23 """\
24 Applies ``function`` to a sequence containing all of the values in the
25 provided mapping, returing a new mapping with the values replaced with
26 the results of the provided function.
27
28 >>> apply_values(
29 ... lambda values: map(u'{} fish'.format, values),
30 ... {1: 'red', 2: 'blue'},
31 ... )
32 {1: u'red fish', 2: u'blue fish'}
33 """
34 if not mapping:
35 return {}
36
37 keys, values = zip(*mapping.items())
38 return dict(zip(keys, function(values)))
39
40
41 def compact(seq):
42 """
43 Removes ``None`` values from various sequence-based data structures.
44
45 dict:
46 Removes keys with a corresponding ``None`` value.
47
48 list:
49 Removes ``None`` valules.
50
51 >>> compact({'foo': 'bar', 'baz': None})
52 {'foo': 'bar'}
53
54 >>> compact([1, None, 2])
55 [1, 2]
56 """
57 if isinstance(seq, dict):
58 return {k: v for k, v in six.iteritems(seq) if v is not None}
59
60 elif isinstance(seq, list):
61 return [k for k in seq if k is not None]
62
[end of src/sentry/utils/functional.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/utils/functional.py b/src/sentry/utils/functional.py
--- a/src/sentry/utils/functional.py
+++ b/src/sentry/utils/functional.py
@@ -46,7 +46,7 @@
Removes keys with a corresponding ``None`` value.
list:
- Removes ``None`` valules.
+ Removes ``None`` values.
>>> compact({'foo': 'bar', 'baz': None})
{'foo': 'bar'}
| {"golden_diff": "diff --git a/src/sentry/utils/functional.py b/src/sentry/utils/functional.py\n--- a/src/sentry/utils/functional.py\n+++ b/src/sentry/utils/functional.py\n@@ -46,7 +46,7 @@\n Removes keys with a corresponding ``None`` value.\n \n list:\n- Removes ``None`` valules.\n+ Removes ``None`` values.\n \n >>> compact({'foo': 'bar', 'baz': None})\n {'foo': 'bar'}\n", "issue": "Simple typo in the compact docstring for utils.functional\n## Important Details\r\n\r\nHow are you running Sentry?\r\n\r\n* [ ] On-Premise docker [Version xyz]\r\n* [ ] Saas (sentry.io)\r\n* [x] Other [briefly describe your environment]\r\nObserved documentation - not running sentry.\r\n\r\n## Description\r\n\r\nSimple typo should be values rather than valules.\r\n\r\n## Steps to Reproduce\r\n\r\n1. Observe docstring in utils.functional.compact method\r\n\r\n### What you expected to happen\r\n\r\nShould be values rather than valules.\r\n\r\n### Possible Solution\r\n\r\nReplace valules with values.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport six\n\nfrom django.utils.functional import empty\n\n\ndef extract_lazy_object(lo):\n \"\"\"\n Unwrap a LazyObject and return the inner object. Whatever that may be.\n\n ProTip: This is relying on `django.utils.functional.empty`, which may\n or may not be removed in the future. It's 100% undocumented.\n \"\"\"\n if not hasattr(lo, \"_wrapped\"):\n return lo\n if lo._wrapped is empty:\n lo._setup()\n return lo._wrapped\n\n\ndef apply_values(function, mapping):\n \"\"\"\\\n Applies ``function`` to a sequence containing all of the values in the\n provided mapping, returing a new mapping with the values replaced with\n the results of the provided function.\n\n >>> apply_values(\n ... lambda values: map(u'{} fish'.format, values),\n ... {1: 'red', 2: 'blue'},\n ... )\n {1: u'red fish', 2: u'blue fish'}\n \"\"\"\n if not mapping:\n return {}\n\n keys, values = zip(*mapping.items())\n return dict(zip(keys, function(values)))\n\n\ndef compact(seq):\n \"\"\"\n Removes ``None`` values from various sequence-based data structures.\n\n dict:\n Removes keys with a corresponding ``None`` value.\n\n list:\n Removes ``None`` valules.\n\n >>> compact({'foo': 'bar', 'baz': None})\n {'foo': 'bar'}\n\n >>> compact([1, None, 2])\n [1, 2]\n \"\"\"\n if isinstance(seq, dict):\n return {k: v for k, v in six.iteritems(seq) if v is not None}\n\n elif isinstance(seq, list):\n return [k for k in seq if k is not None]\n", "path": "src/sentry/utils/functional.py"}]} | 1,175 | 106 |
gh_patches_debug_20776 | rasdani/github-patches | git_diff | google__turbinia-1110 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Config module 'imp' deprecation warning
Currently, the config module uses a deprecated library method (`imp.load_source`) to load the config file into a module, which causes a DeprecationWarning:
```
============================================= warnings summary =============================================
turbinia/config/__init__.py:19
/workspaces/turbinia/turbinia/config/__init__.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
```
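The `importlib` machinery is the documented replacement for `imp.load_source`. A minimal sketch of an equivalent helper (illustrative only; the exact integration point in `turbinia/config/__init__.py` is an assumption) could look like:

```python
import importlib.machinery
import importlib.util


def load_source(name, path):
    """Load a module from a source file without the deprecated imp module."""
    loader = importlib.machinery.SourceFileLoader(name, path)
    spec = importlib.util.spec_from_loader(loader.name, loader)
    module = importlib.util.module_from_spec(spec)
    loader.exec_module(module)
    return module
```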
</issue>
<code>
[start of turbinia/config/__init__.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2016 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Basic Turbinia config."""
16
17 from __future__ import unicode_literals
18
19 import imp
20 import itertools
21 import logging
22 import os
23 import sys
24
25 from turbinia import TurbiniaException
26
27 DATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'
28
29 # Look for config files with these names
30 CONFIGFILES = ['.turbiniarc', 'turbinia.conf', 'turbinia_config_tmpl.py']
31 # Look in homedir first, then /etc/turbinia
32 CONFIGPATH = [
33 os.path.expanduser('~'),
34 '/etc/turbinia',
35 os.path.dirname(os.path.abspath(__file__)),
36 ]
37 # Config setup reminder for cleaner error handling on empty configs.
38 CONFIG_MSG = (
39 'Copy turbinia/config/turbinia_config_tmpl.py to ~/.turbiniarc '
40 'or /etc/turbinia/turbinia.conf, edit, and re-run.')
41
42 # Required config vars
43 REQUIRED_VARS = [
44 # Turbinia Config
45 'INSTANCE_ID',
46 'STATE_MANAGER',
47 'TASK_MANAGER',
48 'LOG_DIR',
49 'LOCK_FILE',
50 'TMP_RESOURCE_DIR',
51 'RESOURCE_FILE',
52 'RESOURCE_FILE_LOCK',
53 'SCALEDOWN_WORKER_FILE',
54 'OUTPUT_DIR',
55 'TMP_DIR',
56 'SLEEP_TIME',
57 'SINGLE_RUN',
58 'MOUNT_DIR_PREFIX',
59 'SHARED_FILESYSTEM',
60 'DEBUG_TASKS',
61 'DEPENDENCIES',
62 'DOCKER_ENABLED',
63 'DISABLED_JOBS',
64 ]
65
66 # Optional config vars. Some may be mandatory depending on the configuration
67 # (e.g. if TASK_MANAGER is set to 'PSQ', then the GCE Config variables are
68 # required), but these requirements are not enforced.
69 OPTIONAL_VARS = [
70 # GCE CONFIG
71 'TURBINIA_PROJECT',
72 'TURBINIA_ZONE',
73 'TURBINIA_REGION',
74 'BUCKET_NAME',
75 'PSQ_TOPIC',
76 'PUBSUB_TOPIC',
77 'GCS_OUTPUT_PATH',
78 'RECIPE_FILE_DIR',
79 'STACKDRIVER_LOGGING',
80 'STACKDRIVER_TRACEBACK',
81 # REDIS CONFIG
82 'REDIS_HOST',
83 'REDIS_PORT',
84 'REDIS_DB',
85 # Celery config
86 'CELERY_BROKER',
87 'CELERY_BACKEND',
88 'KOMBU_BROKER',
89 'KOMBU_CHANNEL',
90 'KOMBU_DURABLE',
91 # Email config
92 'EMAIL_NOTIFICATIONS',
93 'EMAIL_HOST_ADDRESS',
94 'EMAIL_PORT',
95 'EMAIL_ADDRESS',
96 'EMAIL_PASSWORD',
97 # Prometheus config
98 'PROMETHEUS_ENABLED',
99 'PROMETHEUS_ADDR',
100 'PROMETHEUS_PORT',
101 # dfDewey config
102 'DFDEWEY_PG_HOST',
103 'DFDEWEY_PG_PORT',
104 'DFDEWEY_PG_DB_NAME',
105 'DFDEWEY_OS_HOST',
106 'DFDEWEY_OS_PORT',
107 'DFDEWEY_OS_URL',
108 # General config
109 'TURBINIA_COMMAND'
110 ]
111
112 # Environment variable to look for path data in
113 ENVCONFIGVAR = 'TURBINIA_CONFIG_PATH'
114
115 CONFIG = None
116
117 log = logging.getLogger('turbinia')
118
119
120 def LoadConfig(config_file=None):
121 """Finds Turbinia config file and loads it.
122
123 Args:
124 config_file(str): full path to config file
125 """
126 # TODO(aarontp): Find way to not require global var here. Maybe a singleton
127 # pattern on the config class.
128 # pylint: disable=global-statement
129 global CONFIG
130 if CONFIG and not config_file:
131 log.debug(
132 'Returning cached config from {0:s} instead of reloading config'.format(
133 CONFIG.configSource))
134 return CONFIG
135
136 if not config_file:
137 log.debug('No config specified. Looking in default locations for config.')
138 # If the environment variable is set, take precedence over the pre-defined
139 # CONFIGPATHs.
140 configpath = CONFIGPATH
141 if ENVCONFIGVAR in os.environ:
142 configpath = os.environ[ENVCONFIGVAR].split(':')
143
144 # Load first file found
145 for _dir, _file in itertools.product(configpath, CONFIGFILES):
146 if os.path.exists(os.path.join(_dir, _file)):
147 config_file = os.path.join(_dir, _file)
148 break
149
150 if config_file is None:
151 raise TurbiniaException('No config files found')
152
153 log.debug('Loading config from {0:s}'.format(config_file))
154 # Warn about using fallback source config, but it's currently necessary for
155 # tests. See issue #446.
156 if 'turbinia_config_tmpl' in config_file:
157 log.warning('Using fallback source config. {0:s}'.format(CONFIG_MSG))
158 try:
159 _config = imp.load_source('config', config_file)
160 except IOError as exception:
161 message = (
162 'Could not load config file {0:s}: {1!s}'.format(
163 config_file, exception))
164 log.error(message)
165 raise TurbiniaException(message)
166
167 _config.configSource = config_file
168 ValidateAndSetConfig(_config)
169
170 # Set the environment var for this so that we don't see the "No project ID
171 # could be determined." warning later.
172 if hasattr(_config, 'TURBINIA_PROJECT') and _config.TURBINIA_PROJECT:
173 os.environ['GOOGLE_CLOUD_PROJECT'] = _config.TURBINIA_PROJECT
174
175 CONFIG = _config
176 log.debug(
177 'Returning parsed config loaded from {0:s}'.format(CONFIG.configSource))
178 return _config
179
180
181 def ValidateAndSetConfig(_config):
182 """Makes sure that the config has the vars loaded and set in the module."""
183 # Explicitly set the config path
184 setattr(sys.modules[__name__], 'configSource', _config.configSource)
185
186 CONFIGVARS = REQUIRED_VARS + OPTIONAL_VARS
187 for var in CONFIGVARS:
188 empty_value = False
189 if not hasattr(_config, var):
190 if var in OPTIONAL_VARS:
191 log.debug(
192 'Setting non-existent but optional config variable {0:s} to '
193 'None'.format(var))
194 empty_value = True
195 else:
196 raise TurbiniaException(
197 'Required config attribute {0:s}:{1:s} not in config'.format(
198 _config.configSource, var))
199 if var in REQUIRED_VARS and getattr(_config, var) is None:
200 raise TurbiniaException(
201 'Config attribute {0:s}:{1:s} is not set'.format(
202 _config.configSource, var))
203
204 # Set the attribute in the current module
205 if empty_value:
206 setattr(sys.modules[__name__], var, None)
207 else:
208 setattr(sys.modules[__name__], var, getattr(_config, var))
209
210
211 def ParseDependencies():
212 """Parses the config file DEPENDENCIES variable.
213
214 Raises:
215 TurbiniaException: If bad config file.
216
217 Returns:
218 dependencies(dict): The parsed dependency values.
219 """
220 dependencies = {}
221 try:
222 for values in CONFIG.DEPENDENCIES:
223 job = values['job'].lower()
224 dependencies[job] = {}
225 dependencies[job]['programs'] = values['programs']
226 dependencies[job]['docker_image'] = values.get('docker_image')
227 dependencies[job]['timeout'] = values.get('timeout')
228 except (KeyError, TypeError) as exception:
229 raise TurbiniaException(
230 'An issue has occurred while parsing the '
231 'dependency config: {0!s}'.format(exception))
232 return dependencies
233
[end of turbinia/config/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/turbinia/config/__init__.py b/turbinia/config/__init__.py
--- a/turbinia/config/__init__.py
+++ b/turbinia/config/__init__.py
@@ -16,12 +16,12 @@
from __future__ import unicode_literals
-import imp
+import importlib.util
+import importlib.machinery
import itertools
import logging
import os
import sys
-
from turbinia import TurbiniaException
DATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'
@@ -156,7 +156,11 @@
if 'turbinia_config_tmpl' in config_file:
log.warning('Using fallback source config. {0:s}'.format(CONFIG_MSG))
try:
- _config = imp.load_source('config', config_file)
+ config_loader = importlib.machinery.SourceFileLoader('config', config_file)
+ config_spec = importlib.util.spec_from_loader(
+ config_loader.name, config_loader)
+ _config = importlib.util.module_from_spec(config_spec)
+ config_loader.exec_module(_config)
except IOError as exception:
message = (
'Could not load config file {0:s}: {1!s}'.format(
| {"golden_diff": "diff --git a/turbinia/config/__init__.py b/turbinia/config/__init__.py\n--- a/turbinia/config/__init__.py\n+++ b/turbinia/config/__init__.py\n@@ -16,12 +16,12 @@\n \n from __future__ import unicode_literals\n \n-import imp\n+import importlib.util\n+import importlib.machinery\n import itertools\n import logging\n import os\n import sys\n-\n from turbinia import TurbiniaException\n \n DATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'\n@@ -156,7 +156,11 @@\n if 'turbinia_config_tmpl' in config_file:\n log.warning('Using fallback source config. {0:s}'.format(CONFIG_MSG))\n try:\n- _config = imp.load_source('config', config_file)\n+ config_loader = importlib.machinery.SourceFileLoader('config', config_file)\n+ config_spec = importlib.util.spec_from_loader(\n+ config_loader.name, config_loader)\n+ _config = importlib.util.module_from_spec(config_spec)\n+ config_loader.exec_module(_config)\n except IOError as exception:\n message = (\n 'Could not load config file {0:s}: {1!s}'.format(\n", "issue": "Config module 'imp' deprecation warning\nCurrently, the config module uses a deprecated library method to load the config file into a module which causes a DeprecationWarning\r\n\r\n```============================================= warnings summary =============================================\r\nturbinia/config/__init__.py:19\r\n /workspaces/turbinia/turbinia/config/__init__.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Basic Turbinia config.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport imp\nimport itertools\nimport logging\nimport os\nimport sys\n\nfrom turbinia import TurbiniaException\n\nDATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'\n\n# Look for config files with these names\nCONFIGFILES = ['.turbiniarc', 'turbinia.conf', 'turbinia_config_tmpl.py']\n# Look in homedir first, then /etc/turbinia\nCONFIGPATH = [\n os.path.expanduser('~'),\n '/etc/turbinia',\n os.path.dirname(os.path.abspath(__file__)),\n]\n# Config setup reminder for cleaner error handling on empty configs.\nCONFIG_MSG = (\n 'Copy turbinia/config/turbinia_config_tmpl.py to ~/.turbiniarc '\n 'or /etc/turbinia/turbinia.conf, edit, and re-run.')\n\n# Required config vars\nREQUIRED_VARS = [\n # Turbinia Config\n 'INSTANCE_ID',\n 'STATE_MANAGER',\n 'TASK_MANAGER',\n 'LOG_DIR',\n 'LOCK_FILE',\n 'TMP_RESOURCE_DIR',\n 'RESOURCE_FILE',\n 'RESOURCE_FILE_LOCK',\n 'SCALEDOWN_WORKER_FILE',\n 'OUTPUT_DIR',\n 'TMP_DIR',\n 'SLEEP_TIME',\n 'SINGLE_RUN',\n 'MOUNT_DIR_PREFIX',\n 'SHARED_FILESYSTEM',\n 'DEBUG_TASKS',\n 'DEPENDENCIES',\n 'DOCKER_ENABLED',\n 'DISABLED_JOBS',\n]\n\n# Optional config vars. Some may be mandatory depending on the configuration\n# (e.g. 
if TASK_MANAGER is set to 'PSQ', then the GCE Config variables are\n# required), but these requirements are not enforced.\nOPTIONAL_VARS = [\n # GCE CONFIG\n 'TURBINIA_PROJECT',\n 'TURBINIA_ZONE',\n 'TURBINIA_REGION',\n 'BUCKET_NAME',\n 'PSQ_TOPIC',\n 'PUBSUB_TOPIC',\n 'GCS_OUTPUT_PATH',\n 'RECIPE_FILE_DIR',\n 'STACKDRIVER_LOGGING',\n 'STACKDRIVER_TRACEBACK',\n # REDIS CONFIG\n 'REDIS_HOST',\n 'REDIS_PORT',\n 'REDIS_DB',\n # Celery config\n 'CELERY_BROKER',\n 'CELERY_BACKEND',\n 'KOMBU_BROKER',\n 'KOMBU_CHANNEL',\n 'KOMBU_DURABLE',\n # Email config\n 'EMAIL_NOTIFICATIONS',\n 'EMAIL_HOST_ADDRESS',\n 'EMAIL_PORT',\n 'EMAIL_ADDRESS',\n 'EMAIL_PASSWORD',\n # Prometheus config\n 'PROMETHEUS_ENABLED',\n 'PROMETHEUS_ADDR',\n 'PROMETHEUS_PORT',\n # dfDewey config\n 'DFDEWEY_PG_HOST',\n 'DFDEWEY_PG_PORT',\n 'DFDEWEY_PG_DB_NAME',\n 'DFDEWEY_OS_HOST',\n 'DFDEWEY_OS_PORT',\n 'DFDEWEY_OS_URL',\n # General config\n 'TURBINIA_COMMAND'\n]\n\n# Environment variable to look for path data in\nENVCONFIGVAR = 'TURBINIA_CONFIG_PATH'\n\nCONFIG = None\n\nlog = logging.getLogger('turbinia')\n\n\ndef LoadConfig(config_file=None):\n \"\"\"Finds Turbinia config file and loads it.\n\n Args:\n config_file(str): full path to config file\n \"\"\"\n # TODO(aarontp): Find way to not require global var here. Maybe a singleton\n # pattern on the config class.\n # pylint: disable=global-statement\n global CONFIG\n if CONFIG and not config_file:\n log.debug(\n 'Returning cached config from {0:s} instead of reloading config'.format(\n CONFIG.configSource))\n return CONFIG\n\n if not config_file:\n log.debug('No config specified. Looking in default locations for config.')\n # If the environment variable is set, take precedence over the pre-defined\n # CONFIGPATHs.\n configpath = CONFIGPATH\n if ENVCONFIGVAR in os.environ:\n configpath = os.environ[ENVCONFIGVAR].split(':')\n\n # Load first file found\n for _dir, _file in itertools.product(configpath, CONFIGFILES):\n if os.path.exists(os.path.join(_dir, _file)):\n config_file = os.path.join(_dir, _file)\n break\n\n if config_file is None:\n raise TurbiniaException('No config files found')\n\n log.debug('Loading config from {0:s}'.format(config_file))\n # Warn about using fallback source config, but it's currently necessary for\n # tests. See issue #446.\n if 'turbinia_config_tmpl' in config_file:\n log.warning('Using fallback source config. 
{0:s}'.format(CONFIG_MSG))\n try:\n _config = imp.load_source('config', config_file)\n except IOError as exception:\n message = (\n 'Could not load config file {0:s}: {1!s}'.format(\n config_file, exception))\n log.error(message)\n raise TurbiniaException(message)\n\n _config.configSource = config_file\n ValidateAndSetConfig(_config)\n\n # Set the environment var for this so that we don't see the \"No project ID\n # could be determined.\" warning later.\n if hasattr(_config, 'TURBINIA_PROJECT') and _config.TURBINIA_PROJECT:\n os.environ['GOOGLE_CLOUD_PROJECT'] = _config.TURBINIA_PROJECT\n\n CONFIG = _config\n log.debug(\n 'Returning parsed config loaded from {0:s}'.format(CONFIG.configSource))\n return _config\n\n\ndef ValidateAndSetConfig(_config):\n \"\"\"Makes sure that the config has the vars loaded and set in the module.\"\"\"\n # Explicitly set the config path\n setattr(sys.modules[__name__], 'configSource', _config.configSource)\n\n CONFIGVARS = REQUIRED_VARS + OPTIONAL_VARS\n for var in CONFIGVARS:\n empty_value = False\n if not hasattr(_config, var):\n if var in OPTIONAL_VARS:\n log.debug(\n 'Setting non-existent but optional config variable {0:s} to '\n 'None'.format(var))\n empty_value = True\n else:\n raise TurbiniaException(\n 'Required config attribute {0:s}:{1:s} not in config'.format(\n _config.configSource, var))\n if var in REQUIRED_VARS and getattr(_config, var) is None:\n raise TurbiniaException(\n 'Config attribute {0:s}:{1:s} is not set'.format(\n _config.configSource, var))\n\n # Set the attribute in the current module\n if empty_value:\n setattr(sys.modules[__name__], var, None)\n else:\n setattr(sys.modules[__name__], var, getattr(_config, var))\n\n\ndef ParseDependencies():\n \"\"\"Parses the config file DEPENDENCIES variable.\n\n Raises:\n TurbiniaException: If bad config file.\n\n Returns:\n dependencies(dict): The parsed dependency values.\n \"\"\"\n dependencies = {}\n try:\n for values in CONFIG.DEPENDENCIES:\n job = values['job'].lower()\n dependencies[job] = {}\n dependencies[job]['programs'] = values['programs']\n dependencies[job]['docker_image'] = values.get('docker_image')\n dependencies[job]['timeout'] = values.get('timeout')\n except (KeyError, TypeError) as exception:\n raise TurbiniaException(\n 'An issue has occurred while parsing the '\n 'dependency config: {0!s}'.format(exception))\n return dependencies\n", "path": "turbinia/config/__init__.py"}]} | 3,039 | 282 |
gh_patches_debug_28789 | rasdani/github-patches | git_diff | pypa__virtualenv-2206 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect (broken) virtualenv layout with pypy3.8's new layout
**Issue**
PyPy3.8 (currently 7.3.6rc1) supports a new install layout that resembles CPython more. That is, `sys.prefix` no longer needs being isolated, and site-packages are found in `$prefix/lib/pypy3.8/site-packages`. However, virtualenv tries to symlink everything from `/usr/lib` including the `pypy3.8` directory. As a result, the user can't write to the site-packages directory in the venv.
I haven't tried running it as root, but I can imagine it making a major mess if virtualenv doesn't take any precautions against writing into system directories.
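One possible direction (a sketch only, based on the `PyPy3Posix.sources` hook in the code below; not necessarily the actual fix) would be to skip the interpreter's stdlib directory when mirroring the host's `lib/` into the venv:

```python
# Sketch: mirror prefix/lib into the virtualenv, but leave out the
# lib/pypy3.8 stdlib directory so the venv gets its own writable
# site-packages instead of a symlink back into /usr/lib.
host_lib = Path(interpreter.system_prefix) / "lib"
stdlib = Path(interpreter.system_stdlib)
if host_lib.exists() and host_lib.is_dir():
    for path in host_lib.iterdir():
        if path == stdlib:
            continue
        yield PathRefToDest(path, dest=cls.to_lib)
```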
**Environment**
Provide at least:
- OS: Gentoo Linux
- ``pip list`` of the host python where ``virtualenv`` is installed: [pip-list.txt](https://github.com/pypa/virtualenv/files/7167321/pip-list.txt)
**Output of the virtual environment creation**
Make sure to run the creation with `-vvv --with-traceback`:
Full output: [output.txt](https://github.com/pypa/virtualenv/files/7167331/output.txt)
tail:
```
1048 create virtualenv import hook file /tmp/z/lib/pypy3.8/site-packages/_virtualenv.pth [DEBUG api:95]
Traceback (most recent call last):
File "/usr/lib/pypy3.8/runpy.py", line 198, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/pypy3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/lib/pypy3.8/site-packages/virtualenv/__main__.py", line 80, in <module>
run_with_catch() # pragma: no cov
File "/usr/lib/pypy3.8/site-packages/virtualenv/__main__.py", line 65, in run_with_catch
run(args, options, env)
File "/usr/lib/pypy3.8/site-packages/virtualenv/__main__.py", line 18, in run
session = cli_run(args, options, env)
File "/usr/lib/pypy3.8/site-packages/virtualenv/run/__init__.py", line 32, in cli_run
of_session.run()
File "/usr/lib/pypy3.8/site-packages/virtualenv/run/session.py", line 46, in run
self._create()
File "/usr/lib/pypy3.8/site-packages/virtualenv/run/session.py", line 53, in _create
self.creator.run()
File "/usr/lib/pypy3.8/site-packages/virtualenv/create/creator.py", line 171, in run
self.create()
File "/usr/lib/pypy3.8/site-packages/virtualenv/create/via_global_ref/builtin/via_global_self_do.py", line 101, in create
super(ViaGlobalRefVirtualenvBuiltin, self).create()
File "/usr/lib/pypy3.8/site-packages/virtualenv/create/via_global_ref/api.py", line 89, in create
self.install_patch()
File "/usr/lib/pypy3.8/site-packages/virtualenv/create/via_global_ref/api.py", line 96, in install_patch
pth.write_text("import _virtualenv")
File "/usr/lib/pypy3.8/pathlib.py", line 1255, in write_text
with self.open(mode='w', encoding=encoding, errors=errors) as f:
File "/usr/lib/pypy3.8/pathlib.py", line 1223, in open
opener=self._opener)
File "/usr/lib/pypy3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
PermissionError: [Errno 13] Permission denied: PosixPath('/tmp/z/lib/pypy3.8/site-packages/_virtualenv.pth')
```
</issue>
<code>
[start of src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py]
1 from __future__ import absolute_import, unicode_literals
2
3 import abc
4
5 from six import add_metaclass
6
7 from virtualenv.create.describe import PosixSupports, Python3Supports, WindowsSupports
8 from virtualenv.create.via_global_ref.builtin.ref import PathRefToDest
9 from virtualenv.util.path import Path
10
11 from .common import PyPy
12
13
14 @add_metaclass(abc.ABCMeta)
15 class PyPy3(PyPy, Python3Supports):
16 @classmethod
17 def exe_stem(cls):
18 return "pypy3"
19
20 @classmethod
21 def exe_names(cls, interpreter):
22 return super(PyPy3, cls).exe_names(interpreter) | {"pypy"}
23
24
25 class PyPy3Posix(PyPy3, PosixSupports):
26 """PyPy 2 on POSIX"""
27
28 @property
29 def stdlib(self):
30 """PyPy3 respects sysconfig only for the host python, virtual envs is instead lib/pythonx.y/site-packages"""
31 return self.dest / "lib" / "python{}".format(self.interpreter.version_release_str) / "site-packages"
32
33 @classmethod
34 def _shared_libs(cls):
35 return ["libpypy3-c.so", "libpypy3-c.dylib"]
36
37 def to_lib(self, src):
38 return self.dest / "lib" / src.name
39
40 @classmethod
41 def sources(cls, interpreter):
42 for src in super(PyPy3Posix, cls).sources(interpreter):
43 yield src
44 host_lib = Path(interpreter.system_prefix) / "lib"
45 if host_lib.exists() and host_lib.is_dir():
46 for path in host_lib.iterdir():
47 yield PathRefToDest(path, dest=cls.to_lib)
48
49
50 class Pypy3Windows(PyPy3, WindowsSupports):
51 """PyPy 2 on Windows"""
52
53 @property
54 def stdlib(self):
55 """PyPy3 respects sysconfig only for the host python, virtual envs is instead Lib/site-packages"""
56 return self.dest / "Lib" / "site-packages"
57
58 @property
59 def bin_dir(self):
60 """PyPy3 needs to fallback to pypy definition"""
61 return self.dest / "Scripts"
62
63 @classmethod
64 def _shared_libs(cls):
65 return ["libpypy3-c.dll", "libffi-7.dll"]
66
[end of src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py b/src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py
--- a/src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py
+++ b/src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py
@@ -28,7 +28,7 @@
@property
def stdlib(self):
"""PyPy3 respects sysconfig only for the host python, virtual envs is instead lib/pythonx.y/site-packages"""
- return self.dest / "lib" / "python{}".format(self.interpreter.version_release_str) / "site-packages"
+ return self.dest / "lib" / "pypy{}".format(self.interpreter.version_release_str) / "site-packages"
@classmethod
def _shared_libs(cls):
@@ -41,9 +41,19 @@
def sources(cls, interpreter):
for src in super(PyPy3Posix, cls).sources(interpreter):
yield src
+ # Also copy/symlink anything under prefix/lib, which, for "portable"
+ # PyPy builds, includes the tk,tcl runtime and a number of shared
+ # objects. In distro-specific builds or on conda this should be empty
+ # (on PyPy3.8+ it will, like on CPython, hold the stdlib).
host_lib = Path(interpreter.system_prefix) / "lib"
+ stdlib = Path(interpreter.system_stdlib)
if host_lib.exists() and host_lib.is_dir():
for path in host_lib.iterdir():
+ if stdlib == path:
+ # For PyPy3.8+ the stdlib lives in lib/pypy3.8
+ # We need to avoid creating a symlink to it since that
+ # will defeat the purpose of a virtualenv
+ continue
yield PathRefToDest(path, dest=cls.to_lib)
| {"golden_diff": "diff --git a/src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py b/src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py\n--- a/src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py\n+++ b/src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py\n@@ -28,7 +28,7 @@\n @property\n def stdlib(self):\n \"\"\"PyPy3 respects sysconfig only for the host python, virtual envs is instead lib/pythonx.y/site-packages\"\"\"\n- return self.dest / \"lib\" / \"python{}\".format(self.interpreter.version_release_str) / \"site-packages\"\n+ return self.dest / \"lib\" / \"pypy{}\".format(self.interpreter.version_release_str) / \"site-packages\"\n \n @classmethod\n def _shared_libs(cls):\n@@ -41,9 +41,19 @@\n def sources(cls, interpreter):\n for src in super(PyPy3Posix, cls).sources(interpreter):\n yield src\n+ # Also copy/symlink anything under prefix/lib, which, for \"portable\"\n+ # PyPy builds, includes the tk,tcl runtime and a number of shared\n+ # objects. In distro-specific builds or on conda this should be empty\n+ # (on PyPy3.8+ it will, like on CPython, hold the stdlib).\n host_lib = Path(interpreter.system_prefix) / \"lib\"\n+ stdlib = Path(interpreter.system_stdlib)\n if host_lib.exists() and host_lib.is_dir():\n for path in host_lib.iterdir():\n+ if stdlib == path:\n+ # For PyPy3.8+ the stdlib lives in lib/pypy3.8\n+ # We need to avoid creating a symlink to it since that\n+ # will defeat the purpose of a virtualenv\n+ continue\n yield PathRefToDest(path, dest=cls.to_lib)\n", "issue": "Incorrect (broken) virtualenv layout with pypy3.8's new layout\n**Issue**\r\n\r\nPyPy3.8 (currently 7.3.6rc1) supports a new install layout that resembles CPython more. That is, `sys.prefix` no longer needs being isolated, and site-packages are found in `$prefix/lib/pypy3.8/site-packages`. However, virtualenv tries to symlink everything from `/usr/lib` including the `pypy3.8` directory. 
As a result, the user can't write to the site-packages directory in the venv.\r\n\r\nI haven't tried running it as root but I can imagine it doing major mess if it virtualenv doesn't take any precautions from writing into system directories.\r\n\r\n**Environment**\r\n\r\nProvide at least:\r\n- OS: Gentoo Linux\r\n- ``pip list`` of the host python where ``virtualenv`` is installed: [pip-list.txt](https://github.com/pypa/virtualenv/files/7167321/pip-list.txt)\r\n\r\n\r\n**Output of the virtual environment creation**\r\n\r\nMake sure to run the creation with `-vvv --with-traceback`:\r\n\r\nFull output: [output.txt](https://github.com/pypa/virtualenv/files/7167331/output.txt)\r\n\r\ntail:\r\n```\r\n1048 create virtualenv import hook file /tmp/z/lib/pypy3.8/site-packages/_virtualenv.pth [DEBUG api:95]\r\nTraceback (most recent call last):\r\n File \"/usr/lib/pypy3.8/runpy.py\", line 198, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/pypy3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/__main__.py\", line 80, in <module>\r\n run_with_catch() # pragma: no cov\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/__main__.py\", line 65, in run_with_catch\r\n run(args, options, env)\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/__main__.py\", line 18, in run\r\n session = cli_run(args, options, env)\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/run/__init__.py\", line 32, in cli_run\r\n of_session.run()\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/run/session.py\", line 46, in run\r\n self._create()\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/run/session.py\", line 53, in _create\r\n self.creator.run()\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/create/creator.py\", line 171, in run\r\n self.create()\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/create/via_global_ref/builtin/via_global_self_do.py\", line 101, in create\r\n super(ViaGlobalRefVirtualenvBuiltin, self).create()\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/create/via_global_ref/api.py\", line 89, in create\r\n self.install_patch()\r\n File \"/usr/lib/pypy3.8/site-packages/virtualenv/create/via_global_ref/api.py\", line 96, in install_patch\r\n pth.write_text(\"import _virtualenv\")\r\n File \"/usr/lib/pypy3.8/pathlib.py\", line 1255, in write_text\r\n with self.open(mode='w', encoding=encoding, errors=errors) as f:\r\n File \"/usr/lib/pypy3.8/pathlib.py\", line 1223, in open\r\n opener=self._opener)\r\n File \"/usr/lib/pypy3.8/pathlib.py\", line 1078, in _opener\r\n return self._accessor.open(self, flags, mode)\r\nPermissionError: [Errno 13] Permission denied: PosixPath('/tmp/z/lib/pypy3.8/site-packages/_virtualenv.pth')\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport abc\n\nfrom six import add_metaclass\n\nfrom virtualenv.create.describe import PosixSupports, Python3Supports, WindowsSupports\nfrom virtualenv.create.via_global_ref.builtin.ref import PathRefToDest\nfrom virtualenv.util.path import Path\n\nfrom .common import PyPy\n\n\n@add_metaclass(abc.ABCMeta)\nclass PyPy3(PyPy, Python3Supports):\n @classmethod\n def exe_stem(cls):\n return \"pypy3\"\n\n @classmethod\n def exe_names(cls, interpreter):\n return super(PyPy3, cls).exe_names(interpreter) | {\"pypy\"}\n\n\nclass PyPy3Posix(PyPy3, PosixSupports):\n \"\"\"PyPy 2 on POSIX\"\"\"\n\n @property\n def stdlib(self):\n \"\"\"PyPy3 respects sysconfig only for the host python, 
virtual envs is instead lib/pythonx.y/site-packages\"\"\"\n return self.dest / \"lib\" / \"python{}\".format(self.interpreter.version_release_str) / \"site-packages\"\n\n @classmethod\n def _shared_libs(cls):\n return [\"libpypy3-c.so\", \"libpypy3-c.dylib\"]\n\n def to_lib(self, src):\n return self.dest / \"lib\" / src.name\n\n @classmethod\n def sources(cls, interpreter):\n for src in super(PyPy3Posix, cls).sources(interpreter):\n yield src\n host_lib = Path(interpreter.system_prefix) / \"lib\"\n if host_lib.exists() and host_lib.is_dir():\n for path in host_lib.iterdir():\n yield PathRefToDest(path, dest=cls.to_lib)\n\n\nclass Pypy3Windows(PyPy3, WindowsSupports):\n \"\"\"PyPy 2 on Windows\"\"\"\n\n @property\n def stdlib(self):\n \"\"\"PyPy3 respects sysconfig only for the host python, virtual envs is instead Lib/site-packages\"\"\"\n return self.dest / \"Lib\" / \"site-packages\"\n\n @property\n def bin_dir(self):\n \"\"\"PyPy3 needs to fallback to pypy definition\"\"\"\n return self.dest / \"Scripts\"\n\n @classmethod\n def _shared_libs(cls):\n return [\"libpypy3-c.dll\", \"libffi-7.dll\"]\n", "path": "src/virtualenv/create/via_global_ref/builtin/pypy/pypy3.py"}]} | 2,099 | 439 |
gh_patches_debug_36026 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-2094 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Save phone number received from login.gov
### Issue description
For the user profile feature, we will need the phone number from login.gov and need it to appear on the connected Contact object. We should save the phone number to both the user and the connected contact as soon as we get it, but only if the phone number isn't already filled in. Users can update the phone number as needed in our system, and if they already have a phone number filled in manually, we should trust that it is probably a better one to contact them on anyway.
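As a rough sketch of the intended behavior (illustrative only; it assumes the existing one-to-one `Contact.user` relation and the `phone` fields already present on both models), the sync could live in `Contact.save()`:

```python
def save(self, *args, **kwargs):
    super().save(*args, **kwargs)
    # Only copy the contact's phone to the related user when the user
    # doesn't already have one; a manually entered number wins.
    if self.user and self.phone and not self.user.phone:
        self.user.phone = self.phone
        self.user.save()
```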
### Acceptance criteria
- [ ] Save phone number from login.gov to a user's Contact
- [ ] If phone number is already filled in for the associated contact, update the user and leave the contact alone
- [ ] On saving contact, if contact has a related user with no phone number, update user
- [ ] Update unit tests
### Additional context
_No response_
### Links to other issues
blocks: #1807
</issue>
<code>
[start of src/registrar/models/contact.py]
1 from django.db import models
2
3 from .utility.time_stamped_model import TimeStampedModel
4
5 from phonenumber_field.modelfields import PhoneNumberField # type: ignore
6
7
8 class Contact(TimeStampedModel):
9 """Contact information follows a similar pattern for each contact."""
10
11 user = models.OneToOneField(
12 "registrar.User",
13 null=True,
14 blank=True,
15 on_delete=models.SET_NULL,
16 )
17
18 first_name = models.CharField(
19 null=True,
20 blank=True,
21 verbose_name="first name",
22 db_index=True,
23 )
24 middle_name = models.CharField(
25 null=True,
26 blank=True,
27 )
28 last_name = models.CharField(
29 null=True,
30 blank=True,
31 verbose_name="last name",
32 db_index=True,
33 )
34 title = models.CharField(
35 null=True,
36 blank=True,
37 verbose_name="title / role",
38 )
39 email = models.EmailField(
40 null=True,
41 blank=True,
42 db_index=True,
43 max_length=320,
44 )
45 phone = PhoneNumberField(
46 null=True,
47 blank=True,
48 db_index=True,
49 )
50
51 def _get_all_relations(self):
52 """Returns an array of all fields which are relations"""
53 return [f.name for f in self._meta.get_fields() if f.is_relation]
54
55 def has_more_than_one_join(self, expected_relation):
56 """Helper for finding whether an object is joined more than once.
57 expected_relation is the one relation with one expected join"""
58 # all_relations is the list of all_relations (from contact) to be checked for existing joins
59 all_relations = self._get_all_relations()
60 return any(self._has_more_than_one_join_per_relation(rel, expected_relation) for rel in all_relations)
61
62 def _has_more_than_one_join_per_relation(self, relation, expected_relation):
63 """Helper for finding whether an object is joined more than once."""
64 # threshold is the number of related objects that are acceptable
65 # when determining if related objects exist. threshold is 0 for most
66 # relationships. if the relationship is expected_relation, we know that
67 # there is already exactly 1 acceptable relationship (the one we are
68 # attempting to delete), so the threshold is 1
69 threshold = 1 if relation == expected_relation else 0
70
71 # Raise a KeyError if rel is not a defined field on the db_obj model
72 # This will help catch any errors in relation passed.
73 if relation not in [field.name for field in self._meta.get_fields()]:
74 raise KeyError(f"{relation} is not a defined field on the {self._meta.model_name} model.")
75
76 # if attr rel in db_obj is not None, then test if reference object(s) exist
77 if getattr(self, relation) is not None:
78 field = self._meta.get_field(relation)
79 if isinstance(field, models.OneToOneField):
80 # if the rel field is a OneToOne field, then we have already
81 # determined that the object exists (is not None)
82 # so return True unless the relation being tested is the expected_relation
83 is_not_expected_relation = relation != expected_relation
84 return is_not_expected_relation
85 elif isinstance(field, models.ForeignObjectRel):
86 # if the rel field is a ManyToOne or ManyToMany, then we need
87 # to determine if the count of related objects is greater than
88 # the threshold
89 return getattr(self, relation).count() > threshold
90 return False
91
92 def get_formatted_name(self):
93 """Returns the contact's name in Western order."""
94 names = [n for n in [self.first_name, self.middle_name, self.last_name] if n]
95 return " ".join(names) if names else "Unknown"
96
97 def has_contact_info(self):
98 return bool(self.title or self.email or self.phone)
99
100 def save(self, *args, **kwargs):
101 # Call the parent class's save method to perform the actual save
102 super().save(*args, **kwargs)
103
104 # Update the related User object's first_name and last_name
105 if self.user and (not self.user.first_name or not self.user.last_name):
106 self.user.first_name = self.first_name
107 self.user.last_name = self.last_name
108 self.user.save()
109
110 def __str__(self):
111 if self.first_name or self.last_name:
112 return self.get_formatted_name()
113 elif self.email:
114 return self.email
115 elif self.pk:
116 return str(self.pk)
117 else:
118 return ""
119
[end of src/registrar/models/contact.py]
[start of src/djangooidc/backends.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import logging
5
6 from django.conf import settings
7 from django.contrib.auth import get_user_model
8 from django.contrib.auth.backends import ModelBackend
9 from django.utils import timezone
10
11 logger = logging.getLogger(__name__)
12
13
14 class OpenIdConnectBackend(ModelBackend):
15 """
16 This backend checks a previously performed OIDC authentication.
17 If it is OK and the user already exists in the database, it is returned.
18 If it is OK and user does not exist in the database, it is created and
19 returned unless setting OIDC_CREATE_UNKNOWN_USER is False.
20 In all other cases, None is returned.
21 """
22
23 def authenticate(self, request, **kwargs):
24 logger.debug("kwargs %s" % kwargs)
25 user = None
26 if not kwargs or "sub" not in kwargs.keys():
27 return user
28
29 UserModel = get_user_model()
30 username = self.clean_username(kwargs["sub"])
31
32 # Some OP may actually choose to withhold some information, so we must
33 # test if it is present
34 openid_data = {"last_login": timezone.now()}
35 openid_data["first_name"] = kwargs.get("given_name", "")
36 openid_data["last_name"] = kwargs.get("family_name", "")
37 openid_data["email"] = kwargs.get("email", "")
38 openid_data["phone"] = kwargs.get("phone", "")
39
40 # Note that this could be accomplished in one try-except clause, but
41 # instead we use get_or_create when creating unknown users since it has
42 # built-in safeguards for multiple threads.
43 if getattr(settings, "OIDC_CREATE_UNKNOWN_USER", True):
44 args = {
45 UserModel.USERNAME_FIELD: username,
46 # defaults _will_ be updated, these are not fallbacks
47 "defaults": openid_data,
48 }
49
50 user, created = UserModel.objects.get_or_create(**args)
51
52 if not created:
53 # If user exists, update existing user
54 self.update_existing_user(user, args["defaults"])
55 else:
56 # If user is created, configure the user
57 user = self.configure_user(user, **kwargs)
58 else:
59 try:
60 user = UserModel.objects.get_by_natural_key(username)
61 except UserModel.DoesNotExist:
62 return None
63 # run this callback for a each login
64 user.on_each_login()
65 return user
66
67 def update_existing_user(self, user, kwargs):
68 """Update other fields without overwriting first_name and last_name.
69 Overwrite first_name and last_name if not empty string"""
70
71 for key, value in kwargs.items():
72 # Check if the key is not first_name or last_name or value is not empty string
73 if key not in ["first_name", "last_name"] or value:
74 setattr(user, key, value)
75 user.save()
76
77 def clean_username(self, username):
78 """
79 Performs any cleaning on the "username" prior to using it to get or
80 create the user object. Returns the cleaned username.
81 """
82 return username
83
84 def configure_user(self, user, **kwargs):
85 """
86 Configures a user after creation and returns the updated user.
87 """
88 user.set_unusable_password()
89 return user
90
[end of src/djangooidc/backends.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/djangooidc/backends.py b/src/djangooidc/backends.py
--- a/src/djangooidc/backends.py
+++ b/src/djangooidc/backends.py
@@ -65,13 +65,31 @@
return user
def update_existing_user(self, user, kwargs):
- """Update other fields without overwriting first_name and last_name.
- Overwrite first_name and last_name if not empty string"""
+ """
+ Update user fields without overwriting certain fields.
+
+ Args:
+ user: User object to be updated.
+ kwargs: Dictionary containing fields to update and their new values.
+ Note:
+ This method updates user fields while preserving the values of 'first_name',
+ 'last_name', and 'phone' fields, unless specific conditions are met.
+
+ - 'first_name', 'last_name' or 'phone' will be updated if the provided value is not empty.
+ """
+
+ fields_to_check = ["first_name", "last_name", "phone"]
+
+ # Iterate over fields to update
for key, value in kwargs.items():
- # Check if the key is not first_name or last_name or value is not empty string
- if key not in ["first_name", "last_name"] or value:
+ # Check if the field is not 'first_name', 'last_name', or 'phone',
+ # or if it's 'first_name' or 'last_name' or 'phone' and the provided value is not empty
+ if key not in fields_to_check or (key in fields_to_check and value):
+ # Update the corresponding attribute of the user object
setattr(user, key, value)
+
+ # Save the user object with the updated fields
user.save()
def clean_username(self, username):
diff --git a/src/registrar/models/contact.py b/src/registrar/models/contact.py
--- a/src/registrar/models/contact.py
+++ b/src/registrar/models/contact.py
@@ -101,11 +101,23 @@
# Call the parent class's save method to perform the actual save
super().save(*args, **kwargs)
- # Update the related User object's first_name and last_name
- if self.user and (not self.user.first_name or not self.user.last_name):
- self.user.first_name = self.first_name
- self.user.last_name = self.last_name
- self.user.save()
+ if self.user:
+ updated = False
+
+ # Update first name and last name if necessary
+ if not self.user.first_name or not self.user.last_name:
+ self.user.first_name = self.first_name
+ self.user.last_name = self.last_name
+ updated = True
+
+ # Update phone if necessary
+ if not self.user.phone:
+ self.user.phone = self.phone
+ updated = True
+
+ # Save user if any updates were made
+ if updated:
+ self.user.save()
def __str__(self):
if self.first_name or self.last_name:
| {"golden_diff": "diff --git a/src/djangooidc/backends.py b/src/djangooidc/backends.py\n--- a/src/djangooidc/backends.py\n+++ b/src/djangooidc/backends.py\n@@ -65,13 +65,31 @@\n return user\n \n def update_existing_user(self, user, kwargs):\n- \"\"\"Update other fields without overwriting first_name and last_name.\n- Overwrite first_name and last_name if not empty string\"\"\"\n+ \"\"\"\n+ Update user fields without overwriting certain fields.\n+\n+ Args:\n+ user: User object to be updated.\n+ kwargs: Dictionary containing fields to update and their new values.\n \n+ Note:\n+ This method updates user fields while preserving the values of 'first_name',\n+ 'last_name', and 'phone' fields, unless specific conditions are met.\n+\n+ - 'first_name', 'last_name' or 'phone' will be updated if the provided value is not empty.\n+ \"\"\"\n+\n+ fields_to_check = [\"first_name\", \"last_name\", \"phone\"]\n+\n+ # Iterate over fields to update\n for key, value in kwargs.items():\n- # Check if the key is not first_name or last_name or value is not empty string\n- if key not in [\"first_name\", \"last_name\"] or value:\n+ # Check if the field is not 'first_name', 'last_name', or 'phone',\n+ # or if it's 'first_name' or 'last_name' or 'phone' and the provided value is not empty\n+ if key not in fields_to_check or (key in fields_to_check and value):\n+ # Update the corresponding attribute of the user object\n setattr(user, key, value)\n+\n+ # Save the user object with the updated fields\n user.save()\n \n def clean_username(self, username):\ndiff --git a/src/registrar/models/contact.py b/src/registrar/models/contact.py\n--- a/src/registrar/models/contact.py\n+++ b/src/registrar/models/contact.py\n@@ -101,11 +101,23 @@\n # Call the parent class's save method to perform the actual save\n super().save(*args, **kwargs)\n \n- # Update the related User object's first_name and last_name\n- if self.user and (not self.user.first_name or not self.user.last_name):\n- self.user.first_name = self.first_name\n- self.user.last_name = self.last_name\n- self.user.save()\n+ if self.user:\n+ updated = False\n+\n+ # Update first name and last name if necessary\n+ if not self.user.first_name or not self.user.last_name:\n+ self.user.first_name = self.first_name\n+ self.user.last_name = self.last_name\n+ updated = True\n+\n+ # Update phone if necessary\n+ if not self.user.phone:\n+ self.user.phone = self.phone\n+ updated = True\n+\n+ # Save user if any updates were made\n+ if updated:\n+ self.user.save()\n \n def __str__(self):\n if self.first_name or self.last_name:\n", "issue": "Save phone number received from login.gov\n### Issue description\n\nFor the user profile feature, we will need phone number from login.gov and need it to appear on the connected Contact object. We should save the phone number both the user and connected contact as soon as we get it, but only if the phone number isn't already filled in. Users can update the phone number as needed in our system and we should trust if they already have a phone number filled in manually then it is probably a better one to contact them on anyways. 
\n\n### Acceptance criteria\n\n- [ ] Save phone number from login.gov to a user's Contact\n- [ ] If phone number is already filled in for the associated contact, update the user and leave the contact alone\n- [ ] On saving contact, if contact has a related user with no phone number, update user\n- [ ] Update unit tests\n\n### Additional context\n\n_No response_\n\n### Links to other issues\n\nblocks: #1807 \n", "before_files": [{"content": "from django.db import models\n\nfrom .utility.time_stamped_model import TimeStampedModel\n\nfrom phonenumber_field.modelfields import PhoneNumberField # type: ignore\n\n\nclass Contact(TimeStampedModel):\n \"\"\"Contact information follows a similar pattern for each contact.\"\"\"\n\n user = models.OneToOneField(\n \"registrar.User\",\n null=True,\n blank=True,\n on_delete=models.SET_NULL,\n )\n\n first_name = models.CharField(\n null=True,\n blank=True,\n verbose_name=\"first name\",\n db_index=True,\n )\n middle_name = models.CharField(\n null=True,\n blank=True,\n )\n last_name = models.CharField(\n null=True,\n blank=True,\n verbose_name=\"last name\",\n db_index=True,\n )\n title = models.CharField(\n null=True,\n blank=True,\n verbose_name=\"title / role\",\n )\n email = models.EmailField(\n null=True,\n blank=True,\n db_index=True,\n max_length=320,\n )\n phone = PhoneNumberField(\n null=True,\n blank=True,\n db_index=True,\n )\n\n def _get_all_relations(self):\n \"\"\"Returns an array of all fields which are relations\"\"\"\n return [f.name for f in self._meta.get_fields() if f.is_relation]\n\n def has_more_than_one_join(self, expected_relation):\n \"\"\"Helper for finding whether an object is joined more than once.\n expected_relation is the one relation with one expected join\"\"\"\n # all_relations is the list of all_relations (from contact) to be checked for existing joins\n all_relations = self._get_all_relations()\n return any(self._has_more_than_one_join_per_relation(rel, expected_relation) for rel in all_relations)\n\n def _has_more_than_one_join_per_relation(self, relation, expected_relation):\n \"\"\"Helper for finding whether an object is joined more than once.\"\"\"\n # threshold is the number of related objects that are acceptable\n # when determining if related objects exist. threshold is 0 for most\n # relationships. 
if the relationship is expected_relation, we know that\n # there is already exactly 1 acceptable relationship (the one we are\n # attempting to delete), so the threshold is 1\n threshold = 1 if relation == expected_relation else 0\n\n # Raise a KeyError if rel is not a defined field on the db_obj model\n # This will help catch any errors in relation passed.\n if relation not in [field.name for field in self._meta.get_fields()]:\n raise KeyError(f\"{relation} is not a defined field on the {self._meta.model_name} model.\")\n\n # if attr rel in db_obj is not None, then test if reference object(s) exist\n if getattr(self, relation) is not None:\n field = self._meta.get_field(relation)\n if isinstance(field, models.OneToOneField):\n # if the rel field is a OneToOne field, then we have already\n # determined that the object exists (is not None)\n # so return True unless the relation being tested is the expected_relation\n is_not_expected_relation = relation != expected_relation\n return is_not_expected_relation\n elif isinstance(field, models.ForeignObjectRel):\n # if the rel field is a ManyToOne or ManyToMany, then we need\n # to determine if the count of related objects is greater than\n # the threshold\n return getattr(self, relation).count() > threshold\n return False\n\n def get_formatted_name(self):\n \"\"\"Returns the contact's name in Western order.\"\"\"\n names = [n for n in [self.first_name, self.middle_name, self.last_name] if n]\n return \" \".join(names) if names else \"Unknown\"\n\n def has_contact_info(self):\n return bool(self.title or self.email or self.phone)\n\n def save(self, *args, **kwargs):\n # Call the parent class's save method to perform the actual save\n super().save(*args, **kwargs)\n\n # Update the related User object's first_name and last_name\n if self.user and (not self.user.first_name or not self.user.last_name):\n self.user.first_name = self.first_name\n self.user.last_name = self.last_name\n self.user.save()\n\n def __str__(self):\n if self.first_name or self.last_name:\n return self.get_formatted_name()\n elif self.email:\n return self.email\n elif self.pk:\n return str(self.pk)\n else:\n return \"\"\n", "path": "src/registrar/models/contact.py"}, {"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport logging\n\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.backends import ModelBackend\nfrom django.utils import timezone\n\nlogger = logging.getLogger(__name__)\n\n\nclass OpenIdConnectBackend(ModelBackend):\n \"\"\"\n This backend checks a previously performed OIDC authentication.\n If it is OK and the user already exists in the database, it is returned.\n If it is OK and user does not exist in the database, it is created and\n returned unless setting OIDC_CREATE_UNKNOWN_USER is False.\n In all other cases, None is returned.\n \"\"\"\n\n def authenticate(self, request, **kwargs):\n logger.debug(\"kwargs %s\" % kwargs)\n user = None\n if not kwargs or \"sub\" not in kwargs.keys():\n return user\n\n UserModel = get_user_model()\n username = self.clean_username(kwargs[\"sub\"])\n\n # Some OP may actually choose to withhold some information, so we must\n # test if it is present\n openid_data = {\"last_login\": timezone.now()}\n openid_data[\"first_name\"] = kwargs.get(\"given_name\", \"\")\n openid_data[\"last_name\"] = kwargs.get(\"family_name\", \"\")\n openid_data[\"email\"] = kwargs.get(\"email\", \"\")\n openid_data[\"phone\"] = kwargs.get(\"phone\", \"\")\n\n # Note that 
this could be accomplished in one try-except clause, but\n # instead we use get_or_create when creating unknown users since it has\n # built-in safeguards for multiple threads.\n if getattr(settings, \"OIDC_CREATE_UNKNOWN_USER\", True):\n args = {\n UserModel.USERNAME_FIELD: username,\n # defaults _will_ be updated, these are not fallbacks\n \"defaults\": openid_data,\n }\n\n user, created = UserModel.objects.get_or_create(**args)\n\n if not created:\n # If user exists, update existing user\n self.update_existing_user(user, args[\"defaults\"])\n else:\n # If user is created, configure the user\n user = self.configure_user(user, **kwargs)\n else:\n try:\n user = UserModel.objects.get_by_natural_key(username)\n except UserModel.DoesNotExist:\n return None\n # run this callback for a each login\n user.on_each_login()\n return user\n\n def update_existing_user(self, user, kwargs):\n \"\"\"Update other fields without overwriting first_name and last_name.\n Overwrite first_name and last_name if not empty string\"\"\"\n\n for key, value in kwargs.items():\n # Check if the key is not first_name or last_name or value is not empty string\n if key not in [\"first_name\", \"last_name\"] or value:\n setattr(user, key, value)\n user.save()\n\n def clean_username(self, username):\n \"\"\"\n Performs any cleaning on the \"username\" prior to using it to get or\n create the user object. Returns the cleaned username.\n \"\"\"\n return username\n\n def configure_user(self, user, **kwargs):\n \"\"\"\n Configures a user after creation and returns the updated user.\n \"\"\"\n user.set_unusable_password()\n return user\n", "path": "src/djangooidc/backends.py"}]} | 2,829 | 684 |
gh_patches_debug_4590 | rasdani/github-patches | git_diff | cowrie__cowrie-994 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
free -m command results in stack trace and cowrie freezes
From within cowrie (docker image current as of 7 Dec 2018), when the command "free -m" is run it results in the stack trace shown below. Cowrie freezes, and since many bots use free for info gathering, cowrie may miss everything subsequent to this command.
Environment:
Ubuntu 16.04
Python 3.5.2
docker pull cowrie/cowrie
sudo iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-port 2222
docker run -it -p 2222:2222 -p 2223:2223 cowrie/cowrie
Console Error:
2018-12-07T04:53:52+0000 [SSHChannel session (0) on SSHService b'ssh-connection' on HoneyPotSSHTransport,1,172.17.0
.1] CMD: free -m
2018-12-07T04:53:52+0000 [SSHChannel session (0) on SSHService b'ssh-connection' on HoneyPotSSHTransport,1,172.17.0
.1] Command found: free -m
2018-12-07T04:53:52+0000 [SSHChannel session (0) on SSHService b'ssh-connection' on HoneyPotSSHTransport,1,172.17.0
.1] Unhandled Error
Traceback (most recent call last):
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/context.py", line 122, in callWithCon
text
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/context.py", line 85, in callWithCont
ext
return func(*args,**kw)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/ssh/service.py", line 45, in packetRec
eived
return f(packet)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/ssh/connection.py", line 249, in ssh_C
HANNEL_DATA
log.callWithLogger(channel, channel.dataReceived, data)
--- <exception caught here> ---
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/log.py", line 103, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/log.py", line 86, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/context.py", line 122, in callWithCon
text
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/context.py", line 85, in callWithCont
ext
return func(*args,**kw)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/ssh/session.py", line 112, in dataRece
ived
self.client.transport.write(data)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/ssh/session.py", line 163, in write
self.proto.dataReceived(data)
File "/cowrie/cowrie-git/src/cowrie/insults/insults.py", line 104, in dataReceived
insults.ServerProtocol.dataReceived(self, data)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/insults/insults.py", line 537, in data
Received
self.terminalProtocol.keystrokeReceived(ch, None)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/recvline.py", line 225, in keystrokeRe
ceived
m()
File "/cowrie/cowrie-git/src/cowrie/shell/protocol.py", line 325, in handle_RETURN
return recvline.RecvLine.handle_RETURN(self)
File "/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/recvline.py", line 292, in handle_RETU
RN
self.lineReceived(line)
File "/cowrie/cowrie-git/src/cowrie/shell/protocol.py", line 185, in lineReceived
self.cmdstack[-1].lineReceived(line)
File "/cowrie/cowrie-git/src/cowrie/shell/honeypot.py", line 106, in lineReceived
self.runCommand()
File "/cowrie/cowrie-git/src/cowrie/shell/honeypot.py", line 215, in runCommand
self.protocol.call_command(pp, cmdclass, *cmd_array[0]['rargs'])
File "/cowrie/cowrie-git/src/cowrie/shell/protocol.py", line 306, in call_command
HoneyPotBaseProtocol.call_command(self, pp, cmd, *args)
File "/cowrie/cowrie-git/src/cowrie/shell/protocol.py", line 194, in call_command
obj.start()
File "/cowrie/cowrie-git/src/cowrie/shell/command.py", line 128, in start
self.call()
File "/cowrie/cowrie-git/src/cowrie/commands/free.py", line 41, in call
self.do_free(fmt='megabytes')
File "/cowrie/cowrie-git/src/cowrie/commands/free.py", line 60, in do_free
for key, value in raw_mem_stats.iteritems():
builtins.AttributeError: 'dict' object has no attribute 'iteritems'
</issue>
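The traceback boils down to a Python 2/3 incompatibility: `dict.iteritems()` was removed in Python 3, where `dict.items()` already returns a lightweight view. A minimal sketch of the failing pattern and its portable replacement — the dictionary contents below are invented stand-ins for the /proc/meminfo values, not real output:

```
# Invented stand-in for the meminfo stats dict built by free.py (values in KB).
raw_mem_stats = {"MemTotal": 2048000, "MemFree": 512000}

# Python 2 only -- on Python 3 this raises the AttributeError seen above:
#     for key, value in raw_mem_stats.iteritems(): ...

# Portable form that works on both Python 2 and Python 3:
for key, value in raw_mem_stats.items():
    raw_mem_stats[key] = int(value / 1000)

print(raw_mem_stats)  # {'MemTotal': 2048, 'MemFree': 512}
```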
<code>
[start of src/cowrie/commands/free.py]
1 # Copyright (c) 2015 Michel Oosterhof <[email protected]>
2 # All rights reserved.
3
4 """
5 This module ...
6 """
7
8 from __future__ import absolute_import, division
9
10 import getopt
11
12 from cowrie.shell.command import HoneyPotCommand
13
14 commands = {}
15
16 FREE_OUTPUT = """ total used free shared buff/cache available
17 Mem:{MemTotal:>15}{calc_total_used:>12}{MemFree:>12}{Shmem:>12}{calc_total_buffers_and_cache:>12}{MemAvailable:>12}
18 Swap:{SwapTotal:>14}{calc_swap_used:>12}{SwapFree:>12}
19 """
20
21
22 class command_free(HoneyPotCommand):
23 """
24 free
25 """
26
27 def call(self):
28 # Parse options or display no files
29 try:
30 opts, args = getopt.getopt(self.args, 'mh')
31 except getopt.GetoptError:
32 self.do_free()
33 return
34
35 # Parse options
36 for o, a in opts:
37 if o in ('-h'):
38 self.do_free(fmt='human')
39 return
40 elif o in ('-m'):
41 self.do_free(fmt='megabytes')
42 return
43 self.do_free()
44
45 def do_free(self, fmt='kilobytes'):
46 """
47 print free statistics
48 """
49
50 # Get real host memstats and add the calculated fields
51 raw_mem_stats = self.get_free_stats()
52 raw_mem_stats['calc_total_buffers_and_cache'] = raw_mem_stats['Buffers'] + raw_mem_stats['Cached']
53 raw_mem_stats['calc_total_used'] = raw_mem_stats['MemTotal'] - (
54 raw_mem_stats['MemFree'] + raw_mem_stats['calc_total_buffers_and_cache']
55 )
56 raw_mem_stats['calc_swap_used'] = raw_mem_stats['SwapTotal'] - raw_mem_stats['SwapFree']
57
58 if fmt == 'megabytes':
59 # Transform KB to MB
60 for key, value in raw_mem_stats.iteritems():
61 raw_mem_stats[key] = int(value / 1000)
62 elif fmt == 'human':
63 magnitude = ["B", "M", "G", "T", "Z"]
64 for key, value in raw_mem_stats.iteritems():
65 current_magnitude = 0
66
67 # Keep dividing until we get a sane magnitude
68 while(value >= 1000 and current_magnitude < len(magnitude)):
69 value = round(float(value / 1000), 1)
70 current_magnitude += 1
71
72 # Format to string and append value with new magnitude
73 raw_mem_stats[key] = str("{:g}{}".format(value, magnitude[current_magnitude]))
74
75 # Write the output to screen
76 self.write(FREE_OUTPUT.format(**raw_mem_stats))
77
78 def get_free_stats(self):
79 """
80 Get the free stats from /proc
81 """
82 needed_keys = ["Buffers", "Cached", "MemTotal", "MemFree", "SwapTotal", "SwapFree", "Shmem", "MemAvailable"]
83 mem_info_map = {}
84 with open('/proc/meminfo', 'r') as proc_file:
85 for line in proc_file:
86 tokens = line.split(':')
87
88 # Later we are going to do some math on those numbers, better not include uneeded keys for performance
89 if tokens[0] in needed_keys:
90 mem_info_map[tokens[0]] = int(tokens[1].lstrip().split(' ')[0])
91
92 # Got a map with all tokens from /proc/meminfo and sizes in KBs
93 return mem_info_map
94
95
96 commands['/usr/bin/free'] = command_free
97 commands['free'] = command_free
98
[end of src/cowrie/commands/free.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cowrie/commands/free.py b/src/cowrie/commands/free.py
--- a/src/cowrie/commands/free.py
+++ b/src/cowrie/commands/free.py
@@ -57,7 +57,7 @@
if fmt == 'megabytes':
# Transform KB to MB
- for key, value in raw_mem_stats.iteritems():
+ for key, value in raw_mem_stats.items():
raw_mem_stats[key] = int(value / 1000)
elif fmt == 'human':
magnitude = ["B", "M", "G", "T", "Z"]
| {"golden_diff": "diff --git a/src/cowrie/commands/free.py b/src/cowrie/commands/free.py\n--- a/src/cowrie/commands/free.py\n+++ b/src/cowrie/commands/free.py\n@@ -57,7 +57,7 @@\n \n if fmt == 'megabytes':\n # Transform KB to MB\n- for key, value in raw_mem_stats.iteritems():\n+ for key, value in raw_mem_stats.items():\n raw_mem_stats[key] = int(value / 1000)\n elif fmt == 'human':\n magnitude = [\"B\", \"M\", \"G\", \"T\", \"Z\"]\n", "issue": "free -m command results in stack trace and cowrie freezes\nFrom within cowrie (docker current 7 Dec 2018), when the command \"free -m\" is run it results a stack trace errors shown below. Cowrie freezes., and since many bots use free as info gathering, cowry may miss everything subsequent to this command.\r\n\r\nEnvironment:\r\nUbuntu 16.04\r\nPython 3.5.2\r\ndocker pull cowrie/cowrie\r\nsudo iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-port 2222\r\ndocker run -it -p 2222:2222 -p 2223:2223 cowrie/cowrie\r\n\r\nConsole Error:\r\n2018-12-07T04:53:52+0000 [SSHChannel session (0) on SSHService b'ssh-connection' on HoneyPotSSHTransport,1,172.17.0\r\n.1] CMD: free -m\r\n2018-12-07T04:53:52+0000 [SSHChannel session (0) on SSHService b'ssh-connection' on HoneyPotSSHTransport,1,172.17.0\r\n.1] Command found: free -m\r\n2018-12-07T04:53:52+0000 [SSHChannel session (0) on SSHService b'ssh-connection' on HoneyPotSSHTransport,1,172.17.0\r\n.1] Unhandled Error\r\n Traceback (most recent call last):\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/context.py\", line 122, in callWithCon\r\ntext\r\n return self.currentContext().callWithContext(ctx, func, *args, **kw)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/context.py\", line 85, in callWithCont\r\next\r\n return func(*args,**kw)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/ssh/service.py\", line 45, in packetRec\r\neived\r\n return f(packet)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/ssh/connection.py\", line 249, in ssh_C\r\nHANNEL_DATA\r\n log.callWithLogger(channel, channel.dataReceived, data)\r\n --- <exception caught here> ---\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/log.py\", line 103, in callWithLogger\r\n return callWithContext({\"system\": lp}, func, *args, **kw)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/log.py\", line 86, in callWithContext\r\n return context.call({ILogContext: newCtx}, func, *args, **kw)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/context.py\", line 122, in callWithCon\r\ntext\r\n return self.currentContext().callWithContext(ctx, func, *args, **kw)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/python/context.py\", line 85, in callWithCont\r\next\r\n return func(*args,**kw)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/ssh/session.py\", line 112, in dataRece\r\nived\r\n self.client.transport.write(data)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/ssh/session.py\", line 163, in write\r\n self.proto.dataReceived(data)\r\n File \"/cowrie/cowrie-git/src/cowrie/insults/insults.py\", line 104, in dataReceived\r\n insults.ServerProtocol.dataReceived(self, data)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/insults/insults.py\", line 537, in data\r\nReceived\r\n self.terminalProtocol.keystrokeReceived(ch, None)\r\n File 
\"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/recvline.py\", line 225, in keystrokeRe\r\nceived\r\n m()\r\n File \"/cowrie/cowrie-git/src/cowrie/shell/protocol.py\", line 325, in handle_RETURN\r\n return recvline.RecvLine.handle_RETURN(self)\r\n File \"/cowrie/cowrie-env/lib/python3.5/site-packages/twisted/conch/recvline.py\", line 292, in handle_RETU\r\nRN\r\n self.lineReceived(line)\r\n File \"/cowrie/cowrie-git/src/cowrie/shell/protocol.py\", line 185, in lineReceived\r\n self.cmdstack[-1].lineReceived(line)\r\n File \"/cowrie/cowrie-git/src/cowrie/shell/honeypot.py\", line 106, in lineReceived\r\n self.runCommand()\r\n File \"/cowrie/cowrie-git/src/cowrie/shell/honeypot.py\", line 215, in runCommand\r\n self.protocol.call_command(pp, cmdclass, *cmd_array[0]['rargs'])\r\n File \"/cowrie/cowrie-git/src/cowrie/shell/protocol.py\", line 306, in call_command\r\n HoneyPotBaseProtocol.call_command(self, pp, cmd, *args)\r\n File \"/cowrie/cowrie-git/src/cowrie/shell/protocol.py\", line 194, in call_command\r\n obj.start()\r\n File \"/cowrie/cowrie-git/src/cowrie/shell/command.py\", line 128, in start\r\n self.call()\r\n File \"/cowrie/cowrie-git/src/cowrie/commands/free.py\", line 41, in call\r\n self.do_free(fmt='megabytes')\r\n File \"/cowrie/cowrie-git/src/cowrie/commands/free.py\", line 60, in do_free\r\n for key, value in raw_mem_stats.iteritems():\r\n builtins.AttributeError: 'dict' object has no attribute 'iteritems'\r\n\n", "before_files": [{"content": "# Copyright (c) 2015 Michel Oosterhof <[email protected]>\n# All rights reserved.\n\n\"\"\"\nThis module ...\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport getopt\n\nfrom cowrie.shell.command import HoneyPotCommand\n\ncommands = {}\n\nFREE_OUTPUT = \"\"\" total used free shared buff/cache available\nMem:{MemTotal:>15}{calc_total_used:>12}{MemFree:>12}{Shmem:>12}{calc_total_buffers_and_cache:>12}{MemAvailable:>12}\nSwap:{SwapTotal:>14}{calc_swap_used:>12}{SwapFree:>12}\n\"\"\"\n\n\nclass command_free(HoneyPotCommand):\n \"\"\"\n free\n \"\"\"\n\n def call(self):\n # Parse options or display no files\n try:\n opts, args = getopt.getopt(self.args, 'mh')\n except getopt.GetoptError:\n self.do_free()\n return\n\n # Parse options\n for o, a in opts:\n if o in ('-h'):\n self.do_free(fmt='human')\n return\n elif o in ('-m'):\n self.do_free(fmt='megabytes')\n return\n self.do_free()\n\n def do_free(self, fmt='kilobytes'):\n \"\"\"\n print free statistics\n \"\"\"\n\n # Get real host memstats and add the calculated fields\n raw_mem_stats = self.get_free_stats()\n raw_mem_stats['calc_total_buffers_and_cache'] = raw_mem_stats['Buffers'] + raw_mem_stats['Cached']\n raw_mem_stats['calc_total_used'] = raw_mem_stats['MemTotal'] - (\n raw_mem_stats['MemFree'] + raw_mem_stats['calc_total_buffers_and_cache']\n )\n raw_mem_stats['calc_swap_used'] = raw_mem_stats['SwapTotal'] - raw_mem_stats['SwapFree']\n\n if fmt == 'megabytes':\n # Transform KB to MB\n for key, value in raw_mem_stats.iteritems():\n raw_mem_stats[key] = int(value / 1000)\n elif fmt == 'human':\n magnitude = [\"B\", \"M\", \"G\", \"T\", \"Z\"]\n for key, value in raw_mem_stats.iteritems():\n current_magnitude = 0\n\n # Keep dividing until we get a sane magnitude\n while(value >= 1000 and current_magnitude < len(magnitude)):\n value = round(float(value / 1000), 1)\n current_magnitude += 1\n\n # Format to string and append value with new magnitude\n raw_mem_stats[key] = str(\"{:g}{}\".format(value, magnitude[current_magnitude]))\n\n # Write the output 
to screen\n self.write(FREE_OUTPUT.format(**raw_mem_stats))\n\n def get_free_stats(self):\n \"\"\"\n Get the free stats from /proc\n \"\"\"\n needed_keys = [\"Buffers\", \"Cached\", \"MemTotal\", \"MemFree\", \"SwapTotal\", \"SwapFree\", \"Shmem\", \"MemAvailable\"]\n mem_info_map = {}\n with open('/proc/meminfo', 'r') as proc_file:\n for line in proc_file:\n tokens = line.split(':')\n\n # Later we are going to do some math on those numbers, better not include uneeded keys for performance\n if tokens[0] in needed_keys:\n mem_info_map[tokens[0]] = int(tokens[1].lstrip().split(' ')[0])\n\n # Got a map with all tokens from /proc/meminfo and sizes in KBs\n return mem_info_map\n\n\ncommands['/usr/bin/free'] = command_free\ncommands['free'] = command_free\n", "path": "src/cowrie/commands/free.py"}]} | 2,942 | 137 |
gh_patches_debug_65703 | rasdani/github-patches | git_diff | carpentries__amy-1793 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: assignment form queryset may return duplicate results
Introduced in v2.16, AssignmentForm contains a queryset that may yield duplicate results due to the filtering used.
</issue>
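For context on where the duplicates come from: the `assigned_to` queryset ORs a condition on the person itself with a condition on the many-to-many `groups` relation, so Django joins the group table and evaluates the filter once per joined row — a superuser who belongs to one or more groups is therefore returned once per matching row. A rough sketch of the behaviour and the usual `.distinct()` remedy is below; it assumes a configured Django environment with the AMY `workshops` app available.

```
from django.db.models import Q

from workshops.models import Person

# A superuser who is also in one or more groups satisfies the OR-ed filter
# once per joined group row, so the same Person can appear several times:
admins = Person.objects.filter(
    Q(is_superuser=True) | Q(groups__name="administrators")
)

# Appending .distinct() collapses the duplicate rows to one per person:
admins = Person.objects.filter(
    Q(is_superuser=True) | Q(groups__name="administrators")
).distinct()
```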
<code>
[start of amy/dashboard/forms.py]
1 from django import forms
2 from django.core.exceptions import ValidationError
3 from django.db.models import Q
4 from django_countries.fields import CountryField
5
6 from workshops.models import (
7 Language,
8 GenderMixin,
9 Person,
10 TrainingProgress,
11 TrainingRequirement,
12 )
13
14 from workshops.forms import BootstrapHelper
15 # this is used instead of Django Autocomplete Light widgets
16 # see issue #1330: https://github.com/swcarpentry/amy/issues/1330
17 from workshops.fields import (
18 Select2Widget,
19 ModelSelect2MultipleWidget,
20 RadioSelectWithOther,
21 )
22
23
24 class AssignmentForm(forms.Form):
25 assigned_to = forms.ModelChoiceField(
26 label="Assigned to:",
27 required=False,
28 queryset=Person.objects.filter(
29 Q(is_superuser=True) | Q(groups__name="administrators")
30 ),
31 widget=Select2Widget(),
32 )
33 helper = BootstrapHelper(
34 add_submit_button=False,
35 add_cancel_button=False,
36 wider_labels=True,
37 use_get_method=True,
38 form_id="assignment-form"
39 )
40
41
42 class AutoUpdateProfileForm(forms.ModelForm):
43 username = forms.CharField(disabled=True, required=False)
44 email = forms.CharField(
45 disabled=True, required=False,
46 label=Person._meta.get_field('email').verbose_name,
47 help_text=Person._meta.get_field('email').help_text,
48 )
49 github = forms.CharField(
50 disabled=True, required=False,
51 help_text='If you want to change your github username, please email '
52 'us at <a href="mailto:[email protected]">'
53 '[email protected]</a>.')
54
55 country = CountryField().formfield(
56 required=False,
57 help_text='Your country of residence.',
58 widget=Select2Widget,
59 )
60
61 languages = forms.ModelMultipleChoiceField(
62 label='Languages',
63 required=False,
64 queryset=Language.objects.all(),
65 widget=ModelSelect2MultipleWidget(data_view='language-lookup')
66 )
67
68 helper = BootstrapHelper(add_cancel_button=False)
69
70 class Meta:
71 model = Person
72 fields = [
73 'personal',
74 'middle',
75 'family',
76 'email',
77 'secondary_email',
78 'gender',
79 'gender_other',
80 'may_contact',
81 'publish_profile',
82 'lesson_publication_consent',
83 'country',
84 'airport',
85 'github',
86 'twitter',
87 'url',
88 'username',
89 'affiliation',
90 'domains',
91 'lessons',
92 'languages',
93 'occupation',
94 'orcid',
95 ]
96 readonly_fields = (
97 'username',
98 'github',
99 )
100 widgets = {
101 'gender': RadioSelectWithOther('gender_other'),
102 'domains': forms.CheckboxSelectMultiple(),
103 'lessons': forms.CheckboxSelectMultiple(),
104 'airport': Select2Widget,
105 }
106
107 def __init__(self, *args, **kwargs):
108 super().__init__(*args, **kwargs)
109
110 # set up a layout object for the helper
111 self.helper.layout = self.helper.build_default_layout(self)
112
113 # set up `*WithOther` widgets so that they can display additional
114 # fields inline
115 self['gender'].field.widget.other_field = self['gender_other']
116
117 # remove additional fields
118 self.helper.layout.fields.remove('gender_other')
119
120 def clean(self):
121 super().clean()
122 errors = dict()
123
124 # 1: require "other gender" field if "other" was selected in
125 # "gender" field
126 gender = self.cleaned_data.get('gender', '')
127 gender_other = self.cleaned_data.get('gender_other', '')
128 if gender == GenderMixin.OTHER and not gender_other:
129 errors['gender'] = ValidationError("This field is required.")
130 elif gender != GenderMixin.OTHER and gender_other:
131 errors['gender'] = ValidationError(
132 'If you entered data in "Other" field, please select that '
133 "option.")
134
135 # raise errors if any present
136 if errors:
137 raise ValidationError(errors)
138
139
140 class SendHomeworkForm(forms.ModelForm):
141 url = forms.URLField(label='URL')
142 requirement = forms.ModelChoiceField(
143 queryset=TrainingRequirement.objects.filter(name__endswith="Homework"),
144 label="Type", required=True,
145 )
146
147 helper = BootstrapHelper(add_cancel_button=False)
148
149 class Meta:
150 model = TrainingProgress
151 fields = [
152 'requirement',
153 'url',
154 ]
155
156
157 class SearchForm(forms.Form):
158 """Represent general searching form."""
159
160 term = forms.CharField(label="Term", max_length=100)
161 no_redirect = forms.BooleanField(required=False, initial=False, widget=forms.HiddenInput)
162 helper = BootstrapHelper(add_cancel_button=False, use_get_method=True)
163
[end of amy/dashboard/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/amy/dashboard/forms.py b/amy/dashboard/forms.py
--- a/amy/dashboard/forms.py
+++ b/amy/dashboard/forms.py
@@ -27,7 +27,7 @@
required=False,
queryset=Person.objects.filter(
Q(is_superuser=True) | Q(groups__name="administrators")
- ),
+ ).distinct(),
widget=Select2Widget(),
)
helper = BootstrapHelper(
| {"golden_diff": "diff --git a/amy/dashboard/forms.py b/amy/dashboard/forms.py\n--- a/amy/dashboard/forms.py\n+++ b/amy/dashboard/forms.py\n@@ -27,7 +27,7 @@\n required=False,\n queryset=Person.objects.filter(\n Q(is_superuser=True) | Q(groups__name=\"administrators\")\n- ),\n+ ).distinct(),\n widget=Select2Widget(),\n )\n helper = BootstrapHelper(\n", "issue": "Bug: assignment form queryset may return duplicate results\nIntroduced in v2.16, AssignmentForm contains a queryset that may yield duplicate results due to the filtering used.\n", "before_files": [{"content": "from django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Q\nfrom django_countries.fields import CountryField\n\nfrom workshops.models import (\n Language,\n GenderMixin,\n Person,\n TrainingProgress,\n TrainingRequirement,\n)\n\nfrom workshops.forms import BootstrapHelper\n# this is used instead of Django Autocomplete Light widgets\n# see issue #1330: https://github.com/swcarpentry/amy/issues/1330\nfrom workshops.fields import (\n Select2Widget,\n ModelSelect2MultipleWidget,\n RadioSelectWithOther,\n)\n\n\nclass AssignmentForm(forms.Form):\n assigned_to = forms.ModelChoiceField(\n label=\"Assigned to:\",\n required=False,\n queryset=Person.objects.filter(\n Q(is_superuser=True) | Q(groups__name=\"administrators\")\n ),\n widget=Select2Widget(),\n )\n helper = BootstrapHelper(\n add_submit_button=False,\n add_cancel_button=False,\n wider_labels=True,\n use_get_method=True,\n form_id=\"assignment-form\"\n )\n\n\nclass AutoUpdateProfileForm(forms.ModelForm):\n username = forms.CharField(disabled=True, required=False)\n email = forms.CharField(\n disabled=True, required=False,\n label=Person._meta.get_field('email').verbose_name,\n help_text=Person._meta.get_field('email').help_text,\n )\n github = forms.CharField(\n disabled=True, required=False,\n help_text='If you want to change your github username, please email '\n 'us at <a href=\"mailto:[email protected]\">'\n '[email protected]</a>.')\n\n country = CountryField().formfield(\n required=False,\n help_text='Your country of residence.',\n widget=Select2Widget,\n )\n\n languages = forms.ModelMultipleChoiceField(\n label='Languages',\n required=False,\n queryset=Language.objects.all(),\n widget=ModelSelect2MultipleWidget(data_view='language-lookup')\n )\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n class Meta:\n model = Person\n fields = [\n 'personal',\n 'middle',\n 'family',\n 'email',\n 'secondary_email',\n 'gender',\n 'gender_other',\n 'may_contact',\n 'publish_profile',\n 'lesson_publication_consent',\n 'country',\n 'airport',\n 'github',\n 'twitter',\n 'url',\n 'username',\n 'affiliation',\n 'domains',\n 'lessons',\n 'languages',\n 'occupation',\n 'orcid',\n ]\n readonly_fields = (\n 'username',\n 'github',\n )\n widgets = {\n 'gender': RadioSelectWithOther('gender_other'),\n 'domains': forms.CheckboxSelectMultiple(),\n 'lessons': forms.CheckboxSelectMultiple(),\n 'airport': Select2Widget,\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # set up a layout object for the helper\n self.helper.layout = self.helper.build_default_layout(self)\n\n # set up `*WithOther` widgets so that they can display additional\n # fields inline\n self['gender'].field.widget.other_field = self['gender_other']\n\n # remove additional fields\n self.helper.layout.fields.remove('gender_other')\n\n def clean(self):\n super().clean()\n errors = dict()\n\n # 1: require \"other gender\" field if 
\"other\" was selected in\n # \"gender\" field\n gender = self.cleaned_data.get('gender', '')\n gender_other = self.cleaned_data.get('gender_other', '')\n if gender == GenderMixin.OTHER and not gender_other:\n errors['gender'] = ValidationError(\"This field is required.\")\n elif gender != GenderMixin.OTHER and gender_other:\n errors['gender'] = ValidationError(\n 'If you entered data in \"Other\" field, please select that '\n \"option.\")\n\n # raise errors if any present\n if errors:\n raise ValidationError(errors)\n\n\nclass SendHomeworkForm(forms.ModelForm):\n url = forms.URLField(label='URL')\n requirement = forms.ModelChoiceField(\n queryset=TrainingRequirement.objects.filter(name__endswith=\"Homework\"),\n label=\"Type\", required=True,\n )\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n class Meta:\n model = TrainingProgress\n fields = [\n 'requirement',\n 'url',\n ]\n\n\nclass SearchForm(forms.Form):\n \"\"\"Represent general searching form.\"\"\"\n\n term = forms.CharField(label=\"Term\", max_length=100)\n no_redirect = forms.BooleanField(required=False, initial=False, widget=forms.HiddenInput)\n helper = BootstrapHelper(add_cancel_button=False, use_get_method=True)\n", "path": "amy/dashboard/forms.py"}]} | 1,961 | 93 |
gh_patches_debug_201 | rasdani/github-patches | git_diff | blaze__blaze-475 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make blaze.test() return True or False
@asmeurer suggests this. Currently we're passing through the return value of pytest.main(), which is an error code in the style of command-line programs.
</issue>
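For reference, `pytest.main()` returns a process-style exit status — 0 when every test passed, non-zero otherwise — so converting it to a boolean is a single comparison. A small illustrative sketch, not the actual blaze code:

```
import pytest


def run_suite(args=None):
    """Run pytest and report success as True/False instead of an exit code."""
    error_code = pytest.main(args=args or [])
    # pytest uses 0 for "all tests passed"; any other value signals a problem.
    return error_code == 0


if __name__ == "__main__":
    print("tests passed:", run_suite(["-q"]))
```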
<code>
[start of blaze/__init__.py]
1 from __future__ import absolute_import, division, print_function
2
3 import logging
4
5 from dynd import nd
6 from pandas import DataFrame
7 import h5py
8
9 from multipledispatch import halt_ordering, restart_ordering
10
11 halt_ordering() # Turn off multipledispatch ordering
12
13 from .expr import *
14 from .expr.functions import *
15 from .api import *
16 from .data.csv import *
17 from .data.json import *
18 from .data.hdf5 import *
19 from .compute.python import *
20 from .data.meta import *
21 from .compute.pandas import *
22 from .compute.numpy import *
23 from .compute.core import *
24 from .compute.core import compute
25 from .sql import *
26
27 try:
28 from .spark import *
29 except ImportError:
30 pass
31 try:
32 from .compute.pytables import *
33 except ImportError:
34 pass
35 try:
36 from .compute.chunks import *
37 except ImportError:
38 pass
39 try:
40 from .bcolz import *
41 except ImportError:
42 pass
43 try:
44 from .mongo import *
45 except ImportError:
46 pass
47
48 restart_ordering() # Restart multipledispatch ordering and do ordering
49
50 logging.basicConfig()
51 logger = logging.getLogger(__name__)
52 logger.setLevel(logging.WARNING)
53
54
55 inf = float('inf')
56 nan = float('nan')
57
58 __version__ = '0.6.1'
59
60 # If IPython is already loaded, register the Blaze catalog magic
61 # from . import catalog
62 # import sys
63 # if 'IPython' in sys.modules:
64 # catalog.register_ipy_magic()
65 # del sys
66
67 def print_versions():
68 """Print all the versions of software that Blaze relies on."""
69 import sys, platform
70 import numpy as np
71 import dynd
72 import datashape
73 print("-=" * 38)
74 print("Blaze version: %s" % __version__)
75 print("Datashape version: %s" % datashape.__version__)
76 print("NumPy version: %s" % np.__version__)
77 print("DyND version: %s / LibDyND %s" %
78 (dynd.__version__, dynd.__libdynd_version__))
79 print("Python version: %s" % sys.version)
80 (sysname, nodename, release, version, machine, processor) = \
81 platform.uname()
82 print("Platform: %s-%s-%s (%s)" % (sysname, release, machine, version))
83 if sysname == "Linux":
84 print("Linux dist: %s" % " ".join(platform.linux_distribution()[:-1]))
85 if not processor:
86 processor = "not recognized"
87 print("Processor: %s" % processor)
88 print("Byte-ordering: %s" % sys.byteorder)
89 print("-=" * 38)
90
91
92 def test(verbose=False, junitfile=None, exit=False):
93 """
94 Runs the full Blaze test suite, outputting
95 the results of the tests to sys.stdout.
96
97 This uses py.test to discover which tests to
98 run, and runs tests in any 'tests' subdirectory
99 within the Blaze module.
100
101 Parameters
102 ----------
103 verbose : int, optional
104 Value 0 prints very little, 1 prints a little bit,
105 and 2 prints the test names while testing.
106 junitfile : string, optional
107 If provided, writes the test results to an junit xml
108 style xml file. This is useful for running the tests
109 in a CI server such as Jenkins.
110 exit : bool, optional
111 If True, the function will call sys.exit with an
112 error code after the tests are finished.
113 """
114 import os
115 import sys
116 import pytest
117
118 args = []
119
120 if verbose:
121 args.append('--verbose')
122
123 # Output an xunit file if requested
124 if junitfile is not None:
125 args.append('--junit-xml=%s' % junitfile)
126
127 # Add all 'tests' subdirectories to the options
128 rootdir = os.path.dirname(__file__)
129 for root, dirs, files in os.walk(rootdir):
130 if 'tests' in dirs:
131 testsdir = os.path.join(root, 'tests')
132 args.append(testsdir)
133 print('Test dir: %s' % testsdir[len(rootdir) + 1:])
134 # print versions (handy when reporting problems)
135 print_versions()
136 sys.stdout.flush()
137
138 # Ask pytest to do its thing
139 error_code = pytest.main(args=args)
140 if exit:
141 return sys.exit(error_code)
142 return error_code
143
[end of blaze/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/blaze/__init__.py b/blaze/__init__.py
--- a/blaze/__init__.py
+++ b/blaze/__init__.py
@@ -139,4 +139,4 @@
error_code = pytest.main(args=args)
if exit:
return sys.exit(error_code)
- return error_code
+ return error_code == 0
| {"golden_diff": "diff --git a/blaze/__init__.py b/blaze/__init__.py\n--- a/blaze/__init__.py\n+++ b/blaze/__init__.py\n@@ -139,4 +139,4 @@\n error_code = pytest.main(args=args)\n if exit:\n return sys.exit(error_code)\n- return error_code\n+ return error_code == 0\n", "issue": "Make blaze.test() return True or False\n@asmeurer suggests this. Currently we're passing through pytest.main() which is like the error code from command line programs.\n\n<!---\n@huboard:{\"order\":398.859375,\"milestone_order\":452,\"custom_state\":\"\"}\n-->\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport logging\n\nfrom dynd import nd\nfrom pandas import DataFrame\nimport h5py\n\nfrom multipledispatch import halt_ordering, restart_ordering\n\nhalt_ordering() # Turn off multipledispatch ordering\n\nfrom .expr import *\nfrom .expr.functions import *\nfrom .api import *\nfrom .data.csv import *\nfrom .data.json import *\nfrom .data.hdf5 import *\nfrom .compute.python import *\nfrom .data.meta import *\nfrom .compute.pandas import *\nfrom .compute.numpy import *\nfrom .compute.core import *\nfrom .compute.core import compute\nfrom .sql import *\n\ntry:\n from .spark import *\nexcept ImportError:\n pass\ntry:\n from .compute.pytables import *\nexcept ImportError:\n pass\ntry:\n from .compute.chunks import *\nexcept ImportError:\n pass\ntry:\n from .bcolz import *\nexcept ImportError:\n pass\ntry:\n from .mongo import *\nexcept ImportError:\n pass\n\nrestart_ordering() # Restart multipledispatch ordering and do ordering\n\nlogging.basicConfig()\nlogger = logging.getLogger(__name__)\nlogger.setLevel(logging.WARNING)\n\n\ninf = float('inf')\nnan = float('nan')\n\n__version__ = '0.6.1'\n\n# If IPython is already loaded, register the Blaze catalog magic\n# from . import catalog\n# import sys\n# if 'IPython' in sys.modules:\n# catalog.register_ipy_magic()\n# del sys\n\ndef print_versions():\n \"\"\"Print all the versions of software that Blaze relies on.\"\"\"\n import sys, platform\n import numpy as np\n import dynd\n import datashape\n print(\"-=\" * 38)\n print(\"Blaze version: %s\" % __version__)\n print(\"Datashape version: %s\" % datashape.__version__)\n print(\"NumPy version: %s\" % np.__version__)\n print(\"DyND version: %s / LibDyND %s\" %\n (dynd.__version__, dynd.__libdynd_version__))\n print(\"Python version: %s\" % sys.version)\n (sysname, nodename, release, version, machine, processor) = \\\n platform.uname()\n print(\"Platform: %s-%s-%s (%s)\" % (sysname, release, machine, version))\n if sysname == \"Linux\":\n print(\"Linux dist: %s\" % \" \".join(platform.linux_distribution()[:-1]))\n if not processor:\n processor = \"not recognized\"\n print(\"Processor: %s\" % processor)\n print(\"Byte-ordering: %s\" % sys.byteorder)\n print(\"-=\" * 38)\n\n\ndef test(verbose=False, junitfile=None, exit=False):\n \"\"\"\n Runs the full Blaze test suite, outputting\n the results of the tests to sys.stdout.\n\n This uses py.test to discover which tests to\n run, and runs tests in any 'tests' subdirectory\n within the Blaze module.\n\n Parameters\n ----------\n verbose : int, optional\n Value 0 prints very little, 1 prints a little bit,\n and 2 prints the test names while testing.\n junitfile : string, optional\n If provided, writes the test results to an junit xml\n style xml file. 
This is useful for running the tests\n in a CI server such as Jenkins.\n exit : bool, optional\n If True, the function will call sys.exit with an\n error code after the tests are finished.\n \"\"\"\n import os\n import sys\n import pytest\n\n args = []\n\n if verbose:\n args.append('--verbose')\n\n # Output an xunit file if requested\n if junitfile is not None:\n args.append('--junit-xml=%s' % junitfile)\n\n # Add all 'tests' subdirectories to the options\n rootdir = os.path.dirname(__file__)\n for root, dirs, files in os.walk(rootdir):\n if 'tests' in dirs:\n testsdir = os.path.join(root, 'tests')\n args.append(testsdir)\n print('Test dir: %s' % testsdir[len(rootdir) + 1:])\n # print versions (handy when reporting problems)\n print_versions()\n sys.stdout.flush()\n\n # Ask pytest to do its thing\n error_code = pytest.main(args=args)\n if exit:\n return sys.exit(error_code)\n return error_code\n", "path": "blaze/__init__.py"}]} | 1,901 | 85 |
gh_patches_debug_30662 | rasdani/github-patches | git_diff | goauthentik__authentik-4428 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
System Tasks: Show start timestamp and calculate Duration
**Is your feature request related to a problem? Please describe.**
For debugging purposes, I need to know when a task started and when it finished. 
**Describe the solution you'd like**
I have seen that the TaskInfo object actually holds that information, but it is not returned by the API, and not shown in the "SystemTasks" table of the web UI.
It would also make sense to calculate the duration for easier debugging.
**Describe alternatives you've considered**
I could look this up in the database, but this would be questionable UX, since there is already a view in the web app which should show this information.
**Additional context**
(none)
</issue>
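One way to expose this is to surface the timestamps `TaskInfo` already records and derive the duration inside the serializer, e.g. with a DRF `SerializerMethodField`. The sketch below assumes `TaskInfo` exposes `start_timestamp` and `finish_timestamp` attributes and only shows the fields being added; the existing serializer fields stay as they are.

```
from rest_framework.fields import DateTimeField, SerializerMethodField

from authentik.core.api.utils import PassiveSerializer
from authentik.events.monitored_tasks import TaskInfo


class TaskSerializer(PassiveSerializer):
    # ...existing fields (task_name, status, messages, ...) unchanged...
    task_finish_timestamp = DateTimeField(source="finish_time")
    task_duration = SerializerMethodField()

    def get_task_duration(self, instance: TaskInfo) -> int:
        # Difference between the assumed finish/start timestamps, clamped at
        # zero so clock oddities never produce a negative duration.
        return max(instance.finish_timestamp - instance.start_timestamp, 0)
```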
<code>
[start of authentik/admin/api/tasks.py]
1 """Tasks API"""
2 from importlib import import_module
3
4 from django.contrib import messages
5 from django.http.response import Http404
6 from django.utils.translation import gettext_lazy as _
7 from drf_spectacular.types import OpenApiTypes
8 from drf_spectacular.utils import OpenApiParameter, OpenApiResponse, extend_schema
9 from rest_framework.decorators import action
10 from rest_framework.fields import CharField, ChoiceField, DateTimeField, ListField
11 from rest_framework.permissions import IsAdminUser
12 from rest_framework.request import Request
13 from rest_framework.response import Response
14 from rest_framework.viewsets import ViewSet
15 from structlog.stdlib import get_logger
16
17 from authentik.core.api.utils import PassiveSerializer
18 from authentik.events.monitored_tasks import TaskInfo, TaskResultStatus
19
20 LOGGER = get_logger()
21
22
23 class TaskSerializer(PassiveSerializer):
24 """Serialize TaskInfo and TaskResult"""
25
26 task_name = CharField()
27 task_description = CharField()
28 task_finish_timestamp = DateTimeField(source="finish_time")
29
30 status = ChoiceField(
31 source="result.status.name",
32 choices=[(x.name, x.name) for x in TaskResultStatus],
33 )
34 messages = ListField(source="result.messages")
35
36 def to_representation(self, instance):
37 """When a new version of authentik adds fields to TaskInfo,
38 the API will fail with an AttributeError, as the classes
39 are pickled in cache. In that case, just delete the info"""
40 try:
41 return super().to_representation(instance)
42 except AttributeError: # pragma: no cover
43 if isinstance(self.instance, list):
44 for inst in self.instance:
45 inst.delete()
46 else:
47 self.instance.delete()
48 return {}
49
50
51 class TaskViewSet(ViewSet):
52 """Read-only view set that returns all background tasks"""
53
54 permission_classes = [IsAdminUser]
55 serializer_class = TaskSerializer
56
57 @extend_schema(
58 responses={
59 200: TaskSerializer(many=False),
60 404: OpenApiResponse(description="Task not found"),
61 },
62 parameters=[
63 OpenApiParameter(
64 "id",
65 type=OpenApiTypes.STR,
66 location=OpenApiParameter.PATH,
67 required=True,
68 ),
69 ],
70 )
71 # pylint: disable=invalid-name
72 def retrieve(self, request: Request, pk=None) -> Response:
73 """Get a single system task"""
74 task = TaskInfo.by_name(pk)
75 if not task:
76 raise Http404
77 return Response(TaskSerializer(task, many=False).data)
78
79 @extend_schema(responses={200: TaskSerializer(many=True)})
80 def list(self, request: Request) -> Response:
81 """List system tasks"""
82 tasks = sorted(TaskInfo.all().values(), key=lambda task: task.task_name)
83 return Response(TaskSerializer(tasks, many=True).data)
84
85 @extend_schema(
86 request=OpenApiTypes.NONE,
87 responses={
88 204: OpenApiResponse(description="Task retried successfully"),
89 404: OpenApiResponse(description="Task not found"),
90 500: OpenApiResponse(description="Failed to retry task"),
91 },
92 parameters=[
93 OpenApiParameter(
94 "id",
95 type=OpenApiTypes.STR,
96 location=OpenApiParameter.PATH,
97 required=True,
98 ),
99 ],
100 )
101 @action(detail=True, methods=["post"])
102 # pylint: disable=invalid-name
103 def retry(self, request: Request, pk=None) -> Response:
104 """Retry task"""
105 task = TaskInfo.by_name(pk)
106 if not task:
107 raise Http404
108 try:
109 task_module = import_module(task.task_call_module)
110 task_func = getattr(task_module, task.task_call_func)
111 LOGGER.debug("Running task", task=task_func)
112 task_func.delay(*task.task_call_args, **task.task_call_kwargs)
113 messages.success(
114 self.request,
115 _("Successfully re-scheduled Task %(name)s!" % {"name": task.task_name}),
116 )
117 return Response(status=204)
118 except (ImportError, AttributeError): # pragma: no cover
119 LOGGER.warning("Failed to run task, remove state", task=task)
120 # if we get an import error, the module path has probably changed
121 task.delete()
122 return Response(status=500)
123
[end of authentik/admin/api/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/admin/api/tasks.py b/authentik/admin/api/tasks.py
--- a/authentik/admin/api/tasks.py
+++ b/authentik/admin/api/tasks.py
@@ -7,7 +7,13 @@
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import OpenApiParameter, OpenApiResponse, extend_schema
from rest_framework.decorators import action
-from rest_framework.fields import CharField, ChoiceField, DateTimeField, ListField
+from rest_framework.fields import (
+ CharField,
+ ChoiceField,
+ DateTimeField,
+ ListField,
+ SerializerMethodField,
+)
from rest_framework.permissions import IsAdminUser
from rest_framework.request import Request
from rest_framework.response import Response
@@ -26,6 +32,7 @@
task_name = CharField()
task_description = CharField()
task_finish_timestamp = DateTimeField(source="finish_time")
+ task_duration = SerializerMethodField()
status = ChoiceField(
source="result.status.name",
@@ -33,7 +40,11 @@
)
messages = ListField(source="result.messages")
- def to_representation(self, instance):
+ def get_task_duration(self, instance: TaskInfo) -> int:
+ """Get the duration a task took to run"""
+ return max(instance.finish_timestamp - instance.start_timestamp, 0)
+
+ def to_representation(self, instance: TaskInfo):
"""When a new version of authentik adds fields to TaskInfo,
the API will fail with an AttributeError, as the classes
are pickled in cache. In that case, just delete the info"""
 | 1,872 | 360
gh_patches_debug_31542 | rasdani/github-patches | git_diff | freqtrade__freqtrade-2284 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
download-data fails with difficult to interpret messages in certain cases
1)
```
(.env) user@nuc:~/freqtrade-wrk/github-hroff-1902/freqtrade$ freqtrade download-data
2019-09-06 01:00:01,335 - freqtrade.loggers - INFO - Verbosity set to 0
2019-09-06 01:00:01,336 - freqtrade.configuration.configuration - INFO - Dry run is enabled
2019-09-06 01:00:01,336 - freqtrade.configuration.configuration - INFO - Using DB: "sqlite://"
2019-09-06 01:00:01,336 - freqtrade.configuration.configuration - INFO - Using max_open_trades: None ...
2019-09-06 01:00:01,336 - freqtrade.configuration.configuration - INFO - Using user-data directory: /home/user/freqtrade-wrk/github-hroff-1902/freqtrade/user_data ...
2019-09-06 01:00:01,337 - freqtrade.configuration.configuration - INFO - Using data directory: /home/user/freqtrade-wrk/github-hroff-1902/freqtrade/user_data/data ...
2019-09-06 01:00:01,337 - freqtrade.configuration.configuration - INFO - timeframes --timeframes: ['1m', '5m']
2019-09-06 01:00:01,337 - freqtrade.configuration.check_exchange - INFO - Checking exchange...
2019-09-06 01:00:01,337 - freqtrade - ERROR - Exchange "" is not supported by ccxt and therefore not available for the bot.
The following exchanges are supported by ccxt: _1btcxe, acx, allcoin, anxpro, bcex, bequant, bibox, bigone, binance, binanceje, bit2c, bitbank, bitbay, bitfinex, bitfinex2, bitflyer, bitforex, bithumb, bitkk, bitlish, bitmart, bitmex, bitso, bitstamp, bitstamp1, bittrex, bitz, bl3p, bleutrade, braziliex, btcalpha, btcbox, btcchina, btcmarkets, btctradeim, btctradeua, btcturk, buda, bxinth, cex, chilebit, cobinhood, coinbase, coinbaseprime, coinbasepro, coincheck, coinegg, coinex, coinexchange, coinfalcon, coinfloor, coingi, coinmarketcap, coinmate, coinone, coinspot, cointiger, coolcoin, coss, crex24, crypton, deribit, digifinex, dsx, dx, ethfinex, exmo, exx, fcoin, fcoinjp, flowbtc, foxbit, fybse, gateio, gdax, gemini, hitbtc, hitbtc2, huobipro, huobiru, ice3x, independentreserve, indodax, itbit, kkex, kraken, kucoin, kucoin2, kuna, lakebtc, latoken, lbank, liquid, livecoin, luno, lykke, mandala, mercado, mixcoins, negociecoins, nova, oceanex, okcoincny, okcoinusd, okex, okex3, paymium, poloniex, rightbtc, southxchange, stronghold, surbitcoin, theocean, therock, tidebit, tidex, upbit, vaultoro, vbtc, virwox, xbtce, yobit, zaif, zb
```
-- Maybe it's covered by #2217 . If not, the message should not include `Exchange ""` but should be something like `Exchange is not specified.`
2)
```
(.env) user@nuc:~/freqtrade-wrk/github-hroff-1902/freqtrade$ freqtrade download-data --exchange bittrex
2019-09-06 01:00:15,176 - freqtrade.loggers - INFO - Verbosity set to 0
2019-09-06 01:00:15,177 - freqtrade.configuration.configuration - INFO - Dry run is enabled
2019-09-06 01:00:15,177 - freqtrade.configuration.configuration - INFO - Using DB: "sqlite://"
2019-09-06 01:00:15,177 - freqtrade.configuration.configuration - INFO - Using max_open_trades: None ...
2019-09-06 01:00:15,177 - freqtrade.configuration.configuration - INFO - Using exchange bittrex
2019-09-06 01:00:15,178 - freqtrade.configuration.configuration - INFO - Using user-data directory: /home/user/freqtrade-wrk/github-hroff-1902/freqtrade/user_data ...
2019-09-06 01:00:15,178 - freqtrade.configuration.configuration - INFO - Using data directory: /home/user/freqtrade-wrk/github-hroff-1902/freqtrade/user_data/data/bittrex ...
2019-09-06 01:00:15,178 - freqtrade.configuration.configuration - INFO - timeframes --timeframes: ['1m', '5m']
2019-09-06 01:00:15,178 - freqtrade.configuration.check_exchange - INFO - Checking exchange...
2019-09-06 01:00:15,178 - freqtrade.configuration.check_exchange - INFO - Exchange "bittrex" is officially supported by the Freqtrade development team.
2019-09-06 01:00:15,179 - freqtrade - ERROR - Fatal exception!
Traceback (most recent call last):
File "/home/user/freqtrade-wrk/github-hroff-1902/freqtrade/freqtrade/main.py", line 40, in main
args.func(args)
File "/home/user/freqtrade-wrk/github-hroff-1902/freqtrade/freqtrade/utils.py", line 75, in start_download_data
logger.info(f'About to download pairs: {config["pairs"]}, '
KeyError: 'pairs'
```
</issue>
<code>
[start of freqtrade/configuration/check_exchange.py]
1 import logging
2 from typing import Any, Dict
3
4 from freqtrade import OperationalException
5 from freqtrade.exchange import (available_exchanges, get_exchange_bad_reason,
6 is_exchange_available, is_exchange_bad,
7 is_exchange_officially_supported)
8 from freqtrade.state import RunMode
9
10 logger = logging.getLogger(__name__)
11
12
13 def check_exchange(config: Dict[str, Any], check_for_bad: bool = True) -> bool:
14 """
15 Check if the exchange name in the config file is supported by Freqtrade
16 :param check_for_bad: if True, check the exchange against the list of known 'bad'
17 exchanges
18 :return: False if exchange is 'bad', i.e. is known to work with the bot with
19 critical issues or does not work at all, crashes, etc. True otherwise.
20 raises an exception if the exchange if not supported by ccxt
21 and thus is not known for the Freqtrade at all.
22 """
23
24 if config['runmode'] in [RunMode.PLOT] and not config.get('exchange', {}).get('name'):
25 # Skip checking exchange in plot mode, since it requires no exchange
26 return True
27 logger.info("Checking exchange...")
28
29 exchange = config.get('exchange', {}).get('name').lower()
30 if not is_exchange_available(exchange):
31 raise OperationalException(
32 f'Exchange "{exchange}" is not supported by ccxt '
33 f'and therefore not available for the bot.\n'
34 f'The following exchanges are supported by ccxt: '
35 f'{", ".join(available_exchanges())}'
36 )
37
38 if check_for_bad and is_exchange_bad(exchange):
39 raise OperationalException(f'Exchange "{exchange}" is known to not work with the bot yet. '
40 f'Reason: {get_exchange_bad_reason(exchange)}')
41
42 if is_exchange_officially_supported(exchange):
43 logger.info(f'Exchange "{exchange}" is officially supported '
44 f'by the Freqtrade development team.')
45 else:
46 logger.warning(f'Exchange "{exchange}" is supported by ccxt '
47 f'and therefore available for the bot but not officially supported '
48 f'by the Freqtrade development team. '
49 f'It may work flawlessly (please report back) or have serious issues. '
50 f'Use it at your own discretion.')
51
52 return True
53
[end of freqtrade/configuration/check_exchange.py]
[start of freqtrade/utils.py]
1 import logging
2 import sys
3 from pathlib import Path
4 from typing import Any, Dict, List
5
6 import arrow
7
8 from freqtrade.configuration import Configuration, TimeRange
9 from freqtrade.configuration.directory_operations import create_userdata_dir
10 from freqtrade.data.history import refresh_backtest_ohlcv_data
11 from freqtrade.exchange import available_exchanges
12 from freqtrade.resolvers import ExchangeResolver
13 from freqtrade.state import RunMode
14
15 logger = logging.getLogger(__name__)
16
17
18 def setup_utils_configuration(args: Dict[str, Any], method: RunMode) -> Dict[str, Any]:
19 """
20 Prepare the configuration for utils subcommands
21 :param args: Cli args from Arguments()
22 :return: Configuration
23 """
24 configuration = Configuration(args, method)
25 config = configuration.get_config()
26
27 config['exchange']['dry_run'] = True
28 # Ensure we do not use Exchange credentials
29 config['exchange']['key'] = ''
30 config['exchange']['secret'] = ''
31
32 return config
33
34
35 def start_list_exchanges(args: Dict[str, Any]) -> None:
36 """
37 Print available exchanges
38 :param args: Cli args from Arguments()
39 :return: None
40 """
41
42 if args['print_one_column']:
43 print('\n'.join(available_exchanges()))
44 else:
45 print(f"Exchanges supported by ccxt and available for Freqtrade: "
46 f"{', '.join(available_exchanges())}")
47
48
49 def start_create_userdir(args: Dict[str, Any]) -> None:
50 """
51 Create "user_data" directory to contain user data strategies, hyperopts, ...)
52 :param args: Cli args from Arguments()
53 :return: None
54 """
55 if "user_data_dir" in args and args["user_data_dir"]:
56 create_userdata_dir(args["user_data_dir"], create_dir=True)
57 else:
58 logger.warning("`create-userdir` requires --userdir to be set.")
59 sys.exit(1)
60
61
62 def start_download_data(args: Dict[str, Any]) -> None:
63 """
64 Download data (former download_backtest_data.py script)
65 """
66 config = setup_utils_configuration(args, RunMode.OTHER)
67
68 timerange = TimeRange()
69 if 'days' in config:
70 time_since = arrow.utcnow().shift(days=-config['days']).strftime("%Y%m%d")
71 timerange = TimeRange.parse_timerange(f'{time_since}-')
72
73 dl_path = Path(config['datadir'])
74 logger.info(f'About to download pairs: {config["pairs"]}, '
75 f'intervals: {config["timeframes"]} to {dl_path}')
76
77 pairs_not_available: List[str] = []
78
79 try:
80 # Init exchange
81 exchange = ExchangeResolver(config['exchange']['name'], config).exchange
82
83 pairs_not_available = refresh_backtest_ohlcv_data(
84 exchange, pairs=config["pairs"], timeframes=config["timeframes"],
85 dl_path=Path(config['datadir']), timerange=timerange, erase=config.get("erase"))
86
87 except KeyboardInterrupt:
88 sys.exit("SIGINT received, aborting ...")
89
90 finally:
91 if pairs_not_available:
92 logger.info(f"Pairs [{','.join(pairs_not_available)}] not available "
93 f"on exchange {config['exchange']['name']}.")
94
[end of freqtrade/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/freqtrade/configuration/check_exchange.py b/freqtrade/configuration/check_exchange.py
--- a/freqtrade/configuration/check_exchange.py
+++ b/freqtrade/configuration/check_exchange.py
@@ -27,6 +27,14 @@
logger.info("Checking exchange...")
exchange = config.get('exchange', {}).get('name').lower()
+ if not exchange:
+ raise OperationalException(
+ f'This command requires a configured exchange. You should either use '
+ f'`--exchange <exchange_name>` or specify a configuration file via `--config`.\n'
+ f'The following exchanges are supported by ccxt: '
+ f'{", ".join(available_exchanges())}'
+ )
+
if not is_exchange_available(exchange):
raise OperationalException(
f'Exchange "{exchange}" is not supported by ccxt '
diff --git a/freqtrade/utils.py b/freqtrade/utils.py
--- a/freqtrade/utils.py
+++ b/freqtrade/utils.py
@@ -5,6 +5,7 @@
import arrow
+from freqtrade import OperationalException
from freqtrade.configuration import Configuration, TimeRange
from freqtrade.configuration.directory_operations import create_userdata_dir
from freqtrade.data.history import refresh_backtest_ohlcv_data
@@ -70,6 +71,11 @@
time_since = arrow.utcnow().shift(days=-config['days']).strftime("%Y%m%d")
timerange = TimeRange.parse_timerange(f'{time_since}-')
+ if 'pairs' not in config:
+ raise OperationalException(
+ "Downloading data requires a list of pairs. "
+ "Please check the documentation on how to configure this.")
+
dl_path = Path(config['datadir'])
logger.info(f'About to download pairs: {config["pairs"]}, '
f'intervals: {config["timeframes"]} to {dl_path}')
 | 3,564 | 410
gh_patches_debug_11756 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1673 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[META 659] Backend dependencies granularity for NoSQL and Messaging
See meta issue for the description and details:
- Meta issue: https://github.com/elastic/apm/issues/659
This is done for everything except for Elasticsearch.
</issue>
<code>
[start of elasticapm/instrumentation/packages/elasticsearch.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from __future__ import absolute_import
32
33 import re
34 from typing import Optional
35 from urllib.parse import parse_qs, urlparse
36
37 import elasticapm
38 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
39 from elasticapm.traces import DroppedSpan, execution_context
40 from elasticapm.utils.logging import get_logger
41
42 logger = get_logger("elasticapm.instrument")
43
44 should_capture_body_re = re.compile("/(_search|_msearch|_count|_async_search|_sql|_eql)(/|$)")
45
46
47 class ElasticsearchConnectionInstrumentation(AbstractInstrumentedModule):
48 name = "elasticsearch_connection"
49
50 def get_instrument_list(self):
51 try:
52 import elastic_transport # noqa: F401
53
54 return [
55 ("elastic_transport._node._http_urllib3", "Urllib3HttpNode.perform_request"),
56 ("elastic_transport._node._http_requests", "RequestsHttpNode.perform_request"),
57 ]
58 except ImportError:
59 return [
60 ("elasticsearch.connection.http_urllib3", "Urllib3HttpConnection.perform_request"),
61 ("elasticsearch.connection.http_requests", "RequestsHttpConnection.perform_request"),
62 ]
63
64 def call(self, module, method, wrapped, instance, args, kwargs):
65 span = execution_context.get_span()
66 if not span or isinstance(span, DroppedSpan):
67 return wrapped(*args, **kwargs)
68
69 self._update_context_by_request_data(span.context, instance, args, kwargs)
70
71 result = wrapped(*args, **kwargs)
72 if hasattr(result, "meta"): # elasticsearch-py 8.x+
73 status_code = result.meta.status
74 else:
75 status_code = result[0]
76 span.context["http"] = {"status_code": status_code}
77
78 return result
79
80 def _update_context_by_request_data(self, context, instance, args, kwargs):
81 args_len = len(args)
82 url = args[1] if args_len > 1 else kwargs.get("url")
83 params = args[2] if args_len > 2 else kwargs.get("params")
84 body_serialized = args[3] if args_len > 3 else kwargs.get("body")
85
86 if "?" in url and not params:
87 url, qs = url.split("?", 1)
88 params = {k: v[0] for k, v in parse_qs(qs).items()}
89
90 should_capture_body = bool(should_capture_body_re.search(url))
91
92 context["db"] = {"type": "elasticsearch"}
93 if should_capture_body:
94 query = []
95 # using both q AND body is allowed in some API endpoints / ES versions,
96 # but not in others. We simply capture both if they are there so the
97 # user can see it.
98 if params and "q" in params:
99 # 'q' may already be encoded to a byte string at this point.
100 # We assume utf8, which is the default
101 q = params["q"]
102 if isinstance(q, bytes):
103 q = q.decode("utf-8", errors="replace")
104 query.append("q=" + q)
105 if body_serialized:
106 if isinstance(body_serialized, bytes):
107 query.append(body_serialized.decode("utf-8", errors="replace"))
108 else:
109 query.append(body_serialized)
110 if query:
111 context["db"]["statement"] = "\n\n".join(query)
112
113 # ES5: `host` is URL, no `port` attribute
114 # ES6, ES7: `host` URL, `hostname` is host, `port` is port
115 # ES8: `host` is hostname, no `hostname` attribute, `port` is `port`
116 if not hasattr(instance, "port"):
117 # ES5, parse hostname and port from URL stored in `host`
118 parsed_url = urlparse(instance.host)
119 host = parsed_url.hostname
120 port = parsed_url.port
121 elif not hasattr(instance, "hostname"):
122 # ES8 (and up, one can hope)
123 host = instance.host
124 port = instance.port
125 else:
126 # ES6, ES7
127 host = instance.hostname
128 port = instance.port
129
130 context["destination"] = {"address": host, "port": port}
131
132
133 class ElasticsearchTransportInstrumentation(AbstractInstrumentedModule):
134 name = "elasticsearch_connection"
135
136 def get_instrument_list(self):
137 try:
138 import elastic_transport # noqa: F401
139
140 return [
141 ("elastic_transport", "Transport.perform_request"),
142 ]
143 except ImportError:
144 return [
145 ("elasticsearch.transport", "Transport.perform_request"),
146 ]
147
148 def call(self, module, method, wrapped, instance, args, kwargs):
149 with elasticapm.capture_span(
150 self._get_signature(args, kwargs),
151 span_type="db",
152 span_subtype="elasticsearch",
153 span_action="query",
154 extra={},
155 skip_frames=2,
156 leaf=True,
157 ) as span:
158 result_data = wrapped(*args, **kwargs)
159
160 hits = self._get_hits(result_data)
161 if hits:
162 span.context["db"]["rows_affected"] = hits
163
164 return result_data
165
166 def _get_signature(self, args, kwargs):
167 args_len = len(args)
168 http_method = args[0] if args_len else kwargs.get("method")
169 http_path = args[1] if args_len > 1 else kwargs.get("url")
170 http_path = http_path.split("?", 1)[0] # we don't want to capture a potential query string in the span name
171
172 return "ES %s %s" % (http_method, http_path)
173
174 def _get_hits(self, result) -> Optional[int]:
175 if getattr(result, "body", None) and "hits" in result.body: # ES >= 8
176 return result.body["hits"]["total"]["value"]
177 elif isinstance(result, dict) and "hits" in result:
178 return (
179 result["hits"]["total"]["value"]
180 if isinstance(result["hits"]["total"], dict)
181 else result["hits"]["total"]
182 )
183
[end of elasticapm/instrumentation/packages/elasticsearch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/instrumentation/packages/elasticsearch.py b/elasticapm/instrumentation/packages/elasticsearch.py
--- a/elasticapm/instrumentation/packages/elasticsearch.py
+++ b/elasticapm/instrumentation/packages/elasticsearch.py
@@ -71,9 +71,13 @@
result = wrapped(*args, **kwargs)
if hasattr(result, "meta"): # elasticsearch-py 8.x+
status_code = result.meta.status
+ cluster = result.meta.headers.get("x-found-handling-cluster")
else:
status_code = result[0]
+ cluster = result[1].get("x-found-handling-cluster")
span.context["http"] = {"status_code": status_code}
+ if cluster:
+ span.context["db"] = {"instance": cluster}
return result
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/elasticsearch.py b/elasticapm/instrumentation/packages/elasticsearch.py\n--- a/elasticapm/instrumentation/packages/elasticsearch.py\n+++ b/elasticapm/instrumentation/packages/elasticsearch.py\n@@ -71,9 +71,13 @@\n result = wrapped(*args, **kwargs)\n if hasattr(result, \"meta\"): # elasticsearch-py 8.x+\n status_code = result.meta.status\n+ cluster = result.meta.headers.get(\"x-found-handling-cluster\")\n else:\n status_code = result[0]\n+ cluster = result[1].get(\"x-found-handling-cluster\")\n span.context[\"http\"] = {\"status_code\": status_code}\n+ if cluster:\n+ span.context[\"db\"] = {\"instance\": cluster}\n \n return result\n", "issue": "[META 659] Backend dependencies granularity for NoSQL and Messaging\nSee meta issue for the description and details:\r\n- Meta issue: https://github.com/elastic/apm/issues/659\r\n\r\nThis is done for everything except for Elasticsearch.\r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom __future__ import absolute_import\n\nimport re\nfrom typing import Optional\nfrom urllib.parse import parse_qs, urlparse\n\nimport elasticapm\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import DroppedSpan, execution_context\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.instrument\")\n\nshould_capture_body_re = re.compile(\"/(_search|_msearch|_count|_async_search|_sql|_eql)(/|$)\")\n\n\nclass ElasticsearchConnectionInstrumentation(AbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport._node._http_urllib3\", \"Urllib3HttpNode.perform_request\"),\n (\"elastic_transport._node._http_requests\", \"RequestsHttpNode.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch.connection.http_urllib3\", \"Urllib3HttpConnection.perform_request\"),\n (\"elasticsearch.connection.http_requests\", \"RequestsHttpConnection.perform_request\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\n if not span or isinstance(span, DroppedSpan):\n return wrapped(*args, **kwargs)\n\n self._update_context_by_request_data(span.context, instance, args, kwargs)\n\n result = wrapped(*args, **kwargs)\n if hasattr(result, \"meta\"): # elasticsearch-py 8.x+\n status_code = result.meta.status\n else:\n status_code = result[0]\n span.context[\"http\"] = {\"status_code\": status_code}\n\n return result\n\n def _update_context_by_request_data(self, context, instance, args, kwargs):\n args_len = len(args)\n url = args[1] if args_len > 1 else kwargs.get(\"url\")\n params = args[2] if args_len > 2 else kwargs.get(\"params\")\n body_serialized = args[3] if args_len > 3 else kwargs.get(\"body\")\n\n if \"?\" in url and not params:\n url, qs = url.split(\"?\", 1)\n params = {k: v[0] for k, v in parse_qs(qs).items()}\n\n should_capture_body = bool(should_capture_body_re.search(url))\n\n context[\"db\"] = {\"type\": \"elasticsearch\"}\n if should_capture_body:\n query = []\n # using both q AND body is allowed in some API endpoints / ES versions,\n # but not in others. 
We simply capture both if they are there so the\n # user can see it.\n if params and \"q\" in params:\n # 'q' may already be encoded to a byte string at this point.\n # We assume utf8, which is the default\n q = params[\"q\"]\n if isinstance(q, bytes):\n q = q.decode(\"utf-8\", errors=\"replace\")\n query.append(\"q=\" + q)\n if body_serialized:\n if isinstance(body_serialized, bytes):\n query.append(body_serialized.decode(\"utf-8\", errors=\"replace\"))\n else:\n query.append(body_serialized)\n if query:\n context[\"db\"][\"statement\"] = \"\\n\\n\".join(query)\n\n # ES5: `host` is URL, no `port` attribute\n # ES6, ES7: `host` URL, `hostname` is host, `port` is port\n # ES8: `host` is hostname, no `hostname` attribute, `port` is `port`\n if not hasattr(instance, \"port\"):\n # ES5, parse hostname and port from URL stored in `host`\n parsed_url = urlparse(instance.host)\n host = parsed_url.hostname\n port = parsed_url.port\n elif not hasattr(instance, \"hostname\"):\n # ES8 (and up, one can hope)\n host = instance.host\n port = instance.port\n else:\n # ES6, ES7\n host = instance.hostname\n port = instance.port\n\n context[\"destination\"] = {\"address\": host, \"port\": port}\n\n\nclass ElasticsearchTransportInstrumentation(AbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport\", \"Transport.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch.transport\", \"Transport.perform_request\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n with elasticapm.capture_span(\n self._get_signature(args, kwargs),\n span_type=\"db\",\n span_subtype=\"elasticsearch\",\n span_action=\"query\",\n extra={},\n skip_frames=2,\n leaf=True,\n ) as span:\n result_data = wrapped(*args, **kwargs)\n\n hits = self._get_hits(result_data)\n if hits:\n span.context[\"db\"][\"rows_affected\"] = hits\n\n return result_data\n\n def _get_signature(self, args, kwargs):\n args_len = len(args)\n http_method = args[0] if args_len else kwargs.get(\"method\")\n http_path = args[1] if args_len > 1 else kwargs.get(\"url\")\n http_path = http_path.split(\"?\", 1)[0] # we don't want to capture a potential query string in the span name\n\n return \"ES %s %s\" % (http_method, http_path)\n\n def _get_hits(self, result) -> Optional[int]:\n if getattr(result, \"body\", None) and \"hits\" in result.body: # ES >= 8\n return result.body[\"hits\"][\"total\"][\"value\"]\n elif isinstance(result, dict) and \"hits\" in result:\n return (\n result[\"hits\"][\"total\"][\"value\"]\n if isinstance(result[\"hits\"][\"total\"], dict)\n else result[\"hits\"][\"total\"]\n )\n", "path": "elasticapm/instrumentation/packages/elasticsearch.py"}]} | 2,692 | 184 |
gh_patches_debug_14354 | rasdani/github-patches | git_diff | azavea__raster-vision-746 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Evaluation for object detection with AOIs crashes
```
Running evaluator: ObjectDetectionEvaluator...
2019-03-28 16:47:07:rastervision.evaluation.classification_evaluator: INFO - Computing evaluation for scene 01986917-30ea-4f7f-8e01-985d73b8aa2a...
Traceback (most recent call last):
File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/src/rastervision/__main__.py", line 17, in <module>
rv.main()
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/opt/src/rastervision/cli/main.py", line 260, in run_command
rv.runner.CommandRunner.run(command_config_uri)
File "/opt/src/rastervision/runner/command_runner.py", line 11, in run
CommandRunner.run_from_proto(msg)
File "/opt/src/rastervision/runner/command_runner.py", line 17, in run_from_proto
command.run()
File "/opt/src/rastervision/command/eval_command.py", line 24, in run
evaluator.process(scenes, tmp_dir)
File "/opt/src/rastervision/evaluation/classification_evaluator.py", line 36, in process
scene.aoi_polygons)
File "/opt/src/rastervision/data/label/object_detection_labels.py", line 70, in filter_by_aoi
np.array(new_boxes), np.array(new_class_ids), np.array(new_scores))
File "/opt/src/rastervision/data/label/object_detection_labels.py", line 27, in __init__
self.boxlist = BoxList(npboxes)
File "/opt/tf-models/object_detection/utils/np_box_list.py", line 46, in __init__
raise ValueError('Invalid dimensions for box data.')
ValueError: Invalid dimensions for box data.
/tmp/tmpwie3_vrf/tmp41bj3kgh/Makefile:9: recipe for target '3' failed
make: *** [3] Error 1
make: *** Waiting for unfinished jobs....
```
</issue>
<code>
[start of rastervision/data/label/object_detection_labels.py]
1 import numpy as np
2 from shapely.geometry import shape
3
4 from rastervision.core.box import Box
5 from rastervision.data.label import Labels
6
7
8 class ObjectDetectionLabels(Labels):
9 """A set of boxes and associated class_ids and scores.
10
11 Implemented using the Tensorflow Object Detection API's BoxList class.
12 """
13
14 def __init__(self, npboxes, class_ids, scores=None):
15 """Construct a set of object detection labels.
16
17 Args:
18 npboxes: float numpy array of size nx4 with cols
19 ymin, xmin, ymax, xmax. Should be in pixel coordinates within
20 the global frame of reference.
21 class_ids: int numpy array of size n with class ids starting at 1
22 scores: float numpy array of size n
23 """
24 # Lazily load TF Object Detection
25 from object_detection.utils.np_box_list import BoxList
26
27 self.boxlist = BoxList(npboxes)
28 # This field name actually needs to be 'classes' to be able to use
29 # certain utility functions in the TF Object Detection API.
30 self.boxlist.add_field('classes', class_ids)
31 # We need to ensure that there is always a scores field so that the
32 # concatenate method will work with empty labels objects.
33 if scores is None:
34 scores = np.ones(class_ids.shape)
35 self.boxlist.add_field('scores', scores)
36
37 def __add__(self, other):
38 return ObjectDetectionLabels.concatenate(self, other)
39
40 def __eq__(self, other):
41 return (isinstance(other, ObjectDetectionLabels)
42 and self.to_dict() == other.to_dict())
43
44 def assert_equal(self, expected_labels):
45 np.testing.assert_array_equal(self.get_npboxes(),
46 expected_labels.get_npboxes())
47 np.testing.assert_array_equal(self.get_class_ids(),
48 expected_labels.get_class_ids())
49 np.testing.assert_array_equal(self.get_scores(),
50 expected_labels.get_scores())
51
52 def filter_by_aoi(self, aoi_polygons):
53 boxes = self.get_boxes()
54 class_ids = self.get_class_ids()
55 scores = self.get_scores()
56
57 new_boxes = []
58 new_class_ids = []
59 new_scores = []
60 for box, class_id, score in zip(boxes, class_ids, scores):
61 box_poly = box.to_shapely()
62 for aoi in aoi_polygons:
63 if box_poly.within(aoi):
64 new_boxes.append(box)
65 new_class_ids.append(class_id)
66 new_scores.append(score)
67 break
68
69 return ObjectDetectionLabels(
70 np.array(new_boxes), np.array(new_class_ids), np.array(new_scores))
71
72 @staticmethod
73 def make_empty():
74 npboxes = np.empty((0, 4))
75 class_ids = np.empty((0, ))
76 scores = np.empty((0, ))
77 return ObjectDetectionLabels(npboxes, class_ids, scores)
78
79 @staticmethod
80 def from_boxlist(boxlist):
81 """Make ObjectDetectionLabels from BoxList object."""
82 scores = (boxlist.get_field('scores')
83 if boxlist.has_field('scores') else None)
84 return ObjectDetectionLabels(
85 boxlist.get(), boxlist.get_field('classes'), scores=scores)
86
87 @staticmethod
88 def from_geojson(geojson, extent=None):
89 """Convert GeoJSON to ObjectDetectionLabels object.
90
91 If extent is provided, filter out the boxes that lie "more than a little
92 bit" outside the extent.
93
94 Args:
95 geojson: (dict) normalized GeoJSON (see VectorSource)
96 extent: (Box) in pixel coords
97
98 Returns:
99 ObjectDetectionLabels
100 """
101 boxes = []
102 class_ids = []
103 scores = []
104
105 for f in geojson['features']:
106 geom = shape(f['geometry'])
107 (xmin, ymin, xmax, ymax) = geom.bounds
108 boxes.append(Box(ymin, xmin, ymax, xmax))
109
110 props = f['properties']
111 class_ids.append(props['class_id'])
112 scores.append(props.get('score', 1.0))
113
114 if len(boxes):
115 boxes = np.array(
116 [box.npbox_format() for box in boxes], dtype=float)
117 class_ids = np.array(class_ids)
118 scores = np.array(scores)
119 labels = ObjectDetectionLabels(boxes, class_ids, scores=scores)
120 else:
121 labels = ObjectDetectionLabels.make_empty()
122
123 if extent is not None:
124 labels = ObjectDetectionLabels.get_overlapping(
125 labels, extent, ioa_thresh=0.8, clip=True)
126 return labels
127
128 def get_boxes(self):
129 """Return list of Boxes."""
130 return [Box.from_npbox(npbox) for npbox in self.boxlist.get()]
131
132 def get_npboxes(self):
133 return self.boxlist.get()
134
135 def get_scores(self):
136 if self.boxlist.has_field('scores'):
137 return self.boxlist.get_field('scores')
138 return None
139
140 def get_class_ids(self):
141 return self.boxlist.get_field('classes')
142
143 def __len__(self):
144 return self.boxlist.get().shape[0]
145
146 def __str__(self):
147 return str(self.boxlist.get())
148
149 def to_boxlist(self):
150 return self.boxlist
151
152 def to_dict(self):
153 """Returns a dict version of these labels.
154
155 The Dict has a Box as a key, and a tuple of (class_id, score)
156 as the values.
157 """
158 d = {}
159 boxes = list(map(Box.from_npbox, self.get_npboxes()))
160 classes = list(self.get_class_ids())
161 scores = list(self.get_scores())
162 for box, class_id, score in zip(boxes, classes, scores):
163 d[box.tuple_format()] = (class_id, score)
164 return d
165
166 @staticmethod
167 def local_to_global(npboxes, window):
168 """Convert from local to global coordinates.
169
170 The local coordinates are row/col within the window frame of reference.
171 The global coordinates are row/col within the extent of a RasterSource.
172 """
173 xmin = window.xmin
174 ymin = window.ymin
175 return npboxes + np.array([[ymin, xmin, ymin, xmin]])
176
177 @staticmethod
178 def global_to_local(npboxes, window):
179 """Convert from global to local coordinates.
180
181 The global coordinates are row/col within the extent of a RasterSource.
182 The local coordinates are row/col within the window frame of reference.
183 """
184 xmin = window.xmin
185 ymin = window.ymin
186 return npboxes - np.array([[ymin, xmin, ymin, xmin]])
187
188 @staticmethod
189 def local_to_normalized(npboxes, window):
190 """Convert from local to normalized coordinates.
191
192 The local coordinates are row/col within the window frame of reference.
193 Normalized coordinates range from 0 to 1 on each (height/width) axis.
194 """
195 height = window.get_height()
196 width = window.get_width()
197 return npboxes / np.array([[height, width, height, width]])
198
199 @staticmethod
200 def normalized_to_local(npboxes, window):
201 """Convert from normalized to local coordinates.
202
203 Normalized coordinates range from 0 to 1 on each (height/width) axis.
204 The local coordinates are row/col within the window frame of reference.
205 """
206 height = window.get_height()
207 width = window.get_width()
208 return npboxes * np.array([[height, width, height, width]])
209
210 @staticmethod
211 def get_overlapping(labels, window, ioa_thresh=0.000001, clip=False):
212 """Return subset of labels that overlap with window.
213
214 Args:
215 labels: ObjectDetectionLabels
216 window: Box
217 ioa_thresh: the minimum IOA for a box to be considered as
218 overlapping
219 clip: if True, clip label boxes to the window
220 """
221 # Lazily load TF Object Detection
222 from object_detection.utils.np_box_list import BoxList
223 from object_detection.utils.np_box_list_ops import (
224 prune_non_overlapping_boxes, clip_to_window)
225
226 window_npbox = window.npbox_format()
227 window_boxlist = BoxList(np.expand_dims(window_npbox, axis=0))
228 boxlist = prune_non_overlapping_boxes(
229 labels.boxlist, window_boxlist, minoverlap=ioa_thresh)
230 if clip:
231 boxlist = clip_to_window(boxlist, window_npbox)
232
233 return ObjectDetectionLabels.from_boxlist(boxlist)
234
235 @staticmethod
236 def concatenate(labels1, labels2):
237 """Return concatenation of labels.
238
239 Args:
240 labels1: ObjectDetectionLabels
241 labels2: ObjectDetectionLabels
242 """
243 from object_detection.utils.np_box_list_ops import concatenate
244
245 new_boxlist = concatenate([labels1.to_boxlist(), labels2.to_boxlist()])
246 return ObjectDetectionLabels.from_boxlist(new_boxlist)
247
248 @staticmethod
249 def prune_duplicates(labels, score_thresh, merge_thresh):
250 """Remove duplicate boxes.
251
252 Runs non-maximum suppression to remove duplicate boxes that result from
253 sliding window prediction algorithm.
254
255 Args:
256 labels: ObjectDetectionLabels
257 score_thresh: the minimum allowed score of boxes
258 merge_thresh: the minimum IOA allowed when merging two boxes
259 together
260
261 Returns:
262 ObjectDetectionLabels
263 """
264 from object_detection.utils.np_box_list_ops import non_max_suppression
265
266 max_output_size = 1000000
267 pruned_boxlist = non_max_suppression(
268 labels.boxlist,
269 max_output_size=max_output_size,
270 iou_threshold=merge_thresh,
271 score_threshold=score_thresh)
272 return ObjectDetectionLabels.from_boxlist(pruned_boxlist)
273
[end of rastervision/data/label/object_detection_labels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rastervision/data/label/object_detection_labels.py b/rastervision/data/label/object_detection_labels.py
--- a/rastervision/data/label/object_detection_labels.py
+++ b/rastervision/data/label/object_detection_labels.py
@@ -61,11 +61,14 @@
box_poly = box.to_shapely()
for aoi in aoi_polygons:
if box_poly.within(aoi):
- new_boxes.append(box)
+ new_boxes.append(box.npbox_format())
new_class_ids.append(class_id)
new_scores.append(score)
break
+ if len(new_boxes) == 0:
+ return ObjectDetectionLabels.make_empty()
+
return ObjectDetectionLabels(
np.array(new_boxes), np.array(new_class_ids), np.array(new_scores))
| {"golden_diff": "diff --git a/rastervision/data/label/object_detection_labels.py b/rastervision/data/label/object_detection_labels.py\n--- a/rastervision/data/label/object_detection_labels.py\n+++ b/rastervision/data/label/object_detection_labels.py\n@@ -61,11 +61,14 @@\n box_poly = box.to_shapely()\n for aoi in aoi_polygons:\n if box_poly.within(aoi):\n- new_boxes.append(box)\n+ new_boxes.append(box.npbox_format())\n new_class_ids.append(class_id)\n new_scores.append(score)\n break\n \n+ if len(new_boxes) == 0:\n+ return ObjectDetectionLabels.make_empty()\n+\n return ObjectDetectionLabels(\n np.array(new_boxes), np.array(new_class_ids), np.array(new_scores))\n", "issue": "Evaluation for object detection with AOIs crashes\n```\r\nRunning evaluator: ObjectDetectionEvaluator...\r\n2019-03-28 16:47:07:rastervision.evaluation.classification_evaluator: INFO - Computing evaluation for scene 01986917-30ea-4f7f-8e01-985d73b8aa2a...\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.5/runpy.py\", line 184, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.5/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/opt/src/rastervision/__main__.py\", line 17, in <module>\r\n rv.main()\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/opt/src/rastervision/cli/main.py\", line 260, in run_command\r\n rv.runner.CommandRunner.run(command_config_uri)\r\n File \"/opt/src/rastervision/runner/command_runner.py\", line 11, in run\r\n CommandRunner.run_from_proto(msg)\r\n File \"/opt/src/rastervision/runner/command_runner.py\", line 17, in run_from_proto\r\n command.run()\r\n File \"/opt/src/rastervision/command/eval_command.py\", line 24, in run\r\n evaluator.process(scenes, tmp_dir)\r\n File \"/opt/src/rastervision/evaluation/classification_evaluator.py\", line 36, in process\r\n scene.aoi_polygons)\r\n File \"/opt/src/rastervision/data/label/object_detection_labels.py\", line 70, in filter_by_aoi\r\n np.array(new_boxes), np.array(new_class_ids), np.array(new_scores))\r\n File \"/opt/src/rastervision/data/label/object_detection_labels.py\", line 27, in __init__\r\n self.boxlist = BoxList(npboxes)\r\n File \"/opt/tf-models/object_detection/utils/np_box_list.py\", line 46, in __init__\r\n raise ValueError('Invalid dimensions for box data.')\r\nValueError: Invalid dimensions for box data.\r\n/tmp/tmpwie3_vrf/tmp41bj3kgh/Makefile:9: recipe for target '3' failed\r\nmake: *** [3] Error 1\r\nmake: *** Waiting for unfinished jobs....\r\n```\n", "before_files": [{"content": "import numpy as np\nfrom shapely.geometry import shape\n\nfrom rastervision.core.box import Box\nfrom rastervision.data.label import Labels\n\n\nclass ObjectDetectionLabels(Labels):\n \"\"\"A set of boxes and associated class_ids and scores.\n\n Implemented using the Tensorflow Object Detection API's BoxList class.\n \"\"\"\n\n def __init__(self, npboxes, class_ids, scores=None):\n 
\"\"\"Construct a set of object detection labels.\n\n Args:\n npboxes: float numpy array of size nx4 with cols\n ymin, xmin, ymax, xmax. Should be in pixel coordinates within\n the global frame of reference.\n class_ids: int numpy array of size n with class ids starting at 1\n scores: float numpy array of size n\n \"\"\"\n # Lazily load TF Object Detection\n from object_detection.utils.np_box_list import BoxList\n\n self.boxlist = BoxList(npboxes)\n # This field name actually needs to be 'classes' to be able to use\n # certain utility functions in the TF Object Detection API.\n self.boxlist.add_field('classes', class_ids)\n # We need to ensure that there is always a scores field so that the\n # concatenate method will work with empty labels objects.\n if scores is None:\n scores = np.ones(class_ids.shape)\n self.boxlist.add_field('scores', scores)\n\n def __add__(self, other):\n return ObjectDetectionLabels.concatenate(self, other)\n\n def __eq__(self, other):\n return (isinstance(other, ObjectDetectionLabels)\n and self.to_dict() == other.to_dict())\n\n def assert_equal(self, expected_labels):\n np.testing.assert_array_equal(self.get_npboxes(),\n expected_labels.get_npboxes())\n np.testing.assert_array_equal(self.get_class_ids(),\n expected_labels.get_class_ids())\n np.testing.assert_array_equal(self.get_scores(),\n expected_labels.get_scores())\n\n def filter_by_aoi(self, aoi_polygons):\n boxes = self.get_boxes()\n class_ids = self.get_class_ids()\n scores = self.get_scores()\n\n new_boxes = []\n new_class_ids = []\n new_scores = []\n for box, class_id, score in zip(boxes, class_ids, scores):\n box_poly = box.to_shapely()\n for aoi in aoi_polygons:\n if box_poly.within(aoi):\n new_boxes.append(box)\n new_class_ids.append(class_id)\n new_scores.append(score)\n break\n\n return ObjectDetectionLabels(\n np.array(new_boxes), np.array(new_class_ids), np.array(new_scores))\n\n @staticmethod\n def make_empty():\n npboxes = np.empty((0, 4))\n class_ids = np.empty((0, ))\n scores = np.empty((0, ))\n return ObjectDetectionLabels(npboxes, class_ids, scores)\n\n @staticmethod\n def from_boxlist(boxlist):\n \"\"\"Make ObjectDetectionLabels from BoxList object.\"\"\"\n scores = (boxlist.get_field('scores')\n if boxlist.has_field('scores') else None)\n return ObjectDetectionLabels(\n boxlist.get(), boxlist.get_field('classes'), scores=scores)\n\n @staticmethod\n def from_geojson(geojson, extent=None):\n \"\"\"Convert GeoJSON to ObjectDetectionLabels object.\n\n If extent is provided, filter out the boxes that lie \"more than a little\n bit\" outside the extent.\n\n Args:\n geojson: (dict) normalized GeoJSON (see VectorSource)\n extent: (Box) in pixel coords\n\n Returns:\n ObjectDetectionLabels\n \"\"\"\n boxes = []\n class_ids = []\n scores = []\n\n for f in geojson['features']:\n geom = shape(f['geometry'])\n (xmin, ymin, xmax, ymax) = geom.bounds\n boxes.append(Box(ymin, xmin, ymax, xmax))\n\n props = f['properties']\n class_ids.append(props['class_id'])\n scores.append(props.get('score', 1.0))\n\n if len(boxes):\n boxes = np.array(\n [box.npbox_format() for box in boxes], dtype=float)\n class_ids = np.array(class_ids)\n scores = np.array(scores)\n labels = ObjectDetectionLabels(boxes, class_ids, scores=scores)\n else:\n labels = ObjectDetectionLabels.make_empty()\n\n if extent is not None:\n labels = ObjectDetectionLabels.get_overlapping(\n labels, extent, ioa_thresh=0.8, clip=True)\n return labels\n\n def get_boxes(self):\n \"\"\"Return list of Boxes.\"\"\"\n return [Box.from_npbox(npbox) for npbox 
in self.boxlist.get()]\n\n def get_npboxes(self):\n return self.boxlist.get()\n\n def get_scores(self):\n if self.boxlist.has_field('scores'):\n return self.boxlist.get_field('scores')\n return None\n\n def get_class_ids(self):\n return self.boxlist.get_field('classes')\n\n def __len__(self):\n return self.boxlist.get().shape[0]\n\n def __str__(self):\n return str(self.boxlist.get())\n\n def to_boxlist(self):\n return self.boxlist\n\n def to_dict(self):\n \"\"\"Returns a dict version of these labels.\n\n The Dict has a Box as a key, and a tuple of (class_id, score)\n as the values.\n \"\"\"\n d = {}\n boxes = list(map(Box.from_npbox, self.get_npboxes()))\n classes = list(self.get_class_ids())\n scores = list(self.get_scores())\n for box, class_id, score in zip(boxes, classes, scores):\n d[box.tuple_format()] = (class_id, score)\n return d\n\n @staticmethod\n def local_to_global(npboxes, window):\n \"\"\"Convert from local to global coordinates.\n\n The local coordinates are row/col within the window frame of reference.\n The global coordinates are row/col within the extent of a RasterSource.\n \"\"\"\n xmin = window.xmin\n ymin = window.ymin\n return npboxes + np.array([[ymin, xmin, ymin, xmin]])\n\n @staticmethod\n def global_to_local(npboxes, window):\n \"\"\"Convert from global to local coordinates.\n\n The global coordinates are row/col within the extent of a RasterSource.\n The local coordinates are row/col within the window frame of reference.\n \"\"\"\n xmin = window.xmin\n ymin = window.ymin\n return npboxes - np.array([[ymin, xmin, ymin, xmin]])\n\n @staticmethod\n def local_to_normalized(npboxes, window):\n \"\"\"Convert from local to normalized coordinates.\n\n The local coordinates are row/col within the window frame of reference.\n Normalized coordinates range from 0 to 1 on each (height/width) axis.\n \"\"\"\n height = window.get_height()\n width = window.get_width()\n return npboxes / np.array([[height, width, height, width]])\n\n @staticmethod\n def normalized_to_local(npboxes, window):\n \"\"\"Convert from normalized to local coordinates.\n\n Normalized coordinates range from 0 to 1 on each (height/width) axis.\n The local coordinates are row/col within the window frame of reference.\n \"\"\"\n height = window.get_height()\n width = window.get_width()\n return npboxes * np.array([[height, width, height, width]])\n\n @staticmethod\n def get_overlapping(labels, window, ioa_thresh=0.000001, clip=False):\n \"\"\"Return subset of labels that overlap with window.\n\n Args:\n labels: ObjectDetectionLabels\n window: Box\n ioa_thresh: the minimum IOA for a box to be considered as\n overlapping\n clip: if True, clip label boxes to the window\n \"\"\"\n # Lazily load TF Object Detection\n from object_detection.utils.np_box_list import BoxList\n from object_detection.utils.np_box_list_ops import (\n prune_non_overlapping_boxes, clip_to_window)\n\n window_npbox = window.npbox_format()\n window_boxlist = BoxList(np.expand_dims(window_npbox, axis=0))\n boxlist = prune_non_overlapping_boxes(\n labels.boxlist, window_boxlist, minoverlap=ioa_thresh)\n if clip:\n boxlist = clip_to_window(boxlist, window_npbox)\n\n return ObjectDetectionLabels.from_boxlist(boxlist)\n\n @staticmethod\n def concatenate(labels1, labels2):\n \"\"\"Return concatenation of labels.\n\n Args:\n labels1: ObjectDetectionLabels\n labels2: ObjectDetectionLabels\n \"\"\"\n from object_detection.utils.np_box_list_ops import concatenate\n\n new_boxlist = concatenate([labels1.to_boxlist(), labels2.to_boxlist()])\n return 
ObjectDetectionLabels.from_boxlist(new_boxlist)\n\n @staticmethod\n def prune_duplicates(labels, score_thresh, merge_thresh):\n \"\"\"Remove duplicate boxes.\n\n Runs non-maximum suppression to remove duplicate boxes that result from\n sliding window prediction algorithm.\n\n Args:\n labels: ObjectDetectionLabels\n score_thresh: the minimum allowed score of boxes\n merge_thresh: the minimum IOA allowed when merging two boxes\n together\n\n Returns:\n ObjectDetectionLabels\n \"\"\"\n from object_detection.utils.np_box_list_ops import non_max_suppression\n\n max_output_size = 1000000\n pruned_boxlist = non_max_suppression(\n labels.boxlist,\n max_output_size=max_output_size,\n iou_threshold=merge_thresh,\n score_threshold=score_thresh)\n return ObjectDetectionLabels.from_boxlist(pruned_boxlist)\n", "path": "rastervision/data/label/object_detection_labels.py"}]} | 4,057 | 178 |
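The rastervision fix above appends `box.npbox_format()` rather than the `Box` object itself, and returns `make_empty()` when the AOI filter keeps nothing; both changes protect the `(N, 4)` float-array shape that the BoxList constructor demands (the "Invalid dimensions for box data" crash in the issue). A small NumPy-only sketch of that shape invariant:

```python
import numpy as np

# Kept boxes must become rows of an (N, 4) array; an empty result must still be
# a 2-D (0, 4) array, not an empty Python list of Box objects.
kept_npboxes = [np.array([0., 0., 10., 10.]), np.array([5., 5., 20., 20.])]
npboxes = np.array(kept_npboxes, dtype=float) if kept_npboxes else np.empty((0, 4))
assert npboxes.ndim == 2 and npboxes.shape[1] == 4
```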
gh_patches_debug_4397 | rasdani/github-patches | git_diff | gratipay__gratipay.com-1934 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sign up with a Twitter account fails.
```
Internal server error, program!
Traceback (most recent call last):
File "/home/sim6/www.gittip.com/env/local/lib/python2.7/site-packages/algorithm.py", line 288, in run
new_state = function(**deps.as_kwargs)
File "/home/sim6/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/algorithms/website.py", line 88, in get_response_for_resource
return {'response': resource.respond(request)}
File "/home/sim6/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/dynamic_resource.py", line 52, in respond
exec self.pages[1] in context
File "/home/sim6/www.gittip.com/www/on/twitter/associate.spt", line 77, in
account = twitter.TwitterAccount(website.db, user_info['id'], user_info)
File "/home/sim6/www.gittip.com/gittip/elsewhere/__init__.py", line 80, in __init__
typecheck(user_id, (int, unicode), user_info, (None, dict))
File "/home/sim6/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/utils.py", line 377, in typecheck
raise TypeError(msg)
TypeError: Check #1: 2304869623L is of type long, not one of: int, unicode.
```
</issue>
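For context: under Python 2, Twitter's numeric IDs arrive as `long` once they exceed the platform `int`, and the whitelist passed to Aspen's `typecheck` only allowed `int` and `unicode`; the patch further below widens that whitelist to include `long`. A minimal sketch of the failure (Python 2 only; the `typecheck` here is a simplified stand-in, not Aspen's real signature):

```python
# Python 2 sketch; `long` and `unicode` do not exist on Python 3.
def typecheck(value, allowed):  # simplified stand-in for aspen.utils.typecheck
    if not isinstance(value, allowed):
        raise TypeError('%r is of type %s, not one of: %s' % (
            value, type(value).__name__, ', '.join(t.__name__ for t in allowed)))

twitter_id = 2304869623L  # the long literal from the traceback above

try:
    typecheck(twitter_id, (int, unicode))        # original whitelist
except TypeError as exc:
    print(exc)                                   # mirrors the reported error

typecheck(twitter_id, (int, unicode, long))      # passes once `long` is allowed
```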
<code>
[start of gittip/elsewhere/__init__.py]
1 """This subpackage contains functionality for working with accounts elsewhere.
2 """
3 from __future__ import print_function, unicode_literals
4 from collections import OrderedDict
5
6 from aspen.utils import typecheck
7 from aspen import json
8 from psycopg2 import IntegrityError
9
10 import gittip
11 from gittip.exceptions import ProblemChangingUsername, UnknownPlatform
12 from gittip.utils.username import reserve_a_random_username
13
14
15 ACTIONS = [u'opt-in', u'connect', u'lock', u'unlock']
16
17
18 # to add a new elsewhere/platform:
19 # 1) add its name (also the name of its module) to this list.
20 # it's best to append it; this ordering is used in templates.
21 # 2) inherit from AccountElsewhere in the platform class
22 #
23 # platform_modules will populate the platform class automatically in configure-aspen.
24 platforms_ordered = (
25 'twitter',
26 'github',
27 'bitbucket',
28 'bountysource',
29 'venmo',
30 'openstreetmap'
31 )
32
33 # init-time key setup ensures the future ordering of platform_classes will match
34 # platforms_ordered, since overwriting entries will maintain their order.
35 platform_classes = OrderedDict([(platform, None) for platform in platforms_ordered])
36
37
38 class _RegisterPlatformMeta(type):
39 """Tied to AccountElsewhere to enable registration by the platform field.
40 """
41
42 def __new__(cls, name, bases, dct):
43 c = super(_RegisterPlatformMeta, cls).__new__(cls, name, bases, dct)
44
45 # * register the platform
46 # * verify it was added at init-time
47 # * register the subclass's json encoder with aspen
48 c_platform = getattr(c, 'platform')
49 if name == 'AccountElsewhere':
50 pass
51 elif c_platform not in platform_classes:
52 raise UnknownPlatform(c_platform) # has it been added to platform_classes init?
53 else:
54 platform_classes[c_platform] = c
55
56 # aspen's json encoder registry does not take class hierarchies into account,
57 # so we need to register the subclasses explicitly.
58 json.register_encoder(c, c.to_json_compatible_object)
59
60 return c
61
62 class AccountElsewhere(object):
63
64 __metaclass__ = _RegisterPlatformMeta
65
66 platform = None # set in subclass
67
68 # only fields in this set will be encoded
69 json_encode_field_whitelist = set([
70 'id', 'is_locked', 'participant', 'platform', 'user_id', 'user_info',
71 ])
72
73 def __init__(self, db, user_id, user_info=None, existing_record=None):
74 """Either:
75 - Takes a user_id and user_info, and updates the database.
76
77 Or:
78 - Takes a user_id and existing_record, and constructs a "model" object out of the record
79 """
80 typecheck(user_id, (int, unicode), user_info, (None, dict))
81 self.user_id = unicode(user_id)
82 self.db = db
83
84 if user_info is not None:
85 a,b,c,d = self.upsert(user_info)
86
87 self.participant = a
88 self.is_claimed = b
89 self.is_locked = c
90 self.balance = d
91
92 self.user_info = user_info
93
94 # hack to make this into a weird pseudo-model that can share convenience methods
95 elif existing_record is not None:
96 self.participant = existing_record.participant
97 self.is_claimed, self.is_locked, self.balance = self.get_misc_info(self.participant)
98 self.user_info = existing_record.user_info
99 self.record = existing_record
100
101 def to_json_compatible_object(self):
102 """
103 This is registered as an aspen.json encoder in configure-aspen
104 for all subclasses of this class.
105
106 It only exports fields in the whitelist.
107 """
108 output = {k: v for (k,v) in self.record._asdict().items()
109 if k in self.json_encode_field_whitelist}
110
111 return output
112
113 def set_is_locked(self, is_locked):
114 self.db.run("""
115
116 UPDATE elsewhere
117 SET is_locked=%s
118 WHERE platform=%s AND user_id=%s
119
120 """, (is_locked, self.platform, self.user_id))
121
122
123 def opt_in(self, desired_username):
124 """Given a desired username, return a User object.
125 """
126 from gittip.security.user import User
127
128 self.set_is_locked(False)
129 user = User.from_username(self.participant)
130 user.sign_in()
131 assert not user.ANON, self.participant # sanity check
132 if self.is_claimed:
133 newly_claimed = False
134 else:
135 newly_claimed = True
136 user.participant.set_as_claimed()
137 try:
138 user.participant.change_username(desired_username)
139 except ProblemChangingUsername:
140 pass
141 return user, newly_claimed
142
143
144 def upsert(self, user_info):
145 """Given a dict, return a tuple.
146
147 User_id is an immutable unique identifier for the given user on the
148 given platform. Username is the user's login/username on the given
149 platform. It is only used here for logging. Specifically, we don't
150 reserve their username for them on Gittip if they're new here. We give
151 them a random username here, and they'll have a chance to change it
152 if/when they opt in. User_id and username may or may not be the same.
153 User_info is a dictionary of profile info per the named platform. All
154 platform dicts must have an id key that corresponds to the primary key
155 in the underlying table in our own db.
156
157 The return value is a tuple: (username [unicode], is_claimed [boolean],
158 is_locked [boolean], balance [Decimal]).
159
160 """
161 typecheck(user_info, dict)
162
163
164 # Insert the account if needed.
165 # =============================
166 # Do this with a transaction so that if the insert fails, the
167 # participant we reserved for them is rolled back as well.
168
169 try:
170 with self.db.get_cursor() as cursor:
171 _username = reserve_a_random_username(cursor)
172 cursor.execute( "INSERT INTO elsewhere "
173 "(platform, user_id, participant) "
174 "VALUES (%s, %s, %s)"
175 , (self.platform, self.user_id, _username)
176 )
177 except IntegrityError:
178 pass
179
180
181 # Update their user_info.
182 # =======================
183 # Cast everything to unicode, because (I believe) hstore can take any
184 # type of value, but psycopg2 can't.
185 #
186 # https://postgres.heroku.com/blog/past/2012/3/14/introducing_keyvalue_data_storage_in_heroku_postgres/
187 # http://initd.org/psycopg/docs/extras.html#hstore-data-type
188 #
189 # XXX This clobbers things, of course, such as booleans. See
190 # /on/bitbucket/%username/index.html
191
192 for k, v in user_info.items():
193 user_info[k] = unicode(v)
194
195
196 username = self.db.one("""
197
198 UPDATE elsewhere
199 SET user_info=%s
200 WHERE platform=%s AND user_id=%s
201 RETURNING participant
202
203 """, (user_info, self.platform, self.user_id))
204
205 return (username,) + self.get_misc_info(username)
206
207 def get_misc_info(self, username):
208 rec = self.db.one("""
209
210 SELECT claimed_time, balance, is_locked
211 FROM participants
212 JOIN elsewhere
213 ON participants.username=participant
214 WHERE platform=%s
215 AND participants.username=%s
216
217 """, (self.platform, username))
218
219 assert rec is not None # sanity check
220
221 return ( rec.claimed_time is not None
222 , rec.is_locked
223 , rec.balance
224 )
225
226 def set_oauth_tokens(self, access_token, refresh_token, expires):
227 """
228 Updates the elsewhere row with the given access token, refresh token, and Python datetime
229 """
230
231 self.db.run("""
232 UPDATE elsewhere
233 SET (access_token, refresh_token, expires)
234 = (%s, %s, %s)
235 WHERE platform=%s AND user_id=%s
236 """, (access_token, refresh_token, expires, self.platform, self.user_id))
237
[end of gittip/elsewhere/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gittip/elsewhere/__init__.py b/gittip/elsewhere/__init__.py
--- a/gittip/elsewhere/__init__.py
+++ b/gittip/elsewhere/__init__.py
@@ -77,7 +77,7 @@
Or:
- Takes a user_id and existing_record, and constructs a "model" object out of the record
"""
- typecheck(user_id, (int, unicode), user_info, (None, dict))
+ typecheck(user_id, (int, unicode, long), user_info, (None, dict))
self.user_id = unicode(user_id)
self.db = db
| {"golden_diff": "diff --git a/gittip/elsewhere/__init__.py b/gittip/elsewhere/__init__.py\n--- a/gittip/elsewhere/__init__.py\n+++ b/gittip/elsewhere/__init__.py\n@@ -77,7 +77,7 @@\n Or:\n - Takes a user_id and existing_record, and constructs a \"model\" object out of the record\n \"\"\"\n- typecheck(user_id, (int, unicode), user_info, (None, dict))\n+ typecheck(user_id, (int, unicode, long), user_info, (None, dict))\n self.user_id = unicode(user_id)\n self.db = db\n", "issue": "Sign up with a Twitter account fails.\n```\nInternal server error, program!\nTraceback (most recent call last):\n File \"/home/sim6/www.gittip.com/env/local/lib/python2.7/site-packages/algorithm.py\", line 288, in run\n new_state = function(**deps.as_kwargs)\n File \"/home/sim6/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/algorithms/website.py\", line 88, in get_response_for_resource\n return {'response': resource.respond(request)}\n File \"/home/sim6/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/dynamic_resource.py\", line 52, in respond\n exec self.pages[1] in context\n File \"/home/sim6/www.gittip.com/www/on/twitter/associate.spt\", line 77, in \n account = twitter.TwitterAccount(website.db, user_info['id'], user_info)\n File \"/home/sim6/www.gittip.com/gittip/elsewhere/__init__.py\", line 80, in __init__\n typecheck(user_id, (int, unicode), user_info, (None, dict))\n File \"/home/sim6/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/utils.py\", line 377, in typecheck\n raise TypeError(msg)\nTypeError: Check #1: 2304869623L is of type long, not one of: int, unicode.\n```\n\n", "before_files": [{"content": "\"\"\"This subpackage contains functionality for working with accounts elsewhere.\n\"\"\"\nfrom __future__ import print_function, unicode_literals\nfrom collections import OrderedDict\n\nfrom aspen.utils import typecheck\nfrom aspen import json\nfrom psycopg2 import IntegrityError\n\nimport gittip\nfrom gittip.exceptions import ProblemChangingUsername, UnknownPlatform\nfrom gittip.utils.username import reserve_a_random_username\n\n\nACTIONS = [u'opt-in', u'connect', u'lock', u'unlock']\n\n\n# to add a new elsewhere/platform:\n# 1) add its name (also the name of its module) to this list.\n# it's best to append it; this ordering is used in templates.\n# 2) inherit from AccountElsewhere in the platform class\n#\n# platform_modules will populate the platform class automatically in configure-aspen.\nplatforms_ordered = (\n 'twitter',\n 'github',\n 'bitbucket',\n 'bountysource',\n 'venmo',\n 'openstreetmap'\n)\n\n# init-time key setup ensures the future ordering of platform_classes will match\n# platforms_ordered, since overwriting entries will maintain their order.\nplatform_classes = OrderedDict([(platform, None) for platform in platforms_ordered])\n\n\nclass _RegisterPlatformMeta(type):\n \"\"\"Tied to AccountElsewhere to enable registration by the platform field.\n \"\"\"\n\n def __new__(cls, name, bases, dct):\n c = super(_RegisterPlatformMeta, cls).__new__(cls, name, bases, dct)\n\n # * register the platform\n # * verify it was added at init-time\n # * register the subclass's json encoder with aspen\n c_platform = getattr(c, 'platform')\n if name == 'AccountElsewhere':\n pass\n elif c_platform not in platform_classes:\n raise UnknownPlatform(c_platform) # has it been added to platform_classes init?\n else:\n platform_classes[c_platform] = c\n\n # aspen's json encoder registry does not take class hierarchies into account,\n # so we need to register the subclasses 
explicitly.\n json.register_encoder(c, c.to_json_compatible_object)\n\n return c\n\nclass AccountElsewhere(object):\n\n __metaclass__ = _RegisterPlatformMeta\n\n platform = None # set in subclass\n\n # only fields in this set will be encoded\n json_encode_field_whitelist = set([\n 'id', 'is_locked', 'participant', 'platform', 'user_id', 'user_info',\n ])\n\n def __init__(self, db, user_id, user_info=None, existing_record=None):\n \"\"\"Either:\n - Takes a user_id and user_info, and updates the database.\n\n Or:\n - Takes a user_id and existing_record, and constructs a \"model\" object out of the record\n \"\"\"\n typecheck(user_id, (int, unicode), user_info, (None, dict))\n self.user_id = unicode(user_id)\n self.db = db\n\n if user_info is not None:\n a,b,c,d = self.upsert(user_info)\n\n self.participant = a\n self.is_claimed = b\n self.is_locked = c\n self.balance = d\n\n self.user_info = user_info\n\n # hack to make this into a weird pseudo-model that can share convenience methods\n elif existing_record is not None:\n self.participant = existing_record.participant\n self.is_claimed, self.is_locked, self.balance = self.get_misc_info(self.participant)\n self.user_info = existing_record.user_info\n self.record = existing_record\n\n def to_json_compatible_object(self):\n \"\"\"\n This is registered as an aspen.json encoder in configure-aspen\n for all subclasses of this class.\n\n It only exports fields in the whitelist.\n \"\"\"\n output = {k: v for (k,v) in self.record._asdict().items()\n if k in self.json_encode_field_whitelist}\n\n return output\n\n def set_is_locked(self, is_locked):\n self.db.run(\"\"\"\n\n UPDATE elsewhere\n SET is_locked=%s\n WHERE platform=%s AND user_id=%s\n\n \"\"\", (is_locked, self.platform, self.user_id))\n\n\n def opt_in(self, desired_username):\n \"\"\"Given a desired username, return a User object.\n \"\"\"\n from gittip.security.user import User\n\n self.set_is_locked(False)\n user = User.from_username(self.participant)\n user.sign_in()\n assert not user.ANON, self.participant # sanity check\n if self.is_claimed:\n newly_claimed = False\n else:\n newly_claimed = True\n user.participant.set_as_claimed()\n try:\n user.participant.change_username(desired_username)\n except ProblemChangingUsername:\n pass\n return user, newly_claimed\n\n\n def upsert(self, user_info):\n \"\"\"Given a dict, return a tuple.\n\n User_id is an immutable unique identifier for the given user on the\n given platform. Username is the user's login/username on the given\n platform. It is only used here for logging. Specifically, we don't\n reserve their username for them on Gittip if they're new here. We give\n them a random username here, and they'll have a chance to change it\n if/when they opt in. User_id and username may or may not be the same.\n User_info is a dictionary of profile info per the named platform. 
All\n platform dicts must have an id key that corresponds to the primary key\n in the underlying table in our own db.\n\n The return value is a tuple: (username [unicode], is_claimed [boolean],\n is_locked [boolean], balance [Decimal]).\n\n \"\"\"\n typecheck(user_info, dict)\n\n\n # Insert the account if needed.\n # =============================\n # Do this with a transaction so that if the insert fails, the\n # participant we reserved for them is rolled back as well.\n\n try:\n with self.db.get_cursor() as cursor:\n _username = reserve_a_random_username(cursor)\n cursor.execute( \"INSERT INTO elsewhere \"\n \"(platform, user_id, participant) \"\n \"VALUES (%s, %s, %s)\"\n , (self.platform, self.user_id, _username)\n )\n except IntegrityError:\n pass\n\n\n # Update their user_info.\n # =======================\n # Cast everything to unicode, because (I believe) hstore can take any\n # type of value, but psycopg2 can't.\n #\n # https://postgres.heroku.com/blog/past/2012/3/14/introducing_keyvalue_data_storage_in_heroku_postgres/\n # http://initd.org/psycopg/docs/extras.html#hstore-data-type\n #\n # XXX This clobbers things, of course, such as booleans. See\n # /on/bitbucket/%username/index.html\n\n for k, v in user_info.items():\n user_info[k] = unicode(v)\n\n\n username = self.db.one(\"\"\"\n\n UPDATE elsewhere\n SET user_info=%s\n WHERE platform=%s AND user_id=%s\n RETURNING participant\n\n \"\"\", (user_info, self.platform, self.user_id))\n\n return (username,) + self.get_misc_info(username)\n\n def get_misc_info(self, username):\n rec = self.db.one(\"\"\"\n\n SELECT claimed_time, balance, is_locked\n FROM participants\n JOIN elsewhere\n ON participants.username=participant\n WHERE platform=%s\n AND participants.username=%s\n\n \"\"\", (self.platform, username))\n\n assert rec is not None # sanity check\n\n return ( rec.claimed_time is not None\n , rec.is_locked\n , rec.balance\n )\n\n def set_oauth_tokens(self, access_token, refresh_token, expires):\n \"\"\"\n Updates the elsewhere row with the given access token, refresh token, and Python datetime\n \"\"\"\n\n self.db.run(\"\"\"\n UPDATE elsewhere \n SET (access_token, refresh_token, expires) \n = (%s, %s, %s) \n WHERE platform=%s AND user_id=%s\n \"\"\", (access_token, refresh_token, expires, self.platform, self.user_id))\n", "path": "gittip/elsewhere/__init__.py"}]} | 3,286 | 148 |
gh_patches_debug_22614 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-269 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
E2502 is mistaken about IamInstanceProfile
*cfn-lint version: 0.4.2*
*Description of issue.*
Linting a template returned:
```
E2502 Property IamInstanceProfile shouldn't be an ARN for Resources/BuildkiteSpotfleet/Properties/SpotFleetRequestConfigData/LaunchSpecifications/0/IamInstanceProfile/Arn/Fn::GetAtt
```
However that property can be an ARN according to https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-spotfleet-spotfleetrequestconfigdata-launchspecifications.html#cfn-ec2-spotfleet-spotfleetrequestconfigdata-launchspecifications-iaminstanceprofile
It can be an `{"Arn": "profile_arn"}` structure.
</issue>
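For reference, the template shape the issue describes, sketched as the parsed-dict form cfn-lint walks (the profile resource name is made up, and unrelated required SpotFleet properties are omitted): in a SpotFleet launch specification the profile is an object with an `Arn` key, so a `Fn::GetAtt ... Arn` there is the documented form rather than a mistake.

```python
# Hypothetical, trimmed template fragment: IamInstanceProfile inside a SpotFleet
# LaunchSpecification takes {"Arn": ...}, unlike resources that take the profile name.
template = {
    "Resources": {
        "FleetProfile": {"Type": "AWS::IAM::InstanceProfile"},
        "BuildkiteSpotfleet": {
            "Type": "AWS::EC2::SpotFleet",
            "Properties": {
                "SpotFleetRequestConfigData": {
                    "LaunchSpecifications": [{
                        "IamInstanceProfile": {
                            "Arn": {"Fn::GetAtt": ["FleetProfile", "Arn"]}
                        }
                    }]
                }
            }
        }
    }
}
```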
<code>
[start of src/cfnlint/rules/resources/iam/InstanceProfile.py]
1 """
2 Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 from cfnlint import CloudFormationLintRule
18 from cfnlint import RuleMatch
19
20
21 class InstanceProfile(CloudFormationLintRule):
22 """Check if IamInstanceProfile are used"""
23 id = 'E2502'
24 shortdesc = 'Check if IamInstanceProfile are using the name and not ARN'
25 description = 'See if there are any properties IamInstanceProfile' + \
26 'are using name and not ARN'
27 source_url = 'https://github.com/awslabs/cfn-python-lint'
28 tags = ['properties']
29
30 def match(self, cfn):
31 """Check CloudFormation IamInstanceProfile Parameters"""
32
33 matches = list()
34
35 # Build the list of keys
36 trees = cfn.search_deep_keys('Fn::GetAtt')
37         # Filter only resources
38 # Disable pylint for Pylint 2
39 # pylint: disable=W0110
40 trees = filter(lambda x: x[0] == 'Resources', trees)
41 for tree in trees:
42 if any(e == 'IamInstanceProfile' for e in tree):
43 obj = tree[-1]
44 objtype = cfn.template.get('Resources', {}).get(obj[0], {}).get('Type')
45 if objtype:
46 if objtype != 'AWS::IAM::InstanceProfile':
47 message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (
48 '/'.join(map(str, tree[:-1])))
49 matches.append(RuleMatch(tree[:-1], message))
50 else:
51 if obj[1] == 'Arn':
52 message = 'Property IamInstanceProfile shouldn\'t be an ARN for %s' % (
53 '/'.join(map(str, tree[:-1])))
54 matches.append(RuleMatch(tree[:-1], message))
55
56 # Search Refs
57 trees = cfn.search_deep_keys('Ref')
58         # Filter only resources
59 trees = filter(lambda x: x[0] == 'Resources', trees)
60 for tree in trees:
61 if any(e == 'IamInstanceProfile' for e in tree):
62 obj = tree[-1]
63 objtype = cfn.template.get('Resources', {}).get(obj, {}).get('Type')
64 if objtype:
65 if objtype != 'AWS::IAM::InstanceProfile':
66 message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (
67 '/'.join(map(str, tree[:-1])))
68 matches.append(RuleMatch(tree[:-1], message))
69
70 return matches
71
[end of src/cfnlint/rules/resources/iam/InstanceProfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/resources/iam/InstanceProfile.py b/src/cfnlint/rules/resources/iam/InstanceProfile.py
--- a/src/cfnlint/rules/resources/iam/InstanceProfile.py
+++ b/src/cfnlint/rules/resources/iam/InstanceProfile.py
@@ -48,10 +48,16 @@
'/'.join(map(str, tree[:-1])))
matches.append(RuleMatch(tree[:-1], message))
else:
- if obj[1] == 'Arn':
- message = 'Property IamInstanceProfile shouldn\'t be an ARN for %s' % (
- '/'.join(map(str, tree[:-1])))
- matches.append(RuleMatch(tree[:-1], message))
+ if cfn.template.get('Resources', {}).get(tree[1], {}).get('Type') in ['AWS::EC2::SpotFleet']:
+ if obj[1] != 'Arn':
+ message = 'Property IamInstanceProfile should be an ARN for %s' % (
+ '/'.join(map(str, tree[:-1])))
+ matches.append(RuleMatch(tree[:-1], message))
+ else:
+ if obj[1] == 'Arn':
+ message = 'Property IamInstanceProfile shouldn\'t be an ARN for %s' % (
+ '/'.join(map(str, tree[:-1])))
+ matches.append(RuleMatch(tree[:-1], message))
# Search Refs
trees = cfn.search_deep_keys('Ref')
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/iam/InstanceProfile.py b/src/cfnlint/rules/resources/iam/InstanceProfile.py\n--- a/src/cfnlint/rules/resources/iam/InstanceProfile.py\n+++ b/src/cfnlint/rules/resources/iam/InstanceProfile.py\n@@ -48,10 +48,16 @@\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n else:\n- if obj[1] == 'Arn':\n- message = 'Property IamInstanceProfile shouldn\\'t be an ARN for %s' % (\n- '/'.join(map(str, tree[:-1])))\n- matches.append(RuleMatch(tree[:-1], message))\n+ if cfn.template.get('Resources', {}).get(tree[1], {}).get('Type') in ['AWS::EC2::SpotFleet']:\n+ if obj[1] != 'Arn':\n+ message = 'Property IamInstanceProfile should be an ARN for %s' % (\n+ '/'.join(map(str, tree[:-1])))\n+ matches.append(RuleMatch(tree[:-1], message))\n+ else:\n+ if obj[1] == 'Arn':\n+ message = 'Property IamInstanceProfile shouldn\\'t be an ARN for %s' % (\n+ '/'.join(map(str, tree[:-1])))\n+ matches.append(RuleMatch(tree[:-1], message))\n \n # Search Refs\n trees = cfn.search_deep_keys('Ref')\n", "issue": "E2502 is mistaken about IamInstanceProfile\n*cfn-lint version: 0.4.2*\r\n\r\n*Description of issue.*\r\n\r\nLinting a template returned:\r\n```\r\nE2502 Property IamInstanceProfile shouldn't be an ARN for Resources/BuildkiteSpotfleet/Properties/SpotFleetRequestConfigData/LaunchSpecifications/0/IamInstanceProfile/Arn/Fn::GetAtt\r\n```\r\n\r\nHowever that property can be an ARN according to https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-spotfleet-spotfleetrequestconfigdata-launchspecifications.html#cfn-ec2-spotfleet-spotfleetrequestconfigdata-launchspecifications-iaminstanceprofile\r\n\r\nIt can be an `{\"Arn\": \"profile_arn\"}` structure.\n", "before_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass InstanceProfile(CloudFormationLintRule):\n \"\"\"Check if IamInstanceProfile are used\"\"\"\n id = 'E2502'\n shortdesc = 'Check if IamInstanceProfile are using the name and not ARN'\n description = 'See if there are any properties IamInstanceProfile' + \\\n 'are using name and not ARN'\n source_url = 'https://github.com/awslabs/cfn-python-lint'\n tags = ['properties']\n\n def match(self, cfn):\n \"\"\"Check CloudFormation IamInstanceProfile Parameters\"\"\"\n\n matches = list()\n\n # Build the list of keys\n trees = cfn.search_deep_keys('Fn::GetAtt')\n # Filter only resoureces\n # Disable pylint for Pylint 2\n # pylint: disable=W0110\n trees = filter(lambda x: x[0] == 'Resources', trees)\n for tree in trees:\n if any(e == 'IamInstanceProfile' for e in tree):\n obj = tree[-1]\n objtype = cfn.template.get('Resources', {}).get(obj[0], {}).get('Type')\n if objtype:\n if objtype != 'AWS::IAM::InstanceProfile':\n message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n else:\n if obj[1] == 'Arn':\n message = 'Property IamInstanceProfile shouldn\\'t be an ARN for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n\n # Search Refs\n trees = cfn.search_deep_keys('Ref')\n # Filter only resoureces\n trees = filter(lambda x: x[0] == 'Resources', trees)\n for tree in trees:\n if any(e == 'IamInstanceProfile' for e in tree):\n obj = tree[-1]\n objtype = cfn.template.get('Resources', {}).get(obj, {}).get('Type')\n if objtype:\n if objtype != 'AWS::IAM::InstanceProfile':\n message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n\n return matches\n", "path": "src/cfnlint/rules/resources/iam/InstanceProfile.py"}]} | 1,611 | 329 |
gh_patches_debug_28095 | rasdani/github-patches | git_diff | Qiskit__qiskit-8714 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document naming of CircuitInstruction and related classes
### What should we add?
I asked @jakelishman why CircuitInstruction is named that way when it is not an Instruction, and why it has an attribute named `operation` which is actually an Instruction. He replied:
>it’s because what’s currently called Instruction shouldn’t really be called that, but we can’t change it without breaking everything. So instead, the new interface definition for “something that can be added to QuantumCircuit” is called Operation, and the container is CircuitInstruction to avoid the naming clash
This should be publicly documented somewhere to mitigate user confusion.
</issue>
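A quick illustration of the naming being discussed, assuming a Terra version (0.21+) in which `QuantumCircuit.data` yields the `CircuitInstruction` objects defined in the file below:

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(1)
qc.h(0)

entry = qc.data[0]                       # a CircuitInstruction: operation + operands
print(type(entry).__name__)              # CircuitInstruction
print(type(entry.operation).__name__)    # HGate, an Instruction subclass
print(entry.qubits, entry.clbits)        # the qubit/clbit operands it carries
```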
<code>
[start of qiskit/circuit/quantumcircuitdata.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2019.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """A wrapper class for the purposes of validating modifications to
14 QuantumCircuit.data while maintaining the interface of a python list."""
15
16 from collections.abc import MutableSequence
17 from typing import Tuple, Iterable, Optional
18
19 from .exceptions import CircuitError
20 from .instruction import Instruction
21 from .quantumregister import Qubit
22 from .classicalregister import Clbit
23
24
25 class CircuitInstruction:
26 """A single instruction in a :class:`.QuantumCircuit`, comprised of the :attr:`operation` and
27 various operands.
28
29 .. warning::
30
31 This is a lightweight internal class and there is minimal error checking; you must respect
32 the type hints when using it. It is the user's responsibility to ensure that direct
33 mutations of the object do not invalidate the types, nor the restrictions placed on it by
34 its context. Typically this will mean, for example, that :attr:`qubits` must be a sequence
35 of distinct items, with no duplicates.
36 """
37
38 __slots__ = ("operation", "qubits", "clbits", "_legacy_format_cache")
39
40 operation: Instruction
41 """The logical operation that this instruction represents an execution of."""
42 qubits: Tuple[Qubit, ...]
43 """A sequence of the qubits that the operation is applied to."""
44 clbits: Tuple[Clbit, ...]
45 """A sequence of the classical bits that this operation reads from or writes to."""
46
47 def __init__(
48 self,
49 operation: Instruction,
50 qubits: Iterable[Qubit] = (),
51 clbits: Iterable[Clbit] = (),
52 ):
53 self.operation = operation
54 self.qubits = tuple(qubits)
55 self.clbits = tuple(clbits)
56 self._legacy_format_cache = None
57
58 def copy(self) -> "CircuitInstruction":
59 """Return a shallow copy of the :class:`CircuitInstruction`."""
60 return self.__class__(
61 operation=self.operation,
62 qubits=self.qubits,
63 clbits=self.clbits,
64 )
65
66 def replace(
67 self,
68 operation: Optional[Instruction] = None,
69 qubits: Optional[Iterable[Qubit]] = None,
70 clbits: Optional[Iterable[Clbit]] = None,
71 ) -> "CircuitInstruction":
72 """Return a new :class:`CircuitInstruction` with the given fields replaced."""
73 return self.__class__(
74 operation=self.operation if operation is None else operation,
75 qubits=self.qubits if qubits is None else qubits,
76 clbits=self.clbits if clbits is None else clbits,
77 )
78
79 def __repr__(self):
80 return (
81 f"{type(self).__name__}("
82 f"operation={self.operation!r}"
83 f", qubits={self.qubits!r}"
84 f", clbits={self.clbits!r}"
85 ")"
86 )
87
88 def __eq__(self, other):
89 if isinstance(other, type(self)):
90 # Ordered from fastest comparisons to slowest.
91 return (
92 self.clbits == other.clbits
93 and self.qubits == other.qubits
94 and self.operation == other.operation
95 )
96 if isinstance(other, tuple):
97 return self._legacy_format == other
98 return NotImplemented
99
100 # Legacy tuple-like interface support.
101 #
102 # For a best attempt at API compatibility during the transition to using this new class, we need
103 # the interface to behave exactly like the old 3-tuple `(inst, qargs, cargs)` if it's treated
104 # like that via unpacking or similar. That means that the `parameters` field is completely
105 # absent, and the qubits and clbits must be converted to lists.
106
107 @property
108 def _legacy_format(self):
109 if self._legacy_format_cache is None:
110 # The qubits and clbits were generally stored as lists in the old format, and various
111 # places assume that they will certainly be lists.
112 self._legacy_format_cache = (self.operation, list(self.qubits), list(self.clbits))
113 return self._legacy_format_cache
114
115 def __getitem__(self, key):
116 return self._legacy_format[key]
117
118 def __iter__(self):
119 return iter(self._legacy_format)
120
121 def __len__(self):
122 return 3
123
124
125 class QuantumCircuitData(MutableSequence):
126 """A wrapper class for the purposes of validating modifications to
127 QuantumCircuit.data while maintaining the interface of a python list."""
128
129 def __init__(self, circuit):
130 self._circuit = circuit
131
132 def __getitem__(self, i):
133 return self._circuit._data[i]
134
135 def __setitem__(self, key, value):
136 # For now (Terra 0.21), the `QuantumCircuit.data` setter is meant to perform validation, so
137 # we do the same qubit checks that `QuantumCircuit.append` would do.
138 if isinstance(value, CircuitInstruction):
139 operation, qargs, cargs = value.operation, value.qubits, value.clbits
140 else:
141 # Handle the legacy 3-tuple format.
142 operation, qargs, cargs = value
143 value = self._resolve_legacy_value(operation, qargs, cargs)
144 self._circuit._data[key] = value
145 self._circuit._update_parameter_table(value)
146
147 def _resolve_legacy_value(self, operation, qargs, cargs) -> CircuitInstruction:
148 """Resolve the old-style 3-tuple into the new :class:`CircuitInstruction` type."""
149 if not isinstance(operation, Instruction) and hasattr(operation, "to_instruction"):
150 operation = operation.to_instruction()
151 if not isinstance(operation, Instruction):
152 raise CircuitError("object is not an Instruction.")
153
154 expanded_qargs = [self._circuit.qbit_argument_conversion(qarg) for qarg in qargs or []]
155 expanded_cargs = [self._circuit.cbit_argument_conversion(carg) for carg in cargs or []]
156
157 if isinstance(operation, Instruction):
158 broadcast_args = list(operation.broadcast_arguments(expanded_qargs, expanded_cargs))
159 else:
160 broadcast_args = list(
161 Instruction.broadcast_arguments(operation, expanded_qargs, expanded_cargs)
162 )
163
164 if len(broadcast_args) > 1:
165 raise CircuitError(
166 "QuantumCircuit.data modification does not support argument broadcasting."
167 )
168
169 qargs, cargs = broadcast_args[0]
170
171 self._circuit._check_dups(qargs)
172 return CircuitInstruction(operation, tuple(qargs), tuple(cargs))
173
174 def insert(self, index, value):
175 self._circuit._data.insert(index, None)
176 try:
177 self[index] = value
178 except CircuitError:
179 del self._circuit._data[index]
180 raise
181
182 def __iter__(self):
183 return iter(self._circuit._data)
184
185 def __delitem__(self, i):
186 del self._circuit._data[i]
187
188 def __len__(self):
189 return len(self._circuit._data)
190
191 def __cast(self, other):
192 return other._circuit._data if isinstance(other, QuantumCircuitData) else other
193
194 def __repr__(self):
195 return repr(self._circuit._data)
196
197 def __lt__(self, other):
198 return self._circuit._data < self.__cast(other)
199
200 def __le__(self, other):
201 return self._circuit._data <= self.__cast(other)
202
203 def __eq__(self, other):
204 return self._circuit._data == self.__cast(other)
205
206 def __gt__(self, other):
207 return self._circuit._data > self.__cast(other)
208
209 def __ge__(self, other):
210 return self._circuit._data >= self.__cast(other)
211
212 def __add__(self, other):
213 return self._circuit._data + self.__cast(other)
214
215 def __radd__(self, other):
216 return self.__cast(other) + self._circuit._data
217
218 def __mul__(self, n):
219 return self._circuit._data * n
220
221 def __rmul__(self, n):
222 return n * self._circuit._data
223
224 def sort(self, *args, **kwargs):
225 """In-place stable sort. Accepts arguments of list.sort."""
226 self._circuit._data.sort(*args, **kwargs)
227
228 def copy(self):
229 """Returns a shallow copy of instruction list."""
230 return self._circuit._data.copy()
231
[end of qiskit/circuit/quantumcircuitdata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/circuit/quantumcircuitdata.py b/qiskit/circuit/quantumcircuitdata.py
--- a/qiskit/circuit/quantumcircuitdata.py
+++ b/qiskit/circuit/quantumcircuitdata.py
@@ -26,6 +26,28 @@
"""A single instruction in a :class:`.QuantumCircuit`, comprised of the :attr:`operation` and
various operands.
+ .. note::
+
+ There is some possible confusion in the names of this class, :class:`~.circuit.Instruction`,
+ and :class:`~.circuit.Operation`, and this class's attribute :attr:`operation`. Our
+ preferred terminology is by analogy to assembly languages, where an "instruction" is made up
+ of an "operation" and its "operands".
+
+ Historically, :class:`~.circuit.Instruction` came first, and originally contained the qubits
+ it operated on and any parameters, so it was a true "instruction". Over time,
+ :class:`.QuantumCircuit` became responsible for tracking qubits and clbits, and the class
+ became better described as an "operation". Changing the name of such a core object would be
+ a very unpleasant API break for users, and so we have stuck with it.
+
+ This class was created to provide a formal "instruction" context object in
+ :class:`.QuantumCircuit.data`, which had long been made of ad-hoc tuples. With this, and
+ the advent of the :class:`~.circuit.Operation` interface for adding more complex objects to
+ circuits, we took the opportunity to correct the historical naming. For the time being,
+ this leads to an awkward case where :attr:`.CircuitInstruction.operation` is often an
+ :class:`~.circuit.Instruction` instance (:class:`~.circuit.Instruction` implements the
+ :class:`.Operation` interface), but as the :class:`.Operation` interface gains more use,
+ this confusion will hopefully abate.
+
.. warning::
This is a lightweight internal class and there is minimal error checking; you must respect
| {"golden_diff": "diff --git a/qiskit/circuit/quantumcircuitdata.py b/qiskit/circuit/quantumcircuitdata.py\n--- a/qiskit/circuit/quantumcircuitdata.py\n+++ b/qiskit/circuit/quantumcircuitdata.py\n@@ -26,6 +26,28 @@\n \"\"\"A single instruction in a :class:`.QuantumCircuit`, comprised of the :attr:`operation` and\n various operands.\n \n+ .. note::\n+\n+ There is some possible confusion in the names of this class, :class:`~.circuit.Instruction`,\n+ and :class:`~.circuit.Operation`, and this class's attribute :attr:`operation`. Our\n+ preferred terminology is by analogy to assembly languages, where an \"instruction\" is made up\n+ of an \"operation\" and its \"operands\".\n+\n+ Historically, :class:`~.circuit.Instruction` came first, and originally contained the qubits\n+ it operated on and any parameters, so it was a true \"instruction\". Over time,\n+ :class:`.QuantumCircuit` became responsible for tracking qubits and clbits, and the class\n+ became better described as an \"operation\". Changing the name of such a core object would be\n+ a very unpleasant API break for users, and so we have stuck with it.\n+\n+ This class was created to provide a formal \"instruction\" context object in\n+ :class:`.QuantumCircuit.data`, which had long been made of ad-hoc tuples. With this, and\n+ the advent of the :class:`~.circuit.Operation` interface for adding more complex objects to\n+ circuits, we took the opportunity to correct the historical naming. For the time being,\n+ this leads to an awkward case where :attr:`.CircuitInstruction.operation` is often an\n+ :class:`~.circuit.Instruction` instance (:class:`~.circuit.Instruction` implements the\n+ :class:`.Operation` interface), but as the :class:`.Operation` interface gains more use,\n+ this confusion will hopefully abate.\n+\n .. warning::\n \n This is a lightweight internal class and there is minimal error checking; you must respect\n", "issue": "Document naming of CircuitInstruction and related classes\n### What should we add?\n\nI asked @jakelishman why CircuitInstruction is named that way when it is not an Instruction, and why it has an attribute named `operation` which is actually an Instruction. He replied:\r\n>it\u2019s because what\u2019s currently called Instruction shouldn\u2019t really be called that, but we can\u2019t change it without breaking everything. So instead, the new interface definition for \u201csomething that can be added to QuantumCircuit is called Operation, and the container is CircuitInstruction to avoid the naming clash\r\n\r\nThis should be publicly documented somewhere to mitigate user confusion.\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"A wrapper class for the purposes of validating modifications to\nQuantumCircuit.data while maintaining the interface of a python list.\"\"\"\n\nfrom collections.abc import MutableSequence\nfrom typing import Tuple, Iterable, Optional\n\nfrom .exceptions import CircuitError\nfrom .instruction import Instruction\nfrom .quantumregister import Qubit\nfrom .classicalregister import Clbit\n\n\nclass CircuitInstruction:\n \"\"\"A single instruction in a :class:`.QuantumCircuit`, comprised of the :attr:`operation` and\n various operands.\n\n .. warning::\n\n This is a lightweight internal class and there is minimal error checking; you must respect\n the type hints when using it. It is the user's responsibility to ensure that direct\n mutations of the object do not invalidate the types, nor the restrictions placed on it by\n its context. Typically this will mean, for example, that :attr:`qubits` must be a sequence\n of distinct items, with no duplicates.\n \"\"\"\n\n __slots__ = (\"operation\", \"qubits\", \"clbits\", \"_legacy_format_cache\")\n\n operation: Instruction\n \"\"\"The logical operation that this instruction represents an execution of.\"\"\"\n qubits: Tuple[Qubit, ...]\n \"\"\"A sequence of the qubits that the operation is applied to.\"\"\"\n clbits: Tuple[Clbit, ...]\n \"\"\"A sequence of the classical bits that this operation reads from or writes to.\"\"\"\n\n def __init__(\n self,\n operation: Instruction,\n qubits: Iterable[Qubit] = (),\n clbits: Iterable[Clbit] = (),\n ):\n self.operation = operation\n self.qubits = tuple(qubits)\n self.clbits = tuple(clbits)\n self._legacy_format_cache = None\n\n def copy(self) -> \"CircuitInstruction\":\n \"\"\"Return a shallow copy of the :class:`CircuitInstruction`.\"\"\"\n return self.__class__(\n operation=self.operation,\n qubits=self.qubits,\n clbits=self.clbits,\n )\n\n def replace(\n self,\n operation: Optional[Instruction] = None,\n qubits: Optional[Iterable[Qubit]] = None,\n clbits: Optional[Iterable[Clbit]] = None,\n ) -> \"CircuitInstruction\":\n \"\"\"Return a new :class:`CircuitInstruction` with the given fields replaced.\"\"\"\n return self.__class__(\n operation=self.operation if operation is None else operation,\n qubits=self.qubits if qubits is None else qubits,\n clbits=self.clbits if clbits is None else clbits,\n )\n\n def __repr__(self):\n return (\n f\"{type(self).__name__}(\"\n f\"operation={self.operation!r}\"\n f\", qubits={self.qubits!r}\"\n f\", clbits={self.clbits!r}\"\n \")\"\n )\n\n def __eq__(self, other):\n if isinstance(other, type(self)):\n # Ordered from fastest comparisons to slowest.\n return (\n self.clbits == other.clbits\n and self.qubits == other.qubits\n and self.operation == other.operation\n )\n if isinstance(other, tuple):\n return self._legacy_format == other\n return NotImplemented\n\n # Legacy tuple-like interface support.\n #\n # For a best attempt at API compatibility during the transition to using this new class, we need\n # the interface to behave exactly like the old 3-tuple `(inst, qargs, cargs)` if it's treated\n # like that via unpacking or similar. 
That means that the `parameters` field is completely\n # absent, and the qubits and clbits must be converted to lists.\n\n @property\n def _legacy_format(self):\n if self._legacy_format_cache is None:\n # The qubits and clbits were generally stored as lists in the old format, and various\n # places assume that they will certainly be lists.\n self._legacy_format_cache = (self.operation, list(self.qubits), list(self.clbits))\n return self._legacy_format_cache\n\n def __getitem__(self, key):\n return self._legacy_format[key]\n\n def __iter__(self):\n return iter(self._legacy_format)\n\n def __len__(self):\n return 3\n\n\nclass QuantumCircuitData(MutableSequence):\n \"\"\"A wrapper class for the purposes of validating modifications to\n QuantumCircuit.data while maintaining the interface of a python list.\"\"\"\n\n def __init__(self, circuit):\n self._circuit = circuit\n\n def __getitem__(self, i):\n return self._circuit._data[i]\n\n def __setitem__(self, key, value):\n # For now (Terra 0.21), the `QuantumCircuit.data` setter is meant to perform validation, so\n # we do the same qubit checks that `QuantumCircuit.append` would do.\n if isinstance(value, CircuitInstruction):\n operation, qargs, cargs = value.operation, value.qubits, value.clbits\n else:\n # Handle the legacy 3-tuple format.\n operation, qargs, cargs = value\n value = self._resolve_legacy_value(operation, qargs, cargs)\n self._circuit._data[key] = value\n self._circuit._update_parameter_table(value)\n\n def _resolve_legacy_value(self, operation, qargs, cargs) -> CircuitInstruction:\n \"\"\"Resolve the old-style 3-tuple into the new :class:`CircuitInstruction` type.\"\"\"\n if not isinstance(operation, Instruction) and hasattr(operation, \"to_instruction\"):\n operation = operation.to_instruction()\n if not isinstance(operation, Instruction):\n raise CircuitError(\"object is not an Instruction.\")\n\n expanded_qargs = [self._circuit.qbit_argument_conversion(qarg) for qarg in qargs or []]\n expanded_cargs = [self._circuit.cbit_argument_conversion(carg) for carg in cargs or []]\n\n if isinstance(operation, Instruction):\n broadcast_args = list(operation.broadcast_arguments(expanded_qargs, expanded_cargs))\n else:\n broadcast_args = list(\n Instruction.broadcast_arguments(operation, expanded_qargs, expanded_cargs)\n )\n\n if len(broadcast_args) > 1:\n raise CircuitError(\n \"QuantumCircuit.data modification does not support argument broadcasting.\"\n )\n\n qargs, cargs = broadcast_args[0]\n\n self._circuit._check_dups(qargs)\n return CircuitInstruction(operation, tuple(qargs), tuple(cargs))\n\n def insert(self, index, value):\n self._circuit._data.insert(index, None)\n try:\n self[index] = value\n except CircuitError:\n del self._circuit._data[index]\n raise\n\n def __iter__(self):\n return iter(self._circuit._data)\n\n def __delitem__(self, i):\n del self._circuit._data[i]\n\n def __len__(self):\n return len(self._circuit._data)\n\n def __cast(self, other):\n return other._circuit._data if isinstance(other, QuantumCircuitData) else other\n\n def __repr__(self):\n return repr(self._circuit._data)\n\n def __lt__(self, other):\n return self._circuit._data < self.__cast(other)\n\n def __le__(self, other):\n return self._circuit._data <= self.__cast(other)\n\n def __eq__(self, other):\n return self._circuit._data == self.__cast(other)\n\n def __gt__(self, other):\n return self._circuit._data > self.__cast(other)\n\n def __ge__(self, other):\n return self._circuit._data >= self.__cast(other)\n\n def __add__(self, other):\n return 
self._circuit._data + self.__cast(other)\n\n def __radd__(self, other):\n return self.__cast(other) + self._circuit._data\n\n def __mul__(self, n):\n return self._circuit._data * n\n\n def __rmul__(self, n):\n return n * self._circuit._data\n\n def sort(self, *args, **kwargs):\n \"\"\"In-place stable sort. Accepts arguments of list.sort.\"\"\"\n self._circuit._data.sort(*args, **kwargs)\n\n def copy(self):\n \"\"\"Returns a shallow copy of instruction list.\"\"\"\n return self._circuit._data.copy()\n", "path": "qiskit/circuit/quantumcircuitdata.py"}]} | 3,223 | 492 |
gh_patches_debug_7860 | rasdani/github-patches | git_diff | encode__httpx-321 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Away-from-origin redirects should add a new host header
Prompted by #310
Away-from-origin redirects currently remove the `Host` header, without adding a new one.
I think we ought to be using `headers['Host'] = url.authority.encode("ascii")` instead of simply deleting the header.
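For illustration, the proposed behaviour in `redirect_headers` might look roughly like this (a sketch only; whether `Headers` should be given a `str` or ASCII-encoded `bytes` here is an open detail):

```python
def redirect_headers(self, request: AsyncRequest, url: URL, method: str) -> Headers:
    headers = Headers(request.headers)

    if url.origin != request.url.origin:
        # Still strip credentials on a cross-origin redirect, but repoint the
        # Host header at the new authority instead of dropping it entirely.
        del headers["Authorization"]
        headers["Host"] = url.authority.encode("ascii")

    if method != request.method and method == "GET":
        del headers["Content-Length"]
        del headers["Transfer-Encoding"]

    return headers
```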
</issue>
<code>
[start of httpx/middleware.py]
1 import functools
2 import typing
3 from base64 import b64encode
4
5 from .config import DEFAULT_MAX_REDIRECTS
6 from .exceptions import RedirectBodyUnavailable, RedirectLoop, TooManyRedirects
7 from .models import URL, AsyncRequest, AsyncResponse, Cookies, Headers
8 from .status_codes import codes
9
10
11 class BaseMiddleware:
12 async def __call__(
13 self, request: AsyncRequest, get_response: typing.Callable
14 ) -> AsyncResponse:
15 raise NotImplementedError # pragma: no cover
16
17
18 class BasicAuthMiddleware(BaseMiddleware):
19 def __init__(
20 self, username: typing.Union[str, bytes], password: typing.Union[str, bytes]
21 ):
22 if isinstance(username, str):
23 username = username.encode("latin1")
24
25 if isinstance(password, str):
26 password = password.encode("latin1")
27
28 userpass = b":".join((username, password))
29 token = b64encode(userpass).decode().strip()
30
31 self.authorization_header = f"Basic {token}"
32
33 async def __call__(
34 self, request: AsyncRequest, get_response: typing.Callable
35 ) -> AsyncResponse:
36 request.headers["Authorization"] = self.authorization_header
37 return await get_response(request)
38
39
40 class CustomAuthMiddleware(BaseMiddleware):
41 def __init__(self, auth: typing.Callable[[AsyncRequest], AsyncRequest]):
42 self.auth = auth
43
44 async def __call__(
45 self, request: AsyncRequest, get_response: typing.Callable
46 ) -> AsyncResponse:
47 request = self.auth(request)
48 return await get_response(request)
49
50
51 class RedirectMiddleware(BaseMiddleware):
52 def __init__(
53 self,
54 allow_redirects: bool = True,
55 max_redirects: int = DEFAULT_MAX_REDIRECTS,
56 cookies: typing.Optional[Cookies] = None,
57 ):
58 self.allow_redirects = allow_redirects
59 self.max_redirects = max_redirects
60 self.cookies = cookies
61 self.history: typing.List[AsyncResponse] = []
62
63 async def __call__(
64 self, request: AsyncRequest, get_response: typing.Callable
65 ) -> AsyncResponse:
66 if len(self.history) > self.max_redirects:
67 raise TooManyRedirects()
68 if request.url in (response.url for response in self.history):
69 raise RedirectLoop()
70
71 response = await get_response(request)
72 response.history = list(self.history)
73
74 if not response.is_redirect:
75 return response
76
77 self.history.append(response)
78 next_request = self.build_redirect_request(request, response)
79
80 if self.allow_redirects:
81 return await self(next_request, get_response)
82
83 response.call_next = functools.partial(self, next_request, get_response)
84 return response
85
86 def build_redirect_request(
87 self, request: AsyncRequest, response: AsyncResponse
88 ) -> AsyncRequest:
89 method = self.redirect_method(request, response)
90 url = self.redirect_url(request, response)
91 headers = self.redirect_headers(request, url, method) # TODO: merge headers?
92 content = self.redirect_content(request, method)
93 cookies = Cookies(self.cookies)
94 cookies.update(request.cookies)
95 return AsyncRequest(
96 method=method, url=url, headers=headers, data=content, cookies=cookies
97 )
98
99 def redirect_method(self, request: AsyncRequest, response: AsyncResponse) -> str:
100 """
101 When being redirected we may want to change the method of the request
102 based on certain specs or browser behavior.
103 """
104 method = request.method
105
106 # https://tools.ietf.org/html/rfc7231#section-6.4.4
107 if response.status_code == codes.SEE_OTHER and method != "HEAD":
108 method = "GET"
109
110 # Do what the browsers do, despite standards...
111 # Turn 302s into GETs.
112 if response.status_code == codes.FOUND and method != "HEAD":
113 method = "GET"
114
115 # If a POST is responded to with a 301, turn it into a GET.
116 # This bizarre behaviour is explained in 'requests' issue 1704.
117 if response.status_code == codes.MOVED_PERMANENTLY and method == "POST":
118 method = "GET"
119
120 return method
121
122 def redirect_url(self, request: AsyncRequest, response: AsyncResponse) -> URL:
123 """
124 Return the URL for the redirect to follow.
125 """
126 location = response.headers["Location"]
127
128 url = URL(location, allow_relative=True)
129
130 # Facilitate relative 'Location' headers, as allowed by RFC 7231.
131 # (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource')
132 if url.is_relative_url:
133 url = request.url.join(url)
134
135 # Attach previous fragment if needed (RFC 7231 7.1.2)
136 if request.url.fragment and not url.fragment:
137 url = url.copy_with(fragment=request.url.fragment)
138
139 return url
140
141 def redirect_headers(self, request: AsyncRequest, url: URL, method: str) -> Headers:
142 """
143 Return the headers that should be used for the redirect request.
144 """
145 headers = Headers(request.headers)
146
147 if url.origin != request.url.origin:
148 # Strip Authorization headers when responses are redirected away from
149 # the origin.
150 del headers["Authorization"]
151 del headers["Host"]
152
153 if method != request.method and method == "GET":
154 # If we've switch to a 'GET' request, then strip any headers which
155 # are only relevant to the request body.
156 del headers["Content-Length"]
157 del headers["Transfer-Encoding"]
158
159 return headers
160
161 def redirect_content(self, request: AsyncRequest, method: str) -> bytes:
162 """
163 Return the body that should be used for the redirect request.
164 """
165 if method != request.method and method == "GET":
166 return b""
167 if request.is_streaming:
168 raise RedirectBodyUnavailable()
169 return request.content
170
[end of httpx/middleware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/httpx/middleware.py b/httpx/middleware.py
--- a/httpx/middleware.py
+++ b/httpx/middleware.py
@@ -148,7 +148,7 @@
# Strip Authorization headers when responses are redirected away from
# the origin.
del headers["Authorization"]
- del headers["Host"]
+ headers["Host"] = url.authority
if method != request.method and method == "GET":
# If we've switch to a 'GET' request, then strip any headers which
| {"golden_diff": "diff --git a/httpx/middleware.py b/httpx/middleware.py\n--- a/httpx/middleware.py\n+++ b/httpx/middleware.py\n@@ -148,7 +148,7 @@\n # Strip Authorization headers when responses are redirected away from\n # the origin.\n del headers[\"Authorization\"]\n- del headers[\"Host\"]\n+ headers[\"Host\"] = url.authority\n \n if method != request.method and method == \"GET\":\n # If we've switch to a 'GET' request, then strip any headers which\n", "issue": "Away-from-origin redirects should add a new host header\nPrompted by #310\r\n\r\nAway-from-origin redirects currently remove the `Host` header, without adding a new one.\r\nI think we ought to be using `headers['Host'] = url.authority.encode(\"ascii\")` instead of simply deleting the header.\n", "before_files": [{"content": "import functools\nimport typing\nfrom base64 import b64encode\n\nfrom .config import DEFAULT_MAX_REDIRECTS\nfrom .exceptions import RedirectBodyUnavailable, RedirectLoop, TooManyRedirects\nfrom .models import URL, AsyncRequest, AsyncResponse, Cookies, Headers\nfrom .status_codes import codes\n\n\nclass BaseMiddleware:\n async def __call__(\n self, request: AsyncRequest, get_response: typing.Callable\n ) -> AsyncResponse:\n raise NotImplementedError # pragma: no cover\n\n\nclass BasicAuthMiddleware(BaseMiddleware):\n def __init__(\n self, username: typing.Union[str, bytes], password: typing.Union[str, bytes]\n ):\n if isinstance(username, str):\n username = username.encode(\"latin1\")\n\n if isinstance(password, str):\n password = password.encode(\"latin1\")\n\n userpass = b\":\".join((username, password))\n token = b64encode(userpass).decode().strip()\n\n self.authorization_header = f\"Basic {token}\"\n\n async def __call__(\n self, request: AsyncRequest, get_response: typing.Callable\n ) -> AsyncResponse:\n request.headers[\"Authorization\"] = self.authorization_header\n return await get_response(request)\n\n\nclass CustomAuthMiddleware(BaseMiddleware):\n def __init__(self, auth: typing.Callable[[AsyncRequest], AsyncRequest]):\n self.auth = auth\n\n async def __call__(\n self, request: AsyncRequest, get_response: typing.Callable\n ) -> AsyncResponse:\n request = self.auth(request)\n return await get_response(request)\n\n\nclass RedirectMiddleware(BaseMiddleware):\n def __init__(\n self,\n allow_redirects: bool = True,\n max_redirects: int = DEFAULT_MAX_REDIRECTS,\n cookies: typing.Optional[Cookies] = None,\n ):\n self.allow_redirects = allow_redirects\n self.max_redirects = max_redirects\n self.cookies = cookies\n self.history: typing.List[AsyncResponse] = []\n\n async def __call__(\n self, request: AsyncRequest, get_response: typing.Callable\n ) -> AsyncResponse:\n if len(self.history) > self.max_redirects:\n raise TooManyRedirects()\n if request.url in (response.url for response in self.history):\n raise RedirectLoop()\n\n response = await get_response(request)\n response.history = list(self.history)\n\n if not response.is_redirect:\n return response\n\n self.history.append(response)\n next_request = self.build_redirect_request(request, response)\n\n if self.allow_redirects:\n return await self(next_request, get_response)\n\n response.call_next = functools.partial(self, next_request, get_response)\n return response\n\n def build_redirect_request(\n self, request: AsyncRequest, response: AsyncResponse\n ) -> AsyncRequest:\n method = self.redirect_method(request, response)\n url = self.redirect_url(request, response)\n headers = self.redirect_headers(request, url, method) # TODO: merge headers?\n content = 
self.redirect_content(request, method)\n cookies = Cookies(self.cookies)\n cookies.update(request.cookies)\n return AsyncRequest(\n method=method, url=url, headers=headers, data=content, cookies=cookies\n )\n\n def redirect_method(self, request: AsyncRequest, response: AsyncResponse) -> str:\n \"\"\"\n When being redirected we may want to change the method of the request\n based on certain specs or browser behavior.\n \"\"\"\n method = request.method\n\n # https://tools.ietf.org/html/rfc7231#section-6.4.4\n if response.status_code == codes.SEE_OTHER and method != \"HEAD\":\n method = \"GET\"\n\n # Do what the browsers do, despite standards...\n # Turn 302s into GETs.\n if response.status_code == codes.FOUND and method != \"HEAD\":\n method = \"GET\"\n\n # If a POST is responded to with a 301, turn it into a GET.\n # This bizarre behaviour is explained in 'requests' issue 1704.\n if response.status_code == codes.MOVED_PERMANENTLY and method == \"POST\":\n method = \"GET\"\n\n return method\n\n def redirect_url(self, request: AsyncRequest, response: AsyncResponse) -> URL:\n \"\"\"\n Return the URL for the redirect to follow.\n \"\"\"\n location = response.headers[\"Location\"]\n\n url = URL(location, allow_relative=True)\n\n # Facilitate relative 'Location' headers, as allowed by RFC 7231.\n # (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource')\n if url.is_relative_url:\n url = request.url.join(url)\n\n # Attach previous fragment if needed (RFC 7231 7.1.2)\n if request.url.fragment and not url.fragment:\n url = url.copy_with(fragment=request.url.fragment)\n\n return url\n\n def redirect_headers(self, request: AsyncRequest, url: URL, method: str) -> Headers:\n \"\"\"\n Return the headers that should be used for the redirect request.\n \"\"\"\n headers = Headers(request.headers)\n\n if url.origin != request.url.origin:\n # Strip Authorization headers when responses are redirected away from\n # the origin.\n del headers[\"Authorization\"]\n del headers[\"Host\"]\n\n if method != request.method and method == \"GET\":\n # If we've switch to a 'GET' request, then strip any headers which\n # are only relevant to the request body.\n del headers[\"Content-Length\"]\n del headers[\"Transfer-Encoding\"]\n\n return headers\n\n def redirect_content(self, request: AsyncRequest, method: str) -> bytes:\n \"\"\"\n Return the body that should be used for the redirect request.\n \"\"\"\n if method != request.method and method == \"GET\":\n return b\"\"\n if request.is_streaming:\n raise RedirectBodyUnavailable()\n return request.content\n", "path": "httpx/middleware.py"}]} | 2,290 | 118 |
gh_patches_debug_3583 | rasdani/github-patches | git_diff | frappe__frappe-14370 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
frappe.db.exists is not available to Server Scripts
**Is your feature request related to a problem? Please describe.**
`frappe.db.exists` is not exposed to **Server Scripts**.
**Describe alternatives you've considered**
Currently I am using `frappe.get_all` with `filters` and `limit=1` to check for existence.
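For illustration, the current workaround versus the desired call inside a Server Script (the doctype and filters here are hypothetical):

```python
# Current workaround: emulate an existence check with a limited query.
has_open_todo = bool(frappe.get_all("ToDo", filters={"status": "Open"}, limit=1))

# Desired, once frappe.db.exists is exposed to Server Scripts:
has_open_todo = bool(frappe.db.exists("ToDo", {"status": "Open"}))
```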
</issue>
<code>
[start of frappe/utils/safe_exec.py]
1
2 import os, json, inspect
3 import mimetypes
4 from html2text import html2text
5 from RestrictedPython import compile_restricted, safe_globals
6 import RestrictedPython.Guards
7 import frappe
8 from frappe import _
9 import frappe.utils
10 import frappe.utils.data
11 from frappe.website.utils import (get_shade, get_toc, get_next_link)
12 from frappe.modules import scrub
13 from frappe.www.printview import get_visible_columns
14 import frappe.exceptions
15 import frappe.integrations.utils
16 from frappe.frappeclient import FrappeClient
17
18 class ServerScriptNotEnabled(frappe.PermissionError):
19 pass
20
21 class NamespaceDict(frappe._dict):
22 """Raise AttributeError if function not found in namespace"""
23 def __getattr__(self, key):
24 ret = self.get(key)
25 if (not ret and key.startswith("__")) or (key not in self):
26 def default_function(*args, **kwargs):
27 raise AttributeError(f"module has no attribute '{key}'")
28 return default_function
29 return ret
30
31
32 def safe_exec(script, _globals=None, _locals=None):
33 # server scripts can be disabled via site_config.json
34 # they are enabled by default
35 if 'server_script_enabled' in frappe.conf:
36 enabled = frappe.conf.server_script_enabled
37 else:
38 enabled = True
39
40 if not enabled:
41 frappe.throw(_('Please Enable Server Scripts'), ServerScriptNotEnabled)
42
43 # build globals
44 exec_globals = get_safe_globals()
45 if _globals:
46 exec_globals.update(_globals)
47
48 # execute script compiled by RestrictedPython
49 exec(compile_restricted(script), exec_globals, _locals) # pylint: disable=exec-used
50
51 return exec_globals, _locals
52
53 def get_safe_globals():
54 datautils = frappe._dict()
55 if frappe.db:
56 date_format = frappe.db.get_default("date_format") or "yyyy-mm-dd"
57 time_format = frappe.db.get_default("time_format") or "HH:mm:ss"
58 else:
59 date_format = "yyyy-mm-dd"
60 time_format = "HH:mm:ss"
61
62 add_data_utils(datautils)
63
64 if "_" in getattr(frappe.local, 'form_dict', {}):
65 del frappe.local.form_dict["_"]
66
67 user = getattr(frappe.local, "session", None) and frappe.local.session.user or "Guest"
68
69 out = NamespaceDict(
70 # make available limited methods of frappe
71 json=NamespaceDict(
72 loads = json.loads,
73 dumps = json.dumps),
74 dict=dict,
75 log=frappe.log,
76 _dict=frappe._dict,
77 frappe=NamespaceDict(
78 flags=frappe._dict(),
79 format=frappe.format_value,
80 format_value=frappe.format_value,
81 date_format=date_format,
82 time_format=time_format,
83 format_date=frappe.utils.data.global_date_format,
84 form_dict=getattr(frappe.local, 'form_dict', {}),
85 bold=frappe.bold,
86 copy_doc=frappe.copy_doc,
87 errprint=frappe.errprint,
88
89 get_meta=frappe.get_meta,
90 get_doc=frappe.get_doc,
91 get_cached_doc=frappe.get_cached_doc,
92 get_list=frappe.get_list,
93 get_all=frappe.get_all,
94 get_system_settings=frappe.get_system_settings,
95 rename_doc=frappe.rename_doc,
96
97 utils=datautils,
98 get_url=frappe.utils.get_url,
99 render_template=frappe.render_template,
100 msgprint=frappe.msgprint,
101 throw=frappe.throw,
102 sendmail = frappe.sendmail,
103 get_print = frappe.get_print,
104 attach_print = frappe.attach_print,
105
106 user=user,
107 get_fullname=frappe.utils.get_fullname,
108 get_gravatar=frappe.utils.get_gravatar_url,
109 full_name=frappe.local.session.data.full_name if getattr(frappe.local, "session", None) else "Guest",
110 request=getattr(frappe.local, 'request', {}),
111 session=frappe._dict(
112 user=user,
113 csrf_token=frappe.local.session.data.csrf_token if getattr(frappe.local, "session", None) else ''
114 ),
115 make_get_request = frappe.integrations.utils.make_get_request,
116 make_post_request = frappe.integrations.utils.make_post_request,
117 socketio_port=frappe.conf.socketio_port,
118 get_hooks=frappe.get_hooks,
119 sanitize_html=frappe.utils.sanitize_html,
120 log_error=frappe.log_error
121 ),
122 FrappeClient=FrappeClient,
123 style=frappe._dict(
124 border_color='#d1d8dd'
125 ),
126 get_toc=get_toc,
127 get_next_link=get_next_link,
128 _=frappe._,
129 get_shade=get_shade,
130 scrub=scrub,
131 guess_mimetype=mimetypes.guess_type,
132 html2text=html2text,
133 dev_server=1 if frappe._dev_server else 0,
134 run_script=run_script
135 )
136
137 add_module_properties(frappe.exceptions, out.frappe, lambda obj: inspect.isclass(obj) and issubclass(obj, Exception))
138
139 if not frappe.flags.in_setup_help:
140 out.get_visible_columns = get_visible_columns
141 out.frappe.date_format = date_format
142 out.frappe.time_format = time_format
143 out.frappe.db = NamespaceDict(
144 get_list = frappe.get_list,
145 get_all = frappe.get_all,
146 get_value = frappe.db.get_value,
147 set_value = frappe.db.set_value,
148 get_single_value = frappe.db.get_single_value,
149 get_default = frappe.db.get_default,
150 count = frappe.db.count,
151 min = frappe.db.min,
152 max = frappe.db.max,
153 avg = frappe.db.avg,
154 sum = frappe.db.sum,
155 escape = frappe.db.escape,
156 sql = read_sql
157 )
158
159 if frappe.response:
160 out.frappe.response = frappe.response
161
162 out.update(safe_globals)
163
164 # default writer allows write access
165 out._write_ = _write
166 out._getitem_ = _getitem
167 out._getattr_ = _getattr
168
169 # allow iterators and list comprehension
170 out._getiter_ = iter
171 out._iter_unpack_sequence_ = RestrictedPython.Guards.guarded_iter_unpack_sequence
172 out.sorted = sorted
173
174 return out
175
176 def read_sql(query, *args, **kwargs):
177 '''a wrapper for frappe.db.sql to allow reads'''
178 if query.strip().split(None, 1)[0].lower() == 'select':
179 return frappe.db.sql(query, *args, **kwargs)
180 else:
181 raise frappe.PermissionError('Only SELECT SQL allowed in scripting')
182
183 def run_script(script):
184 '''run another server script'''
185 return frappe.get_doc('Server Script', script).execute_method()
186
187 def _getitem(obj, key):
188 # guard function for RestrictedPython
189 # allow any key to be accessed as long as it does not start with underscore
190 if isinstance(key, str) and key.startswith('_'):
191 raise SyntaxError('Key starts with _')
192 return obj[key]
193
194 def _getattr(object, name, default=None):
195 # guard function for RestrictedPython
196 # allow any key to be accessed as long as
197 # 1. it does not start with an underscore (safer_getattr)
198 # 2. it is not an UNSAFE_ATTRIBUTES
199
200 UNSAFE_ATTRIBUTES = {
201 # Generator Attributes
202 "gi_frame", "gi_code",
203 # Coroutine Attributes
204 "cr_frame", "cr_code", "cr_origin",
205 # Async Generator Attributes
206 "ag_code", "ag_frame",
207 # Traceback Attributes
208 "tb_frame", "tb_next",
209 }
210
211 if isinstance(name, str) and (name in UNSAFE_ATTRIBUTES):
212 raise SyntaxError("{name} is an unsafe attribute".format(name=name))
213 return RestrictedPython.Guards.safer_getattr(object, name, default=default)
214
215 def _write(obj):
216 # guard function for RestrictedPython
217 # allow writing to any object
218 return obj
219
220 def add_data_utils(data):
221 for key, obj in frappe.utils.data.__dict__.items():
222 if key in VALID_UTILS:
223 data[key] = obj
224
225 def add_module_properties(module, data, filter_method):
226 for key, obj in module.__dict__.items():
227 if key.startswith("_"):
228 # ignore
229 continue
230
231 if filter_method(obj):
232 # only allow functions
233 data[key] = obj
234
235 VALID_UTILS = (
236 "DATE_FORMAT",
237 "TIME_FORMAT",
238 "DATETIME_FORMAT",
239 "is_invalid_date_string",
240 "getdate",
241 "get_datetime",
242 "to_timedelta",
243 "get_timedelta",
244 "add_to_date",
245 "add_days",
246 "add_months",
247 "add_years",
248 "date_diff",
249 "month_diff",
250 "time_diff",
251 "time_diff_in_seconds",
252 "time_diff_in_hours",
253 "now_datetime",
254 "get_timestamp",
255 "get_eta",
256 "get_time_zone",
257 "convert_utc_to_user_timezone",
258 "now",
259 "nowdate",
260 "today",
261 "nowtime",
262 "get_first_day",
263 "get_quarter_start",
264 "get_first_day_of_week",
265 "get_year_start",
266 "get_last_day_of_week",
267 "get_last_day",
268 "get_time",
269 "get_datetime_in_timezone",
270 "get_datetime_str",
271 "get_date_str",
272 "get_time_str",
273 "get_user_date_format",
274 "get_user_time_format",
275 "format_date",
276 "format_time",
277 "format_datetime",
278 "format_duration",
279 "get_weekdays",
280 "get_weekday",
281 "get_timespan_date_range",
282 "global_date_format",
283 "has_common",
284 "flt",
285 "cint",
286 "floor",
287 "ceil",
288 "cstr",
289 "rounded",
290 "remainder",
291 "safe_div",
292 "round_based_on_smallest_currency_fraction",
293 "encode",
294 "parse_val",
295 "fmt_money",
296 "get_number_format_info",
297 "money_in_words",
298 "in_words",
299 "is_html",
300 "is_image",
301 "get_thumbnail_base64_for_image",
302 "image_to_base64",
303 "strip_html",
304 "escape_html",
305 "pretty_date",
306 "comma_or",
307 "comma_and",
308 "comma_sep",
309 "new_line_sep",
310 "filter_strip_join",
311 "get_url",
312 "get_host_name_from_request",
313 "url_contains_port",
314 "get_host_name",
315 "get_link_to_form",
316 "get_link_to_report",
317 "get_absolute_url",
318 "get_url_to_form",
319 "get_url_to_list",
320 "get_url_to_report",
321 "get_url_to_report_with_filters",
322 "evaluate_filters",
323 "compare",
324 "get_filter",
325 "make_filter_tuple",
326 "make_filter_dict",
327 "sanitize_column",
328 "scrub_urls",
329 "expand_relative_urls",
330 "quoted",
331 "quote_urls",
332 "unique",
333 "strip",
334 "to_markdown",
335 "md_to_html",
336 "markdown",
337 "is_subset",
338 "generate_hash",
339 "formatdate",
340 "get_user_info_for_avatar",
341 "get_abbr"
342 )
343
[end of frappe/utils/safe_exec.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/frappe/utils/safe_exec.py b/frappe/utils/safe_exec.py
--- a/frappe/utils/safe_exec.py
+++ b/frappe/utils/safe_exec.py
@@ -147,6 +147,7 @@
set_value = frappe.db.set_value,
get_single_value = frappe.db.get_single_value,
get_default = frappe.db.get_default,
+ exists = frappe.db.exists,
count = frappe.db.count,
min = frappe.db.min,
max = frappe.db.max,
| {"golden_diff": "diff --git a/frappe/utils/safe_exec.py b/frappe/utils/safe_exec.py\n--- a/frappe/utils/safe_exec.py\n+++ b/frappe/utils/safe_exec.py\n@@ -147,6 +147,7 @@\n \t\t\tset_value = frappe.db.set_value,\n \t\t\tget_single_value = frappe.db.get_single_value,\n \t\t\tget_default = frappe.db.get_default,\n+\t\t\texists = frappe.db.exists,\n \t\t\tcount = frappe.db.count,\n \t\t\tmin = frappe.db.min,\n \t\t\tmax = frappe.db.max,\n", "issue": "frappe.db.exists is not available to Server Scripts\n**Is your feature request related to a problem? Please describe.**\r\n`frappe.db.exists` is not exposed to **Server Scripts**.\r\n\r\n**Describe alternatives you've considered**\r\nCurrently I am using `frappe.get_all` with `filters` and `limit=1` to check for existence.\r\n\n", "before_files": [{"content": "\nimport os, json, inspect\nimport mimetypes\nfrom html2text import html2text\nfrom RestrictedPython import compile_restricted, safe_globals\nimport RestrictedPython.Guards\nimport frappe\nfrom frappe import _\nimport frappe.utils\nimport frappe.utils.data\nfrom frappe.website.utils import (get_shade, get_toc, get_next_link)\nfrom frappe.modules import scrub\nfrom frappe.www.printview import get_visible_columns\nimport frappe.exceptions\nimport frappe.integrations.utils\nfrom frappe.frappeclient import FrappeClient\n\nclass ServerScriptNotEnabled(frappe.PermissionError):\n\tpass\n\nclass NamespaceDict(frappe._dict):\n\t\"\"\"Raise AttributeError if function not found in namespace\"\"\"\n\tdef __getattr__(self, key):\n\t\tret = self.get(key)\n\t\tif (not ret and key.startswith(\"__\")) or (key not in self):\n\t\t\tdef default_function(*args, **kwargs):\n\t\t\t\traise AttributeError(f\"module has no attribute '{key}'\")\n\t\t\treturn default_function\n\t\treturn ret\n\n\ndef safe_exec(script, _globals=None, _locals=None):\n\t# server scripts can be disabled via site_config.json\n\t# they are enabled by default\n\tif 'server_script_enabled' in frappe.conf:\n\t\tenabled = frappe.conf.server_script_enabled\n\telse:\n\t\tenabled = True\n\n\tif not enabled:\n\t\tfrappe.throw(_('Please Enable Server Scripts'), ServerScriptNotEnabled)\n\n\t# build globals\n\texec_globals = get_safe_globals()\n\tif _globals:\n\t\texec_globals.update(_globals)\n\n\t# execute script compiled by RestrictedPython\n\texec(compile_restricted(script), exec_globals, _locals) # pylint: disable=exec-used\n\n\treturn exec_globals, _locals\n\ndef get_safe_globals():\n\tdatautils = frappe._dict()\n\tif frappe.db:\n\t\tdate_format = frappe.db.get_default(\"date_format\") or \"yyyy-mm-dd\"\n\t\ttime_format = frappe.db.get_default(\"time_format\") or \"HH:mm:ss\"\n\telse:\n\t\tdate_format = \"yyyy-mm-dd\"\n\t\ttime_format = \"HH:mm:ss\"\n\n\tadd_data_utils(datautils)\n\n\tif \"_\" in getattr(frappe.local, 'form_dict', {}):\n\t\tdel frappe.local.form_dict[\"_\"]\n\n\tuser = getattr(frappe.local, \"session\", None) and frappe.local.session.user or \"Guest\"\n\n\tout = NamespaceDict(\n\t\t# make available limited methods of frappe\n\t\tjson=NamespaceDict(\n\t\t\tloads = json.loads,\n\t\t\tdumps = json.dumps),\n\t\tdict=dict,\n\t\tlog=frappe.log,\n\t\t_dict=frappe._dict,\n\t\tfrappe=NamespaceDict(\n\t\t\tflags=frappe._dict(),\n\t\t\tformat=frappe.format_value,\n\t\t\tformat_value=frappe.format_value,\n\t\t\tdate_format=date_format,\n\t\t\ttime_format=time_format,\n\t\t\tformat_date=frappe.utils.data.global_date_format,\n\t\t\tform_dict=getattr(frappe.local, 'form_dict', 
{}),\n\t\t\tbold=frappe.bold,\n\t\t\tcopy_doc=frappe.copy_doc,\n\t\t\terrprint=frappe.errprint,\n\n\t\t\tget_meta=frappe.get_meta,\n\t\t\tget_doc=frappe.get_doc,\n\t\t\tget_cached_doc=frappe.get_cached_doc,\n\t\t\tget_list=frappe.get_list,\n\t\t\tget_all=frappe.get_all,\n\t\t\tget_system_settings=frappe.get_system_settings,\n\t\t\trename_doc=frappe.rename_doc,\n\n\t\t\tutils=datautils,\n\t\t\tget_url=frappe.utils.get_url,\n\t\t\trender_template=frappe.render_template,\n\t\t\tmsgprint=frappe.msgprint,\n\t\t\tthrow=frappe.throw,\n\t\t\tsendmail = frappe.sendmail,\n\t\t\tget_print = frappe.get_print,\n\t\t\tattach_print = frappe.attach_print,\n\n\t\t\tuser=user,\n\t\t\tget_fullname=frappe.utils.get_fullname,\n\t\t\tget_gravatar=frappe.utils.get_gravatar_url,\n\t\t\tfull_name=frappe.local.session.data.full_name if getattr(frappe.local, \"session\", None) else \"Guest\",\n\t\t\trequest=getattr(frappe.local, 'request', {}),\n\t\t\tsession=frappe._dict(\n\t\t\t\tuser=user,\n\t\t\t\tcsrf_token=frappe.local.session.data.csrf_token if getattr(frappe.local, \"session\", None) else ''\n\t\t\t),\n\t\t\tmake_get_request = frappe.integrations.utils.make_get_request,\n\t\t\tmake_post_request = frappe.integrations.utils.make_post_request,\n\t\t\tsocketio_port=frappe.conf.socketio_port,\n\t\t\tget_hooks=frappe.get_hooks,\n\t\t\tsanitize_html=frappe.utils.sanitize_html,\n\t\t\tlog_error=frappe.log_error\n\t\t),\n\t\tFrappeClient=FrappeClient,\n\t\tstyle=frappe._dict(\n\t\t\tborder_color='#d1d8dd'\n\t\t),\n\t\tget_toc=get_toc,\n\t\tget_next_link=get_next_link,\n\t\t_=frappe._,\n\t\tget_shade=get_shade,\n\t\tscrub=scrub,\n\t\tguess_mimetype=mimetypes.guess_type,\n\t\thtml2text=html2text,\n\t\tdev_server=1 if frappe._dev_server else 0,\n\t\trun_script=run_script\n\t)\n\n\tadd_module_properties(frappe.exceptions, out.frappe, lambda obj: inspect.isclass(obj) and issubclass(obj, Exception))\n\n\tif not frappe.flags.in_setup_help:\n\t\tout.get_visible_columns = get_visible_columns\n\t\tout.frappe.date_format = date_format\n\t\tout.frappe.time_format = time_format\n\t\tout.frappe.db = NamespaceDict(\n\t\t\tget_list = frappe.get_list,\n\t\t\tget_all = frappe.get_all,\n\t\t\tget_value = frappe.db.get_value,\n\t\t\tset_value = frappe.db.set_value,\n\t\t\tget_single_value = frappe.db.get_single_value,\n\t\t\tget_default = frappe.db.get_default,\n\t\t\tcount = frappe.db.count,\n\t\t\tmin = frappe.db.min,\n\t\t\tmax = frappe.db.max,\n\t\t\tavg = frappe.db.avg,\n\t\t\tsum = frappe.db.sum,\n\t\t\tescape = frappe.db.escape,\n\t\t\tsql = read_sql\n\t\t)\n\n\tif frappe.response:\n\t\tout.frappe.response = frappe.response\n\n\tout.update(safe_globals)\n\n\t# default writer allows write access\n\tout._write_ = _write\n\tout._getitem_ = _getitem\n\tout._getattr_ = _getattr\n\n\t# allow iterators and list comprehension\n\tout._getiter_ = iter\n\tout._iter_unpack_sequence_ = RestrictedPython.Guards.guarded_iter_unpack_sequence\n\tout.sorted = sorted\n\n\treturn out\n\ndef read_sql(query, *args, **kwargs):\n\t'''a wrapper for frappe.db.sql to allow reads'''\n\tif query.strip().split(None, 1)[0].lower() == 'select':\n\t\treturn frappe.db.sql(query, *args, **kwargs)\n\telse:\n\t\traise frappe.PermissionError('Only SELECT SQL allowed in scripting')\n\ndef run_script(script):\n\t'''run another server script'''\n\treturn frappe.get_doc('Server Script', script).execute_method()\n\ndef _getitem(obj, key):\n\t# guard function for RestrictedPython\n\t# allow any key to be accessed as long as it does not start with underscore\n\tif 
isinstance(key, str) and key.startswith('_'):\n\t\traise SyntaxError('Key starts with _')\n\treturn obj[key]\n\ndef _getattr(object, name, default=None):\n\t# guard function for RestrictedPython\n\t# allow any key to be accessed as long as\n\t# 1. it does not start with an underscore (safer_getattr)\n\t# 2. it is not an UNSAFE_ATTRIBUTES\n\n\tUNSAFE_ATTRIBUTES = {\n\t\t# Generator Attributes\n\t\t\"gi_frame\", \"gi_code\",\n\t\t# Coroutine Attributes\n\t\t\"cr_frame\", \"cr_code\", \"cr_origin\",\n\t\t# Async Generator Attributes\n\t\t\"ag_code\", \"ag_frame\",\n\t\t# Traceback Attributes\n\t\t\"tb_frame\", \"tb_next\",\n\t}\n\n\tif isinstance(name, str) and (name in UNSAFE_ATTRIBUTES):\n\t\traise SyntaxError(\"{name} is an unsafe attribute\".format(name=name))\n\treturn RestrictedPython.Guards.safer_getattr(object, name, default=default)\n\ndef _write(obj):\n\t# guard function for RestrictedPython\n\t# allow writing to any object\n\treturn obj\n\ndef add_data_utils(data):\n\tfor key, obj in frappe.utils.data.__dict__.items():\n\t\tif key in VALID_UTILS:\n\t\t\tdata[key] = obj\n\ndef add_module_properties(module, data, filter_method):\n\tfor key, obj in module.__dict__.items():\n\t\tif key.startswith(\"_\"):\n\t\t\t# ignore\n\t\t\tcontinue\n\n\t\tif filter_method(obj):\n\t\t\t# only allow functions\n\t\t\tdata[key] = obj\n\nVALID_UTILS = (\n\"DATE_FORMAT\",\n\"TIME_FORMAT\",\n\"DATETIME_FORMAT\",\n\"is_invalid_date_string\",\n\"getdate\",\n\"get_datetime\",\n\"to_timedelta\",\n\"get_timedelta\",\n\"add_to_date\",\n\"add_days\",\n\"add_months\",\n\"add_years\",\n\"date_diff\",\n\"month_diff\",\n\"time_diff\",\n\"time_diff_in_seconds\",\n\"time_diff_in_hours\",\n\"now_datetime\",\n\"get_timestamp\",\n\"get_eta\",\n\"get_time_zone\",\n\"convert_utc_to_user_timezone\",\n\"now\",\n\"nowdate\",\n\"today\",\n\"nowtime\",\n\"get_first_day\",\n\"get_quarter_start\",\n\"get_first_day_of_week\",\n\"get_year_start\",\n\"get_last_day_of_week\",\n\"get_last_day\",\n\"get_time\",\n\"get_datetime_in_timezone\",\n\"get_datetime_str\",\n\"get_date_str\",\n\"get_time_str\",\n\"get_user_date_format\",\n\"get_user_time_format\",\n\"format_date\",\n\"format_time\",\n\"format_datetime\",\n\"format_duration\",\n\"get_weekdays\",\n\"get_weekday\",\n\"get_timespan_date_range\",\n\"global_date_format\",\n\"has_common\",\n\"flt\",\n\"cint\",\n\"floor\",\n\"ceil\",\n\"cstr\",\n\"rounded\",\n\"remainder\",\n\"safe_div\",\n\"round_based_on_smallest_currency_fraction\",\n\"encode\",\n\"parse_val\",\n\"fmt_money\",\n\"get_number_format_info\",\n\"money_in_words\",\n\"in_words\",\n\"is_html\",\n\"is_image\",\n\"get_thumbnail_base64_for_image\",\n\"image_to_base64\",\n\"strip_html\",\n\"escape_html\",\n\"pretty_date\",\n\"comma_or\",\n\"comma_and\",\n\"comma_sep\",\n\"new_line_sep\",\n\"filter_strip_join\",\n\"get_url\",\n\"get_host_name_from_request\",\n\"url_contains_port\",\n\"get_host_name\",\n\"get_link_to_form\",\n\"get_link_to_report\",\n\"get_absolute_url\",\n\"get_url_to_form\",\n\"get_url_to_list\",\n\"get_url_to_report\",\n\"get_url_to_report_with_filters\",\n\"evaluate_filters\",\n\"compare\",\n\"get_filter\",\n\"make_filter_tuple\",\n\"make_filter_dict\",\n\"sanitize_column\",\n\"scrub_urls\",\n\"expand_relative_urls\",\n\"quoted\",\n\"quote_urls\",\n\"unique\",\n\"strip\",\n\"to_markdown\",\n\"md_to_html\",\n\"markdown\",\n\"is_subset\",\n\"generate_hash\",\n\"formatdate\",\n\"get_user_info_for_avatar\",\n\"get_abbr\"\n)\n", "path": "frappe/utils/safe_exec.py"}]} | 3,956 | 120 |
gh_patches_debug_57932 | rasdani/github-patches | git_diff | scrapy__scrapy-3825 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue with Twisted and Python 3.4
Twisted had a patch release 3 days ago and it's causing the test suite to fail for the py34 environment. 
Twisted, according to its README, supports Python 3.5+. This needs to be fixed for the builds to pass.
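A sketch of one possible approach: cap Twisted only for the 3.4 interpreter via an environment marker in `setup.py`, so newer interpreters keep getting the latest Twisted (the exact upper bound should be confirmed against Twisted's release notes):

```python
install_requires=[
    'Twisted>=13.1.0;python_version!="3.4"',
    'Twisted>=13.1.0,<=19.2.0;python_version=="3.4"',  # pin to the last releases still supporting 3.4
    # ... remaining requirements unchanged ...
],
```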
</issue>
<code>
[start of setup.py]
1 from os.path import dirname, join
2 from pkg_resources import parse_version
3 from setuptools import setup, find_packages, __version__ as setuptools_version
4
5
6 with open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:
7 version = f.read().decode('ascii').strip()
8
9
10 def has_environment_marker_platform_impl_support():
11 """Code extracted from 'pytest/setup.py'
12 https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31
13
14 The first known release to support environment marker with range operators
15 it is 18.5, see:
16 https://setuptools.readthedocs.io/en/latest/history.html#id235
17 """
18 return parse_version(setuptools_version) >= parse_version('18.5')
19
20
21 extras_require = {}
22
23 if has_environment_marker_platform_impl_support():
24 extras_require[':platform_python_implementation == "PyPy"'] = [
25 'PyPyDispatcher>=2.1.0',
26 ]
27
28
29 setup(
30 name='Scrapy',
31 version=version,
32 url='https://scrapy.org',
33 description='A high-level Web Crawling and Web Scraping framework',
34 long_description=open('README.rst').read(),
35 author='Scrapy developers',
36 maintainer='Pablo Hoffman',
37 maintainer_email='[email protected]',
38 license='BSD',
39 packages=find_packages(exclude=('tests', 'tests.*')),
40 include_package_data=True,
41 zip_safe=False,
42 entry_points={
43 'console_scripts': ['scrapy = scrapy.cmdline:execute']
44 },
45 classifiers=[
46 'Framework :: Scrapy',
47 'Development Status :: 5 - Production/Stable',
48 'Environment :: Console',
49 'Intended Audience :: Developers',
50 'License :: OSI Approved :: BSD License',
51 'Operating System :: OS Independent',
52 'Programming Language :: Python',
53 'Programming Language :: Python :: 2',
54 'Programming Language :: Python :: 2.7',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.4',
57 'Programming Language :: Python :: 3.5',
58 'Programming Language :: Python :: 3.6',
59 'Programming Language :: Python :: 3.7',
60 'Programming Language :: Python :: Implementation :: CPython',
61 'Programming Language :: Python :: Implementation :: PyPy',
62 'Topic :: Internet :: WWW/HTTP',
63 'Topic :: Software Development :: Libraries :: Application Frameworks',
64 'Topic :: Software Development :: Libraries :: Python Modules',
65 ],
66 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
67 install_requires=[
68 'Twisted>=13.1.0',
69 'w3lib>=1.17.0',
70 'queuelib',
71 'lxml',
72 'pyOpenSSL',
73 'cssselect>=0.9',
74 'six>=1.5.2',
75 'parsel>=1.5',
76 'PyDispatcher>=2.0.5',
77 'service_identity',
78 ],
79 extras_require=extras_require,
80 )
81
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -65,7 +65,8 @@
],
python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
install_requires=[
- 'Twisted>=13.1.0',
+ 'Twisted>=13.1.0;python_version!="3.4"',
+ 'Twisted>=13.1.0,<=19.2.0;python_version=="3.4"',
'w3lib>=1.17.0',
'queuelib',
'lxml',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -65,7 +65,8 @@\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n install_requires=[\n- 'Twisted>=13.1.0',\n+ 'Twisted>=13.1.0;python_version!=\"3.4\"',\n+ 'Twisted>=13.1.0,<=19.2.0;python_version==\"3.4\"',\n 'w3lib>=1.17.0',\n 'queuelib',\n 'lxml',\n", "issue": "Issue with Twisted and Python 3.4 \nTwisted had a patch 3 days ago and it's causing test suite to fail for py34 environment. \r\nTwisted , according to their Readme, support Python 3.5+. This needs to be fixed if the builds need to pass\n", "before_files": [{"content": "from os.path import dirname, join\nfrom pkg_resources import parse_version\nfrom setuptools import setup, find_packages, __version__ as setuptools_version\n\n\nwith open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:\n version = f.read().decode('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\nextras_require = {}\n\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n 'PyPyDispatcher>=2.1.0',\n ]\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst').read(),\n author='Scrapy developers',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n install_requires=[\n 'Twisted>=13.1.0',\n 'w3lib>=1.17.0',\n 'queuelib',\n 'lxml',\n 'pyOpenSSL',\n 'cssselect>=0.9',\n 'six>=1.5.2',\n 'parsel>=1.5',\n 'PyDispatcher>=2.0.5',\n 'service_identity',\n ],\n extras_require=extras_require,\n)\n", "path": "setup.py"}]} | 1,425 | 153 |
gh_patches_debug_31720 | rasdani/github-patches | git_diff | meltano__meltano-7115 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow vendor-specific annotations in `meltano.yml`
Discussed in https://github.com/meltano/meltano/discussions/7053
We should update `meltano.schema.json` to permit an object with arbitrary fields (i.e. `"additionalProperties": true`) within any object in the schema that does not already impose restrictions on the permitted additional properties. That is to say:
- If the object has `"additionalProperties": true`, then it can have `annotations`, and no changes are required to the schema.
- If the object has `"additionalProperties": false`, then it can have `annotations` and we must explicitly add it to the properties.
- If the object has `additionalProperties` set to anything else (e.g. limiting it to strings as `env` does), then annotations are not supported, and no change to the schema is necessary.
So the only places in the schema that need to be updated are where it says `"additionalProperties": false`.
Documentation around what this field is for should be added. It should be made clear that the core Meltano library/CLI does not use this field, but it may be used by Meltano Cloud or by third-party tools.
The documentation should also state that we impose no limitation on how it can be used except for requiring that the top-level of each annotation object may only contain objects as properties, and that we recommend that third-party tools keep their annotations under a sensible key, such as the name of the tool.
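For illustration, a top-level annotation namespaced under a tool-specific key might look like this in `meltano.yml` (the tool name and fields are hypothetical, and Meltano itself would ignore them):

```yaml
version: 1

annotations:
  example-tool:                  # hypothetical third-party tool's key
    owner: data-platform-team
    dashboard_url: https://example.internal/meltano
```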
Testing should be done by adding annotations liberally to integration tests, and ensuring that they are ignored.
</issue>
<code>
[start of src/meltano/core/meltano_file.py]
1 """Module for working with meltano.yml files."""
2 from __future__ import annotations
3
4 import copy
5 from typing import Iterable
6
7 from meltano.core.behavior.canonical import Canonical
8 from meltano.core.environment import Environment
9 from meltano.core.plugin import PluginType
10 from meltano.core.plugin.project_plugin import ProjectPlugin
11 from meltano.core.schedule import Schedule
12 from meltano.core.task_sets import TaskSets
13
14 VERSION = 1
15
16
17 class MeltanoFile(Canonical):
18 """Data and loading methods for meltano.yml files."""
19
20 def __init__(
21 self,
22 version: int = VERSION,
23 plugins: dict[str, dict] = None,
24 schedules: list[dict] = None,
25 environments: list[dict] = None,
26 jobs: list[dict] = None,
27 env: dict[str, str] = None,
28 **extras,
29 ):
30 """Construct a new MeltanoFile object from meltano.yml file.
31
32 Args:
33 version: The meltano.yml version, currently always 1.
34 plugins: Plugin configuration for this project.
35 schedules: Schedule configuration for this project.
36 environments: Environment configuration for this project.
37 jobs: Job configuration for this project.
38 env: Environment variables for this project.
39 extras: Additional configuration for this project.
40 """
41 super().__init__(
42 # Attributes will be listed in meltano.yml in this order:
43 version=version,
44 extras=extras,
45 plugins=self.load_plugins(plugins or {}),
46 schedules=self.load_schedules(schedules or []),
47 environments=self.load_environments(environments or []),
48 jobs=self.load_job_tasks(jobs or []),
49 env=env or {},
50 )
51
52 def load_plugins(self, plugins: dict[str, dict]) -> Canonical:
53 """Parse the `meltano.yml` file and return it as `ProjectPlugin` instances.
54
55 Args:
56 plugins: Dictionary of plugin configurations.
57
58 Returns:
59 New ProjectPlugin instances.
60 """
61 plugin_type_plugins = Canonical()
62
63 for ptype in PluginType:
64 plugin_type_plugins[ptype] = []
65
66 # this will parse the meltano.yml file and create an instance of the
67 # corresponding `plugin_class` for all the plugins.
68 for plugin_type, raw_plugins in plugins.items():
69 if plugin_type == PluginType.MAPPERS:
70 for mapper in raw_plugins:
71 plugin_type_plugins[PluginType.MAPPERS].append(
72 ProjectPlugin(PluginType.MAPPERS, **mapper)
73 )
74 plugin_type_plugins[PluginType.MAPPERS].extend(
75 self.get_plugins_for_mappings(mapper)
76 )
77 else:
78 for raw_plugin in raw_plugins:
79 plugin = ProjectPlugin(PluginType(plugin_type), **raw_plugin)
80 plugin_type_plugins[plugin.type].append(plugin)
81
82 return plugin_type_plugins
83
84 def load_schedules(self, schedules: list[dict]) -> list[Schedule]:
85 """Parse the meltano.yml file and return it as Schedule instances.
86
87 Args:
88 schedules: List of schedule configurations.
89
90 Returns:
91 List of new Schedule instances.
92 """
93 return list(map(Schedule.parse, schedules))
94
95 @staticmethod
96 def load_environments(environments: Iterable[dict]) -> list[Environment]:
97 """Parse `Environment` objects from python objects.
98
99 Args:
100 environments: Sequence of environment dictionaries.
101
102 Returns:
103 A list of `Environment` objects.
104 """
105 return [Environment.parse(obj) for obj in environments]
106
107 @staticmethod
108 def load_job_tasks(jobs: Iterable[dict]) -> list[TaskSets]:
109 """Parse `TaskSets` objects from python objects.
110
111 Args:
112 jobs: Sequence of job dictionaries.
113
114 Returns:
115 A list of `Job` objects.
116 """
117 return [TaskSets.parse(obj) for obj in jobs]
118
119 @staticmethod
120 def get_plugins_for_mappings(mapper_config: dict) -> list[ProjectPlugin]:
121 """Mapper plugins are a special case. They are not a single plugin, but actually a list of plugins generated from the mapping config defined within the mapper config.
122
123 Args:
124 mapper_config: The dict representation of a mapper config found in in meltano.yml.
125
126 Returns:
127 A list of `ProjectPlugin` instances.
128 """
129 mapping_plugins: list[ProjectPlugin] = []
130 for mapping in mapper_config.get("mappings", []):
131 raw_mapping_plugin = copy.deepcopy(mapper_config)
132 raw_mapping_plugin["mapping"] = True
133 raw_mapping_plugin["mapping_name"] = mapping.get("name")
134 raw_mapping_plugin["config"] = mapping.get("config")
135 mapping_plugins.append(
136 ProjectPlugin(PluginType.MAPPERS, **raw_mapping_plugin)
137 )
138 return mapping_plugins
139
[end of src/meltano/core/meltano_file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/meltano/core/meltano_file.py b/src/meltano/core/meltano_file.py
--- a/src/meltano/core/meltano_file.py
+++ b/src/meltano/core/meltano_file.py
@@ -1,8 +1,9 @@
"""Module for working with meltano.yml files."""
+
from __future__ import annotations
import copy
-from typing import Iterable
+from typing import Any, Iterable
from meltano.core.behavior.canonical import Canonical
from meltano.core.environment import Environment
@@ -25,6 +26,7 @@
environments: list[dict] = None,
jobs: list[dict] = None,
env: dict[str, str] = None,
+ annotations: dict[str, dict[Any, Any]] | None = None, # noqa: WPS442
**extras,
):
"""Construct a new MeltanoFile object from meltano.yml file.
@@ -36,6 +38,7 @@
environments: Environment configuration for this project.
jobs: Job configuration for this project.
env: Environment variables for this project.
+ annotations: Annotations for external tools/vendors - do not access.
extras: Additional configuration for this project.
"""
super().__init__(
@@ -47,6 +50,7 @@
environments=self.load_environments(environments or []),
jobs=self.load_job_tasks(jobs or []),
env=env or {},
+ annotations=annotations,
)
def load_plugins(self, plugins: dict[str, dict]) -> Canonical:
| verification_info: {golden_diff, issue, before_files for "src/meltano/core/meltano_file.py"} (escaped duplicate of the golden diff, issue text, and original file shown in this row) | 2,185 | 339
gh_patches_debug_41084 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-210 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Penalty weights are incorrect
**Describe the bug**
See title.
**To Reproduce**
Steps to reproduce the behavior:
1. Any segmentation/classification problem
2. Add a print statement at the end of `GANDLF.utils.tensor.get_class_imbalance_weights`
3. See that `penalty_dict` and `weights_dict` are the same
For example, for a small I-SPY dataset:
```
penalty_dict: {0: 0.9928165871714076, 1: 0.0071834128285923945}
weights_dict: {0: 0.9928165871714076, 1: 0.0071834128285923945}
```
**Expected behavior**
Each should be `1 - the other`. In the above example, the output should be:
```
penalty_dict: {0: 0.0071834128285923945, 1: 0.9928165871714076}
weights_dict: {0: 0.9928165871714076, 1: 0.0071834128285923945}
```
**Screenshots**
N.A.
**GaNDLF Version**
0.0.12-dev
**Desktop (please complete the following information):**
N.A.
**Additional context**
Reported by @iskobleva
</issue>
<code>
[start of GANDLF/utils/tensor.py]
1 import os, sys
2 import numpy as np
3 import torch
4 import torch.nn as nn
5 import torchio
6
7
8 def one_hot(segmask_array, class_list):
9 """
10 This function creates a one-hot-encoded mask from the segmentation mask Tensor and specified class list
11
12 Args:
13 segmask_array (torch.Tensor): The segmentation mask Tensor.
14 class_list (list): The list of classes based on which one-hot encoding needs to happen.
15
16 Returns:
17 torch.Tensor: The one-hot encoded torch.Tensor
18 """
19 batch_size = segmask_array.shape[0]
20 batch_stack = []
21 for b in range(batch_size):
22 one_hot_stack = []
23 segmask_array_iter = segmask_array[b, 0]
24 bin_mask = segmask_array_iter == 0 # initialize bin_mask
25 for (
26 _class
27 ) in class_list: # this implementation allows users to combine logical operands
28 if isinstance(_class, str):
29 if "||" in _class: # special case
30 class_split = _class.split("||")
31 bin_mask = segmask_array_iter == int(class_split[0])
32 for i in range(1, len(class_split)):
33 bin_mask = bin_mask | (
34 segmask_array_iter == int(class_split[i])
35 )
36 elif "|" in _class: # special case
37 class_split = _class.split("|")
38 bin_mask = segmask_array_iter == int(class_split[0])
39 for i in range(1, len(class_split)):
40 bin_mask = bin_mask | (
41 segmask_array_iter == int(class_split[i])
42 )
43 else:
44 # assume that it is a simple int
45 bin_mask = segmask_array_iter == int(_class)
46 else:
47 bin_mask = segmask_array_iter == int(_class)
48 bin_mask = bin_mask.long()
49 one_hot_stack.append(bin_mask)
50 one_hot_stack = torch.stack(one_hot_stack)
51 batch_stack.append(one_hot_stack)
52 batch_stack = torch.stack(batch_stack)
53 return batch_stack
54
55
56 def reverse_one_hot(predmask_array, class_list):
57 """
58 This function creates a full segmentation mask Tensor from a one-hot-encoded mask and specified class list
59
60 Args:
61 predmask_array (torch.Tensor): The predicted segmentation mask Tensor.
62 class_list (list): The list of classes based on which one-hot encoding needs to happen.
63
64 Returns:
65 torch.Tensor: The final mask torch.Tensor.
66 """
67 if isinstance(predmask_array, torch.Tensor):
68 array_to_consider = predmask_array.cpu().numpy()
69 else:
70 array_to_consider = predmask_array
71 idx_argmax = np.argmax(array_to_consider, axis=0)
72 final_mask = 0
73 special_cases_to_check = ["||"]
74 special_case_detected = False
75 max_current = 0
76
77 for _class in class_list:
78 for case in special_cases_to_check:
79 if isinstance(_class, str):
80 if case in _class: # check if any of the special cases are present
81 special_case_detected = True
82 class_split = _class.split(
83 case
84 ) # if present, then split the sub-class
85 for i in class_split: # find the max for computation later on
86 if int(i) > max_current:
87 max_current = int(i)
88
89 if special_case_detected:
90 start_idx = 0
91 if (class_list[0] == 0) or (class_list[0] == "0"):
92 start_idx = 1
93
94 final_mask = np.asarray(
95 predmask_array[start_idx, :, :, :], dtype=int
96 ) # predmask_array[0,:,:,:].long()
97 start_idx += 1
98 for i in range(start_idx, len(class_list)):
99 final_mask += np.asarray(
100 predmask_array[0, :, :, :], dtype=int
101 ) # predmask_array[i,:,:,:].long()
102 # temp_sum = torch.sum(output)
103 # output_2 = (max_current - torch.sum(output)) % max_current
104 # test_2 = 1
105 else:
106 for idx, _class in enumerate(class_list):
107 final_mask = final_mask + (idx_argmax == idx) * _class
108 return final_mask
109
110
111 def send_model_to_device(model, amp, device, optimizer):
112 """
113 This function reads the environment variable(s) and send model to correct device
114
115 Args:
116 model (torch.nn.Module): The model that needs to be sent to specified device.
117 amp (bool): Whether automatic mixed precision is to be used.
118 device (str): Device type.
119 optimizer (torch.optim): The optimizer for training.
120
121 Returns:
122 torch.nn.Module: The model after it has been sent to specified device
123 bool: Whether automatic mixed precision is to be used or not.
124 torch.device: Device type.
125 """
126 if device != "cpu":
127 if os.environ.get("CUDA_VISIBLE_DEVICES") is None:
128 sys.exit(
129 "Please set the environment variable 'CUDA_VISIBLE_DEVICES' correctly before trying to run GANDLF on GPU"
130 )
131
132 dev = os.environ.get("CUDA_VISIBLE_DEVICES")
133 # multi-gpu support
134 # ###
135 # # https://discuss.pytorch.org/t/cuda-visible-devices-make-gpu-disappear/21439/17?u=sarthakpati
136 # ###
137 if "," in dev:
138 device = torch.device("cuda")
139 model = nn.DataParallel(model, "[" + dev + "]")
140 else:
141 print("Device requested via CUDA_VISIBLE_DEVICES: ", dev)
142 print("Total number of CUDA devices: ", torch.cuda.device_count())
143
144 # if only a single visible device, it will be indexed as '0'
145 if torch.cuda.device_count() == 1:
146 dev = "0"
147
148 dev_int = int(dev)
149 print("Device finally used: ", dev)
150 # device = torch.device('cuda:' + dev)
151 device = torch.device("cuda")
152 print("Sending model to aforementioned device")
153 model = model.to(device)
154 print(
155 "Memory Total : ",
156 round(
157 torch.cuda.get_device_properties(dev_int).total_memory / 1024 ** 3,
158 1,
159 ),
160 "GB, Allocated: ",
161 round(torch.cuda.memory_allocated(dev_int) / 1024 ** 3, 1),
162 "GB, Cached: ",
163 round(torch.cuda.memory_reserved(dev_int) / 1024 ** 3, 1),
164 "GB",
165 )
166
167 print(
168 "Device - Current: %s Count: %d Name: %s Availability: %s"
169 % (
170 torch.cuda.current_device(),
171 torch.cuda.device_count(),
172 torch.cuda.get_device_name(device),
173 torch.cuda.is_available(),
174 )
175 )
176
177 if not (optimizer is None):
178 # ensuring optimizer is in correct device - https://github.com/pytorch/pytorch/issues/8741
179 optimizer.load_state_dict(optimizer.state_dict())
180
181 else:
182 dev = -1
183 device = torch.device("cpu")
184 model.cpu()
185 amp = False
186 print("Since Device is CPU, Mixed Precision Training is set to False")
187
188 return model, amp, device
189
190
191 def get_class_imbalance_weights(training_data_loader, parameters):
192 """
193 This function calculates the penalty that is used for validation loss in multi-class problems
194
195 Args:
196 training_data_loader (torch.utils.data.DataLoader): The training data loader.
197 parameters (dict): The parameters passed by the user yaml.
198
199 Returns:
200 dict: The penalty weights for different classes under consideration.
201 """
202 abs_dict = {} # absolute counts for each class
203 weights_dict = {} # average for "weighted averaging"
204 penalty_dict = None # penalty for misclassification
205 # basically, do this for segmentation/classification tasks
206
207 if parameters["problem_type"] != "regression":
208 penalty_dict = {}
209 for i in range(0, len(parameters["model"]["class_list"])):
210 abs_dict[i] = 0
211 penalty_dict[i] = 0
212
213 penalty_loader = training_data_loader
214
215 # get the weights for use for dice loss
216 total_counter = 0
217
218 # For regression dice penalty need not be taken account
219 # For classification this should be calculated on the basis of predicted labels and mask
220 # iterate through full penalty data
221 for _, (subject) in enumerate(penalty_loader):
222
223 # segmentation needs masks to be one-hot encoded
224 if parameters["problem_type"] == "segmentation":
225 # accumulate dice weights for each label
226 mask = subject["label"][torchio.DATA]
227 one_hot_mask = one_hot(mask, parameters["model"]["class_list"])
228 for i in range(0, len(parameters["model"]["class_list"])):
229 currentNumber = torch.nonzero(
230 one_hot_mask[:, i, :, :, :], as_tuple=False
231 ).size(0)
232 # class-specific non-zero voxels
233 abs_dict[i] += currentNumber
234 # total number of non-zero voxels to be considered
235 total_counter += currentNumber
236
237 # for classification, the value needs to be used directly
238 elif parameters["problem_type"] == "classification":
239 # accumulate weights for each label
240 value_to_predict = subject["value_0"][0]
241 for i in range(0, len(parameters["model"]["class_list"])):
242 if value_to_predict == i:
243 abs_dict[i] += 1
244 # we only want to increase the counter for those subjects that are defined in the class_list
245 total_counter += 1
246
247 # Normalize class weights
248 weights_dict = {key: val / total_counter for key, val in abs_dict.items()}
249
250 # get the penalty values - abs_dict contains the overall number for each class in the penalty data
251 for i in range(0, len(parameters["model"]["class_list"])):
252 penalty = total_counter # start with the assumption that all the non-zero voxels (segmentation) or activate labels (classification) make up the penalty
253 for j in range(0, len(parameters["model"]["class_list"])):
254 if i != j: # for differing classes, subtract the current weight
255 penalty -= abs_dict[j]
256
257 # finally, the "penalty" variable contains the total number of voxels/activations that are not part of the current class
258 # this is to be used to weight the loss function
259 # adding epsilon to avoid division by zero
260 penalty_dict[i] = (penalty + sys.float_info.epsilon) / (
261 total_counter + sys.float_info.epsilon
262 )
263
264 return penalty_dict, weights_dict
265
[end of GANDLF/utils/tensor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/GANDLF/utils/tensor.py b/GANDLF/utils/tensor.py
--- a/GANDLF/utils/tensor.py
+++ b/GANDLF/utils/tensor.py
@@ -22,9 +22,8 @@
one_hot_stack = []
segmask_array_iter = segmask_array[b, 0]
bin_mask = segmask_array_iter == 0 # initialize bin_mask
- for (
- _class
- ) in class_list: # this implementation allows users to combine logical operands
+ # this implementation allows users to combine logical operands
+ for _class in class_list:
if isinstance(_class, str):
if "||" in _class: # special case
class_split = _class.split("||")
@@ -79,9 +78,8 @@
if isinstance(_class, str):
if case in _class: # check if any of the special cases are present
special_case_detected = True
- class_split = _class.split(
- case
- ) # if present, then split the sub-class
+ # if present, then split the sub-class
+ class_split = _class.split(case)
for i in class_split: # find the max for computation later on
if int(i) > max_current:
max_current = int(i)
@@ -91,9 +89,7 @@
if (class_list[0] == 0) or (class_list[0] == "0"):
start_idx = 1
- final_mask = np.asarray(
- predmask_array[start_idx, :, :, :], dtype=int
- ) # predmask_array[0,:,:,:].long()
+ final_mask = np.asarray(predmask_array[start_idx, :, :, :], dtype=int)
start_idx += 1
for i in range(start_idx, len(class_list)):
final_mask += np.asarray(
@@ -245,20 +241,21 @@
total_counter += 1
# Normalize class weights
- weights_dict = {key: val / total_counter for key, val in abs_dict.items()}
-
- # get the penalty values - abs_dict contains the overall number for each class in the penalty data
- for i in range(0, len(parameters["model"]["class_list"])):
- penalty = total_counter # start with the assumption that all the non-zero voxels (segmentation) or activate labels (classification) make up the penalty
- for j in range(0, len(parameters["model"]["class_list"])):
- if i != j: # for differing classes, subtract the current weight
- penalty -= abs_dict[j]
-
- # finally, the "penalty" variable contains the total number of voxels/activations that are not part of the current class
- # this is to be used to weight the loss function
- # adding epsilon to avoid division by zero
- penalty_dict[i] = (penalty + sys.float_info.epsilon) / (
- total_counter + sys.float_info.epsilon
- )
+ weights_dict = {
+ key: (val + sys.float_info.epsilon) / total_counter
+ for key, val in abs_dict.items()
+ }
+
+ # get the raw penalty values
+ penalty = {
+ key: total_counter / (len(abs_dict) * (val + sys.float_info.epsilon))
+ for key, val in abs_dict.items()
+ }
+ # normalize penalty to sum of 1
+ penalty_sum = np.fromiter(penalty.values(), dtype=np.float64).sum()
+ penalty_dict = {
+ key: (val + sys.float_info.epsilon) / penalty_sum
+ for key, val in penalty.items()
+ }
return penalty_dict, weights_dict
| verification_info: {golden_diff, issue, before_files for "GANDLF/utils/tensor.py"} (escaped duplicate of the golden diff, issue text, and original file shown in this row) | 3,915 | 839
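For reference, the corrected weighting scheme adopted in the golden diff above can be exercised on its own. The sketch below is a minimal standalone version of that computation; the two-class counts are hypothetical, chosen only to reproduce the proportions quoted in the issue:

```
import sys

# Hypothetical per-class non-zero voxel / label counts for a two-class problem.
abs_dict = {0: 9928, 1: 72}
total_counter = sum(abs_dict.values())

# Class weights: relative frequency of each class.
weights_dict = {
    key: (val + sys.float_info.epsilon) / total_counter
    for key, val in abs_dict.items()
}

# Raw penalties: inverse frequency, so the rare class is penalised more.
penalty = {
    key: total_counter / (len(abs_dict) * (val + sys.float_info.epsilon))
    for key, val in abs_dict.items()
}

# Normalise the penalties to sum to 1, mirroring the patched implementation.
penalty_sum = sum(penalty.values())
penalty_dict = {key: val / penalty_sum for key, val in penalty.items()}

print(weights_dict)  # approximately {0: 0.9928, 1: 0.0072}
print(penalty_dict)  # approximately {0: 0.0072, 1: 0.9928}, i.e. roughly 1 minus the weight
```

The printed values match the "Expected behavior" block of the issue: the penalty for a class is high exactly when its weight is low.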
gh_patches_debug_2557 | rasdani/github-patches | git_diff | ManimCommunity__manim-235 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
-f broken on windows
Basically the title.
When passing -f on Windows, it shows the video file with the default video browser (like -p does) instead of revealing it in the file explorer.
</issue>
<code>
[start of manim/__main__.py]
1 import inspect
2 import os
3 import platform
4 import subprocess as sp
5 import sys
6 import re
7 import traceback
8 import importlib.util
9 import types
10
11 from .config import file_writer_config
12 from .scene.scene import Scene
13 from .utils.sounds import play_error_sound
14 from .utils.sounds import play_finish_sound
15 from . import constants
16 from .logger import logger, console
17
18
19 def open_file_if_needed(file_writer):
20 if file_writer_config["quiet"]:
21 curr_stdout = sys.stdout
22 sys.stdout = open(os.devnull, "w")
23
24 open_file = any(
25 [file_writer_config["preview"], file_writer_config["show_file_in_finder"]]
26 )
27 if open_file:
28 current_os = platform.system()
29 file_paths = []
30
31 if file_writer_config["save_last_frame"]:
32 file_paths.append(file_writer.get_image_file_path())
33 if file_writer_config["write_to_movie"]:
34 file_paths.append(file_writer.get_movie_file_path())
35
36 for file_path in file_paths:
37 if current_os == "Windows":
38 os.startfile(file_path)
39 else:
40 commands = []
41 if current_os == "Linux":
42 commands.append("xdg-open")
43 elif current_os.startswith("CYGWIN"):
44 commands.append("cygstart")
45 else: # Assume macOS
46 commands.append("open")
47
48 if file_writer_config["show_file_in_finder"]:
49 commands.append("-R")
50
51 commands.append(file_path)
52
53 # commands.append("-g")
54 FNULL = open(os.devnull, "w")
55 sp.call(commands, stdout=FNULL, stderr=sp.STDOUT)
56 FNULL.close()
57
58 if file_writer_config["quiet"]:
59 sys.stdout.close()
60 sys.stdout = curr_stdout
61
62
63 def is_child_scene(obj, module):
64 return (
65 inspect.isclass(obj)
66 and issubclass(obj, Scene)
67 and obj != Scene
68 and obj.__module__.startswith(module.__name__)
69 )
70
71
72 def prompt_user_for_choice(scene_classes):
73 num_to_class = {}
74 for count, scene_class in enumerate(scene_classes):
75 count += 1 # start with 1 instead of 0
76 name = scene_class.__name__
77 console.print(f"{count}: {name}", style="logging.level.info")
78 num_to_class[count] = scene_class
79 try:
80 user_input = console.input(
81 f"[log.message] {constants.CHOOSE_NUMBER_MESSAGE} [/log.message]"
82 )
83 return [
84 num_to_class[int(num_str)]
85 for num_str in re.split(r"\s*,\s*", user_input.strip())
86 ]
87 except KeyError:
88 logger.error(constants.INVALID_NUMBER_MESSAGE)
89 sys.exit(2)
90 except EOFError:
91 sys.exit(1)
92
93
94 def get_scenes_to_render(scene_classes):
95 if not scene_classes:
96 logger.error(constants.NO_SCENE_MESSAGE)
97 return []
98 if file_writer_config["write_all"]:
99 return scene_classes
100 result = []
101 for scene_name in file_writer_config["scene_names"]:
102 found = False
103 for scene_class in scene_classes:
104 if scene_class.__name__ == scene_name:
105 result.append(scene_class)
106 found = True
107 break
108 if not found and (scene_name != ""):
109 logger.error(constants.SCENE_NOT_FOUND_MESSAGE.format(scene_name))
110 if result:
111 return result
112 return (
113 [scene_classes[0]]
114 if len(scene_classes) == 1
115 else prompt_user_for_choice(scene_classes)
116 )
117
118
119 def get_scene_classes_from_module(module):
120 return [
121 member[1]
122 for member in inspect.getmembers(module, lambda x: is_child_scene(x, module))
123 ]
124
125
126 def get_module(file_name):
127 if file_name == "-":
128 module = types.ModuleType("input_scenes")
129 logger.info(
130 "Enter the animation's code & end with an EOF (CTRL+D on Linux/Unix, CTRL+Z on Windows):"
131 )
132 code = sys.stdin.read()
133 if not code.startswith("from manim import"):
134 logger.warn(
135 "Didn't find an import statement for Manim. Importing automatically..."
136 )
137 code = "from manim import *\n" + code
138 logger.info("Rendering animation from typed code...")
139 try:
140 exec(code, module.__dict__)
141 return module
142 except Exception as e:
143 logger.error(f"Failed to render scene: {str(e)}")
144 sys.exit(2)
145 else:
146 if os.path.exists(file_name):
147 if file_name[-3:] != ".py":
148 raise Exception(f"{file_name} is not a valid Manim python script.")
149 module_name = file_name[:-3].replace(os.sep, ".").split(".")[-1]
150 spec = importlib.util.spec_from_file_location(module_name, file_name)
151 module = importlib.util.module_from_spec(spec)
152 spec.loader.exec_module(module)
153 return module
154 else:
155 raise FileNotFoundError(f"{file_name} not found")
156
157
158 def main():
159 module = get_module(file_writer_config["input_file"])
160 all_scene_classes = get_scene_classes_from_module(module)
161 scene_classes_to_render = get_scenes_to_render(all_scene_classes)
162 sound_on = file_writer_config["sound"]
163 for SceneClass in scene_classes_to_render:
164 try:
165 # By invoking, this renders the full scene
166 scene = SceneClass()
167 open_file_if_needed(scene.file_writer)
168 if sound_on:
169 play_finish_sound()
170 except Exception:
171 print("\n\n")
172 traceback.print_exc()
173 print("\n\n")
174 if sound_on:
175 play_error_sound()
176
177
178 if __name__ == "__main__":
179 main()
180
[end of manim/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/manim/__main__.py b/manim/__main__.py
--- a/manim/__main__.py
+++ b/manim/__main__.py
@@ -36,7 +36,7 @@
for file_path in file_paths:
if current_os == "Windows":
- os.startfile(file_path)
+ os.startfile(os.path.dirname(file_path))
else:
commands = []
if current_os == "Linux":
| verification_info: {golden_diff, issue, before_files for "manim/__main__.py"} (escaped duplicate of the golden diff, issue text, and original file shown in this row) | 2,211 | 99
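The fix above is a one-line change: on Windows, `os.startfile` is pointed at the directory that contains the rendered file, so File Explorer opens instead of the default video player. A minimal sketch of that behaviour follows; the helper name is illustrative and not part of manim:

```
import os
import platform


def reveal_in_explorer(file_path):
    """Open the folder containing file_path rather than the file itself (Windows only)."""
    if platform.system() == "Windows":
        # Passing the directory opens File Explorer; passing the file itself
        # would launch whatever application is associated with the file type.
        os.startfile(os.path.dirname(file_path))
    else:
        # Other platforms keep using xdg-open / cygstart / open as in the original code.
        raise NotImplementedError("sketch covers only the Windows branch")
```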
gh_patches_debug_40384 | rasdani/github-patches | git_diff | carpentries__amy-85 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Paginate display of all sites
The display of sites should be broken into pages, and each page should have navigation links to jump to other pages.
</issue>
<code>
[start of workshops/views.py]
1 import yaml
2
3 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
4 from django.core.urlresolvers import reverse
5 from django.http import HttpResponseRedirect
6 from django.shortcuts import render, get_object_or_404
7 from django.views.generic.edit import CreateView, UpdateView
8 from django.db.models import Count
9
10 from workshops.models import Site, Airport, Event, Person, Task, Cohort, Skill, Trainee, Badge
11 from workshops.forms import InstructorMatchForm
12 from workshops.util import earth_distance
13
14 #------------------------------------------------------------
15
16 ITEMS_PER_PAGE = 25
17
18 #------------------------------------------------------------
19
20 def index(request):
21 '''Home page.'''
22 upcoming_events = Event.objects.upcoming_events()
23 context = {'title' : 'Home Page',
24 'upcoming_events' : upcoming_events}
25 return render(request, 'workshops/index.html', context)
26
27 #------------------------------------------------------------
28
29 SITE_FIELDS = ['domain', 'fullname', 'country', 'notes']
30
31 def all_sites(request):
32 '''List all sites.'''
33 all_sites = Site.objects.order_by('domain')
34 user_can_add = request.user.has_perm('edit')
35 context = {'title' : 'All Sites',
36 'all_sites' : all_sites,
37 'user_can_add' : user_can_add}
38 return render(request, 'workshops/all_sites.html', context)
39
40 def site_details(request, site_domain):
41 '''List details of a particular site.'''
42 site = Site.objects.get(domain=site_domain)
43 events = Event.objects.filter(site=site)
44 context = {'title' : 'Site {0}'.format(site),
45 'site' : site,
46 'events' : events}
47 return render(request, 'workshops/site.html', context)
48
49 class SiteCreate(CreateView):
50 model = Site
51 fields = SITE_FIELDS
52
53 class SiteUpdate(UpdateView):
54 model = Site
55 fields = SITE_FIELDS
56 slug_field = 'domain'
57 slug_url_kwarg = 'site_domain'
58
59 #------------------------------------------------------------
60
61 AIRPORT_FIELDS = ['iata', 'fullname', 'country', 'latitude', 'longitude']
62
63 def all_airports(request):
64 '''List all airports.'''
65 all_airports = Airport.objects.order_by('iata')
66 user_can_add = request.user.has_perm('edit')
67 context = {'title' : 'All Airports',
68 'all_airports' : all_airports,
69 'user_can_add' : user_can_add}
70 return render(request, 'workshops/all_airports.html', context)
71
72 def airport_details(request, airport_iata):
73 '''List details of a particular airport.'''
74 airport = Airport.objects.get(iata=airport_iata)
75 context = {'title' : 'Airport {0}'.format(airport),
76 'airport' : airport}
77 return render(request, 'workshops/airport.html', context)
78
79 class AirportCreate(CreateView):
80 model = Airport
81 fields = AIRPORT_FIELDS
82
83 class AirportUpdate(UpdateView):
84 model = Airport
85 fields = AIRPORT_FIELDS
86 slug_field = 'iata'
87 slug_url_kwarg = 'airport_iata'
88
89 #------------------------------------------------------------
90
91 def all_persons(request):
92 '''List all persons.'''
93 all_persons = Person.objects.order_by('family', 'personal')
94 context = {'title' : 'All Persons',
95 'all_persons' : all_persons}
96 return render(request, 'workshops/all_persons.html', context)
97
98 def person_details(request, person_id):
99 '''List details of a particular person.'''
100 person = Person.objects.get(id=person_id)
101 context = {'title' : 'Person {0}'.format(person),
102 'person' : person}
103 return render(request, 'workshops/person.html', context)
104
105 #------------------------------------------------------------
106
107 def all_events(request):
108 '''List all events.'''
109
110 all_events = Event.objects.order_by('slug')
111
112 # Get the number of items requested per page, default to 25
113 # This is important for unit testing, we need to be
114 # able to specify how many items to expect
115 items = request.GET.get('items_per_page', ITEMS_PER_PAGE)
116
117 # Only paginate if the number of items is not 'all'
118 if not items == 'all':
119
120 # If items is not an integer, set it to the default
121 try:
122 items = int(items)
123 except ValueError:
124 items = ITEMS_PER_PAGE
125
126 events_paginator = Paginator(all_events, items)
127
128 # Get the page number requested, if any
129 page = request.GET.get('page')
130
131 try:
132 events = events_paginator.page(page)
133 except PageNotAnInteger:
134 # If page is not an integer, deliver first page.
135 events = events_paginator.page(1)
136 except EmptyPage:
137 # If page is out of range (e.g. 9999), deliver last page of results.
138 events = events_paginator.page(events_paginator.num_pages)
139 else:
140 events = all_events
141
142 context = {'title' : 'All Events',
143 'all_events' : events}
144
145 return render(request, 'workshops/all_events.html', context)
146
147 def event_details(request, event_slug):
148 '''List details of a particular event.'''
149 event = Event.objects.get(slug=event_slug)
150 context = {'title' : 'Event {0}'.format(event),
151 'event' : event}
152 return render(request, 'workshops/event.html', context)
153
154 #------------------------------------------------------------
155
156 TASK_FIELDS = ['event', 'person', 'role']
157
158 def all_tasks(request):
159 '''List all tasks.'''
160 all_tasks = Task.objects.order_by('event', 'person', 'role')
161 user_can_add = request.user.has_perm('edit')
162 context = {'title' : 'All Tasks',
163 'all_tasks' : all_tasks,
164 'user_can_add' : user_can_add}
165 return render(request, 'workshops/all_tasks.html', context)
166
167 def task_details(request, event_slug, person_id, role_name):
168 '''List details of a particular task.'''
169 task = Task.objects.get(event__slug=event_slug, person__id=person_id, role__name=role_name)
170 context = {'title' : 'Task {0}'.format(task),
171 'task' : task}
172 return render(request, 'workshops/task.html', context)
173
174 class TaskCreate(CreateView):
175 model = Task
176 fields = TASK_FIELDS
177
178 class TaskUpdate(UpdateView):
179 model = Task
180 fields = TASK_FIELDS
181 pk_url_kwarg = 'task_id'
182
183 def get_object(self):
184 """
185 Returns the object the view is displaying.
186 """
187
188 event_slug = self.kwargs.get('event_slug', None)
189 person_id = self.kwargs.get('person_id', None)
190 role_name = self.kwargs.get('role_name', None)
191
192 return get_object_or_404(Task, event__slug=event_slug, person__id=person_id, role__name=role_name)
193
194 #------------------------------------------------------------
195
196 COHORT_FIELDS = ['name', 'start', 'active', 'venue', 'qualifies']
197
198 def all_cohorts(request):
199 '''List all cohorts.'''
200 all_cohorts = Cohort.objects.order_by('start')
201 user_can_add = request.user.has_perm('edit')
202 context = {'title' : 'All Cohorts',
203 'all_cohorts' : all_cohorts,
204 'user_can_add' : user_can_add}
205 return render(request, 'workshops/all_cohorts.html', context)
206
207 def cohort_details(request, cohort_name):
208 '''List details of a particular cohort.'''
209 cohort = Cohort.objects.get(name=cohort_name)
210 trainees = Trainee.objects.filter(cohort_id=cohort.id)
211 context = {'title' : 'Cohort {0}'.format(cohort),
212 'cohort' : cohort,
213 'trainees' : trainees}
214 return render(request, 'workshops/cohort.html', context)
215
216 class CohortCreate(CreateView):
217 model = Cohort
218 fields = COHORT_FIELDS
219
220 class CohortUpdate(UpdateView):
221 model = Cohort
222 fields = COHORT_FIELDS
223 slug_field = 'name'
224 slug_url_kwarg = 'cohort_name'
225
226 #------------------------------------------------------------
227
228 def match(request):
229 persons = None
230
231 if request.method == 'POST':
232 form = InstructorMatchForm(request.POST)
233 if form.is_valid():
234
235 # Filter by skills.
236 persons = Person.objects.filter(airport__isnull=False)
237 skills = []
238 for s in Skill.objects.all():
239 if form.cleaned_data[s.name]:
240 skills.append(s)
241 persons = persons.have_skills(skills)
242
243 # Sort by location.
244 loc = (float(form.cleaned_data['latitude']),
245 float(form.cleaned_data['longitude']))
246 persons = [(earth_distance(loc, (p.airport.latitude, p.airport.longitude)), p)
247 for p in persons]
248 persons.sort()
249 persons = [x[1] for x in persons[:10]]
250
251 else:
252 pass # FIXME: error message
253
254 # if a GET (or any other method) we'll create a blank form
255 else:
256 form = InstructorMatchForm()
257
258 return render(request, 'workshops/match.html', {'form': form, 'persons' : persons})
259
260 #------------------------------------------------------------
261
262 def _export_badges():
263 '''Collect badge data as YAML.'''
264 result = {}
265 for badge in Badge.objects.all():
266 persons = Person.objects.filter(award__badge_id=badge.id)
267 result[badge.name] = [{"user" : p.slug, "name" : p.fullname()} for p in persons]
268 return result
269
270 def _export_instructors():
271 '''Collect instructor airport locations as YAML.'''
272 # Exclude airports with no instructors, and add the number of instructors per airport
273 airports = Airport.objects.exclude(person=None).annotate(num_persons=Count('person'))
274 return [{'airport' : str(a.fullname),
275 'latlng' : '{0},{1}'.format(a.latitude, a.longitude),
276 'count' : a.num_persons}
277 for a in airports]
278
279 def export(request, name):
280 '''Export data as YAML for inclusion in main web site.'''
281 data = None
282 if name == 'badges':
283 title, data = 'Badges', _export_badges()
284 elif name == 'instructors':
285 title, data = 'Instructor Locations', _export_instructors()
286 else:
287 title, data = 'Error', None # FIXME - need an error message
288 context = {'title' : title,
289 'data' : data}
290 return render(request, 'workshops/export.html', context)
291
[end of workshops/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/workshops/views.py b/workshops/views.py
--- a/workshops/views.py
+++ b/workshops/views.py
@@ -30,10 +30,33 @@
def all_sites(request):
'''List all sites.'''
+
all_sites = Site.objects.order_by('domain')
+ items, page = _get_pagination_items(request)
+
+ # Show everything.
+ if items == 'all':
+ sites = all_sites
+
+ # Show selected items.
+ else:
+ sites_paginator = Paginator(all_sites, items)
+
+ # Select the sites.
+ try:
+ sites = sites_paginator.page(page)
+
+ # If page is not an integer, deliver first page.
+ except PageNotAnInteger:
+ sites = sites_paginator.page(1)
+
+ # If page is out of range, deliver last page of results.
+ except EmptyPage:
+ sites = sites_paginator.page(sites_paginator.num_pages)
+
user_can_add = request.user.has_perm('edit')
context = {'title' : 'All Sites',
- 'all_sites' : all_sites,
+ 'all_sites' : sites,
'user_can_add' : user_can_add}
return render(request, 'workshops/all_sites.html', context)
@@ -108,36 +131,27 @@
'''List all events.'''
all_events = Event.objects.order_by('slug')
+ items, page = _get_pagination_items(request)
- # Get the number of items requested per page, default to 25
- # This is important for unit testing, we need to be
- # able to specify how many items to expect
- items = request.GET.get('items_per_page', ITEMS_PER_PAGE)
-
- # Only paginate if the number of items is not 'all'
- if not items == 'all':
-
- # If items is not an integer, set it to the default
- try:
- items = int(items)
- except ValueError:
- items = ITEMS_PER_PAGE
+ # Show everything.
+ if items == 'all':
+ events = all_events
+ # Show selected items.
+ else:
events_paginator = Paginator(all_events, items)
- # Get the page number requested, if any
- page = request.GET.get('page')
-
+ # Select the events.
try:
events = events_paginator.page(page)
+
+ # If page is not an integer, deliver first page.
except PageNotAnInteger:
- # If page is not an integer, deliver first page.
events = events_paginator.page(1)
+
+ # If page is out of range, deliver last page of results.
except EmptyPage:
- # If page is out of range (e.g. 9999), deliver last page of results.
events = events_paginator.page(events_paginator.num_pages)
- else:
- events = all_events
context = {'title' : 'All Events',
'all_events' : events}
@@ -288,3 +302,20 @@
context = {'title' : title,
'data' : data}
return render(request, 'workshops/export.html', context)
+
+#------------------------------------------------------------
+
+def _get_pagination_items(request):
+ '''Determine how much pagination to do.'''
+
+ items = request.GET.get('items_per_page', ITEMS_PER_PAGE)
+
+ if items != 'all':
+ try:
+ items = int(items)
+ except ValueError:
+ items = ITEMS_PER_PAGE
+
+ page = request.GET.get('page')
+
+ return items, page
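
A hedged sketch (not part of the patch above) of how the `_get_pagination_items` helper it introduces would be consumed from another view. The `all_airports_paginated` name is hypothetical and the template/context names simply mirror the existing `all_airports` view; `Paginator`, `PageNotAnInteger`, `EmptyPage`, `Airport`, and `render` are already imported at the top of `workshops/views.py`.

```python
# Illustrative only: a view built on the helper added by the patch above.
def all_airports_paginated(request):
    '''Hypothetical view showing _get_pagination_items in use.'''
    items, page = _get_pagination_items(request)
    queryset = Airport.objects.order_by('iata')

    if items == 'all':
        airports = queryset
    else:
        paginator = Paginator(queryset, items)
        try:
            airports = paginator.page(page)
        except PageNotAnInteger:
            airports = paginator.page(1)                    # non-integer page -> first page
        except EmptyPage:
            airports = paginator.page(paginator.num_pages)  # out of range -> last page

    return render(request, 'workshops/all_airports.html',
                  {'title': 'All Airports', 'all_airports': airports})
```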
| {"golden_diff": "diff --git a/workshops/views.py b/workshops/views.py\n--- a/workshops/views.py\n+++ b/workshops/views.py\n@@ -30,10 +30,33 @@\n \n def all_sites(request):\n '''List all sites.'''\n+\n all_sites = Site.objects.order_by('domain')\n+ items, page = _get_pagination_items(request)\n+\n+ # Show everything.\n+ if items == 'all':\n+ sites = all_sites\n+\n+ # Show selected items.\n+ else:\n+ sites_paginator = Paginator(all_sites, items)\n+\n+ # Select the sites.\n+ try:\n+ sites = sites_paginator.page(page)\n+\n+ # If page is not an integer, deliver first page.\n+ except PageNotAnInteger:\n+ sites = sites_paginator.page(1)\n+\n+ # If page is out of range, deliver last page of results.\n+ except EmptyPage:\n+ sites = sites_paginator.page(sites_paginator.num_pages)\n+\n user_can_add = request.user.has_perm('edit')\n context = {'title' : 'All Sites',\n- 'all_sites' : all_sites,\n+ 'all_sites' : sites,\n 'user_can_add' : user_can_add}\n return render(request, 'workshops/all_sites.html', context)\n \n@@ -108,36 +131,27 @@\n '''List all events.'''\n \n all_events = Event.objects.order_by('slug')\n+ items, page = _get_pagination_items(request)\n \n- # Get the number of items requested per page, default to 25\n- # This is important for unit testing, we need to be\n- # able to specify how many items to expect\n- items = request.GET.get('items_per_page', ITEMS_PER_PAGE)\n-\n- # Only paginate if the number of items is not 'all'\n- if not items == 'all':\n-\n- # If items is not an integer, set it to the default\n- try:\n- items = int(items)\n- except ValueError:\n- items = ITEMS_PER_PAGE\n+ # Show everything.\n+ if items == 'all':\n+ events = all_events\n \n+ # Show selected items.\n+ else:\n events_paginator = Paginator(all_events, items)\n \n- # Get the page number requested, if any\n- page = request.GET.get('page')\n-\n+ # Select the events.\n try:\n events = events_paginator.page(page)\n+\n+ # If page is not an integer, deliver first page.\n except PageNotAnInteger:\n- # If page is not an integer, deliver first page.\n events = events_paginator.page(1)\n+\n+ # If page is out of range, deliver last page of results.\n except EmptyPage:\n- # If page is out of range (e.g. 
9999), deliver last page of results.\n events = events_paginator.page(events_paginator.num_pages)\n- else:\n- events = all_events\n \n context = {'title' : 'All Events',\n 'all_events' : events}\n@@ -288,3 +302,20 @@\n context = {'title' : title,\n 'data' : data}\n return render(request, 'workshops/export.html', context)\n+\n+#------------------------------------------------------------\n+\n+def _get_pagination_items(request):\n+ '''Determine how much pagination to do.'''\n+\n+ items = request.GET.get('items_per_page', ITEMS_PER_PAGE)\n+\n+ if items != 'all':\n+ try:\n+ items = int(items)\n+ except ValueError:\n+ items = ITEMS_PER_PAGE\n+\n+ page = request.GET.get('page')\n+\n+ return items, page\n", "issue": "Paginate display of all sites\nThe display of sites should be broken into pages, and each page should have navigation links to jump to other pages.\n\n", "before_files": [{"content": "import yaml\n\nfrom django.core.paginator import Paginator, EmptyPage, PageNotAnInteger\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import render, get_object_or_404\nfrom django.views.generic.edit import CreateView, UpdateView\nfrom django.db.models import Count\n\nfrom workshops.models import Site, Airport, Event, Person, Task, Cohort, Skill, Trainee, Badge\nfrom workshops.forms import InstructorMatchForm\nfrom workshops.util import earth_distance\n\n#------------------------------------------------------------\n\nITEMS_PER_PAGE = 25\n\n#------------------------------------------------------------\n\ndef index(request):\n '''Home page.'''\n upcoming_events = Event.objects.upcoming_events()\n context = {'title' : 'Home Page',\n 'upcoming_events' : upcoming_events}\n return render(request, 'workshops/index.html', context)\n\n#------------------------------------------------------------\n\nSITE_FIELDS = ['domain', 'fullname', 'country', 'notes']\n\ndef all_sites(request):\n '''List all sites.'''\n all_sites = Site.objects.order_by('domain')\n user_can_add = request.user.has_perm('edit')\n context = {'title' : 'All Sites',\n 'all_sites' : all_sites,\n 'user_can_add' : user_can_add}\n return render(request, 'workshops/all_sites.html', context)\n\ndef site_details(request, site_domain):\n '''List details of a particular site.'''\n site = Site.objects.get(domain=site_domain)\n events = Event.objects.filter(site=site)\n context = {'title' : 'Site {0}'.format(site),\n 'site' : site,\n 'events' : events}\n return render(request, 'workshops/site.html', context)\n\nclass SiteCreate(CreateView):\n model = Site\n fields = SITE_FIELDS\n\nclass SiteUpdate(UpdateView):\n model = Site\n fields = SITE_FIELDS\n slug_field = 'domain'\n slug_url_kwarg = 'site_domain'\n\n#------------------------------------------------------------\n\nAIRPORT_FIELDS = ['iata', 'fullname', 'country', 'latitude', 'longitude']\n\ndef all_airports(request):\n '''List all airports.'''\n all_airports = Airport.objects.order_by('iata')\n user_can_add = request.user.has_perm('edit')\n context = {'title' : 'All Airports',\n 'all_airports' : all_airports,\n 'user_can_add' : user_can_add}\n return render(request, 'workshops/all_airports.html', context)\n\ndef airport_details(request, airport_iata):\n '''List details of a particular airport.'''\n airport = Airport.objects.get(iata=airport_iata)\n context = {'title' : 'Airport {0}'.format(airport),\n 'airport' : airport}\n return render(request, 'workshops/airport.html', context)\n\nclass AirportCreate(CreateView):\n model = Airport\n 
fields = AIRPORT_FIELDS\n\nclass AirportUpdate(UpdateView):\n model = Airport\n fields = AIRPORT_FIELDS\n slug_field = 'iata'\n slug_url_kwarg = 'airport_iata'\n\n#------------------------------------------------------------\n\ndef all_persons(request):\n '''List all persons.'''\n all_persons = Person.objects.order_by('family', 'personal')\n context = {'title' : 'All Persons',\n 'all_persons' : all_persons}\n return render(request, 'workshops/all_persons.html', context)\n\ndef person_details(request, person_id):\n '''List details of a particular person.'''\n person = Person.objects.get(id=person_id)\n context = {'title' : 'Person {0}'.format(person),\n 'person' : person}\n return render(request, 'workshops/person.html', context)\n\n#------------------------------------------------------------\n\ndef all_events(request):\n '''List all events.'''\n\n all_events = Event.objects.order_by('slug')\n\n # Get the number of items requested per page, default to 25\n # This is important for unit testing, we need to be\n # able to specify how many items to expect\n items = request.GET.get('items_per_page', ITEMS_PER_PAGE)\n\n # Only paginate if the number of items is not 'all'\n if not items == 'all':\n\n # If items is not an integer, set it to the default\n try:\n items = int(items)\n except ValueError:\n items = ITEMS_PER_PAGE\n\n events_paginator = Paginator(all_events, items)\n\n # Get the page number requested, if any\n page = request.GET.get('page')\n\n try:\n events = events_paginator.page(page)\n except PageNotAnInteger:\n # If page is not an integer, deliver first page.\n events = events_paginator.page(1)\n except EmptyPage:\n # If page is out of range (e.g. 9999), deliver last page of results.\n events = events_paginator.page(events_paginator.num_pages)\n else:\n events = all_events\n\n context = {'title' : 'All Events',\n 'all_events' : events}\n\n return render(request, 'workshops/all_events.html', context)\n\ndef event_details(request, event_slug):\n '''List details of a particular event.'''\n event = Event.objects.get(slug=event_slug)\n context = {'title' : 'Event {0}'.format(event),\n 'event' : event}\n return render(request, 'workshops/event.html', context)\n\n#------------------------------------------------------------\n\nTASK_FIELDS = ['event', 'person', 'role']\n\ndef all_tasks(request):\n '''List all tasks.'''\n all_tasks = Task.objects.order_by('event', 'person', 'role')\n user_can_add = request.user.has_perm('edit')\n context = {'title' : 'All Tasks',\n 'all_tasks' : all_tasks,\n 'user_can_add' : user_can_add}\n return render(request, 'workshops/all_tasks.html', context)\n\ndef task_details(request, event_slug, person_id, role_name):\n '''List details of a particular task.'''\n task = Task.objects.get(event__slug=event_slug, person__id=person_id, role__name=role_name)\n context = {'title' : 'Task {0}'.format(task),\n 'task' : task}\n return render(request, 'workshops/task.html', context)\n\nclass TaskCreate(CreateView):\n model = Task\n fields = TASK_FIELDS\n\nclass TaskUpdate(UpdateView):\n model = Task\n fields = TASK_FIELDS\n pk_url_kwarg = 'task_id'\n\n def get_object(self):\n \"\"\"\n Returns the object the view is displaying.\n \"\"\"\n\n event_slug = self.kwargs.get('event_slug', None)\n person_id = self.kwargs.get('person_id', None)\n role_name = self.kwargs.get('role_name', None)\n\n return get_object_or_404(Task, event__slug=event_slug, person__id=person_id, role__name=role_name)\n\n#------------------------------------------------------------\n\nCOHORT_FIELDS = 
['name', 'start', 'active', 'venue', 'qualifies']\n\ndef all_cohorts(request):\n '''List all cohorts.'''\n all_cohorts = Cohort.objects.order_by('start')\n user_can_add = request.user.has_perm('edit')\n context = {'title' : 'All Cohorts',\n 'all_cohorts' : all_cohorts,\n 'user_can_add' : user_can_add}\n return render(request, 'workshops/all_cohorts.html', context)\n\ndef cohort_details(request, cohort_name):\n '''List details of a particular cohort.'''\n cohort = Cohort.objects.get(name=cohort_name)\n trainees = Trainee.objects.filter(cohort_id=cohort.id)\n context = {'title' : 'Cohort {0}'.format(cohort),\n 'cohort' : cohort,\n 'trainees' : trainees}\n return render(request, 'workshops/cohort.html', context)\n\nclass CohortCreate(CreateView):\n model = Cohort\n fields = COHORT_FIELDS\n\nclass CohortUpdate(UpdateView):\n model = Cohort\n fields = COHORT_FIELDS\n slug_field = 'name'\n slug_url_kwarg = 'cohort_name'\n\n#------------------------------------------------------------\n\ndef match(request):\n persons = None\n\n if request.method == 'POST':\n form = InstructorMatchForm(request.POST)\n if form.is_valid():\n\n # Filter by skills.\n persons = Person.objects.filter(airport__isnull=False)\n skills = []\n for s in Skill.objects.all():\n if form.cleaned_data[s.name]:\n skills.append(s)\n persons = persons.have_skills(skills)\n\n # Sort by location.\n loc = (float(form.cleaned_data['latitude']),\n float(form.cleaned_data['longitude']))\n persons = [(earth_distance(loc, (p.airport.latitude, p.airport.longitude)), p)\n for p in persons]\n persons.sort()\n persons = [x[1] for x in persons[:10]]\n\n else:\n pass # FIXME: error message\n\n # if a GET (or any other method) we'll create a blank form\n else:\n form = InstructorMatchForm()\n\n return render(request, 'workshops/match.html', {'form': form, 'persons' : persons})\n\n#------------------------------------------------------------\n\ndef _export_badges():\n '''Collect badge data as YAML.'''\n result = {}\n for badge in Badge.objects.all():\n persons = Person.objects.filter(award__badge_id=badge.id)\n result[badge.name] = [{\"user\" : p.slug, \"name\" : p.fullname()} for p in persons]\n return result\n\ndef _export_instructors():\n '''Collect instructor airport locations as YAML.'''\n # Exclude airports with no instructors, and add the number of instructors per airport\n airports = Airport.objects.exclude(person=None).annotate(num_persons=Count('person'))\n return [{'airport' : str(a.fullname),\n 'latlng' : '{0},{1}'.format(a.latitude, a.longitude),\n 'count' : a.num_persons}\n for a in airports]\n\ndef export(request, name):\n '''Export data as YAML for inclusion in main web site.'''\n data = None\n if name == 'badges':\n title, data = 'Badges', _export_badges()\n elif name == 'instructors':\n title, data = 'Instructor Locations', _export_instructors()\n else:\n title, data = 'Error', None # FIXME - need an error message\n context = {'title' : title,\n 'data' : data}\n return render(request, 'workshops/export.html', context)\n", "path": "workshops/views.py"}]} | 3,605 | 820 |
gh_patches_debug_8898 | rasdani/github-patches | git_diff | speechbrain__speechbrain-71 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Learning rate printing
Now it will only print 0.00. Maybe we should print it with scientific notation.
</issue>
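Not the project's actual fix — just a hedged, standalone sketch of the formatting change the issue asks for: keep ordinary values in fixed-point but switch very small (or very large) floats to scientific notation. The 0.01/100.0 thresholds mirror the accompanying patch; the `format_stat` name is invented for illustration.

```python
def format_stat(value):
    # Sketch only: mid-range floats print as fixed-point, the rest as scientific notation.
    if isinstance(value, float) and 0.01 < value < 100.0:
        return f"{value:.2f}"
    if isinstance(value, float):
        return f"{value:.2e}"
    return str(value)

# format_stat(0.0001)  -> '1.00e-04'  (a learning rate no longer prints as '0.00')
# format_stat(3.14159) -> '3.14'
```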
<code>
[start of speechbrain/utils/train_logger.py]
1 import logging
2 from speechbrain.utils.edit_distance import wer_summary
3
4 logger = logging.getLogger(__name__)
5
6
7 class TrainLogger:
8 """Abstract class defining an interface for training loggers."""
9
10 def log_stats(
11 self,
12 stats_meta,
13 train_stats=None,
14 valid_stats=None,
15 test_stats=None,
16 verbose=False,
17 ):
18 """Log the stats for one epoch.
19
20 Arguments
21 ---------
22 stats_meta : dict of str:scalar pairs
23 Meta information about the stats (e.g. epoch, learning-rate, etc.)
24 train_stats : dict of str:list pairs
25 Each loss type is represented with a str : list pair including
26 all the values for the training pass.
27 valid_stats : dict of str:list pairs
28 Each loss type is represented with a str : list pair including
29 all the values for the validation pass.
30 test_stats : dict of str:list pairs
31 Each loss type is represented with a str : list pair including
32 all the values for the test pass.
33 verbose : bool
34 Whether to also put logging information to the standard logger.
35 """
36 raise NotImplementedError
37
38
39 class FileTrainLogger(TrainLogger):
40 """Text logger of training information
41
42 Arguments
43 ---------
44 save_file : str
45 The file to use for logging train information.
46 summary_fns : dict of str:function pairs
47 Each summary function should take a list produced as output
48 from a training/validation pass and summarize it to a single scalar.
49 """
50
51 def __init__(self, save_file, summary_fns):
52 self.save_file = save_file
53 self.summary_fns = summary_fns
54
55 def _item_to_string(self, key, value, dataset=None):
56 """Convert one item to string, handling floats"""
57 if isinstance(value, float):
58 value = f"{value:.2f}"
59 if dataset is not None:
60 key = f"{dataset} {key}"
61 return f"{key}: {value}"
62
63 def _stats_to_string(self, stats, dataset=None):
64 """Convert all stats to a single string summary"""
65 return ", ".join(
66 [self._item_to_string(k, v, dataset) for k, v in stats.items()]
67 )
68
69 def log_stats(
70 self,
71 stats_meta,
72 train_stats=None,
73 valid_stats=None,
74 test_stats=None,
75 verbose=True,
76 ):
77 """See TrainLogger.log_stats()"""
78 string_summary = self._stats_to_string(stats_meta)
79 for dataset, stats in [
80 ("train", train_stats),
81 ("valid", valid_stats),
82 ("test", test_stats),
83 ]:
84 if stats is None:
85 continue
86 summary = {}
87 for stat, value_list in stats.items():
88 summary[stat] = self.summary_fns[stat](value_list)
89 string_summary += " - " + self._stats_to_string(summary, dataset)
90
91 with open(self.save_file, "a") as fout:
92 print(string_summary, file=fout)
93 if verbose:
94 logger.info(string_summary)
95
96
97 class TensorboardLogger(TrainLogger):
98 """Logs training information in the format required by Tensorboard.
99
100 Arguments
101 ---------
102 save_dir : str
103 A directory for storing all the relevant logs
104
105 Raises
106 ------
107 ImportError if Tensorboard is not installed.
108 """
109
110 def __init__(self, save_dir):
111 self.save_dir = save_dir
112
113 # Raises ImportError if TensorBoard is not installed
114 from torch.utils.tensorboard import SummaryWriter
115
116 self.writer = SummaryWriter(self.save_dir)
117 self.global_step = {"train": {}, "valid": {}, "meta": 0}
118
119 def log_stats(
120 self,
121 stats_meta,
122 train_stats=None,
123 valid_stats=None,
124 test_stats=None,
125 verbose=False,
126 ):
127 """See TrainLogger.log_stats()"""
128 self.global_step["meta"] += 1
129 for name, value in stats_meta.items():
130 self.writer.add_scalar(name, value, self.global_step["meta"])
131
132 for dataset, stats in [
133 ("train", train_stats),
134 ("valid", valid_stats),
135 ("test", test_stats),
136 ]:
137 if stats is None:
138 continue
139 for stat, value_list in stats.items():
140 if stat not in self.global_step[dataset]:
141 self.global_step[dataset][stat] = 0
142 tag = f"{stat}/{dataset}"
143 for value in value_list:
144 new_global_step = self.global_step[dataset][stat] + 1
145 self.writer.add_scalar(tag, value, new_global_step)
146 self.global_step[dataset][stat] = new_global_step
147
148
149 def summarize_average(stat_list):
150 return float(sum(stat_list) / len(stat_list))
151
152
153 def summarize_error_rate(stat_list):
154 summary = wer_summary(stat_list)
155 return summary["WER"]
156
[end of speechbrain/utils/train_logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py
--- a/speechbrain/utils/train_logger.py
+++ b/speechbrain/utils/train_logger.py
@@ -54,8 +54,10 @@
def _item_to_string(self, key, value, dataset=None):
"""Convert one item to string, handling floats"""
- if isinstance(value, float):
+ if isinstance(value, float) and 0.01 < value < 100.0:
value = f"{value:.2f}"
+ elif isinstance(value, float):
+ value = f"{value:.2e}"
if dataset is not None:
key = f"{dataset} {key}"
return f"{key}: {value}"
| {"golden_diff": "diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py\n--- a/speechbrain/utils/train_logger.py\n+++ b/speechbrain/utils/train_logger.py\n@@ -54,8 +54,10 @@\n \n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n- if isinstance(value, float):\n+ if isinstance(value, float) and 0.01 < value < 100.0:\n value = f\"{value:.2f}\"\n+ elif isinstance(value, float):\n+ value = f\"{value:.2e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n", "issue": "Learning rate printing\nNow it will only print 0.00. Maybe we should print it with scientific notation.\n", "before_files": [{"content": "import logging\nfrom speechbrain.utils.edit_distance import wer_summary\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrainLogger:\n \"\"\"Abstract class defining an interface for training loggers.\"\"\"\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"Log the stats for one epoch.\n\n Arguments\n ---------\n stats_meta : dict of str:scalar pairs\n Meta information about the stats (e.g. epoch, learning-rate, etc.)\n train_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the training pass.\n valid_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the validation pass.\n test_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the test pass.\n verbose : bool\n Whether to also put logging information to the standard logger.\n \"\"\"\n raise NotImplementedError\n\n\nclass FileTrainLogger(TrainLogger):\n \"\"\"Text logger of training information\n\n Arguments\n ---------\n save_file : str\n The file to use for logging train information.\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n\n def __init__(self, save_file, summary_fns):\n self.save_file = save_file\n self.summary_fns = summary_fns\n\n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n if isinstance(value, float):\n value = f\"{value:.2f}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n\n def _stats_to_string(self, stats, dataset=None):\n \"\"\"Convert all stats to a single string summary\"\"\"\n return \", \".join(\n [self._item_to_string(k, v, dataset) for k, v in stats.items()]\n )\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=True,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n string_summary = self._stats_to_string(stats_meta)\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n summary = {}\n for stat, value_list in stats.items():\n summary[stat] = self.summary_fns[stat](value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n\n with open(self.save_file, \"a\") as fout:\n print(string_summary, file=fout)\n if verbose:\n logger.info(string_summary)\n\n\nclass TensorboardLogger(TrainLogger):\n \"\"\"Logs training information in the format required by Tensorboard.\n\n Arguments\n ---------\n save_dir : str\n A 
directory for storing all the relevant logs\n\n Raises\n ------\n ImportError if Tensorboard is not installed.\n \"\"\"\n\n def __init__(self, save_dir):\n self.save_dir = save_dir\n\n # Raises ImportError if TensorBoard is not installed\n from torch.utils.tensorboard import SummaryWriter\n\n self.writer = SummaryWriter(self.save_dir)\n self.global_step = {\"train\": {}, \"valid\": {}, \"meta\": 0}\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n self.global_step[\"meta\"] += 1\n for name, value in stats_meta.items():\n self.writer.add_scalar(name, value, self.global_step[\"meta\"])\n\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n for stat, value_list in stats.items():\n if stat not in self.global_step[dataset]:\n self.global_step[dataset][stat] = 0\n tag = f\"{stat}/{dataset}\"\n for value in value_list:\n new_global_step = self.global_step[dataset][stat] + 1\n self.writer.add_scalar(tag, value, new_global_step)\n self.global_step[dataset][stat] = new_global_step\n\n\ndef summarize_average(stat_list):\n return float(sum(stat_list) / len(stat_list))\n\n\ndef summarize_error_rate(stat_list):\n summary = wer_summary(stat_list)\n return summary[\"WER\"]\n", "path": "speechbrain/utils/train_logger.py"}]} | 1,964 | 167 |
gh_patches_debug_5285 | rasdani/github-patches | git_diff | freedomofpress__securedrop-7140 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release SecureDrop 2.8.0
This is a tracking issue for the release of SecureDrop 2.8.0
Tentatively scheduled as follows:
**Pre-release announcement:** 2024-03-05
**Release date:** 2024-03-12
**Release manager:** @zenmonkeykstop
**Deputy release manager:** @legoktm
**Localization manager:** @cfm
**Communications manager:** @eloquence
_SecureDrop maintainers and testers:_ As you QA 2.8.0, please report back your testing results as comments on this ticket. File GitHub issues for any problems found, tag them "QA: Release".
Test debian packages will be posted on https://apt-test.freedom.press signed with [the test key](https://gist.githubusercontent.com/conorsch/ec4008b111bc3142fca522693f3cce7e/raw/2968621e8ad92db4505a31fcc5776422d7d26729/apt-test%2520apt%2520pubkey).
# [QA Matrix for 2.8.0](https://docs.google.com/spreadsheets/d/1hcSrgbid03so0tQz3zfwvMaWJ8x7OOZsCfEz1I_PjAE/edit#gid=96348658)
# [Test Plan for 2.8.0](https://github.com/freedomofpress/securedrop/wiki/2.8.0-Test-Plan)
# [Tails-only test plan for 2.8.0-rc2](https://github.com/freedomofpress/securedrop/issues/7121#issuecomment-1988954749)
(complete if you've already tested 2.8.0-rc1, there are no server changes in rc2)
# Prepare release candidate (2.8.0~rc1)
- [ ] Link to latest version of Tails, including release candidates, to test against during QA
- [ ] Tails 5
- [ ] Tails 6
- [x] Prepare 2.8.0~rc1 release changelog
- [x] Branch off release/2.8.0 from develop
- [x] Prepare 2.8.0
- [ ] Build debs, preserving build log, and put up `2.8.0~rc1` on test apt server
- [ ] Commit build log.
# Prepare release candidate (2.8.0~rc2)
- [ ] Link to latest version of Tails, including release candidates, to test against during QA
- [x] Tails 5
- [x] Tails 6
- [x] Prepare 2.8.0~rc2 release changelog
- [x] Branch off release/2.8.0 from develop
- [x] Prepare 2.8.0-rc2
- [ ] ~Build debs, preserving build log, and put up `2.8.0~rc2` on test apt server~ skipped, as changes are Tails-only.
- [ ] ~Commit build log.~
After each test, please update the QA matrix and post details for Basic Server Testing, Application Acceptance Testing and release-specific testing below in comments to this ticket.
# Final release
- [ ] ~Ensure builder in release branch is updated and/or update builder image~ (no longer in use)
- [x] Push signed tag
- [x] Pre-Flight: Test updater logic in Tails (apt-qa tracks the `release` branch in the LFS repo)
- [x] Build final Debian packages(and preserve build log)
- [x] Commit package build log to https://github.com/freedomofpress/build-logs
- [x] Pre-Flight: Test that install and upgrade from 2.7.0 to 2.8.0 works w/ prod repo debs (apt-qa.freedom.press polls the `release` branch in the LFS repo for the debs)
- [ ] Flip apt QA server to prod status (merge to `main` in the LFS repo)
- [ ] Merge Docs branch changes to ``main`` and verify new docs build in securedrop-docs repo
- [ ] Prepare release messaging
# Post release
- [ ] Create GitHub release object
- [ ] Once release object is created, update versions in `securedrop-docs` and Wagtail
- [ ] Verify new docs show up on https://docs.securedrop.org
- [ ] Publish announcements
- [ ] Merge changelog back to `develop`
- [ ] Update roadmap wiki page: https://github.com/freedomofpress/securedrop/wiki/Development-Roadmap
</issue>
<code>
[start of securedrop/version.py]
1 __version__ = "2.8.0~rc1"
2
[end of securedrop/version.py]
[start of securedrop/setup.py]
1 import setuptools
2
3 long_description = "The SecureDrop whistleblower platform."
4
5 setuptools.setup(
6 name="securedrop-app-code",
7 version="2.8.0~rc1",
8 author="Freedom of the Press Foundation",
9 author_email="[email protected]",
10 description="SecureDrop Server",
11 long_description=long_description,
12 long_description_content_type="text/markdown",
13 license="AGPLv3+",
14 python_requires=">=3.8",
15 url="https://github.com/freedomofpress/securedrop",
16 classifiers=[
17 "Development Status :: 5 - Stable",
18 "Programming Language :: Python :: 3",
19 "Topic :: Software Development :: Libraries :: Python Modules",
20 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
21 "Intended Audience :: Developers",
22 "Operating System :: OS Independent",
23 ],
24 )
25
[end of securedrop/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/setup.py b/securedrop/setup.py
--- a/securedrop/setup.py
+++ b/securedrop/setup.py
@@ -4,7 +4,7 @@
setuptools.setup(
name="securedrop-app-code",
- version="2.8.0~rc1",
+ version="2.9.0~rc1",
author="Freedom of the Press Foundation",
author_email="[email protected]",
description="SecureDrop Server",
diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = "2.8.0~rc1"
+__version__ = "2.9.0~rc1"
| {"golden_diff": "diff --git a/securedrop/setup.py b/securedrop/setup.py\n--- a/securedrop/setup.py\n+++ b/securedrop/setup.py\n@@ -4,7 +4,7 @@\n \n setuptools.setup(\n name=\"securedrop-app-code\",\n- version=\"2.8.0~rc1\",\n+ version=\"2.9.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\ndiff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = \"2.8.0~rc1\"\n+__version__ = \"2.9.0~rc1\"\n", "issue": "Release SecureDrop 2.8.0\nThis is a tracking issue for the release of SecureDrop 2.8.0\r\n\r\nTentatively scheduled as follows:\r\n\r\n**Pre-release announcement:** 2023-03-05\r\n**Release date:** 2024-03-12\r\n\r\n**Release manager:** @zenmonkeykstop \r\n**Deputy release manager:** @legoktm \r\n**Localization manager:** @cfm\r\n**Communications manager:** @eloquence\r\n\r\n_SecureDrop maintainers and testers:_ As you QA 2.8.0, please report back your testing results as comments on this ticket. File GitHub issues for any problems found, tag them \"QA: Release\".\r\n\r\nTest debian packages will be posted on https://apt-test.freedom.press signed with [the test key](https://gist.githubusercontent.com/conorsch/ec4008b111bc3142fca522693f3cce7e/raw/2968621e8ad92db4505a31fcc5776422d7d26729/apt-test%2520apt%2520pubkey).\r\n\r\n# [QA Matrix for 2.8.0](https://docs.google.com/spreadsheets/d/1hcSrgbid03so0tQz3zfwvMaWJ8x7OOZsCfEz1I_PjAE/edit#gid=96348658)\r\n# [Test Plan for 2.8.0](https://github.com/freedomofpress/securedrop/wiki/2.8.0-Test-Plan)\r\n# [Tails-only test plan for 2.8.0-rc2](https://github.com/freedomofpress/securedrop/issues/7121#issuecomment-1988954749)\r\n(complete if you've already tested 2.8.0-rc1, there are no server changes in rc2)\r\n\r\n# Prepare release candidate (2.8.0~rc1)\r\n- [ ] Link to latest version of Tails, including release candidates, to test against during QA\r\n - [ ] Tails 5 \r\n - [ ] Tails 6 \r\n- [x] Prepare 2.8.0~rc1 release changelog\r\n- [x] Branch off release/2.8.0 from develop\r\n- [x] Prepare 2.8.0\r\n- [ ] Build debs, preserving build log, and put up `2.8.0~rc1` on test apt server\r\n- [ ] Commit build log.\r\n\r\n# Prepare release candidate (2.8.0~rc2)\r\n- [ ] Link to latest version of Tails, including release candidates, to test against during QA\r\n - [x] Tails 5 \r\n - [x] Tails 6 \r\n- [x] Prepare 2.8.0~rc2 release changelog\r\n- [x] Branch off release/2.8.0 from develop\r\n- [x] Prepare 2.8.0-rc2\r\n- [ ] ~Build debs, preserving build log, and put up `2.8.0~rc1` on test apt server~ skipped, as changes are Tails-only.\r\n- [ ] ~Commit build log.~\r\n\r\n\r\nAfter each test, please update the QA matrix and post details for Basic Server Testing, Application Acceptance Testing and release-specific testing below in comments to this ticket.\r\n\r\n# Final release\r\n- [ ] ~Ensure builder in release branch is updated and/or update builder image~ (no longer in use)\r\n- [x] Push signed tag \r\n- [x] Pre-Flight: Test updater logic in Tails (apt-qa tracks the `release` branch in the LFS repo)\r\n- [x] Build final Debian packages(and preserve build log)\r\n- [x] Commit package build log to https://github.com/freedomofpress/build-logs\r\n- [x] Pre-Flight: Test that install and upgrade from 2.7.0 to 2.8.0 works w/ prod repo debs (apt-qa.freedom.press polls the `release` branch in the LFS repo for the debs)\r\n- [ ] Flip apt QA server to prod status (merge to `main` in the LFS repo)\r\n- [ ] 
Merge Docs branch changes to ``main`` and verify new docs build in securedrop-docs repo\r\n- [ ] Prepare release messaging\r\n\r\n# Post release\r\n- [ ] Create GitHub release object \r\n- [ ] Once release object is created, update versions in `securedrop-docs` and Wagtail\r\n- [ ] Verify new docs show up on https://docs.securedrop.org\r\n- [ ] Publish announcements\r\n- [ ] Merge changelog back to `develop`\r\n- [ ] Update roadmap wiki page: https://github.com/freedomofpress/securedrop/wiki/Development-Roadmap\n", "before_files": [{"content": "__version__ = \"2.8.0~rc1\"\n", "path": "securedrop/version.py"}, {"content": "import setuptools\n\nlong_description = \"The SecureDrop whistleblower platform.\"\n\nsetuptools.setup(\n name=\"securedrop-app-code\",\n version=\"2.8.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"AGPLv3+\",\n python_requires=\">=3.8\",\n url=\"https://github.com/freedomofpress/securedrop\",\n classifiers=[\n \"Development Status :: 5 - Stable\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Operating System :: OS Independent\",\n ],\n)\n", "path": "securedrop/setup.py"}]} | 1,866 | 175 |
gh_patches_debug_29242 | rasdani/github-patches | git_diff | larq__larq-34 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tf.sign(0) = 0
</issue>
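The issue is that `tf.sign` maps exactly-zero inputs to 0, so a binarizer built on it can emit a third value besides ±1. A hedged sketch of the usual workaround (and the approach the accompanying patch takes): nudge the inner sign by a tiny epsilon so zero lands on +1. The helper name is illustrative.

```python
import tensorflow as tf

def nonzero_sign(x):
    # Shift sign(x) by a small epsilon before taking sign again,
    # so x == 0 maps to +1 instead of 0.
    return tf.sign(tf.sign(x) + 1e-10)

# tf.sign(0.0)      -> 0.0
# nonzero_sign(0.0) -> 1.0
```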
<code>
[start of xquant/quantizers.py]
1 import tensorflow as tf
2 from xquant import utils
3
4
5 @utils.register_keras_custom_object
6 @tf.custom_gradient
7 def ste_sign(x):
8 r"""
9 Sign binarization function.
10 \\[q(x) = \mathrm{Sign}(x)\\]
11
12 The gradient is estimated using the Straight-Through Estimator.
13 \\[\frac{\partial q(x)}{\partial x} = x\\]
14
15 # Arguments
16 x: Input tensor.
17
18 # Returns
19 Binarized tensor.
20
21 # References
22 - [Binarized Neural Networks: Training Deep Neural Networks with Weights and
23 Activations Constrained to +1 or -1](http://arxiv.org/abs/1602.02830)
24 """
25
26 def grad(dy):
27 return dy
28
29 return tf.sign(x), grad
30
31
32 @utils.register_keras_custom_object
33 @tf.custom_gradient
34 def approx_sign(x):
35 r"""
36 Sign binarization function.
37 \\[q(x) = \mathrm{Sign}(x)\\]
38
39 The gradient is estimated using the ApproxSign method.
40 \\[\frac{\partial q(x)}{\partial x} = (2 - 2 \left|x\right|))\\]
41
42 # Arguments
43 x: Input tensor.
44
45 # Returns
46 Binarized tensor.
47
48 # References
49 - [Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved
50 Representational Capability and Advanced
51 Training Algorithm](http://arxiv.org/abs/1808.00278)
52 """
53
54 def grad(dy):
55 return (1 - tf.abs(x)) * 2 * dy
56
57 return tf.sign(x), grad
58
59
60 def serialize(initializer):
61 return tf.keras.utils.serialize_keras_object(initializer)
62
63
64 def deserialize(name, custom_objects=None):
65 return tf.keras.utils.deserialize_keras_object(
66 name,
67 module_objects=globals(),
68 custom_objects=custom_objects,
69 printable_module_name="quantization function",
70 )
71
72
73 def get(identifier):
74 if identifier is None:
75 return None
76 if isinstance(identifier, str):
77 return deserialize(str(identifier))
78 if callable(identifier):
79 return identifier
80 raise ValueError(
81 "Could not interpret quantization function identifier:", identifier
82 )
83
[end of xquant/quantizers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xquant/quantizers.py b/xquant/quantizers.py
--- a/xquant/quantizers.py
+++ b/xquant/quantizers.py
@@ -2,12 +2,22 @@
from xquant import utils
+def sign(x):
+ """A sign function that will never be zero"""
+ return tf.sign(tf.sign(x) + 1e-10)
+
+
@utils.register_keras_custom_object
@tf.custom_gradient
def ste_sign(x):
r"""
Sign binarization function.
- \\[q(x) = \mathrm{Sign}(x)\\]
+ \\[
+ q(x) = \begin{cases}
+ -1 & x < 0 \\\
+ 1 & x \geq 0
+ \end{cases}
+ \\]
The gradient is estimated using the Straight-Through Estimator.
\\[\frac{\partial q(x)}{\partial x} = x\\]
@@ -26,7 +36,7 @@
def grad(dy):
return dy
- return tf.sign(x), grad
+ return sign(x), grad
@utils.register_keras_custom_object
@@ -34,7 +44,12 @@
def approx_sign(x):
r"""
Sign binarization function.
- \\[q(x) = \mathrm{Sign}(x)\\]
+ \\[
+ q(x) = \begin{cases}
+ -1 & x < 0 \\\
+ 1 & x \geq 0
+ \end{cases}
+ \\]
The gradient is estimated using the ApproxSign method.
\\[\frac{\partial q(x)}{\partial x} = (2 - 2 \left|x\right|))\\]
@@ -54,7 +69,7 @@
def grad(dy):
return (1 - tf.abs(x)) * 2 * dy
- return tf.sign(x), grad
+ return sign(x), grad
def serialize(initializer):
| {"golden_diff": "diff --git a/xquant/quantizers.py b/xquant/quantizers.py\n--- a/xquant/quantizers.py\n+++ b/xquant/quantizers.py\n@@ -2,12 +2,22 @@\n from xquant import utils\n \n \n+def sign(x):\n+ \"\"\"A sign function that will never be zero\"\"\"\n+ return tf.sign(tf.sign(x) + 1e-10)\n+\n+\n @utils.register_keras_custom_object\n @tf.custom_gradient\n def ste_sign(x):\n r\"\"\"\n Sign binarization function.\n- \\\\[q(x) = \\mathrm{Sign}(x)\\\\]\n+ \\\\[\n+ q(x) = \\begin{cases}\n+ -1 & x < 0 \\\\\\\n+ 1 & x \\geq 0\n+ \\end{cases}\n+ \\\\]\n \n The gradient is estimated using the Straight-Through Estimator.\n \\\\[\\frac{\\partial q(x)}{\\partial x} = x\\\\]\n@@ -26,7 +36,7 @@\n def grad(dy):\n return dy\n \n- return tf.sign(x), grad\n+ return sign(x), grad\n \n \n @utils.register_keras_custom_object\n@@ -34,7 +44,12 @@\n def approx_sign(x):\n r\"\"\"\n Sign binarization function.\n- \\\\[q(x) = \\mathrm{Sign}(x)\\\\]\n+ \\\\[\n+ q(x) = \\begin{cases}\n+ -1 & x < 0 \\\\\\\n+ 1 & x \\geq 0\n+ \\end{cases}\n+ \\\\]\n \n The gradient is estimated using the ApproxSign method.\n \\\\[\\frac{\\partial q(x)}{\\partial x} = (2 - 2 \\left|x\\right|))\\\\]\n@@ -54,7 +69,7 @@\n def grad(dy):\n return (1 - tf.abs(x)) * 2 * dy\n \n- return tf.sign(x), grad\n+ return sign(x), grad\n \n \n def serialize(initializer):\n", "issue": "tf.sign(0) = 0\n\n", "before_files": [{"content": "import tensorflow as tf\nfrom xquant import utils\n\n\[email protected]_keras_custom_object\[email protected]_gradient\ndef ste_sign(x):\n r\"\"\"\n Sign binarization function.\n \\\\[q(x) = \\mathrm{Sign}(x)\\\\]\n\n The gradient is estimated using the Straight-Through Estimator.\n \\\\[\\frac{\\partial q(x)}{\\partial x} = x\\\\]\n\n # Arguments\n x: Input tensor.\n\n # Returns\n Binarized tensor.\n\n # References\n - [Binarized Neural Networks: Training Deep Neural Networks with Weights and\n Activations Constrained to +1 or -1](http://arxiv.org/abs/1602.02830)\n \"\"\"\n\n def grad(dy):\n return dy\n\n return tf.sign(x), grad\n\n\[email protected]_keras_custom_object\[email protected]_gradient\ndef approx_sign(x):\n r\"\"\"\n Sign binarization function.\n \\\\[q(x) = \\mathrm{Sign}(x)\\\\]\n\n The gradient is estimated using the ApproxSign method.\n \\\\[\\frac{\\partial q(x)}{\\partial x} = (2 - 2 \\left|x\\right|))\\\\]\n\n # Arguments\n x: Input tensor.\n\n # Returns\n Binarized tensor.\n\n # References\n - [Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved\n Representational Capability and Advanced\n Training Algorithm](http://arxiv.org/abs/1808.00278)\n \"\"\"\n\n def grad(dy):\n return (1 - tf.abs(x)) * 2 * dy\n\n return tf.sign(x), grad\n\n\ndef serialize(initializer):\n return tf.keras.utils.serialize_keras_object(initializer)\n\n\ndef deserialize(name, custom_objects=None):\n return tf.keras.utils.deserialize_keras_object(\n name,\n module_objects=globals(),\n custom_objects=custom_objects,\n printable_module_name=\"quantization function\",\n )\n\n\ndef get(identifier):\n if identifier is None:\n return None\n if isinstance(identifier, str):\n return deserialize(str(identifier))\n if callable(identifier):\n return identifier\n raise ValueError(\n \"Could not interpret quantization function identifier:\", identifier\n )\n", "path": "xquant/quantizers.py"}]} | 1,206 | 454 |
gh_patches_debug_13023 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-4833 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Redirect full path to default version
I'd like to be able to create versionless links to the documentation, for use in error messages, code comments, etc. For example, a message like `see https://click.palletsprojects.com/windows for more information`. I don't want to use URLs with versions because I would have to remember to modify all instances of it before releasing a new version.
Currently, only the root path redirects to the default version; other paths raise a 404. Instead, the path should be preserved and appended to the default version path on redirect.
```
Works:
https://click.palletsprojects.com/ -> https://click.palletsprojects.com/en/7.x/
Doesn't work, 404:
https://click.palletsprojects.com/windows -> https://click.palletsprojects.com/en/7.x/windows
```
I do not want to use the "latest" or "stable" versions because I would like the URLs that people land on and share to contain the actual version.
I already do this with the transitional redirects I set up from `click.pocoo.org` to `click.palletsprojects.com`. A similar approach could probably be used to extend RTD's default redirect.
```nginx
location ~ ^/dev(.*)$ {
return 301 https://click.palletsprojects.com/en/master$1;
}
location ~ ^/(\d)(.*)$ {
return 301 https://click.palletsprojects.com/en/$1.x$2;
}
location ~ ^/latest(.*)$ {
return 301 https://click.palletsprojects.com/en/7.x$1;
}
location / {
return 301 https://click.palletsprojects.com/en/7.x$request_uri;
}
```
</issue>
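As a rough illustration of the behaviour the reporter wants (independent of how Read the Docs actually implements it), a path-preserving redirect in Django terms would append the requested path to the default-version prefix rather than discarding it. Everything here — the helper name and the hard-coded prefix — is hypothetical.

```python
from django.http import HttpResponseRedirect

def redirect_to_default_version(request, prefix="/en/7.x"):
    # Hypothetical sketch, mirroring the nginx rules quoted in the issue:
    # keep request.get_full_path() (path + query string) instead of dropping it.
    return HttpResponseRedirect(prefix + request.get_full_path())

# /windows      -> /en/7.x/windows
# /windows?x=1  -> /en/7.x/windows?x=1
```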
<code>
[start of readthedocs/core/views/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 """
4 Core views, including the main homepage,
5
6 documentation and header rendering, and server errors.
7 """
8
9 from __future__ import absolute_import
10 from __future__ import division
11 import os
12 import logging
13
14 from django.conf import settings
15 from django.http import HttpResponseRedirect, Http404, JsonResponse
16 from django.shortcuts import render, get_object_or_404, redirect
17 from django.views.decorators.csrf import csrf_exempt
18 from django.views.generic import TemplateView
19
20 from readthedocs.builds.models import Version
21 from readthedocs.core.utils import broadcast
22 from readthedocs.projects.models import Project, ImportedFile
23 from readthedocs.projects.tasks import remove_dir
24 from readthedocs.redirects.utils import get_redirect_response
25
26 log = logging.getLogger(__name__)
27
28
29 class NoProjectException(Exception):
30 pass
31
32
33 class HomepageView(TemplateView):
34
35 template_name = 'homepage.html'
36
37 def get_context_data(self, **kwargs):
38 """Add latest builds and featured projects."""
39 context = super(HomepageView, self).get_context_data(**kwargs)
40 context['featured_list'] = Project.objects.filter(featured=True)
41 context['projects_count'] = Project.objects.count()
42 return context
43
44
45 class SupportView(TemplateView):
46 template_name = 'support.html'
47
48 def get_context_data(self, **kwargs):
49 context = super(SupportView, self).get_context_data(**kwargs)
50 support_email = getattr(settings, 'SUPPORT_EMAIL', None)
51 if not support_email:
52 support_email = 'support@{domain}'.format(
53 domain=getattr(
54 settings,
55 'PRODUCTION_DOMAIN',
56 'readthedocs.org',
57 ),
58 )
59
60 context['support_email'] = support_email
61 return context
62
63
64 def random_page(request, project_slug=None): # pylint: disable=unused-argument
65 imported_file = ImportedFile.objects.order_by('?')
66 if project_slug:
67 imported_file = imported_file.filter(project__slug=project_slug)
68 imported_file = imported_file.first()
69 if imported_file is None:
70 raise Http404
71 url = imported_file.get_absolute_url()
72 return HttpResponseRedirect(url)
73
74
75 @csrf_exempt
76 def wipe_version(request, project_slug, version_slug):
77 version = get_object_or_404(
78 Version,
79 project__slug=project_slug,
80 slug=version_slug,
81 )
82 # We need to check by ``for_admin_user`` here to allow members of the
83 # ``Admin`` team (which doesn't own the project) under the corporate site.
84 if version.project not in Project.objects.for_admin_user(user=request.user):
85 raise Http404('You must own this project to wipe it.')
86
87 if request.method == 'POST':
88 del_dirs = [
89 os.path.join(version.project.doc_path, 'checkouts', version.slug),
90 os.path.join(version.project.doc_path, 'envs', version.slug),
91 os.path.join(version.project.doc_path, 'conda', version.slug),
92 ]
93 for del_dir in del_dirs:
94 broadcast(type='build', task=remove_dir, args=[del_dir])
95 return redirect('project_version_list', project_slug)
96 return render(
97 request,
98 'wipe_version.html',
99 {'version': version, 'project': version.project},
100 )
101
102
103 def server_error_500(request, template_name='500.html'):
104 """A simple 500 handler so we get media."""
105 r = render(request, template_name)
106 r.status_code = 500
107 return r
108
109
110 def server_error_404(request, exception=None, template_name='404.html'): # pylint: disable=unused-argument # noqa
111 """
112 A simple 404 handler so we get media.
113
114 .. note::
115
116 Marking exception as optional to make /404/ testing page to work.
117 """
118 response = get_redirect_response(request, path=request.get_full_path())
119 if response:
120 return response
121 r = render(request, template_name)
122 r.status_code = 404
123 return r
124
125
126 def do_not_track(request):
127 dnt_header = request.META.get('HTTP_DNT')
128
129 # https://w3c.github.io/dnt/drafts/tracking-dnt.html#status-representation
130 return JsonResponse({ # pylint: disable=redundant-content-type-for-json-response
131 'policy': 'https://docs.readthedocs.io/en/latest/privacy-policy.html',
132 'same-party': [
133 'readthedocs.org',
134 'readthedocs.com',
135 'readthedocs.io', # .org Documentation Sites
136 'readthedocs-hosted.com', # .com Documentation Sites
137 ],
138 'tracking': 'N' if dnt_header == '1' else 'T',
139 }, content_type='application/tracking-status+json')
140
[end of readthedocs/core/views/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/readthedocs/core/views/__init__.py b/readthedocs/core/views/__init__.py
--- a/readthedocs/core/views/__init__.py
+++ b/readthedocs/core/views/__init__.py
@@ -116,8 +116,16 @@
Marking exception as optional to make /404/ testing page to work.
"""
response = get_redirect_response(request, path=request.get_full_path())
+
if response:
- return response
+ if response.url == request.build_absolute_uri():
+ # check that we do have a response and avoid infinite redirect
+ log.warning(
+ 'Infinite Redirect: FROM URL is the same than TO URL. url=%s',
+ response.url,
+ )
+ else:
+ return response
r = render(request, template_name)
r.status_code = 404
return r
| {"golden_diff": "diff --git a/readthedocs/core/views/__init__.py b/readthedocs/core/views/__init__.py\n--- a/readthedocs/core/views/__init__.py\n+++ b/readthedocs/core/views/__init__.py\n@@ -116,8 +116,16 @@\n Marking exception as optional to make /404/ testing page to work.\n \"\"\"\n response = get_redirect_response(request, path=request.get_full_path())\n+\n if response:\n- return response\n+ if response.url == request.build_absolute_uri():\n+ # check that we do have a response and avoid infinite redirect\n+ log.warning(\n+ 'Infinite Redirect: FROM URL is the same than TO URL. url=%s',\n+ response.url,\n+ )\n+ else:\n+ return response\n r = render(request, template_name)\n r.status_code = 404\n return r\n", "issue": "Redirect full path to default version\nI'd like to be able to create versionless links to the documentation, for use in error messages, code comments, etc. For example, a message like `see https://click.palletsprojects.com/windows for more information`. I don't want to use URLs with versions because I would have to remember to modify all instances of it before releasing a new version.\r\n\r\nCurrently, only the root path redirects to the default version, other paths raise a 404. Instead, the path should be preserved and appended to the default version path on redirect.\r\n\r\n```\r\nWorks:\r\nhttps://click.palletsprojects.com/ -> https://click.palletsprojects.com/en/7.x/\r\n\r\nDoesn't work, 404:\r\nhttps://click.palletsprojects.com/windows -> https://click.palletsprojects.com/en/7.x/windows\r\n```\r\n\r\nI do not want to use the \"latest\" or \"stable\" versions because I would like the URLs that people land on and share to contain the actual version.\r\n\r\nI already do this with the transitional redirects I set up from `click.pocoo.org` to `click.palletsprojects.com`. 
A similar approach could probably be used to extend RTD's default redirect.\r\n\r\n```nginx\r\nlocation ~ ^/dev(.*)$ {\r\n return 301 https://click.palletsprojects.com/en/master$1;\r\n}\r\n\r\nlocation ~ ^/(\\d)(.*)$ {\r\n return 301 https://click.palletsprojects.com/en/$1.x$2;\r\n}\r\n\r\nlocation ~ ^/latest(.*)$ {\r\n return 301 https://click.palletsprojects.com/en/7.x$1;\r\n}\r\n\r\nlocation / {\r\n return 301 https://click.palletsprojects.com/en/7.x$request_uri;\r\n}\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nCore views, including the main homepage,\n\ndocumentation and header rendering, and server errors.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nimport os\nimport logging\n\nfrom django.conf import settings\nfrom django.http import HttpResponseRedirect, Http404, JsonResponse\nfrom django.shortcuts import render, get_object_or_404, redirect\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.generic import TemplateView\n\nfrom readthedocs.builds.models import Version\nfrom readthedocs.core.utils import broadcast\nfrom readthedocs.projects.models import Project, ImportedFile\nfrom readthedocs.projects.tasks import remove_dir\nfrom readthedocs.redirects.utils import get_redirect_response\n\nlog = logging.getLogger(__name__)\n\n\nclass NoProjectException(Exception):\n pass\n\n\nclass HomepageView(TemplateView):\n\n template_name = 'homepage.html'\n\n def get_context_data(self, **kwargs):\n \"\"\"Add latest builds and featured projects.\"\"\"\n context = super(HomepageView, self).get_context_data(**kwargs)\n context['featured_list'] = Project.objects.filter(featured=True)\n context['projects_count'] = Project.objects.count()\n return context\n\n\nclass SupportView(TemplateView):\n template_name = 'support.html'\n\n def get_context_data(self, **kwargs):\n context = super(SupportView, self).get_context_data(**kwargs)\n support_email = getattr(settings, 'SUPPORT_EMAIL', None)\n if not support_email:\n support_email = 'support@{domain}'.format(\n domain=getattr(\n settings,\n 'PRODUCTION_DOMAIN',\n 'readthedocs.org',\n ),\n )\n\n context['support_email'] = support_email\n return context\n\n\ndef random_page(request, project_slug=None): # pylint: disable=unused-argument\n imported_file = ImportedFile.objects.order_by('?')\n if project_slug:\n imported_file = imported_file.filter(project__slug=project_slug)\n imported_file = imported_file.first()\n if imported_file is None:\n raise Http404\n url = imported_file.get_absolute_url()\n return HttpResponseRedirect(url)\n\n\n@csrf_exempt\ndef wipe_version(request, project_slug, version_slug):\n version = get_object_or_404(\n Version,\n project__slug=project_slug,\n slug=version_slug,\n )\n # We need to check by ``for_admin_user`` here to allow members of the\n # ``Admin`` team (which doesn't own the project) under the corporate site.\n if version.project not in Project.objects.for_admin_user(user=request.user):\n raise Http404('You must own this project to wipe it.')\n\n if request.method == 'POST':\n del_dirs = [\n os.path.join(version.project.doc_path, 'checkouts', version.slug),\n os.path.join(version.project.doc_path, 'envs', version.slug),\n os.path.join(version.project.doc_path, 'conda', version.slug),\n ]\n for del_dir in del_dirs:\n broadcast(type='build', task=remove_dir, args=[del_dir])\n return redirect('project_version_list', project_slug)\n return render(\n request,\n 'wipe_version.html',\n {'version': version, 'project': version.project},\n 
)\n\n\ndef server_error_500(request, template_name='500.html'):\n \"\"\"A simple 500 handler so we get media.\"\"\"\n r = render(request, template_name)\n r.status_code = 500\n return r\n\n\ndef server_error_404(request, exception=None, template_name='404.html'): # pylint: disable=unused-argument # noqa\n \"\"\"\n A simple 404 handler so we get media.\n\n .. note::\n\n Marking exception as optional to make /404/ testing page to work.\n \"\"\"\n response = get_redirect_response(request, path=request.get_full_path())\n if response:\n return response\n r = render(request, template_name)\n r.status_code = 404\n return r\n\n\ndef do_not_track(request):\n dnt_header = request.META.get('HTTP_DNT')\n\n # https://w3c.github.io/dnt/drafts/tracking-dnt.html#status-representation\n return JsonResponse({ # pylint: disable=redundant-content-type-for-json-response\n 'policy': 'https://docs.readthedocs.io/en/latest/privacy-policy.html',\n 'same-party': [\n 'readthedocs.org',\n 'readthedocs.com',\n 'readthedocs.io', # .org Documentation Sites\n 'readthedocs-hosted.com', # .com Documentation Sites\n ],\n 'tracking': 'N' if dnt_header == '1' else 'T',\n }, content_type='application/tracking-status+json')\n", "path": "readthedocs/core/views/__init__.py"}]} | 2,284 | 198 |
gh_patches_debug_38792 | rasdani/github-patches | git_diff | openmc-dev__openmc-1732 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature: Easier indexing through ResultsList to find depletion step index
If a user wanted to perform a restart simulation not necessarily at the end point in time, there isn't a super simple way to do it from the API. Currently you would have to manually search through the `Result` instances and compare their time data one at a time.
This isn't a monumental task, and I think we could easily provide a method that would allow users to find the index corresponding to a specific point in time. Supporting similar `time_units` like we do for the `Operator` (days, seconds, MWd/kgHM) would also be super nice.
Looking at #1708, this would also be useful when exporting `material.xml` files either during the depletion run or after the fact. Either way, the user will need a way to link a depletion step to a point in calendar time for restarting or other analysis.
Other things to consider would be some tolerance on the search criteria. If I ask for the step where burnup was 12 MWd/kgHM, (for example), but the closest point is 12.1 MWd/kgHM, should that step be returned? Or error out?
</issue>
<code>
[start of openmc/deplete/results_list.py]
1 import h5py
2 import numpy as np
3
4 from .results import Results, VERSION_RESULTS
5 from openmc.checkvalue import check_filetype_version, check_value
6
7
8 __all__ = ["ResultsList"]
9
10
11 class ResultsList(list):
12 """A list of openmc.deplete.Results objects
13
14 It is recommended to use :meth:`from_hdf5` over
15 direct creation.
16 """
17
18 @classmethod
19 def from_hdf5(cls, filename):
20 """Load in depletion results from a previous file
21
22 Parameters
23 ----------
24 filename : str
25 Path to depletion result file
26
27 Returns
28 -------
29 new : ResultsList
30 New instance of depletion results
31 """
32 with h5py.File(str(filename), "r") as fh:
33 check_filetype_version(fh, 'depletion results', VERSION_RESULTS[0])
34 new = cls()
35
36 # Get number of results stored
37 n = fh["number"][...].shape[0]
38
39 for i in range(n):
40 new.append(Results.from_hdf5(fh, i))
41 return new
42
43 def get_atoms(self, mat, nuc, nuc_units="atoms", time_units="s"):
44 """Get number of nuclides over time from a single material
45
46 .. note::
47 Initial values for some isotopes that do not appear in
48 initial concentrations may be non-zero, depending on the
49 value of :class:`openmc.deplete.Operator` ``dilute_initial``.
50 The :class:`openmc.deplete.Operator` adds isotopes according
51 to this setting, which can be set to zero.
52
53 Parameters
54 ----------
55 mat : str
56 Material name to evaluate
57 nuc : str
58 Nuclide name to evaluate
59 nuc_units : {"atoms", "atom/b-cm", "atom/cm3"}, optional
60 Units for the returned concentration. Default is ``"atoms"``
61
62 .. versionadded:: 0.12
63 time_units : {"s", "min", "h", "d"}, optional
64 Units for the returned time array. Default is ``"s"`` to
65 return the value in seconds.
66
67 .. versionadded:: 0.12
68
69 Returns
70 -------
71 times : numpy.ndarray
72 Array of times in units of ``time_units``
73 concentrations : numpy.ndarray
74 Concentration of specified nuclide in units of ``nuc_units``
75
76 """
77 check_value("time_units", time_units, {"s", "d", "min", "h"})
78 check_value("nuc_units", nuc_units,
79 {"atoms", "atom/b-cm", "atom/cm3"})
80
81 times = np.empty_like(self, dtype=float)
82 concentrations = np.empty_like(self, dtype=float)
83
84 # Evaluate value in each region
85 for i, result in enumerate(self):
86 times[i] = result.time[0]
87 concentrations[i] = result[0, mat, nuc]
88
89 # Unit conversions
90 if time_units == "d":
91 times /= (60 * 60 * 24)
92 elif time_units == "h":
93 times /= (60 * 60)
94 elif time_units == "min":
95 times /= 60
96
97 if nuc_units != "atoms":
98 # Divide by volume to get density
99 concentrations /= self[0].volume[mat]
100 if nuc_units == "atom/b-cm":
101 # 1 barn = 1e-24 cm^2
102 concentrations *= 1e-24
103
104 return times, concentrations
105
106 def get_reaction_rate(self, mat, nuc, rx):
107 """Get reaction rate in a single material/nuclide over time
108
109 .. note::
110
111 Initial values for some isotopes that do not appear in
112 initial concentrations may be non-zero, depending on the
113 value of :class:`openmc.deplete.Operator` ``dilute_initial``
114 The :class:`openmc.deplete.Operator` adds isotopes according
115 to this setting, which can be set to zero.
116
117 Parameters
118 ----------
119 mat : str
120 Material name to evaluate
121 nuc : str
122 Nuclide name to evaluate
123 rx : str
124 Reaction rate to evaluate
125
126 Returns
127 -------
128 times : numpy.ndarray
129 Array of times in [s]
130 rates : numpy.ndarray
131 Array of reaction rates
132
133 """
134 times = np.empty_like(self, dtype=float)
135 rates = np.empty_like(self, dtype=float)
136
137 # Evaluate value in each region
138 for i, result in enumerate(self):
139 times[i] = result.time[0]
140 rates[i] = result.rates[0].get(mat, nuc, rx) * result[0, mat, nuc]
141
142 return times, rates
143
144 def get_eigenvalue(self):
145 """Evaluates the eigenvalue from a results list.
146
147 Returns
148 -------
149 times : numpy.ndarray
150 Array of times in [s]
151 eigenvalues : numpy.ndarray
152 k-eigenvalue at each time. Column 0
153 contains the eigenvalue, while column
154 1 contains the associated uncertainty
155
156 """
157 times = np.empty_like(self, dtype=float)
158 eigenvalues = np.empty((len(self), 2), dtype=float)
159
160 # Get time/eigenvalue at each point
161 for i, result in enumerate(self):
162 times[i] = result.time[0]
163 eigenvalues[i] = result.k[0]
164
165 return times, eigenvalues
166
167 def get_depletion_time(self):
168 """Return an array of the average time to deplete a material
169
170 .. note::
171
172 Will have one fewer row than number of other methods,
173 like :meth:`get_eigenvalues`, because no depletion
174 is performed at the final transport stage
175
176 Returns
177 -------
178 times : numpy.ndarray
179 Vector of average time to deplete a single material
180 across all processes and materials.
181
182 """
183 times = np.empty(len(self) - 1)
184 # Need special logic because the predictor
185 # writes EOS values for step i as BOS values
186 # for step i+1
187 # The first proc_time may be zero
188 if self[0].proc_time > 0.0:
189 items = self[:-1]
190 else:
191 items = self[1:]
192 for ix, res in enumerate(items):
193 times[ix] = res.proc_time
194 return times
195
[end of openmc/deplete/results_list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openmc/deplete/results_list.py b/openmc/deplete/results_list.py
--- a/openmc/deplete/results_list.py
+++ b/openmc/deplete/results_list.py
@@ -1,8 +1,12 @@
+import numbers
+import bisect
+import math
+
import h5py
import numpy as np
from .results import Results, VERSION_RESULTS
-from openmc.checkvalue import check_filetype_version, check_value
+from openmc.checkvalue import check_filetype_version, check_value, check_type
__all__ = ["ResultsList"]
@@ -192,3 +196,103 @@
for ix, res in enumerate(items):
times[ix] = res.proc_time
return times
+
+ def get_times(self, time_units="d") -> np.ndarray:
+ """Return the points in time that define the depletion schedule
+
+
+ .. versionadded:: 0.12.1
+
+ Parameters
+ ----------
+ time_units : {"s", "d", "h", "min"}, optional
+ Return the vector in these units. Default is to
+ convert to days
+
+ Returns
+ -------
+ numpy.ndarray
+ 1-D vector of time points
+
+ """
+ check_type("time_units", time_units, str)
+
+ times = np.fromiter(
+ (r.time[0] for r in self),
+ dtype=self[0].time.dtype,
+ count=len(self),
+ )
+
+ if time_units == "d":
+ times /= (60 * 60 * 24)
+ elif time_units == "h":
+ times /= (60 * 60)
+ elif time_units == "min":
+ times /= 60
+ elif time_units != "s":
+ raise ValueError(
+ 'Unable to set "time_units" to {} since it is not '
+ 'in ("s", "d", "min", "h")'.format(time_units)
+ )
+ return times
+
+ def get_step_where(
+ self, time, time_units="d", atol=1e-6, rtol=1e-3
+ ) -> int:
+ """Return the index closest to a given point in time
+
+ In the event ``time`` lies exactly between two points, the
+ lower index will be returned. It is possible that the index
+ will be at most one past the point in time requested, but only
+ according to tolerances requested.
+
+ Passing ``atol=math.inf`` and ``rtol=math.inf`` will return
+ the closest index to the requested point.
+
+
+ .. versionadded:: 0.12.1
+
+ Parameters
+ ----------
+ time : float
+ Desired point in time
+ time_units : {"s", "d", "min", "h"}, optional
+ Units on ``time``. Default: days
+ atol : float, optional
+ Absolute tolerance (in ``time_units``) if ``time`` is not
+ found.
+ rtol : float, optional
+ Relative tolerance if ``time`` is not found.
+
+ Returns
+ -------
+ int
+
+ """
+ check_type("time", time, numbers.Real)
+ check_type("atol", atol, numbers.Real)
+ check_type("rtol", rtol, numbers.Real)
+
+ times = self.get_times(time_units)
+
+ if times[0] < time < times[-1]:
+ ix = bisect.bisect_left(times, time)
+ if ix == times.size:
+ ix -= 1
+ # Bisection will place us either directly on the point
+ # or one-past the first value less than time
+ elif time - times[ix - 1] <= times[ix] - time:
+ ix -= 1
+ elif times[0] >= time:
+ ix = 0
+ elif time >= times[-1]:
+ ix = times.size - 1
+
+ if math.isclose(time, times[ix], rel_tol=rtol, abs_tol=atol):
+ return ix
+
+ raise ValueError(
+ "A value of {} {} was not found given absolute and "
+ "relative tolerances {} and {}.".format(
+ time, time_units, atol, rtol)
+ )
| {"golden_diff": "diff --git a/openmc/deplete/results_list.py b/openmc/deplete/results_list.py\n--- a/openmc/deplete/results_list.py\n+++ b/openmc/deplete/results_list.py\n@@ -1,8 +1,12 @@\n+import numbers\n+import bisect\n+import math\n+\n import h5py\n import numpy as np\n \n from .results import Results, VERSION_RESULTS\n-from openmc.checkvalue import check_filetype_version, check_value\n+from openmc.checkvalue import check_filetype_version, check_value, check_type\n \n \n __all__ = [\"ResultsList\"]\n@@ -192,3 +196,103 @@\n for ix, res in enumerate(items):\n times[ix] = res.proc_time\n return times\n+\n+ def get_times(self, time_units=\"d\") -> np.ndarray:\n+ \"\"\"Return the points in time that define the depletion schedule\n+\n+\n+ .. versionadded:: 0.12.1\n+\n+ Parameters\n+ ----------\n+ time_units : {\"s\", \"d\", \"h\", \"min\"}, optional\n+ Return the vector in these units. Default is to\n+ convert to days\n+\n+ Returns\n+ -------\n+ numpy.ndarray\n+ 1-D vector of time points\n+\n+ \"\"\"\n+ check_type(\"time_units\", time_units, str)\n+\n+ times = np.fromiter(\n+ (r.time[0] for r in self),\n+ dtype=self[0].time.dtype,\n+ count=len(self),\n+ )\n+\n+ if time_units == \"d\":\n+ times /= (60 * 60 * 24)\n+ elif time_units == \"h\":\n+ times /= (60 * 60)\n+ elif time_units == \"min\":\n+ times /= 60\n+ elif time_units != \"s\":\n+ raise ValueError(\n+ 'Unable to set \"time_units\" to {} since it is not '\n+ 'in (\"s\", \"d\", \"min\", \"h\")'.format(time_units)\n+ )\n+ return times\n+\n+ def get_step_where(\n+ self, time, time_units=\"d\", atol=1e-6, rtol=1e-3\n+ ) -> int:\n+ \"\"\"Return the index closest to a given point in time\n+\n+ In the event ``time`` lies exactly between two points, the\n+ lower index will be returned. It is possible that the index\n+ will be at most one past the point in time requested, but only\n+ according to tolerances requested.\n+\n+ Passing ``atol=math.inf`` and ``rtol=math.inf`` will return\n+ the closest index to the requested point.\n+\n+\n+ .. versionadded:: 0.12.1\n+\n+ Parameters\n+ ----------\n+ time : float\n+ Desired point in time\n+ time_units : {\"s\", \"d\", \"min\", \"h\"}, optional\n+ Units on ``time``. Default: days\n+ atol : float, optional\n+ Absolute tolerance (in ``time_units``) if ``time`` is not\n+ found.\n+ rtol : float, optional\n+ Relative tolerance if ``time`` is not found.\n+\n+ Returns\n+ -------\n+ int\n+\n+ \"\"\"\n+ check_type(\"time\", time, numbers.Real)\n+ check_type(\"atol\", atol, numbers.Real)\n+ check_type(\"rtol\", rtol, numbers.Real)\n+\n+ times = self.get_times(time_units)\n+\n+ if times[0] < time < times[-1]:\n+ ix = bisect.bisect_left(times, time)\n+ if ix == times.size:\n+ ix -= 1\n+ # Bisection will place us either directly on the point\n+ # or one-past the first value less than time\n+ elif time - times[ix - 1] <= times[ix] - time:\n+ ix -= 1\n+ elif times[0] >= time:\n+ ix = 0\n+ elif time >= times[-1]:\n+ ix = times.size - 1\n+\n+ if math.isclose(time, times[ix], rel_tol=rtol, abs_tol=atol):\n+ return ix\n+\n+ raise ValueError(\n+ \"A value of {} {} was not found given absolute and \"\n+ \"relative tolerances {} and {}.\".format(\n+ time, time_units, atol, rtol)\n+ )\n", "issue": "Feature: Easier indexing through ResultsList to find depletion step index\nIf a user wanted to perform a restart simulation not necessarily at the end point in time, there isn't a super simple way to do it from the API. 
Currently you would have to manually search through the `Result` instances and compare their time data one at a time. \r\n\r\nThis isn't a monumental task, and I think we could easily provide a method that would allow users to find the index corresponding to a specific point in time. Supporting similar `time_units` like we do for the `Operator` (days, seconds, MWd/kgHM) would also be super nice. \r\n\r\nLooking at #1708, this would also be useful when exporting `material.xml` files either during the depletion run or after the fact. Either way, the user will need a way to link a depletion step to a point in calendar time for restarting or other analysis.\r\n\r\nOther things to consider would be some tolerance on the search criteria. If I ask for the step where burnup was 12 MWd/kgHM, (for example), but the closest point is 12.1 MWd/kgHM, should that step be returned? Or error out? \n", "before_files": [{"content": "import h5py\nimport numpy as np\n\nfrom .results import Results, VERSION_RESULTS\nfrom openmc.checkvalue import check_filetype_version, check_value\n\n\n__all__ = [\"ResultsList\"]\n\n\nclass ResultsList(list):\n \"\"\"A list of openmc.deplete.Results objects\n\n It is recommended to use :meth:`from_hdf5` over\n direct creation.\n \"\"\"\n\n @classmethod\n def from_hdf5(cls, filename):\n \"\"\"Load in depletion results from a previous file\n\n Parameters\n ----------\n filename : str\n Path to depletion result file\n\n Returns\n -------\n new : ResultsList\n New instance of depletion results\n \"\"\"\n with h5py.File(str(filename), \"r\") as fh:\n check_filetype_version(fh, 'depletion results', VERSION_RESULTS[0])\n new = cls()\n\n # Get number of results stored\n n = fh[\"number\"][...].shape[0]\n\n for i in range(n):\n new.append(Results.from_hdf5(fh, i))\n return new\n\n def get_atoms(self, mat, nuc, nuc_units=\"atoms\", time_units=\"s\"):\n \"\"\"Get number of nuclides over time from a single material\n\n .. note::\n Initial values for some isotopes that do not appear in\n initial concentrations may be non-zero, depending on the\n value of :class:`openmc.deplete.Operator` ``dilute_initial``.\n The :class:`openmc.deplete.Operator` adds isotopes according\n to this setting, which can be set to zero.\n\n Parameters\n ----------\n mat : str\n Material name to evaluate\n nuc : str\n Nuclide name to evaluate\n nuc_units : {\"atoms\", \"atom/b-cm\", \"atom/cm3\"}, optional\n Units for the returned concentration. Default is ``\"atoms\"``\n\n .. versionadded:: 0.12\n time_units : {\"s\", \"min\", \"h\", \"d\"}, optional\n Units for the returned time array. Default is ``\"s\"`` to\n return the value in seconds.\n\n .. 
versionadded:: 0.12\n\n Returns\n -------\n times : numpy.ndarray\n Array of times in units of ``time_units``\n concentrations : numpy.ndarray\n Concentration of specified nuclide in units of ``nuc_units``\n\n \"\"\"\n check_value(\"time_units\", time_units, {\"s\", \"d\", \"min\", \"h\"})\n check_value(\"nuc_units\", nuc_units,\n {\"atoms\", \"atom/b-cm\", \"atom/cm3\"})\n\n times = np.empty_like(self, dtype=float)\n concentrations = np.empty_like(self, dtype=float)\n\n # Evaluate value in each region\n for i, result in enumerate(self):\n times[i] = result.time[0]\n concentrations[i] = result[0, mat, nuc]\n\n # Unit conversions\n if time_units == \"d\":\n times /= (60 * 60 * 24)\n elif time_units == \"h\":\n times /= (60 * 60)\n elif time_units == \"min\":\n times /= 60\n\n if nuc_units != \"atoms\":\n # Divide by volume to get density\n concentrations /= self[0].volume[mat]\n if nuc_units == \"atom/b-cm\":\n # 1 barn = 1e-24 cm^2\n concentrations *= 1e-24\n\n return times, concentrations\n\n def get_reaction_rate(self, mat, nuc, rx):\n \"\"\"Get reaction rate in a single material/nuclide over time\n\n .. note::\n\n Initial values for some isotopes that do not appear in\n initial concentrations may be non-zero, depending on the\n value of :class:`openmc.deplete.Operator` ``dilute_initial``\n The :class:`openmc.deplete.Operator` adds isotopes according\n to this setting, which can be set to zero.\n\n Parameters\n ----------\n mat : str\n Material name to evaluate\n nuc : str\n Nuclide name to evaluate\n rx : str\n Reaction rate to evaluate\n\n Returns\n -------\n times : numpy.ndarray\n Array of times in [s]\n rates : numpy.ndarray\n Array of reaction rates\n\n \"\"\"\n times = np.empty_like(self, dtype=float)\n rates = np.empty_like(self, dtype=float)\n\n # Evaluate value in each region\n for i, result in enumerate(self):\n times[i] = result.time[0]\n rates[i] = result.rates[0].get(mat, nuc, rx) * result[0, mat, nuc]\n\n return times, rates\n\n def get_eigenvalue(self):\n \"\"\"Evaluates the eigenvalue from a results list.\n\n Returns\n -------\n times : numpy.ndarray\n Array of times in [s]\n eigenvalues : numpy.ndarray\n k-eigenvalue at each time. Column 0\n contains the eigenvalue, while column\n 1 contains the associated uncertainty\n\n \"\"\"\n times = np.empty_like(self, dtype=float)\n eigenvalues = np.empty((len(self), 2), dtype=float)\n\n # Get time/eigenvalue at each point\n for i, result in enumerate(self):\n times[i] = result.time[0]\n eigenvalues[i] = result.k[0]\n\n return times, eigenvalues\n\n def get_depletion_time(self):\n \"\"\"Return an array of the average time to deplete a material\n\n .. note::\n\n Will have one fewer row than number of other methods,\n like :meth:`get_eigenvalues`, because no depletion\n is performed at the final transport stage\n\n Returns\n -------\n times : numpy.ndarray\n Vector of average time to deplete a single material\n across all processes and materials.\n\n \"\"\"\n times = np.empty(len(self) - 1)\n # Need special logic because the predictor\n # writes EOS values for step i as BOS values\n # for step i+1\n # The first proc_time may be zero\n if self[0].proc_time > 0.0:\n items = self[:-1]\n else:\n items = self[1:]\n for ix, res in enumerate(items):\n times[ix] = res.proc_time\n return times\n", "path": "openmc/deplete/results_list.py"}]} | 2,678 | 1,000 |
gh_patches_debug_18534 | rasdani/github-patches | git_diff | google__flax-1937 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incompatible variables for Tensorboard hparams are recast to strings but never returned
### Core Problem
Tensorboard hparams only supports a subset of Python and Numpy variable types ([see hparams docstrings](https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py)). The `flax.metrics.tensorboard.SummaryWriter` class's method `SummaryWriter.hparams()` should handle this behavior via the `flax.metrics.tensorboard._flatten_dict()` function, casting incompatible types to strings (which hparams supports). However, despite performing the casting operation, the `_flatten_dict` function does not append the recast variables to the dictionary it returns.
The result, for the below example, is that the "hidden_layers" parameters are silently excluded and do not appear in Tensorboard's hparams.
```Python
from flax.metrics import tensorboard
experiment_dir = "./Example"
network_hyperparameters = {
"hidden_layers_list": [12,12],
"hidden_layers_tuple": (12,12),
"dropout_rate": 1.0,
}
summary_writer = tensorboard.SummaryWriter(experiment_dir)
summary_writer.hparams(network_hyperparameters)
summary_writer.scalar('Training loss', 0.1, 1)
summary_writer.flush()
```
### Colab Example:
[Example notebook](https://colab.research.google.com/gist/tttc3/8dd7ef04c4222bc18fb03b043d370120/falx_tensorboard_issue_demo.ipynb)
### Proposed fix
Modify `_flattened_dict` to explicitly check if a dictionary value is one of those supported by Tensorboard's hparams api, as defined [here](https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py). If the value is not supported, cast it to a string and append it to the dictionary that `_flattened_dict` normally returns.
**Current _flatten_dict code**
```Python
def _flatten_dict(input_dict, parent_key='', sep='.'):
"""Flattens and simplifies dict such that it can be used by hparams.
Args:
input_dict: Input dict, e.g., from ConfigDict.
parent_key: String used in recursion.
sep: String used to separate parent and child keys.
Returns:
Flattened dict.
"""
items = []
for k, v in input_dict.items():
new_key = parent_key + sep + k if parent_key else k
# Take special care of things hparams cannot handle.
if v is None:
v = 'None'
elif isinstance(v, list):
v = str(v)
elif isinstance(v, tuple):
v = str(v)
elif isinstance(v, dict):
# Recursively flatten the dict.
items.extend(_flatten_dict(v, new_key, sep=sep).items())
else:
items.append((new_key, v))
return dict(items)
```
**Proposed _flatten_dict code modification**
```Python
def _flatten_dict(input_dict, parent_key='', sep='.'):
"""Flattens and simplifies dict such that it can be used by hparams.
Args:
input_dict: Input dict, e.g., from ConfigDict.
parent_key: String used in recursion.
sep: String used to separate parent and child keys.
Returns:
Flattened dict.
"""
items = []
for k, v in input_dict.items():
new_key = parent_key + sep + k if parent_key else k
# Valid types according to https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py
valid_types = (bool, int, float, str, np.bool_, np.integer, np.floating, np.character)
if isinstance(v, dict):
# Recursively flatten the dict.
items.extend(_flatten_dict(v, new_key, sep=sep).items())
continue
elif not isinstance(v, valid_types):
# Cast any incompatible values as strings such that they can be handled by hparams
v = str(v)
items.append((new_key, v))
return dict(items)
```
I am happy submit a pull request with the modifications.
</issue>
<code>
[start of flax/metrics/tensorboard.py]
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Write Summaries from JAX for use with Tensorboard.
16 """
17
18 import os
19
20 # pylint: disable=g-import-not-at-top
21 import numpy as np
22
23 import tensorflow.compat.v2 as tf
24 from tensorboard.plugins.hparams import api as hparams_api
25
26
27 def _flatten_dict(input_dict, parent_key='', sep='.'):
28 """Flattens and simplifies dict such that it can be used by hparams.
29
30 Args:
31 input_dict: Input dict, e.g., from ConfigDict.
32 parent_key: String used in recursion.
33 sep: String used to separate parent and child keys.
34
35 Returns:
36 Flattened dict.
37 """
38 items = []
39 for k, v in input_dict.items():
40 new_key = parent_key + sep + k if parent_key else k
41
42 # Take special care of things hparams cannot handle.
43 if v is None:
44 v = 'None'
45 elif isinstance(v, list):
46 v = str(v)
47 elif isinstance(v, tuple):
48 v = str(v)
49 elif isinstance(v, dict):
50 # Recursively flatten the dict.
51 items.extend(_flatten_dict(v, new_key, sep=sep).items())
52 else:
53 items.append((new_key, v))
54 return dict(items)
55
56
57 class SummaryWriter(object):
58 """Saves data in event and summary protos for tensorboard."""
59
60 def __init__(self, log_dir):
61 """Create a new SummaryWriter.
62
63 Args:
64 log_dir: path to record tfevents files in.
65 """
66 log_dir = os.fspath(log_dir)
67
68 # If needed, create log_dir directory as well as missing parent directories.
69 if not tf.io.gfile.isdir(log_dir):
70 tf.io.gfile.makedirs(log_dir)
71
72 self._event_writer = tf.summary.create_file_writer(log_dir, 10, 120, None)
73 self._closed = False
74
75 def close(self):
76 """Close SummaryWriter. Final!"""
77 if not self._closed:
78 self._event_writer.close()
79 self._closed = True
80 del self._event_writer
81
82 def flush(self):
83 self._event_writer.flush()
84
85 def scalar(self, tag, value, step):
86 """Saves scalar value.
87
88 Args:
89 tag: str: label for this data
90 value: int/float: number to log
91 step: int: training step
92 """
93 value = float(np.array(value))
94 with self._event_writer.as_default():
95 tf.summary.scalar(name=tag, data=value, step=step)
96
97 def image(self, tag, image, step, max_outputs=3):
98 """Saves RGB image summary from np.ndarray [H,W], [H,W,1], or [H,W,3].
99
100 Args:
101 tag: str: label for this data
102 image: ndarray: [H,W], [H,W,1], [H,W,3], [K,H,W], [K,H,W,1], [K,H,W,3]
103 Save image in greyscale or colors.
104 Pixel values could be either uint8 or float.
105 Floating point values should be in range [0, 1).
106 step: int: training step
107 max_outputs: At most this many images will be emitted at each step.
108 Defaults to 3.
109 """
110 image = np.array(image)
111 # tf.summary.image expects image to have shape [k, h, w, c] where,
112 # k = number of samples, h = height, w = width, c = number of channels.
113 if len(np.shape(image)) == 2:
114 image = image[np.newaxis, :, :, np.newaxis]
115 elif len(np.shape(image)) == 3:
116 # this could be either [k, h, w] or [h, w, c]
117 if np.shape(image)[-1] in (1, 3):
118 image = image[np.newaxis, :, :, :]
119 else:
120 image = image[:, :, :, np.newaxis]
121 if np.shape(image)[-1] == 1:
122 image = np.repeat(image, 3, axis=-1)
123
124 # Convert to tensor value as tf.summary.image expects data to be a tensor.
125 image = tf.convert_to_tensor(image)
126 with self._event_writer.as_default():
127 tf.summary.image(name=tag, data=image, step=step, max_outputs=max_outputs)
128
129 def audio(self, tag, audiodata, step, sample_rate=44100, max_outputs=3):
130 """Saves audio as wave.
131
132 NB: single channel only right now.
133
134 Args:
135 tag: str: label for this data
136 audiodata: ndarray [Nsamples, Nframes, Nchannels]: audio data to
137 be saved as wave. The data will be clipped to [-1.0, 1.0].
138 step: int: training step
139 sample_rate: sample rate of passed in audio buffer
140 max_outputs: At most this many audio clips will be emitted at each
141 step. Defaults to 3.
142 """
143 # tf.summary.audio expects the audio data to have floating values in
144 # [-1.0, 1.0].
145 audiodata = np.clip(np.array(audiodata), -1, 1)
146
147 # Convert to tensor value as tf.summary.audio expects data to be a tensor.
148 audio = tf.convert_to_tensor(audiodata, dtype=tf.float32)
149 with self._event_writer.as_default():
150 tf.summary.audio(
151 name=tag, data=audio, sample_rate=sample_rate, step=step,
152 max_outputs=max_outputs, encoding='wav')
153
154 def histogram(self, tag, values, step, bins=None):
155 """Saves histogram of values.
156
157 Args:
158 tag: str: label for this data
159 values: ndarray: will be flattened by this routine
160 step: int: training step
161 bins: number of bins in histogram
162 """
163 values = np.array(values)
164 values = np.reshape(values, -1)
165 with self._event_writer.as_default():
166 tf.summary.histogram(name=tag, data=values, step=step, buckets=bins)
167
168 def text(self, tag, textdata, step):
169 """Saves a text summary.
170
171 Args:
172 tag: str: label for this data
173 textdata: string
174 step: int: training step
175 Note: markdown formatting is rendered by tensorboard.
176 """
177 if not isinstance(textdata, (str, bytes)):
178 raise ValueError('`textdata` should be of the type `str` or `bytes`.')
179 with self._event_writer.as_default():
180 tf.summary.text(name=tag, data=tf.constant(textdata), step=step)
181
182 def write(self, tag, tensor, step, metadata=None):
183 """Saves a arbitrary tensor summary.
184
185 Useful when working with custom plugins or constructing a summary directly.
186
187 Args:
188 tag: str: label for this data
189 tensor: ndarray: tensor data to save.
190 step: int: training step
191 metadata: Optional SummaryMetadata, as a proto or serialized bytes.
192 Note: markdown formatting is rendered by tensorboard.
193 """
194 with self._event_writer.as_default():
195 tf.summary.write(
196 tag=tag,
197 tensor=tensor,
198 step=step,
199 metadata=metadata)
200
201 def hparams(self, hparams):
202 """Saves hyper parameters.
203
204 Args:
205 hparams: Flat mapping from hyper parameter name to value.
206 """
207
208 with self._event_writer.as_default():
209 hparams_api.hparams(hparams=_flatten_dict(hparams))
210
[end of flax/metrics/tensorboard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flax/metrics/tensorboard.py b/flax/metrics/tensorboard.py
--- a/flax/metrics/tensorboard.py
+++ b/flax/metrics/tensorboard.py
@@ -39,18 +39,17 @@
for k, v in input_dict.items():
new_key = parent_key + sep + k if parent_key else k
- # Take special care of things hparams cannot handle.
- if v is None:
- v = 'None'
- elif isinstance(v, list):
- v = str(v)
- elif isinstance(v, tuple):
- v = str(v)
- elif isinstance(v, dict):
+ # Valid types according to https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py
+ valid_types = (bool, int, float, str, np.bool_, np.integer, np.floating, np.character)
+
+ if isinstance(v, dict):
# Recursively flatten the dict.
items.extend(_flatten_dict(v, new_key, sep=sep).items())
- else:
- items.append((new_key, v))
+ continue
+ elif not isinstance(v, valid_types):
+ # Cast any incompatible values as strings such that they can be handled by hparams
+ v = str(v)
+ items.append((new_key, v))
return dict(items)
| {"golden_diff": "diff --git a/flax/metrics/tensorboard.py b/flax/metrics/tensorboard.py\n--- a/flax/metrics/tensorboard.py\n+++ b/flax/metrics/tensorboard.py\n@@ -39,18 +39,17 @@\n for k, v in input_dict.items():\n new_key = parent_key + sep + k if parent_key else k\n \n- # Take special care of things hparams cannot handle.\n- if v is None:\n- v = 'None'\n- elif isinstance(v, list):\n- v = str(v)\n- elif isinstance(v, tuple):\n- v = str(v)\n- elif isinstance(v, dict):\n+ # Valid types according to https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py\n+ valid_types = (bool, int, float, str, np.bool_, np.integer, np.floating, np.character)\n+\n+ if isinstance(v, dict):\n # Recursively flatten the dict.\n items.extend(_flatten_dict(v, new_key, sep=sep).items())\n- else:\n- items.append((new_key, v))\n+ continue\n+ elif not isinstance(v, valid_types):\n+ # Cast any incompatible values as strings such that they can be handled by hparams\n+ v = str(v)\n+ items.append((new_key, v))\n return dict(items)\n", "issue": "Incompatible variables for Tensorboard hparams are recast to strings but never returned\n### Core Problem\r\nTensorboard hparams only supports a subset of Python and Numpy variable types ([see hparams docstrings](https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py)). The `flax.metrics.tensorboard.SummaryWriter` class's method `SummaryWriter.hparams()` should handle this behavior via the `flax.metrics.tensorboard._flatten_dict()` function, casting incompatible types to strings (which hparams supports). However, despite performing the casting operation, the `_flatten_dict` function does not append the recast variables to the dictionary it returns. \r\n\r\nThe result, for the below example, is that the \"hidden_layers\" parameters are silently excluded and do not appear in Tensorboard's hparams.\r\n\r\n```Python \r\nfrom flax.metrics import tensorboard\r\n\r\nexperiment_dir = \"./Example\"\r\n\r\nnetwork_hyperparameters = {\r\n \"hidden_layers_list\": [12,12],\r\n \"hidden_layers_tuple\": (12,12),\r\n \"dropout_rate\": 1.0,\r\n}\r\n\r\nsummary_writer = tensorboard.SummaryWriter(experiment_dir)\r\nsummary_writer.hparams(network_hyperparameters)\r\nsummary_writer.scalar('Training loss', 0.1, 1)\r\nsummary_writer.flush()\r\n```\r\n\r\n### Colab Example:\r\n[Example notebook](https://colab.research.google.com/gist/tttc3/8dd7ef04c4222bc18fb03b043d370120/falx_tensorboard_issue_demo.ipynb)\r\n\r\n### Proposed fix\r\nModify `_flattened_dict` to explicitly check if a dictionary value is one of those supported by Tensorboard's hparams api, as defined [here](https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py). 
If the value is not supported, cast it to a string and append it to the dictionary that `_flattened_dict` normally returns.\r\n\r\n**Current _flatten_dict code**\r\n```Python\r\ndef _flatten_dict(input_dict, parent_key='', sep='.'):\r\n \"\"\"Flattens and simplifies dict such that it can be used by hparams.\r\n\r\n Args:\r\n input_dict: Input dict, e.g., from ConfigDict.\r\n parent_key: String used in recursion.\r\n sep: String used to separate parent and child keys.\r\n\r\n Returns:\r\n Flattened dict.\r\n \"\"\"\r\n items = []\r\n for k, v in input_dict.items():\r\n new_key = parent_key + sep + k if parent_key else k\r\n\r\n # Take special care of things hparams cannot handle.\r\n if v is None:\r\n v = 'None'\r\n elif isinstance(v, list):\r\n v = str(v)\r\n elif isinstance(v, tuple):\r\n v = str(v)\r\n elif isinstance(v, dict):\r\n # Recursively flatten the dict.\r\n items.extend(_flatten_dict(v, new_key, sep=sep).items())\r\n else:\r\n items.append((new_key, v))\r\n return dict(items)\r\n```\r\n\r\n**Proposed _flatten_dict code modification**\r\n```Python\r\ndef _flatten_dict(input_dict, parent_key='', sep='.'):\r\n \"\"\"Flattens and simplifies dict such that it can be used by hparams.\r\n\r\n Args:\r\n input_dict: Input dict, e.g., from ConfigDict.\r\n parent_key: String used in recursion.\r\n sep: String used to separate parent and child keys.\r\n\r\n Returns:\r\n Flattened dict.\r\n \"\"\"\r\n items = []\r\n for k, v in input_dict.items():\r\n new_key = parent_key + sep + k if parent_key else k\r\n\r\n # Valid types according to https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py\r\n valid_types = (bool, int, float, str, np.bool_, np.integer, np.floating, np.character)\r\n\r\n if isinstance(v, dict):\r\n # Recursively flatten the dict.\r\n items.extend(_flatten_dict(v, new_key, sep=sep).items())\r\n continue\r\n elif not isinstance(v, valid_types):\r\n # Cast any incompatible values as strings such that they can be handled by hparams\r\n v = str(v)\r\n items.append((new_key, v))\r\n return dict(items)\r\n```\r\n\r\nI am happy submit a pull request with the modifications.\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Write Summaries from JAX for use with Tensorboard.\n\"\"\"\n\nimport os\n\n# pylint: disable=g-import-not-at-top\nimport numpy as np\n\nimport tensorflow.compat.v2 as tf\nfrom tensorboard.plugins.hparams import api as hparams_api\n\n\ndef _flatten_dict(input_dict, parent_key='', sep='.'):\n \"\"\"Flattens and simplifies dict such that it can be used by hparams.\n\n Args:\n input_dict: Input dict, e.g., from ConfigDict.\n parent_key: String used in recursion.\n sep: String used to separate parent and child keys.\n\n Returns:\n Flattened dict.\n \"\"\"\n items = []\n for k, v in input_dict.items():\n new_key = parent_key + sep + k if parent_key else k\n\n # Take special care of things hparams cannot handle.\n if v is 
None:\n v = 'None'\n elif isinstance(v, list):\n v = str(v)\n elif isinstance(v, tuple):\n v = str(v)\n elif isinstance(v, dict):\n # Recursively flatten the dict.\n items.extend(_flatten_dict(v, new_key, sep=sep).items())\n else:\n items.append((new_key, v))\n return dict(items)\n\n\nclass SummaryWriter(object):\n \"\"\"Saves data in event and summary protos for tensorboard.\"\"\"\n\n def __init__(self, log_dir):\n \"\"\"Create a new SummaryWriter.\n\n Args:\n log_dir: path to record tfevents files in.\n \"\"\"\n log_dir = os.fspath(log_dir)\n\n # If needed, create log_dir directory as well as missing parent directories.\n if not tf.io.gfile.isdir(log_dir):\n tf.io.gfile.makedirs(log_dir)\n\n self._event_writer = tf.summary.create_file_writer(log_dir, 10, 120, None)\n self._closed = False\n\n def close(self):\n \"\"\"Close SummaryWriter. Final!\"\"\"\n if not self._closed:\n self._event_writer.close()\n self._closed = True\n del self._event_writer\n\n def flush(self):\n self._event_writer.flush()\n\n def scalar(self, tag, value, step):\n \"\"\"Saves scalar value.\n\n Args:\n tag: str: label for this data\n value: int/float: number to log\n step: int: training step\n \"\"\"\n value = float(np.array(value))\n with self._event_writer.as_default():\n tf.summary.scalar(name=tag, data=value, step=step)\n\n def image(self, tag, image, step, max_outputs=3):\n \"\"\"Saves RGB image summary from np.ndarray [H,W], [H,W,1], or [H,W,3].\n\n Args:\n tag: str: label for this data\n image: ndarray: [H,W], [H,W,1], [H,W,3], [K,H,W], [K,H,W,1], [K,H,W,3]\n Save image in greyscale or colors.\n Pixel values could be either uint8 or float.\n Floating point values should be in range [0, 1).\n step: int: training step\n max_outputs: At most this many images will be emitted at each step.\n Defaults to 3.\n \"\"\"\n image = np.array(image)\n # tf.summary.image expects image to have shape [k, h, w, c] where,\n # k = number of samples, h = height, w = width, c = number of channels.\n if len(np.shape(image)) == 2:\n image = image[np.newaxis, :, :, np.newaxis]\n elif len(np.shape(image)) == 3:\n # this could be either [k, h, w] or [h, w, c]\n if np.shape(image)[-1] in (1, 3):\n image = image[np.newaxis, :, :, :]\n else:\n image = image[:, :, :, np.newaxis]\n if np.shape(image)[-1] == 1:\n image = np.repeat(image, 3, axis=-1)\n\n # Convert to tensor value as tf.summary.image expects data to be a tensor.\n image = tf.convert_to_tensor(image)\n with self._event_writer.as_default():\n tf.summary.image(name=tag, data=image, step=step, max_outputs=max_outputs)\n\n def audio(self, tag, audiodata, step, sample_rate=44100, max_outputs=3):\n \"\"\"Saves audio as wave.\n\n NB: single channel only right now.\n\n Args:\n tag: str: label for this data\n audiodata: ndarray [Nsamples, Nframes, Nchannels]: audio data to\n be saved as wave. The data will be clipped to [-1.0, 1.0].\n step: int: training step\n sample_rate: sample rate of passed in audio buffer\n max_outputs: At most this many audio clips will be emitted at each\n step. 
Defaults to 3.\n \"\"\"\n # tf.summary.audio expects the audio data to have floating values in\n # [-1.0, 1.0].\n audiodata = np.clip(np.array(audiodata), -1, 1)\n\n # Convert to tensor value as tf.summary.audio expects data to be a tensor.\n audio = tf.convert_to_tensor(audiodata, dtype=tf.float32)\n with self._event_writer.as_default():\n tf.summary.audio(\n name=tag, data=audio, sample_rate=sample_rate, step=step,\n max_outputs=max_outputs, encoding='wav')\n\n def histogram(self, tag, values, step, bins=None):\n \"\"\"Saves histogram of values.\n\n Args:\n tag: str: label for this data\n values: ndarray: will be flattened by this routine\n step: int: training step\n bins: number of bins in histogram\n \"\"\"\n values = np.array(values)\n values = np.reshape(values, -1)\n with self._event_writer.as_default():\n tf.summary.histogram(name=tag, data=values, step=step, buckets=bins)\n\n def text(self, tag, textdata, step):\n \"\"\"Saves a text summary.\n\n Args:\n tag: str: label for this data\n textdata: string\n step: int: training step\n Note: markdown formatting is rendered by tensorboard.\n \"\"\"\n if not isinstance(textdata, (str, bytes)):\n raise ValueError('`textdata` should be of the type `str` or `bytes`.')\n with self._event_writer.as_default():\n tf.summary.text(name=tag, data=tf.constant(textdata), step=step)\n\n def write(self, tag, tensor, step, metadata=None):\n \"\"\"Saves a arbitrary tensor summary.\n\n Useful when working with custom plugins or constructing a summary directly.\n\n Args:\n tag: str: label for this data\n tensor: ndarray: tensor data to save.\n step: int: training step\n metadata: Optional SummaryMetadata, as a proto or serialized bytes.\n Note: markdown formatting is rendered by tensorboard.\n \"\"\"\n with self._event_writer.as_default():\n tf.summary.write(\n tag=tag,\n tensor=tensor,\n step=step,\n metadata=metadata)\n\n def hparams(self, hparams):\n \"\"\"Saves hyper parameters.\n\n Args:\n hparams: Flat mapping from hyper parameter name to value.\n \"\"\"\n\n with self._event_writer.as_default():\n hparams_api.hparams(hparams=_flatten_dict(hparams))\n", "path": "flax/metrics/tensorboard.py"}]} | 3,869 | 347 |
gh_patches_debug_37954 | rasdani/github-patches | git_diff | dask__distributed-3854 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support different usernames when starting an SSH cluster
I wanted to start an SSH cluster where my username is not the same on all the nodes. It would be great if this were supported. The most obvious way to configure this would be to allow specifying hosts as `user@hostname`, either on the command line or in the hosts file.
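For illustration, the desired call might look like the sketch below. This is hypothetical usage, not the current API: the `user@host` splitting does not exist yet, though `asyncssh.connect` already accepts a `username` keyword that the cluster could forward per host.

```python
from dask.distributed import SSHCluster

# Proposed usage (illustrative only): a different login name per node.
cluster = SSHCluster(
    ["alice@scheduler.example.com", "bob@worker1.example.com"],
    connect_options={"known_hosts": None},
)

# One possible way the addresses could be split internally
# (helper name is made up for this sketch):
def split_user(host):
    user, _, hostname = host.rpartition("@")
    return (user or None), hostname

assert split_user("bob@worker1.example.com") == ("bob", "worker1.example.com")
assert split_user("worker1.example.com") == (None, "worker1.example.com")
```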
</issue>
<code>
[start of distributed/deploy/ssh.py]
1 import logging
2 import sys
3 from typing import List
4 import warnings
5 import weakref
6
7 import dask
8
9 from .spec import SpecCluster, ProcessInterface
10 from ..utils import cli_keywords
11 from ..scheduler import Scheduler as _Scheduler
12 from ..worker import Worker as _Worker
13 from ..utils import serialize_for_cli
14
15 logger = logging.getLogger(__name__)
16
17
18 class Process(ProcessInterface):
19 """ A superclass for SSH Workers and Nannies
20
21 See Also
22 --------
23 Worker
24 Scheduler
25 """
26
27 def __init__(self, **kwargs):
28 self.connection = None
29 self.proc = None
30 super().__init__(**kwargs)
31
32 async def start(self):
33 assert self.connection
34 weakref.finalize(
35 self, self.proc.kill
36 ) # https://github.com/ronf/asyncssh/issues/112
37 await super().start()
38
39 async def close(self):
40 self.proc.kill() # https://github.com/ronf/asyncssh/issues/112
41 self.connection.close()
42 await super().close()
43
44 def __repr__(self):
45 return "<SSH %s: status=%s>" % (type(self).__name__, self.status)
46
47
48 class Worker(Process):
49 """ A Remote Dask Worker controled by SSH
50
51 Parameters
52 ----------
53 scheduler: str
54 The address of the scheduler
55 address: str
56 The hostname where we should run this worker
57 worker_module: str
58 The python module to run to start the worker.
59 connect_options: dict
60 kwargs to be passed to asyncssh connections
61 remote_python: str
62 Path to Python on remote node to run this worker.
63 kwargs: dict
64 These will be passed through the dask-worker CLI to the
65 dask.distributed.Worker class
66 """
67
68 def __init__(
69 self,
70 scheduler: str,
71 address: str,
72 connect_options: dict,
73 kwargs: dict,
74 worker_module="distributed.cli.dask_worker",
75 remote_python=None,
76 loop=None,
77 name=None,
78 ):
79 super().__init__()
80
81 self.address = address
82 self.scheduler = scheduler
83 self.worker_module = worker_module
84 self.connect_options = connect_options
85 self.kwargs = kwargs
86 self.name = name
87 self.remote_python = remote_python
88
89 async def start(self):
90 import asyncssh # import now to avoid adding to module startup time
91
92 self.connection = await asyncssh.connect(self.address, **self.connect_options)
93
94 result = await self.connection.run("uname")
95 if result.exit_status == 0:
96 set_env = 'env DASK_INTERNAL_INHERIT_CONFIG="{}"'.format(
97 serialize_for_cli(dask.config.global_config)
98 )
99 else:
100 result = await self.connection.run("cmd /c ver")
101 if result.exit_status == 0:
102 set_env = "set DASK_INTERNAL_INHERIT_CONFIG={} &&".format(
103 serialize_for_cli(dask.config.global_config)
104 )
105 else:
106 raise Exception(
107 "Worker failed to set DASK_INTERNAL_INHERIT_CONFIG variable "
108 )
109
110 cmd = " ".join(
111 [
112 set_env,
113 self.remote_python or sys.executable,
114 "-m",
115 self.worker_module,
116 self.scheduler,
117 "--name",
118 str(self.name),
119 ]
120 + cli_keywords(self.kwargs, cls=_Worker, cmd=self.worker_module)
121 )
122
123 self.proc = await self.connection.create_process(cmd)
124
125 # We watch stderr in order to get the address, then we return
126 while True:
127 line = await self.proc.stderr.readline()
128 if not line.strip():
129 raise Exception("Worker failed to start")
130 logger.info(line.strip())
131 if "worker at" in line:
132 self.address = line.split("worker at:")[1].strip()
133 self.status = "running"
134 break
135 logger.debug("%s", line)
136 await super().start()
137
138
139 class Scheduler(Process):
140 """ A Remote Dask Scheduler controlled by SSH
141
142 Parameters
143 ----------
144 address: str
145 The hostname where we should run this worker
146 connect_options: dict
147 kwargs to be passed to asyncssh connections
148 remote_python: str
149 Path to Python on remote node to run this scheduler.
150 kwargs: dict
151 These will be passed through the dask-scheduler CLI to the
152 dask.distributed.Scheduler class
153 """
154
155 def __init__(
156 self, address: str, connect_options: dict, kwargs: dict, remote_python=None
157 ):
158 super().__init__()
159
160 self.address = address
161 self.kwargs = kwargs
162 self.connect_options = connect_options
163 self.remote_python = remote_python
164
165 async def start(self):
166 import asyncssh # import now to avoid adding to module startup time
167
168 logger.debug("Created Scheduler Connection")
169
170 self.connection = await asyncssh.connect(self.address, **self.connect_options)
171
172 result = await self.connection.run("uname")
173 if result.exit_status == 0:
174 set_env = 'env DASK_INTERNAL_INHERIT_CONFIG="{}"'.format(
175 serialize_for_cli(dask.config.global_config)
176 )
177 else:
178 result = await self.connection.run("cmd /c ver")
179 if result.exit_status == 0:
180 set_env = "set DASK_INTERNAL_INHERIT_CONFIG={} &&".format(
181 serialize_for_cli(dask.config.global_config)
182 )
183 else:
184 raise Exception(
185 "Scheduler failed to set DASK_INTERNAL_INHERIT_CONFIG variable "
186 )
187
188 cmd = " ".join(
189 [
190 set_env,
191 self.remote_python or sys.executable,
192 "-m",
193 "distributed.cli.dask_scheduler",
194 ]
195 + cli_keywords(self.kwargs, cls=_Scheduler)
196 )
197 self.proc = await self.connection.create_process(cmd)
198
199 # We watch stderr in order to get the address, then we return
200 while True:
201 line = await self.proc.stderr.readline()
202 if not line.strip():
203 raise Exception("Worker failed to start")
204 logger.info(line.strip())
205 if "Scheduler at" in line:
206 self.address = line.split("Scheduler at:")[1].strip()
207 break
208 logger.debug("%s", line)
209 await super().start()
210
211
212 old_cluster_kwargs = {
213 "scheduler_addr",
214 "scheduler_port",
215 "worker_addrs",
216 "nthreads",
217 "nprocs",
218 "ssh_username",
219 "ssh_port",
220 "ssh_private_key",
221 "nohost",
222 "logdir",
223 "remote_python",
224 "memory_limit",
225 "worker_port",
226 "nanny_port",
227 "remote_dask_worker",
228 }
229
230
231 def SSHCluster(
232 hosts: List[str] = None,
233 connect_options: dict = {},
234 worker_options: dict = {},
235 scheduler_options: dict = {},
236 worker_module: str = "distributed.cli.dask_worker",
237 remote_python: str = None,
238 **kwargs
239 ):
240 """ Deploy a Dask cluster using SSH
241
242 The SSHCluster function deploys a Dask Scheduler and Workers for you on a
243 set of machine addresses that you provide. The first address will be used
244 for the scheduler while the rest will be used for the workers (feel free to
245 repeat the first hostname if you want to have the scheduler and worker
246 co-habitate one machine.)
247
248 You may configure the scheduler and workers by passing
249 ``scheduler_options`` and ``worker_options`` dictionary keywords. See the
250 ``dask.distributed.Scheduler`` and ``dask.distributed.Worker`` classes for
251 details on the available options, but the defaults should work in most
252 situations.
253
254 You may configure your use of SSH itself using the ``connect_options``
255 keyword, which passes values to the ``asyncssh.connect`` function. For
256 more information on these see the documentation for the ``asyncssh``
257 library https://asyncssh.readthedocs.io .
258
259 Parameters
260 ----------
261 hosts: List[str]
262 List of hostnames or addresses on which to launch our cluster.
263 The first will be used for the scheduler and the rest for workers.
264 connect_options: dict, optional
265 Keywords to pass through to ``asyncssh.connect``.
266 worker_options: dict, optional
267 Keywords to pass on to workers.
268 scheduler_options: dict, optional
269 Keywords to pass on to scheduler.
270 worker_module: str, optional
271 Python module to call to start the worker.
272 remote_python: str, optional
273 Path to Python on remote nodes.
274
275 Examples
276 --------
277 >>> from dask.distributed import Client, SSHCluster
278 >>> cluster = SSHCluster(
279 ... ["localhost", "localhost", "localhost", "localhost"],
280 ... connect_options={"known_hosts": None},
281 ... worker_options={"nthreads": 2},
282 ... scheduler_options={"port": 0, "dashboard_address": ":8797"}
283 ... )
284 >>> client = Client(cluster)
285
286 An example using a different worker module, in particular the
287 ``dask-cuda-worker`` command from the ``dask-cuda`` project.
288
289 >>> from dask.distributed import Client, SSHCluster
290 >>> cluster = SSHCluster(
291 ... ["localhost", "hostwithgpus", "anothergpuhost"],
292 ... connect_options={"known_hosts": None},
293 ... scheduler_options={"port": 0, "dashboard_address": ":8797"},
294 ... worker_module='dask_cuda.dask_cuda_worker')
295 >>> client = Client(cluster)
296
297 See Also
298 --------
299 dask.distributed.Scheduler
300 dask.distributed.Worker
301 asyncssh.connect
302 """
303 if set(kwargs) & old_cluster_kwargs:
304 from .old_ssh import SSHCluster as OldSSHCluster
305
306 warnings.warn(
307 "Note that the SSHCluster API has been replaced. "
308 "We're routing you to the older implementation. "
309 "This will be removed in the future"
310 )
311 kwargs.setdefault("worker_addrs", hosts)
312 return OldSSHCluster(**kwargs)
313
314 scheduler = {
315 "cls": Scheduler,
316 "options": {
317 "address": hosts[0],
318 "connect_options": connect_options,
319 "kwargs": scheduler_options,
320 "remote_python": remote_python,
321 },
322 }
323 workers = {
324 i: {
325 "cls": Worker,
326 "options": {
327 "address": host,
328 "connect_options": connect_options,
329 "kwargs": worker_options,
330 "worker_module": worker_module,
331 "remote_python": remote_python,
332 },
333 }
334 for i, host in enumerate(hosts[1:])
335 }
336 return SpecCluster(workers, scheduler, name="SSHCluster", **kwargs)
337
[end of distributed/deploy/ssh.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/distributed/deploy/ssh.py b/distributed/deploy/ssh.py
--- a/distributed/deploy/ssh.py
+++ b/distributed/deploy/ssh.py
@@ -1,6 +1,6 @@
import logging
import sys
-from typing import List
+from typing import List, Union
import warnings
import weakref
@@ -230,7 +230,7 @@
def SSHCluster(
hosts: List[str] = None,
- connect_options: dict = {},
+ connect_options: Union[List[dict], dict] = {},
worker_options: dict = {},
scheduler_options: dict = {},
worker_module: str = "distributed.cli.dask_worker",
@@ -261,8 +261,9 @@
hosts: List[str]
List of hostnames or addresses on which to launch our cluster.
The first will be used for the scheduler and the rest for workers.
- connect_options: dict, optional
+ connect_options: dict or list of dict, optional
Keywords to pass through to ``asyncssh.connect``.
+ If a list it must have the same length as ``hosts``.
worker_options: dict, optional
Keywords to pass on to workers.
scheduler_options: dict, optional
@@ -311,11 +312,19 @@
kwargs.setdefault("worker_addrs", hosts)
return OldSSHCluster(**kwargs)
+ if isinstance(connect_options, list) and len(connect_options) != len(hosts):
+ raise RuntimeError(
+ "When specifying a list of connect_options you must provide a "
+ "dictionary for each address."
+ )
+
scheduler = {
"cls": Scheduler,
"options": {
"address": hosts[0],
- "connect_options": connect_options,
+ "connect_options": connect_options
+ if isinstance(connect_options, dict)
+ else connect_options[0],
"kwargs": scheduler_options,
"remote_python": remote_python,
},
@@ -325,7 +334,9 @@
"cls": Worker,
"options": {
"address": host,
- "connect_options": connect_options,
+ "connect_options": connect_options
+ if isinstance(connect_options, dict)
+ else connect_options[i + 1],
"kwargs": worker_options,
"worker_module": worker_module,
"remote_python": remote_python,
| {"golden_diff": "diff --git a/distributed/deploy/ssh.py b/distributed/deploy/ssh.py\n--- a/distributed/deploy/ssh.py\n+++ b/distributed/deploy/ssh.py\n@@ -1,6 +1,6 @@\n import logging\n import sys\n-from typing import List\n+from typing import List, Union\n import warnings\n import weakref\n \n@@ -230,7 +230,7 @@\n \n def SSHCluster(\n hosts: List[str] = None,\n- connect_options: dict = {},\n+ connect_options: Union[List[dict], dict] = {},\n worker_options: dict = {},\n scheduler_options: dict = {},\n worker_module: str = \"distributed.cli.dask_worker\",\n@@ -261,8 +261,9 @@\n hosts: List[str]\n List of hostnames or addresses on which to launch our cluster.\n The first will be used for the scheduler and the rest for workers.\n- connect_options: dict, optional\n+ connect_options: dict or list of dict, optional\n Keywords to pass through to ``asyncssh.connect``.\n+ If a list it must have the same length as ``hosts``.\n worker_options: dict, optional\n Keywords to pass on to workers.\n scheduler_options: dict, optional\n@@ -311,11 +312,19 @@\n kwargs.setdefault(\"worker_addrs\", hosts)\n return OldSSHCluster(**kwargs)\n \n+ if isinstance(connect_options, list) and len(connect_options) != len(hosts):\n+ raise RuntimeError(\n+ \"When specifying a list of connect_options you must provide a \"\n+ \"dictionary for each address.\"\n+ )\n+\n scheduler = {\n \"cls\": Scheduler,\n \"options\": {\n \"address\": hosts[0],\n- \"connect_options\": connect_options,\n+ \"connect_options\": connect_options\n+ if isinstance(connect_options, dict)\n+ else connect_options[0],\n \"kwargs\": scheduler_options,\n \"remote_python\": remote_python,\n },\n@@ -325,7 +334,9 @@\n \"cls\": Worker,\n \"options\": {\n \"address\": host,\n- \"connect_options\": connect_options,\n+ \"connect_options\": connect_options\n+ if isinstance(connect_options, dict)\n+ else connect_options[i + 1],\n \"kwargs\": worker_options,\n \"worker_module\": worker_module,\n \"remote_python\": remote_python,\n", "issue": "Support different usernames when starting an SSH cluster\nI wanted to start an SSH cluster where my username is not same on all the nodes. It would be great if this was supported. 
The most obvious way to configure this would be to allow specifying hosts as `user@hostname`, either on the command line, or the hosts file.\n", "before_files": [{"content": "import logging\nimport sys\nfrom typing import List\nimport warnings\nimport weakref\n\nimport dask\n\nfrom .spec import SpecCluster, ProcessInterface\nfrom ..utils import cli_keywords\nfrom ..scheduler import Scheduler as _Scheduler\nfrom ..worker import Worker as _Worker\nfrom ..utils import serialize_for_cli\n\nlogger = logging.getLogger(__name__)\n\n\nclass Process(ProcessInterface):\n \"\"\" A superclass for SSH Workers and Nannies\n\n See Also\n --------\n Worker\n Scheduler\n \"\"\"\n\n def __init__(self, **kwargs):\n self.connection = None\n self.proc = None\n super().__init__(**kwargs)\n\n async def start(self):\n assert self.connection\n weakref.finalize(\n self, self.proc.kill\n ) # https://github.com/ronf/asyncssh/issues/112\n await super().start()\n\n async def close(self):\n self.proc.kill() # https://github.com/ronf/asyncssh/issues/112\n self.connection.close()\n await super().close()\n\n def __repr__(self):\n return \"<SSH %s: status=%s>\" % (type(self).__name__, self.status)\n\n\nclass Worker(Process):\n \"\"\" A Remote Dask Worker controled by SSH\n\n Parameters\n ----------\n scheduler: str\n The address of the scheduler\n address: str\n The hostname where we should run this worker\n worker_module: str\n The python module to run to start the worker.\n connect_options: dict\n kwargs to be passed to asyncssh connections\n remote_python: str\n Path to Python on remote node to run this worker.\n kwargs: dict\n These will be passed through the dask-worker CLI to the\n dask.distributed.Worker class\n \"\"\"\n\n def __init__(\n self,\n scheduler: str,\n address: str,\n connect_options: dict,\n kwargs: dict,\n worker_module=\"distributed.cli.dask_worker\",\n remote_python=None,\n loop=None,\n name=None,\n ):\n super().__init__()\n\n self.address = address\n self.scheduler = scheduler\n self.worker_module = worker_module\n self.connect_options = connect_options\n self.kwargs = kwargs\n self.name = name\n self.remote_python = remote_python\n\n async def start(self):\n import asyncssh # import now to avoid adding to module startup time\n\n self.connection = await asyncssh.connect(self.address, **self.connect_options)\n\n result = await self.connection.run(\"uname\")\n if result.exit_status == 0:\n set_env = 'env DASK_INTERNAL_INHERIT_CONFIG=\"{}\"'.format(\n serialize_for_cli(dask.config.global_config)\n )\n else:\n result = await self.connection.run(\"cmd /c ver\")\n if result.exit_status == 0:\n set_env = \"set DASK_INTERNAL_INHERIT_CONFIG={} &&\".format(\n serialize_for_cli(dask.config.global_config)\n )\n else:\n raise Exception(\n \"Worker failed to set DASK_INTERNAL_INHERIT_CONFIG variable \"\n )\n\n cmd = \" \".join(\n [\n set_env,\n self.remote_python or sys.executable,\n \"-m\",\n self.worker_module,\n self.scheduler,\n \"--name\",\n str(self.name),\n ]\n + cli_keywords(self.kwargs, cls=_Worker, cmd=self.worker_module)\n )\n\n self.proc = await self.connection.create_process(cmd)\n\n # We watch stderr in order to get the address, then we return\n while True:\n line = await self.proc.stderr.readline()\n if not line.strip():\n raise Exception(\"Worker failed to start\")\n logger.info(line.strip())\n if \"worker at\" in line:\n self.address = line.split(\"worker at:\")[1].strip()\n self.status = \"running\"\n break\n logger.debug(\"%s\", line)\n await super().start()\n\n\nclass Scheduler(Process):\n 
\"\"\" A Remote Dask Scheduler controlled by SSH\n\n Parameters\n ----------\n address: str\n The hostname where we should run this worker\n connect_options: dict\n kwargs to be passed to asyncssh connections\n remote_python: str\n Path to Python on remote node to run this scheduler.\n kwargs: dict\n These will be passed through the dask-scheduler CLI to the\n dask.distributed.Scheduler class\n \"\"\"\n\n def __init__(\n self, address: str, connect_options: dict, kwargs: dict, remote_python=None\n ):\n super().__init__()\n\n self.address = address\n self.kwargs = kwargs\n self.connect_options = connect_options\n self.remote_python = remote_python\n\n async def start(self):\n import asyncssh # import now to avoid adding to module startup time\n\n logger.debug(\"Created Scheduler Connection\")\n\n self.connection = await asyncssh.connect(self.address, **self.connect_options)\n\n result = await self.connection.run(\"uname\")\n if result.exit_status == 0:\n set_env = 'env DASK_INTERNAL_INHERIT_CONFIG=\"{}\"'.format(\n serialize_for_cli(dask.config.global_config)\n )\n else:\n result = await self.connection.run(\"cmd /c ver\")\n if result.exit_status == 0:\n set_env = \"set DASK_INTERNAL_INHERIT_CONFIG={} &&\".format(\n serialize_for_cli(dask.config.global_config)\n )\n else:\n raise Exception(\n \"Scheduler failed to set DASK_INTERNAL_INHERIT_CONFIG variable \"\n )\n\n cmd = \" \".join(\n [\n set_env,\n self.remote_python or sys.executable,\n \"-m\",\n \"distributed.cli.dask_scheduler\",\n ]\n + cli_keywords(self.kwargs, cls=_Scheduler)\n )\n self.proc = await self.connection.create_process(cmd)\n\n # We watch stderr in order to get the address, then we return\n while True:\n line = await self.proc.stderr.readline()\n if not line.strip():\n raise Exception(\"Worker failed to start\")\n logger.info(line.strip())\n if \"Scheduler at\" in line:\n self.address = line.split(\"Scheduler at:\")[1].strip()\n break\n logger.debug(\"%s\", line)\n await super().start()\n\n\nold_cluster_kwargs = {\n \"scheduler_addr\",\n \"scheduler_port\",\n \"worker_addrs\",\n \"nthreads\",\n \"nprocs\",\n \"ssh_username\",\n \"ssh_port\",\n \"ssh_private_key\",\n \"nohost\",\n \"logdir\",\n \"remote_python\",\n \"memory_limit\",\n \"worker_port\",\n \"nanny_port\",\n \"remote_dask_worker\",\n}\n\n\ndef SSHCluster(\n hosts: List[str] = None,\n connect_options: dict = {},\n worker_options: dict = {},\n scheduler_options: dict = {},\n worker_module: str = \"distributed.cli.dask_worker\",\n remote_python: str = None,\n **kwargs\n):\n \"\"\" Deploy a Dask cluster using SSH\n\n The SSHCluster function deploys a Dask Scheduler and Workers for you on a\n set of machine addresses that you provide. The first address will be used\n for the scheduler while the rest will be used for the workers (feel free to\n repeat the first hostname if you want to have the scheduler and worker\n co-habitate one machine.)\n\n You may configure the scheduler and workers by passing\n ``scheduler_options`` and ``worker_options`` dictionary keywords. See the\n ``dask.distributed.Scheduler`` and ``dask.distributed.Worker`` classes for\n details on the available options, but the defaults should work in most\n situations.\n\n You may configure your use of SSH itself using the ``connect_options``\n keyword, which passes values to the ``asyncssh.connect`` function. 
For\n more information on these see the documentation for the ``asyncssh``\n library https://asyncssh.readthedocs.io .\n\n Parameters\n ----------\n hosts: List[str]\n List of hostnames or addresses on which to launch our cluster.\n The first will be used for the scheduler and the rest for workers.\n connect_options: dict, optional\n Keywords to pass through to ``asyncssh.connect``.\n worker_options: dict, optional\n Keywords to pass on to workers.\n scheduler_options: dict, optional\n Keywords to pass on to scheduler.\n worker_module: str, optional\n Python module to call to start the worker.\n remote_python: str, optional\n Path to Python on remote nodes.\n\n Examples\n --------\n >>> from dask.distributed import Client, SSHCluster\n >>> cluster = SSHCluster(\n ... [\"localhost\", \"localhost\", \"localhost\", \"localhost\"],\n ... connect_options={\"known_hosts\": None},\n ... worker_options={\"nthreads\": 2},\n ... scheduler_options={\"port\": 0, \"dashboard_address\": \":8797\"}\n ... )\n >>> client = Client(cluster)\n\n An example using a different worker module, in particular the\n ``dask-cuda-worker`` command from the ``dask-cuda`` project.\n\n >>> from dask.distributed import Client, SSHCluster\n >>> cluster = SSHCluster(\n ... [\"localhost\", \"hostwithgpus\", \"anothergpuhost\"],\n ... connect_options={\"known_hosts\": None},\n ... scheduler_options={\"port\": 0, \"dashboard_address\": \":8797\"},\n ... worker_module='dask_cuda.dask_cuda_worker')\n >>> client = Client(cluster)\n\n See Also\n --------\n dask.distributed.Scheduler\n dask.distributed.Worker\n asyncssh.connect\n \"\"\"\n if set(kwargs) & old_cluster_kwargs:\n from .old_ssh import SSHCluster as OldSSHCluster\n\n warnings.warn(\n \"Note that the SSHCluster API has been replaced. \"\n \"We're routing you to the older implementation. \"\n \"This will be removed in the future\"\n )\n kwargs.setdefault(\"worker_addrs\", hosts)\n return OldSSHCluster(**kwargs)\n\n scheduler = {\n \"cls\": Scheduler,\n \"options\": {\n \"address\": hosts[0],\n \"connect_options\": connect_options,\n \"kwargs\": scheduler_options,\n \"remote_python\": remote_python,\n },\n }\n workers = {\n i: {\n \"cls\": Worker,\n \"options\": {\n \"address\": host,\n \"connect_options\": connect_options,\n \"kwargs\": worker_options,\n \"worker_module\": worker_module,\n \"remote_python\": remote_python,\n },\n }\n for i, host in enumerate(hosts[1:])\n }\n return SpecCluster(workers, scheduler, name=\"SSHCluster\", **kwargs)\n", "path": "distributed/deploy/ssh.py"}]} | 3,819 | 528 |
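As a brief aside before the next record: the golden diff above extends `SSHCluster` so that `connect_options` may be either a single dict or a list containing one dict per host, which is what makes a different SSH username per machine possible. The sketch below shows how that patched API could be used; the hostnames and usernames are invented placeholders, and `known_hosts: None` mirrors the examples already in the module's docstring.

```python
# Minimal usage sketch of the patched SSHCluster: one asyncssh connect_options
# dict per host, so each node can be reached with a different SSH username.
from dask.distributed import Client, SSHCluster

hosts = ["scheduler-host", "worker-a", "worker-b"]   # first host runs the scheduler

# Must have the same length as `hosts`; each dict is passed to asyncssh.connect.
connect_options = [
    {"username": "alice", "known_hosts": None},      # scheduler-host
    {"username": "bob", "known_hosts": None},        # worker-a
    {"username": "carol", "known_hosts": None},      # worker-b
]

cluster = SSHCluster(
    hosts,
    connect_options=connect_options,
    worker_options={"nthreads": 2},
    scheduler_options={"port": 0, "dashboard_address": ":8797"},
)
client = Client(cluster)
```

If the list length does not match the number of hosts, the patched code raises a `RuntimeError`, which is why the list is spelled out entry by entry here.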
gh_patches_debug_15564 | rasdani/github-patches | git_diff | pytorch__ignite-1727 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[search] use algolia docsearch
## 📚 Documentation
While the built-in search is enough, typing a query and hitting enter does not feel like a great UX.
To be able to search interactively, we can apply [Algolia DocSearch](https://docsearch.algolia.com/) for that.
PyTorch also uses it for the search on its homepage.
</issue>
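As an aside on the approach proposed above: Algolia DocSearch is usually wired into a Sphinx `conf.py` by pulling in the DocSearch CSS/JS bundle and then initialising the search box from a template. The sketch below is only illustrative; the CSS URL matches the one the eventual fix adds, but the JS URL and the use of `html_css_files`/`html_js_files` (rather than the theme's `html_context["css_files"]`, which this project uses for its own stylesheet) are assumptions, and the API key, index name, and input selector would come from Algolia.

```python
# Hypothetical Sphinx conf.py additions for Algolia DocSearch (v2-style CDN assets).
html_css_files = [
    "https://cdn.jsdelivr.net/npm/docsearch.js@2/dist/cdn/docsearch.min.css",
]
html_js_files = [
    "https://cdn.jsdelivr.net/npm/docsearch.js@2/dist/cdn/docsearch.min.js",
]

# The search box itself still has to be initialised from a page template with
# something along the lines of:
#   docsearch({apiKey: "<key>", indexName: "<index>", inputSelector: "#search-input"});
# where the key and index name are issued by Algolia for the project.
```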
<code>
[start of docs/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/stable/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 import os
16 import sys
17
18 sys.path.insert(0, os.path.abspath("../.."))
19 import ignite
20 import pytorch_sphinx_theme
21
22 from datetime import datetime
23
24 # -- Project information -----------------------------------------------------
25
26 project = "ignite"
27 author = "PyTorch-Ignite Contributors"
28 copyright = f"{datetime.now().year}, {author}"
29
30 # The short X.Y version
31 try:
32 version = os.environ["code_version"]
33 if "master" in version:
34 version = "master (" + ignite.__version__ + ")"
35 else:
36 version = version.replace("v", "")
37 except KeyError:
38 version = ignite.__version__
39
40 # The full version, including alpha/beta/rc tags
41 release = "master"
42
43
44 # -- General configuration ---------------------------------------------------
45
46 # If your documentation needs a minimal Sphinx version, state it here.
47 #
48 # needs_sphinx = '1.0'
49
50 # Add any Sphinx extension module names here, as strings. They can be
51 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
52 # ones.
53 extensions = [
54 "sphinx.ext.autodoc",
55 "sphinx.ext.autosummary",
56 "sphinx.ext.doctest",
57 "sphinx.ext.intersphinx",
58 "sphinx.ext.todo",
59 "sphinx.ext.coverage",
60 "sphinxcontrib.katex",
61 "sphinx.ext.napoleon",
62 "sphinx.ext.viewcode",
63 "sphinx.ext.autosectionlabel",
64 ]
65
66 # katex options
67 katex_prerender = True
68
69 # Add any paths that contain templates here, relative to this directory.
70 templates_path = ["_templates"]
71
72 # The suffix(es) of source filenames.
73 # You can specify multiple suffix as a list of string:
74 #
75 # source_suffix = ['.rst', '.md']
76 source_suffix = ".rst"
77
78 # The master toctree document.
79 master_doc = "index"
80
81 # The language for content autogenerated by Sphinx. Refer to documentation
82 # for a list of supported languages.
83 #
84 # This is also used if you do content translation via gettext catalogs.
85 # Usually you set "language" from the command line for these cases.
86 language = None
87
88 # List of patterns, relative to source directory, that match files and
89 # directories to ignore when looking for source files.
90 # This pattern also affects html_static_path and html_extra_path .
91 exclude_patterns = []
92
93 # The name of the Pygments (syntax highlighting) style to use.
94 pygments_style = "sphinx"
95
96
97 # -- Options for HTML output -------------------------------------------------
98
99 # The theme to use for HTML and HTML Help pages. See the documentation for
100 # a list of builtin themes.
101 #
102 html_theme = "pytorch_sphinx_theme"
103 html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
104
105 html_theme_options = {
106 "canonical_url": "https://pytorch.org/ignite/index.html",
107 "collapse_navigation": False,
108 "display_version": True,
109 "logo_only": True,
110 "navigation_with_keys": True,
111 }
112
113 html_logo = "_templates/_static/img/ignite_logo.svg"
114
115 html_favicon = "_templates/_static/img/ignite_logomark.svg"
116
117 # Theme options are theme-specific and customize the look and feel of a theme
118 # further. For a list of options available for each theme, see the
119 # documentation.
120 #
121 # html_theme_options = {}
122
123 # Add any paths that contain custom static files (such as style sheets) here,
124 # relative to this directory. They are copied after the builtin static files,
125 # so a file named "default.css" will overwrite the builtin "default.css".
126 html_static_path = ["_static", "_templates/_static"]
127
128 html_context = {
129 "css_files": [
130 # 'https://fonts.googleapis.com/css?family=Lato',
131 # '_static/css/pytorch_theme.css'
132 "_static/css/ignite_theme.css"
133 ],
134 }
135
136
137 # -- Options for HTMLHelp output ---------------------------------------------
138
139 # Output file base name for HTML help builder.
140 htmlhelp_basename = "ignitedoc"
141
142
143 # -- Options for LaTeX output ------------------------------------------------
144
145 latex_elements = {
146 # The paper size ('letterpaper' or 'a4paper').
147 #
148 # 'papersize': 'letterpaper',
149 # The font size ('10pt', '11pt' or '12pt').
150 #
151 # 'pointsize': '10pt',
152 # Additional stuff for the LaTeX preamble.
153 #
154 # 'preamble': '',
155 # Latex figure (float) alignment
156 #
157 # 'figure_align': 'htbp',
158 }
159
160 # Grouping the document tree into LaTeX files. List of tuples
161 # (source start file, target name, title,
162 # author, documentclass [howto, manual, or own class]).
163 latex_documents = [
164 (master_doc, "ignite.tex", "ignite Documentation", "Torch Contributors", "manual"),
165 ]
166
167
168 # -- Options for manual page output ------------------------------------------
169
170 # One entry per manual page. List of tuples
171 # (source start file, name, description, authors, manual section).
172 man_pages = [(master_doc, "ignite", "ignite Documentation", [author], 1)]
173
174
175 # -- Options for Texinfo output ----------------------------------------------
176
177 # Grouping the document tree into Texinfo files. List of tuples
178 # (source start file, target name, title, author,
179 # dir menu entry, description, category)
180 texinfo_documents = [
181 (
182 master_doc,
183 "ignite",
184 "ignite Documentation",
185 author,
186 "ignite",
187 "One line description of project.",
188 "Miscellaneous",
189 ),
190 ]
191
192
193 # -- Extension configuration -------------------------------------------------
194
195 # -- Options for intersphinx extension ---------------------------------------
196
197 # Example configuration for intersphinx: refer to the Python standard library.
198 intersphinx_mapping = {
199 "python": ("https://docs.python.org/3", None),
200 "torch": ("https://pytorch.org/docs/stable/", None),
201 }
202
203 # -- Options for todo extension ----------------------------------------------
204
205 # If true, `todo` and `todoList` produce output, else they produce nothing.
206 todo_include_todos = True
207
208 # -- Type hints configs ------------------------------------------------------
209
210 autodoc_inherit_docstrings = True
211 autoclass_content = "both"
212 autodoc_typehints = "description"
213 napoleon_attr_annotations = True
214
215 # -- A patch that turns-off cross refs for type annotations ------------------
216
217 import sphinx.domains.python
218 from docutils import nodes
219 from sphinx import addnodes
220
221 # replaces pending_xref node with desc_type for type annotations
222 sphinx.domains.python.type_to_xref = lambda t, e=None: addnodes.desc_type("", nodes.Text(t))
223
224 # -- Autosummary patch to get list of a classes, funcs automatically ----------
225
226 from importlib import import_module
227 from inspect import getmembers, isclass, isfunction
228 import sphinx.ext.autosummary
229 from sphinx.ext.autosummary import Autosummary
230 from docutils.parsers.rst import directives
231 from docutils.statemachine import StringList
232
233
234 class BetterAutosummary(Autosummary):
235 """Autosummary with autolisting for modules.
236
237 By default it tries to import all public names (__all__),
238 otherwise import all classes and/or functions in a module.
239
240 Options:
241 - :autolist: option to get list of classes and functions from currentmodule.
242 - :autolist-classes: option to get list of classes from currentmodule.
243 - :autolist-functions: option to get list of functions from currentmodule.
244
245 Example Usage:
246
247 .. currentmodule:: ignite.metrics
248
249 .. autosummary::
250 :nosignatures:
251 :autolist:
252 """
253
254 # Add new option
255 _option_spec = Autosummary.option_spec.copy()
256 _option_spec.update(
257 {
258 "autolist": directives.unchanged,
259 "autolist-classes": directives.unchanged,
260 "autolist-functions": directives.unchanged,
261 }
262 )
263 option_spec = _option_spec
264
265 def run(self):
266 for auto in ("autolist", "autolist-classes", "autolist-functions"):
267 if auto in self.options:
268 # Get current module name
269 module_name = self.env.ref_context.get("py:module")
270 # Import module
271 module = import_module(module_name)
272
273 # Get public names (if possible)
274 try:
275 names = getattr(module, "__all__")
276 except AttributeError:
277 # Get classes defined in the module
278 cls_names = [
279 name[0]
280 for name in getmembers(module, isclass)
281 if name[-1].__module__ == module_name and not (name[0].startswith("_"))
282 ]
283 # Get functions defined in the module
284 fn_names = [
285 name[0]
286 for name in getmembers(module, isfunction)
287 if (name[-1].__module__ == module_name) and not (name[0].startswith("_"))
288 ]
289 names = cls_names + fn_names
290 # It may happen that module doesn't have any defined class or func
291 if not names:
292 names = [name[0] for name in getmembers(module)]
293
294 # Filter out members w/o doc strings
295 names = [name for name in names if getattr(module, name).__doc__ is not None]
296
297 if auto == "autolist":
298 # Get list of all classes and functions inside module
299 names = [
300 name for name in names if (isclass(getattr(module, name)) or isfunction(getattr(module, name)))
301 ]
302 else:
303 if auto == "autolist-classes":
304 # Get only classes
305 check = isclass
306 elif auto == "autolist-functions":
307 # Get only functions
308 check = isfunction
309 else:
310 raise NotImplementedError
311
312 names = [name for name in names if check(getattr(module, name))]
313
314 # Update content
315 self.content = StringList(names)
316 return super().run()
317
318
319 # Patch original Autosummary
320 sphinx.ext.autosummary.Autosummary = BetterAutosummary
321
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -32,8 +32,8 @@
version = os.environ["code_version"]
if "master" in version:
version = "master (" + ignite.__version__ + ")"
- else:
- version = version.replace("v", "")
+ # else:
+ # version = version.replace("v", "")
except KeyError:
version = ignite.__version__
@@ -129,7 +129,8 @@
"css_files": [
# 'https://fonts.googleapis.com/css?family=Lato',
# '_static/css/pytorch_theme.css'
- "_static/css/ignite_theme.css"
+ "_static/css/ignite_theme.css",
+ "https://cdn.jsdelivr.net/npm/docsearch.js@2/dist/cdn/docsearch.min.css",
],
}
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -32,8 +32,8 @@\n version = os.environ[\"code_version\"]\n if \"master\" in version:\n version = \"master (\" + ignite.__version__ + \")\"\n- else:\n- version = version.replace(\"v\", \"\")\n+ # else:\n+ # version = version.replace(\"v\", \"\")\n except KeyError:\n version = ignite.__version__\n \n@@ -129,7 +129,8 @@\n \"css_files\": [\n # 'https://fonts.googleapis.com/css?family=Lato',\n # '_static/css/pytorch_theme.css'\n- \"_static/css/ignite_theme.css\"\n+ \"_static/css/ignite_theme.css\",\n+ \"https://cdn.jsdelivr.net/npm/docsearch.js@2/dist/cdn/docsearch.min.css\",\n ],\n }\n", "issue": "[search] use algolia docsearch\n## \ud83d\udcda Documentation\r\nWhile built-in search is enough, searching something and hit enter feels like not a good UX.\r\nTo be able to search interactively, we can apply [Algolia Docsearch](https://docsearch.algolia.com/) for that.\r\nPyTorch is also using it when searching from homepage.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\"../..\"))\nimport ignite\nimport pytorch_sphinx_theme\n\nfrom datetime import datetime\n\n# -- Project information -----------------------------------------------------\n\nproject = \"ignite\"\nauthor = \"PyTorch-Ignite Contributors\"\ncopyright = f\"{datetime.now().year}, {author}\"\n\n# The short X.Y version\ntry:\n version = os.environ[\"code_version\"]\n if \"master\" in version:\n version = \"master (\" + ignite.__version__ + \")\"\n else:\n version = version.replace(\"v\", \"\")\nexcept KeyError:\n version = ignite.__version__\n\n# The full version, including alpha/beta/rc tags\nrelease = \"master\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinxcontrib.katex\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosectionlabel\",\n]\n\n# katex options\nkatex_prerender = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\nhtml_theme_options = {\n \"canonical_url\": \"https://pytorch.org/ignite/index.html\",\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"navigation_with_keys\": True,\n}\n\nhtml_logo = \"_templates/_static/img/ignite_logo.svg\"\n\nhtml_favicon = \"_templates/_static/img/ignite_logomark.svg\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\", \"_templates/_static\"]\n\nhtml_context = {\n \"css_files\": [\n # 'https://fonts.googleapis.com/css?family=Lato',\n # '_static/css/pytorch_theme.css'\n \"_static/css/ignite_theme.css\"\n ],\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"ignitedoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"ignite.tex\", \"ignite Documentation\", \"Torch Contributors\", \"manual\"),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"ignite\", \"ignite Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"ignite\",\n \"ignite Documentation\",\n author,\n \"ignite\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"torch\": (\"https://pytorch.org/docs/stable/\", None),\n}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n# -- Type hints configs ------------------------------------------------------\n\nautodoc_inherit_docstrings = True\nautoclass_content = \"both\"\nautodoc_typehints = \"description\"\nnapoleon_attr_annotations = True\n\n# -- A patch that turns-off cross refs for type annotations ------------------\n\nimport sphinx.domains.python\nfrom docutils import nodes\nfrom sphinx import addnodes\n\n# replaces pending_xref node with desc_type for type annotations\nsphinx.domains.python.type_to_xref = lambda t, e=None: addnodes.desc_type(\"\", nodes.Text(t))\n\n# -- Autosummary patch to get list of a classes, funcs automatically ----------\n\nfrom importlib import import_module\nfrom inspect import getmembers, isclass, isfunction\nimport sphinx.ext.autosummary\nfrom sphinx.ext.autosummary import Autosummary\nfrom docutils.parsers.rst import directives\nfrom docutils.statemachine import StringList\n\n\nclass BetterAutosummary(Autosummary):\n \"\"\"Autosummary with autolisting for modules.\n\n By default it tries to import all public names (__all__),\n otherwise import all classes and/or functions in a module.\n\n Options:\n - :autolist: option to get list of classes and functions from currentmodule.\n - :autolist-classes: option to get list of classes from currentmodule.\n - :autolist-functions: option to get list of functions from currentmodule.\n\n Example Usage:\n\n .. currentmodule:: ignite.metrics\n\n .. 
autosummary::\n :nosignatures:\n :autolist:\n \"\"\"\n\n # Add new option\n _option_spec = Autosummary.option_spec.copy()\n _option_spec.update(\n {\n \"autolist\": directives.unchanged,\n \"autolist-classes\": directives.unchanged,\n \"autolist-functions\": directives.unchanged,\n }\n )\n option_spec = _option_spec\n\n def run(self):\n for auto in (\"autolist\", \"autolist-classes\", \"autolist-functions\"):\n if auto in self.options:\n # Get current module name\n module_name = self.env.ref_context.get(\"py:module\")\n # Import module\n module = import_module(module_name)\n\n # Get public names (if possible)\n try:\n names = getattr(module, \"__all__\")\n except AttributeError:\n # Get classes defined in the module\n cls_names = [\n name[0]\n for name in getmembers(module, isclass)\n if name[-1].__module__ == module_name and not (name[0].startswith(\"_\"))\n ]\n # Get functions defined in the module\n fn_names = [\n name[0]\n for name in getmembers(module, isfunction)\n if (name[-1].__module__ == module_name) and not (name[0].startswith(\"_\"))\n ]\n names = cls_names + fn_names\n # It may happen that module doesn't have any defined class or func\n if not names:\n names = [name[0] for name in getmembers(module)]\n\n # Filter out members w/o doc strings\n names = [name for name in names if getattr(module, name).__doc__ is not None]\n\n if auto == \"autolist\":\n # Get list of all classes and functions inside module\n names = [\n name for name in names if (isclass(getattr(module, name)) or isfunction(getattr(module, name)))\n ]\n else:\n if auto == \"autolist-classes\":\n # Get only classes\n check = isclass\n elif auto == \"autolist-functions\":\n # Get only functions\n check = isfunction\n else:\n raise NotImplementedError\n\n names = [name for name in names if check(getattr(module, name))]\n\n # Update content\n self.content = StringList(names)\n return super().run()\n\n\n# Patch original Autosummary\nsphinx.ext.autosummary.Autosummary = BetterAutosummary\n", "path": "docs/source/conf.py"}]} | 3,740 | 200 |
gh_patches_debug_33270 | rasdani/github-patches | git_diff | Flexget__Flexget-83 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't log in to myepisodes
When using the myepisodes plugin you get the warning "Login to myepisodes.com failed, please check your account data or see if the site is down."
It seems they have changed the site so that it has to be accessed via www.myepisodes.com.
I made a copy of the plugin, changed all the URLs to www.myepisodes.com, and it worked.
</issue>
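As an aside on the reporter's workaround: the actual fix shown further down simply rewrites the three hard-coded URLs (login, search, and update) to the `www.` host. An alternative, purely illustrative refactor would be to keep the host in a single constant so a future domain change only has to be made once; the constant names below are invented, not part of the plugin:

```python
# Illustrative sketch only: centralise the myepisodes host so URL changes
# (such as myepisodes.com -> www.myepisodes.com) happen in one place.
MYEPISODES_BASE = 'http://www.myepisodes.com'

LOGIN_URL = MYEPISODES_BASE + '/login.php?'
SEARCH_URL = MYEPISODES_BASE + '/search.php?'
UPDATE_URL = (MYEPISODES_BASE +
              '/myshows.php?action=Update&showid=%s&season=%s&episode=%s&seen=0')
```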
<code>
[start of flexget/plugins/services/myepisodes.py]
1 from __future__ import unicode_literals, division, absolute_import
2 import logging
3 import urllib
4 import urllib2
5 import re
6 import cookielib
7 from datetime import datetime
8
9 from sqlalchemy import Column, Integer, String, DateTime
10
11 from flexget import db_schema
12 from flexget.plugin import register_plugin, DependencyError, PluginWarning
13
14 try:
15 from flexget.plugins.api_tvdb import lookup_series
16 except ImportError:
17 raise DependencyError(issued_by='myepisodes', missing='api_tvdb',
18 message='myepisodes requires the `api_tvdb` plugin')
19
20
21 log = logging.getLogger('myepisodes')
22 Base = db_schema.versioned_base('myepisodes', 0)
23
24
25 class MyEpisodesInfo(Base):
26 __tablename__ = 'myepisodes'
27
28 id = Column(Integer, primary_key=True)
29 series_name = Column(String, unique=True)
30 myepisodes_id = Column(Integer, unique=True)
31 updated = Column(DateTime)
32
33 def __init__(self, series_name, myepisodes_id):
34 self.series_name = series_name
35 self.myepisodes_id = myepisodes_id
36 self.updated = datetime.now()
37
38 def __repr__(self):
39 return '<MyEpisodesInfo(series_name=%s, myepisodes_id=%s)>' % (self.series_name, self.myepisodes_id)
40
41
42 class MyEpisodes(object):
43 """
44 Marks a series episode as acquired in your myepisodes.com account.
45
46 Simple Example:
47
48 Most shows are recognized automatically from their TVDBname.
49 And of course the plugin needs to know your MyEpisodes.com account details.
50
51 tasks:
52 tvshows:
53 myepisodes:
54 username: <username>
55 password: <password>
56 series:
57 - human target
58 - chuck
59
60 Advanced Example:
61
62 In some cases, the TVDB name is either not unique or won't even be discovered.
63 In that case you need to specify the MyEpisodes id manually using the set plugin.
64
65 tasks:
66 tvshows:
67 myepisodes:
68 username: <username>
69 password: <password>
70 series:
71 - human target:
72 set:
73 myepisodes_id: 5111
74 - chuck
75
76 How to find the MyEpisodes id: http://matrixagents.org/screencasts/myep_example-20110507-131555.png
77 """
78
79 schema = {
80 'type': 'object',
81 'properties': {
82 'username': {'type': 'string'},
83 'password': {'type': 'string'}
84 },
85 'required': ['username', 'password'],
86 'additionalProperties': False
87 }
88
89 def on_task_exit(self, task, config):
90 """Mark all accepted episodes as acquired on MyEpisodes"""
91 if not task.accepted:
92 # Nothing accepted, don't do anything
93 return
94
95 username = config['username']
96 password = config['password']
97
98 cookiejar = cookielib.CookieJar()
99 opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
100 baseurl = urllib2.Request('http://myepisodes.com/login.php?')
101 loginparams = urllib.urlencode({'username': username,
102 'password': password,
103 'action': 'Login'})
104 try:
105 logincon = opener.open(baseurl, loginparams)
106 loginsrc = logincon.read()
107 except urllib2.URLError as e:
108 log.error('Error logging in to myepisodes: %s' % e)
109 return
110
111 if str(username) not in loginsrc:
112 raise PluginWarning(('Login to myepisodes.com failed, please check '
113 'your account data or see if the site is down.'), log)
114
115 for entry in task.accepted:
116 try:
117 self.mark_episode(task, entry, opener)
118 except PluginWarning as w:
119 log.warning(str(w))
120
121 def lookup_myepisodes_id(self, entry, opener, session):
122 """Populates myepisodes_id field for an entry, and returns the id.
123
124 Call will also set entry field `myepisode_id` if successful.
125
126 Return:
127 myepisode id
128
129 Raises:
130 LookupError if entry does not have field series_name
131 """
132
133 # Don't need to look it up if we already have it.
134 if entry.get('myepisodes_id'):
135 return entry['myepisodes_id']
136
137 if not entry.get('series_name'):
138 raise LookupError('Cannot lookup myepisodes id for entries without series_name')
139 series_name = entry['series_name']
140
141 # First check if we already have a myepisodes id stored for this series
142 myepisodes_info = session.query(MyEpisodesInfo).\
143 filter(MyEpisodesInfo.series_name == series_name.lower()).first()
144 if myepisodes_info:
145 entry['myepisodes_id'] = myepisodes_info.myepisodes_id
146 return myepisodes_info.myepisodes_id
147
148 # Get the series name from thetvdb to increase match chance on myepisodes
149 if entry.get('tvdb_series_name'):
150 query_name = entry['tvdb_series_name']
151 else:
152 try:
153 series = lookup_series(name=series_name, tvdb_id=entry.get('tvdb_id'))
154 query_name = series.seriesname
155 except LookupError as e:
156 log.warning('Unable to lookup series `%s` from tvdb, using raw name.' % series_name)
157 query_name = series_name
158
159 baseurl = urllib2.Request('http://myepisodes.com/search.php?')
160 params = urllib.urlencode({'tvshow': query_name, 'action': 'Search myepisodes.com'})
161 try:
162 con = opener.open(baseurl, params)
163 txt = con.read()
164 except urllib2.URLError as e:
165 log.error('Error searching for myepisodes id: %s' % e)
166
167 matchObj = re.search(r'&showid=([0-9]*)">' + query_name + '</a>', txt, re.MULTILINE | re.IGNORECASE)
168 if matchObj:
169 myepisodes_id = matchObj.group(1)
170 db_item = session.query(MyEpisodesInfo).filter(MyEpisodesInfo.myepisodes_id == myepisodes_id).first()
171 if db_item:
172 log.info('Changing name to `%s` for series with myepisodes_id %s' %
173 (series_name.lower(), myepisodes_id))
174 db_item.series_name = series_name.lower()
175 else:
176 session.add(MyEpisodesInfo(series_name.lower(), myepisodes_id))
177 entry['myepisodes_id'] = myepisodes_id
178 return myepisodes_id
179
180 def mark_episode(self, task, entry, opener):
181 """Mark episode as acquired.
182
183 Required entry fields:
184 - series_name
185 - series_season
186 - series_episode
187
188 Raises:
189 PluginWarning if operation fails
190 """
191
192 if 'series_season' not in entry or 'series_episode' not in entry or 'series_name' not in entry:
193 raise PluginWarning(
194 'Can\'t mark entry `%s` in myepisodes without series_season, series_episode and series_name fields' %
195 entry['title'], log)
196
197 if not self.lookup_myepisodes_id(entry, opener, session=task.session):
198 raise PluginWarning('Couldn\'t get myepisodes id for `%s`' % entry['title'], log)
199
200 myepisodes_id = entry['myepisodes_id']
201 season = entry['series_season']
202 episode = entry['series_episode']
203
204 if task.manager.options.test:
205 log.info('Would mark %s of `%s` as acquired.' % (entry['series_id'], entry['series_name']))
206 else:
207 baseurl2 = urllib2.Request(
208 'http://myepisodes.com/myshows.php?action=Update&showid=%s&season=%s&episode=%s&seen=0' %
209 (myepisodes_id, season, episode))
210 opener.open(baseurl2)
211 log.info('Marked %s of `%s` as acquired.' % (entry['series_id'], entry['series_name']))
212
213
214 register_plugin(MyEpisodes, 'myepisodes', api_ver=2)
215
[end of flexget/plugins/services/myepisodes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flexget/plugins/services/myepisodes.py b/flexget/plugins/services/myepisodes.py
--- a/flexget/plugins/services/myepisodes.py
+++ b/flexget/plugins/services/myepisodes.py
@@ -97,7 +97,7 @@
cookiejar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
- baseurl = urllib2.Request('http://myepisodes.com/login.php?')
+ baseurl = urllib2.Request('http://www.myepisodes.com/login.php?')
loginparams = urllib.urlencode({'username': username,
'password': password,
'action': 'Login'})
@@ -156,7 +156,7 @@
log.warning('Unable to lookup series `%s` from tvdb, using raw name.' % series_name)
query_name = series_name
- baseurl = urllib2.Request('http://myepisodes.com/search.php?')
+ baseurl = urllib2.Request('http://www.myepisodes.com/search.php?')
params = urllib.urlencode({'tvshow': query_name, 'action': 'Search myepisodes.com'})
try:
con = opener.open(baseurl, params)
@@ -205,7 +205,7 @@
log.info('Would mark %s of `%s` as acquired.' % (entry['series_id'], entry['series_name']))
else:
baseurl2 = urllib2.Request(
- 'http://myepisodes.com/myshows.php?action=Update&showid=%s&season=%s&episode=%s&seen=0' %
+ 'http://www.myepisodes.com/myshows.php?action=Update&showid=%s&season=%s&episode=%s&seen=0' %
(myepisodes_id, season, episode))
opener.open(baseurl2)
log.info('Marked %s of `%s` as acquired.' % (entry['series_id'], entry['series_name']))
| {"golden_diff": "diff --git a/flexget/plugins/services/myepisodes.py b/flexget/plugins/services/myepisodes.py\n--- a/flexget/plugins/services/myepisodes.py\n+++ b/flexget/plugins/services/myepisodes.py\n@@ -97,7 +97,7 @@\n \n cookiejar = cookielib.CookieJar()\n opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))\n- baseurl = urllib2.Request('http://myepisodes.com/login.php?')\n+ baseurl = urllib2.Request('http://www.myepisodes.com/login.php?')\n loginparams = urllib.urlencode({'username': username,\n 'password': password,\n 'action': 'Login'})\n@@ -156,7 +156,7 @@\n log.warning('Unable to lookup series `%s` from tvdb, using raw name.' % series_name)\n query_name = series_name\n \n- baseurl = urllib2.Request('http://myepisodes.com/search.php?')\n+ baseurl = urllib2.Request('http://www.myepisodes.com/search.php?')\n params = urllib.urlencode({'tvshow': query_name, 'action': 'Search myepisodes.com'})\n try:\n con = opener.open(baseurl, params)\n@@ -205,7 +205,7 @@\n log.info('Would mark %s of `%s` as acquired.' % (entry['series_id'], entry['series_name']))\n else:\n baseurl2 = urllib2.Request(\n- 'http://myepisodes.com/myshows.php?action=Update&showid=%s&season=%s&episode=%s&seen=0' %\n+ 'http://www.myepisodes.com/myshows.php?action=Update&showid=%s&season=%s&episode=%s&seen=0' %\n (myepisodes_id, season, episode))\n opener.open(baseurl2)\n log.info('Marked %s of `%s` as acquired.' % (entry['series_id'], entry['series_name']))\n", "issue": "Can't log in into myepisodes\nWhen usin the myepisodes plugin you get the warning \"Login to myepisodes.com failed, please check your account data or see if the site is down.\"\nIt seems that they have changed so you have to access their site with www.myepisodes.com. \n\nI made a copy of the plugin and changed all the urls to www.myepisodes.com and it worked\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nimport logging\nimport urllib\nimport urllib2\nimport re\nimport cookielib\nfrom datetime import datetime\n\nfrom sqlalchemy import Column, Integer, String, DateTime\n\nfrom flexget import db_schema\nfrom flexget.plugin import register_plugin, DependencyError, PluginWarning\n\ntry:\n from flexget.plugins.api_tvdb import lookup_series\nexcept ImportError:\n raise DependencyError(issued_by='myepisodes', missing='api_tvdb',\n message='myepisodes requires the `api_tvdb` plugin')\n\n\nlog = logging.getLogger('myepisodes')\nBase = db_schema.versioned_base('myepisodes', 0)\n\n\nclass MyEpisodesInfo(Base):\n __tablename__ = 'myepisodes'\n\n id = Column(Integer, primary_key=True)\n series_name = Column(String, unique=True)\n myepisodes_id = Column(Integer, unique=True)\n updated = Column(DateTime)\n\n def __init__(self, series_name, myepisodes_id):\n self.series_name = series_name\n self.myepisodes_id = myepisodes_id\n self.updated = datetime.now()\n\n def __repr__(self):\n return '<MyEpisodesInfo(series_name=%s, myepisodes_id=%s)>' % (self.series_name, self.myepisodes_id)\n\n\nclass MyEpisodes(object):\n \"\"\"\n Marks a series episode as acquired in your myepisodes.com account.\n\n Simple Example:\n\n Most shows are recognized automatically from their TVDBname.\n And of course the plugin needs to know your MyEpisodes.com account details.\n\n tasks:\n tvshows:\n myepisodes:\n username: <username>\n password: <password>\n series:\n - human target\n - chuck\n\n Advanced Example:\n\n In some cases, the TVDB name is either not unique or won't even be discovered.\n In that case you need to 
specify the MyEpisodes id manually using the set plugin.\n\n tasks:\n tvshows:\n myepisodes:\n username: <username>\n password: <password>\n series:\n - human target:\n set:\n myepisodes_id: 5111\n - chuck\n\n How to find the MyEpisodes id: http://matrixagents.org/screencasts/myep_example-20110507-131555.png\n \"\"\"\n\n schema = {\n 'type': 'object',\n 'properties': {\n 'username': {'type': 'string'},\n 'password': {'type': 'string'}\n },\n 'required': ['username', 'password'],\n 'additionalProperties': False\n }\n\n def on_task_exit(self, task, config):\n \"\"\"Mark all accepted episodes as acquired on MyEpisodes\"\"\"\n if not task.accepted:\n # Nothing accepted, don't do anything\n return\n\n username = config['username']\n password = config['password']\n\n cookiejar = cookielib.CookieJar()\n opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))\n baseurl = urllib2.Request('http://myepisodes.com/login.php?')\n loginparams = urllib.urlencode({'username': username,\n 'password': password,\n 'action': 'Login'})\n try:\n logincon = opener.open(baseurl, loginparams)\n loginsrc = logincon.read()\n except urllib2.URLError as e:\n log.error('Error logging in to myepisodes: %s' % e)\n return\n\n if str(username) not in loginsrc:\n raise PluginWarning(('Login to myepisodes.com failed, please check '\n 'your account data or see if the site is down.'), log)\n\n for entry in task.accepted:\n try:\n self.mark_episode(task, entry, opener)\n except PluginWarning as w:\n log.warning(str(w))\n\n def lookup_myepisodes_id(self, entry, opener, session):\n \"\"\"Populates myepisodes_id field for an entry, and returns the id.\n\n Call will also set entry field `myepisode_id` if successful.\n\n Return:\n myepisode id\n\n Raises:\n LookupError if entry does not have field series_name\n \"\"\"\n\n # Don't need to look it up if we already have it.\n if entry.get('myepisodes_id'):\n return entry['myepisodes_id']\n\n if not entry.get('series_name'):\n raise LookupError('Cannot lookup myepisodes id for entries without series_name')\n series_name = entry['series_name']\n\n # First check if we already have a myepisodes id stored for this series\n myepisodes_info = session.query(MyEpisodesInfo).\\\n filter(MyEpisodesInfo.series_name == series_name.lower()).first()\n if myepisodes_info:\n entry['myepisodes_id'] = myepisodes_info.myepisodes_id\n return myepisodes_info.myepisodes_id\n\n # Get the series name from thetvdb to increase match chance on myepisodes\n if entry.get('tvdb_series_name'):\n query_name = entry['tvdb_series_name']\n else:\n try:\n series = lookup_series(name=series_name, tvdb_id=entry.get('tvdb_id'))\n query_name = series.seriesname\n except LookupError as e:\n log.warning('Unable to lookup series `%s` from tvdb, using raw name.' 
% series_name)\n query_name = series_name\n\n baseurl = urllib2.Request('http://myepisodes.com/search.php?')\n params = urllib.urlencode({'tvshow': query_name, 'action': 'Search myepisodes.com'})\n try:\n con = opener.open(baseurl, params)\n txt = con.read()\n except urllib2.URLError as e:\n log.error('Error searching for myepisodes id: %s' % e)\n\n matchObj = re.search(r'&showid=([0-9]*)\">' + query_name + '</a>', txt, re.MULTILINE | re.IGNORECASE)\n if matchObj:\n myepisodes_id = matchObj.group(1)\n db_item = session.query(MyEpisodesInfo).filter(MyEpisodesInfo.myepisodes_id == myepisodes_id).first()\n if db_item:\n log.info('Changing name to `%s` for series with myepisodes_id %s' %\n (series_name.lower(), myepisodes_id))\n db_item.series_name = series_name.lower()\n else:\n session.add(MyEpisodesInfo(series_name.lower(), myepisodes_id))\n entry['myepisodes_id'] = myepisodes_id\n return myepisodes_id\n\n def mark_episode(self, task, entry, opener):\n \"\"\"Mark episode as acquired.\n\n Required entry fields:\n - series_name\n - series_season\n - series_episode\n\n Raises:\n PluginWarning if operation fails\n \"\"\"\n\n if 'series_season' not in entry or 'series_episode' not in entry or 'series_name' not in entry:\n raise PluginWarning(\n 'Can\\'t mark entry `%s` in myepisodes without series_season, series_episode and series_name fields' %\n entry['title'], log)\n\n if not self.lookup_myepisodes_id(entry, opener, session=task.session):\n raise PluginWarning('Couldn\\'t get myepisodes id for `%s`' % entry['title'], log)\n\n myepisodes_id = entry['myepisodes_id']\n season = entry['series_season']\n episode = entry['series_episode']\n\n if task.manager.options.test:\n log.info('Would mark %s of `%s` as acquired.' % (entry['series_id'], entry['series_name']))\n else:\n baseurl2 = urllib2.Request(\n 'http://myepisodes.com/myshows.php?action=Update&showid=%s&season=%s&episode=%s&seen=0' %\n (myepisodes_id, season, episode))\n opener.open(baseurl2)\n log.info('Marked %s of `%s` as acquired.' % (entry['series_id'], entry['series_name']))\n\n\nregister_plugin(MyEpisodes, 'myepisodes', api_ver=2)\n", "path": "flexget/plugins/services/myepisodes.py"}]} | 2,948 | 438 |